Installing Debian Wheezy (testing) With debootstrap From A Grml Live Linux

Submitted by falko on Tue, 2013-01-15 20:16. :: Debian | Other

Version 1.0
Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited 01/08/2013

This tutorial explains how to install Debian Wheezy (testing) with the help of debootstrap from a Grml Live Linux system (such as the one used as a rescue system at Webtropia). This should work - with minor changes - for other Debian and Ubuntu versions as well. By following this guide, you can configure the system to your needs (OS version, partitioning, RAID, LVM, etc.) instead of depending on the few pre-configured images that your server provider offers.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

The server I'm using in this tutorial has two hard drives. I want to use software RAID1 and LVM for this server.

Before you boot the system into rescue mode, you should take note of its network settings so that you can use the same network settings for the new system. For example, if you use Debian or Ubuntu on the system, take a look at /etc/network/interfaces:

cat /etc/network/interfaces

It could look as follows:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
     address <ip_address>
     netmask <netmask>
     broadcast <broadcast>
     gateway <gateway>
iface eth0 inet6 static
  address <ipv6_address>
  netmask 64
  up ip -6 route add <ipv6_gateway> dev eth0
  down ip -6 route del <ipv6_gateway> dev eth0
  up ip -6 route add default via <ipv6_gateway> dev eth0
  down ip -6 route del default via <ipv6_gateway> dev eth0
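If the old system is still running, the live values can also be captured directly before rebooting. A quick sketch (assuming the interface is named eth0, as in the example above):

```shell
# Record the current network settings so they can be reproduced
# in the new system (assumes the interface is eth0, as above).
ip addr show eth0       # IPv4/IPv6 addresses and prefixes
ip route show           # IPv4 routes, including the default gateway
ip -6 route show        # IPv6 routes and gateway
cat /etc/resolv.conf    # DNS servers
```

Save this output somewhere off the server - once you are in the rescue system, the old configuration files are only reachable by mounting the old partitions.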

 

2 Partitioning

Now boot into the Grml rescue system.

In the rescue system, let's check whether software RAID and LVM are in use on the hard drives (if so, we need to remove them before we partition the drives):

The commands

lvdisplay

vgdisplay

pvdisplay

tell you if there are logical volumes, volume groups, and physical volumes in use for LVM. If so, you can remove them as follows (make sure you use the correct device names, as displayed by the three commands above):

lvremove /dev/vg0/root

vgremove vg0

pvremove /dev/md1

Let's check if software RAID is in use:

cat /proc/mdstat

If the output looks as below, you have two RAID devices, /dev/md0 and /dev/md1, which have to be removed:

root@grml ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [multipath]
md0 : active raid1 sda1[0] sdb1[1]
      530048 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      104856192 blocks [2/2] [UU]

unused devices: <none>
root@grml ~ #

Stop them as follows:

mdadm --stop /dev/md0
mdadm --stop /dev/md1

Double-check that no RAID devices are left:

cat /proc/mdstat

root@grml ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [multipath]
unused devices: <none>
root@grml ~ #

Let's take a look at the current partitioning:

fdisk -l

root@grml ~ # fdisk -l

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x957081d4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          66      530112+  fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
Partition 1 does not start on physical sector boundary.
/dev/sdb2              66       13120   104856256   fd  Linux raid autodetect

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000cdcbd

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          66      530112+  fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
Partition 1 does not start on physical sector boundary.
/dev/sda2              66       13120   104856256   fd  Linux raid autodetect
root@grml ~ #

Because I have two identical hard drives, I want to use software RAID1 plus LVM (because of its flexibility) for this server. Because LVM cannot be used for the boot partition, I have to create a separate /boot partition that uses RAID1 without LVM.

To do this, I delete the existing partitions from /dev/sda and create two new ones with Linux raid autodetect as the partition type: a small one of about 512MB for /boot and a large one for everything else. You could of course use all remaining space for the large partition, but I tend to leave some space unused because bad sectors tend to appear in the outer areas of a spinning disk - if you use an SSD, it's ok to use the whole disk:

root@grml ~ # fdisk /dev/sda

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): <-- d
Partition number (1-4): <-- 1

Command (m for help): <-- d
Selected partition 2

Command (m for help): <-- n
Command action
   e   extended
   p   primary partition (1-4)

<-- p
Partition number (1-4): <-- 1
First cylinder (1-243201, default 1): <-- ENTER
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-243201, default 243201): <-- +512M

Command (m for help): <-- t
Selected partition 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- n
Command action
   e   extended
   p   primary partition (1-4)

<-- p
Partition number (1-4): <-- 2
First cylinder (66-243201, default 66): <-- ENTER
Using default value 66
Last cylinder, +cylinders or +size{K,M,G} (66-243201, default 243201): <-- 240000 (I tend to leave some space unused here)

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.
root@grml ~ #
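As a rough sanity check of how much space ending partition 2 at cylinder 240000 (instead of 243201) leaves unused, using the cylinder size of 16065 * 512 bytes from the fdisk output above:

```shell
# Each cylinder is 16065 * 512 = 8225280 bytes (see the fdisk output above).
bytes_per_cyl=$((16065 * 512))
unused_cylinders=$((243201 - 240000))
unused_gb=$((unused_cylinders * bytes_per_cyl / 1000000000))
echo "$unused_gb"
```

So about 26GB stay unused at the end of each disk.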

Next I run...

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

... to copy the partitioning scheme from /dev/sda to /dev/sdb so that they are identical on both disks.
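To double-check that the copy worked, the two partition table dumps can be compared after normalizing the device names. A sketch (the temp file paths are arbitrary):

```shell
# Dump both partition tables, replace the device name with a
# placeholder, and compare - the dumps should then be identical.
sfdisk -d /dev/sda | sed 's|/dev/sda|DISK|g' > /tmp/parts_sda
sfdisk -d /dev/sdb | sed 's|/dev/sdb|DISK|g' > /tmp/parts_sdb
diff /tmp/parts_sda /tmp/parts_sdb && echo "partition tables match"
```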

Run...

mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sda2
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

... to remove any remainders from previous RAID arrays from the partitions.

Now we create the RAID array /dev/md0 from /dev/sda1 and /dev/sdb1...

mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sda1 /dev/sdb1

... and /dev/md1 from /dev/sda2 and /dev/sdb2:

mdadm --create /dev/md1 --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2

Let's check with:

cat /proc/mdstat

As you see, we have two new RAID1 arrays:

root@grml ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [multipath]
md1 : active raid1 sdb2[1] sda2[0]
      1927269760 blocks [2/2] [UU]
      [>....................]  resync =  0.0% (347136/1927269760) finish=185.0min speed=173568K/sec

md0 : active raid1 sdb1[1] sda1[0]
      530048 blocks [2/2] [UU]

unused devices: <none>
root@grml ~ #
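The arrays are usable right away while the initial resync runs in the background. If you want to keep an eye on the progress:

```shell
# Refresh the resync progress every 5 seconds (Ctrl+C to quit).
watch -n 5 cat /proc/mdstat

# Show more detail about a single array.
mdadm --detail /dev/md1
```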

Let's put an ext4 filesystem on /dev/md0:

mkfs.ext4 /dev/md0

Prepare /dev/md1 for LVM:

pvcreate /dev/md1

Create the volume group vg0:

vgcreate vg0 /dev/md1

Create a logical volume for / with a size of 100GB:

lvcreate -n root -L 100G vg0

Create a logical volume for swap with a size of 10GB:

lvcreate -n swap -L 10G vg0

Run:

lvscan

If all logical volumes are shown as ACTIVE, everything is fine:

root@grml ~ # lvscan
  ACTIVE            '/dev/vg0/root' [100,00 GiB] inherit
  ACTIVE            '/dev/vg0/swap' [10,00 GiB] inherit
root@grml ~ #

If not, run...

vgchange -ay

... and check with lvscan again.

Next create filesystems on /dev/vg0/root and /dev/vg0/swap:

mkfs.ext4 /dev/vg0/root
mkswap /dev/vg0/swap

Mount the root volume on /mnt, create a few directories, and mount /dev/md0 on /mnt/boot:

mount /dev/vg0/root /mnt
cd /mnt
mkdir boot proc dev sys home
mount /dev/md0 boot/

Create an fstab for the new system:

mkdir etc
cd etc
vi fstab

proc          /proc  proc  defaults       0 0
/dev/md0      /boot  ext4  defaults       0 2
/dev/vg0/root /      ext4  defaults       0 1
/dev/vg0/swap none   swap  defaults,pri=1 0 0
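With the filesystems mounted and the fstab in place, the installation itself can proceed. A sketch of the usual debootstrap steps - the mirror URL is only an example, and everything inside the chroot (kernel, boot loader, network, passwords) still needs to be configured afterwards:

```shell
# Install a minimal Debian testing system into /mnt.
debootstrap --arch amd64 testing /mnt ftp://ftp.de.debian.org/debian/

# Make the live system's device and kernel interfaces
# visible inside the new system.
mount -o bind /dev /mnt/dev
mount -t proc proc /mnt/proc
mount -t sysfs sysfs /mnt/sys

# Enter the new system to finish its configuration.
chroot /mnt /bin/bash
```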

Submitted by Aoli (not registered) on Fri, 2013-01-18 20:13.

Why not also use the full Grml-live on a bootable USB drive together with one of the Wheezy Beta4 netinst CD images from the Debian Installer site (http://www.debian.org/devel/debian-installer/) saved on the same drive? These could be used together for installing Debian Wheezy (testing) onto the software RAID1 and LVM server setup.

This way, one could save a bit of bandwidth by not having to run the full debootstrap command:

debootstrap --arch amd64 testing /mnt ftp://ftp.de.debian.org/debian/

 

Knoppix (http://www.knopper.net/knoppix/index-en.html) already has this capability, in a more convoluted fashion, for installing its customized mix of Wheezy and Sid.

 Just a suggestion here :)