5th August 2008, 11:48
Bart van Kleef
VMware uses hda5 instead of hda6 for the VMs

First of all, I'll try to make my question as clear as possible despite my poor English, and thanks for the great howtos! After strictly following this and this howto, VMware Server (2.0 Release Candidate 1) saves the VMs on hda5, even though I set hda6. And nowhere (with fdisk -l or df -h) do I see /dev/drbd0...

Here is what I've done:
Before installing VMware, I created a new directory, as described in the aforementioned howto:
mkdir /var/vm
to save the VMs in that location. During the installation of VMware Server I also changed the default datastore to /var/vm. (So from this point it is obvious that the VMs are being saved on hda5.)
But because DRBD is set up on hda6 (see drbd.conf below), when I run
mount -t ext3 /dev/drbd0 /var/vm
shouldn't that change the backing of /var/vm from hda5 to hda6?
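One way to see which device actually backs /var/vm at any given moment is to ask df directly. A minimal sketch (the awk field split assumes the POSIX output format of `df -P`):

```shell
# Print the filesystem device that backs /var/vm.
# Before mounting /dev/drbd0 this should print /dev/hda5 (the root fs);
# after a successful mount it should print /dev/drbd0.
df -P /var/vm | awk 'NR == 2 { print $1 }'
```

If this still prints /dev/hda5 after the mount command, the mount did not succeed and the VMs are landing on the root partition.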

My partition scheme is as follows:
/dev/hda1 | 0.01 GB | boot (primary, ext3, bootable flag: on)
/dev/hda5 | 3.80 GB | / (logical, ext3)
/dev/hda6 | 25.4 GB | unmounted (logical, ext3, will contain the /var/vm directory)
/dev/hda7 | 1.00 GB | swap (logical, swap)

But should hda6 be formatted with ext3, or is it enough to just create the partition?
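For what it's worth, with DRBD the filesystem usually goes on /dev/drbd0 itself, not on /dev/hda6 directly; DRBD then replicates it to the peer. A rough sketch of the usual order (this is only an assumption based on the DRBD 0.7-style options in the config below; double-check against the howto before running, since these commands overwrite the data on hda6):

```
# Bring the resource up on both nodes (DRBD 0.7-style userland).
drbdadm up all
# On ONE node only: force it primary so the initial sync can run.
drbdadm -- --do-what-I-say primary all
# Create the filesystem on the DRBD device, not on /dev/hda6 directly,
# and only on the primary node.
mkfs.ext3 /dev/drbd0
mount -t ext3 /dev/drbd0 /var/vm
```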

The output from fdisk -l shows:
Disk /dev/hda: 30.7 GB, 30750031872 bytes
255 heads, 63 sectors/track, 3738 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          12       96358+  83  Linux
/dev/hda2              13        3738    29929095    5  Extended
/dev/hda5              13         474     3710983+  83  Linux
/dev/hda6             475        3611    25197921   83  Linux
/dev/hda7            3612        3738     1020096   82  Linux swap / Solaris
And df -h:
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda5             3.5G  3.2G  197M  95% /
tmpfs                 380M     0  380M   0% /lib/init/rw
udev                   10M   44K   10M   1% /dev
tmpfs                 380M     0  380M   0% /dev/shm
/dev/hda1              92M   12M   75M  14% /boot
My drbd.conf:
resource vm1 {
  protocol C;
  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  startup {
    wfc-timeout 10;             # 10 seconds
    degr-wfc-timeout 30;        # 30 seconds
  }
  disk {
    on-io-error detach;
  }
  net {
    max-buffers 20000;          # Play with this setting to achieve the highest possible performance
    unplug-watermark 12000;     # Play with this setting to achieve the highest possible performance
    max-epoch-size 20000;       # Should be the same as max-buffers
  }
  syncer {
    rate 10M;           # Use more if you have a Gigabit network; without a suffix the rate is in kilobytes, e.g. 10M = 10 megabytes per second
    group 1;
    al-extents 257;
  }
  on server1.home {                     # Use the exact hostname of your server, as given by the command "uname -n"
    device     /dev/drbd0;              # DRBD device ID
    disk       /dev/hda6;               # Physical disk device; check your partitioning scheme!
    address;                            # Fixed IP address of Sproetjuh.home
    meta-disk  internal;                # I use internal metadata storage
  }
  on server2.home {
    device     /dev/drbd0;
    disk       /dev/hda6;
    meta-disk  internal;
  }
}
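Once DRBD is started, the state of device 0 can be read from /proc/drbd as a quick sanity check. A small sketch (the awk parsing assumes the `cs:` field that DRBD prints there):

```shell
# Print the connection state (cs:) of DRBD device 0 from /proc/drbd.
# Expect "Connected" once both nodes see each other.
awk '/^ *0:/ { for (i = 1; i <= NF; i++) if ($i ~ /^cs:/) { sub(/^cs:/, "", $i); print $i } }' /proc/drbd
```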
I hope you can help me, because I really don't know where to start right now...

Last edited by Bart van Kleef; 5th August 2008 at 17:48. Reason: To clarify my problem