Recovering VM from server Proxmox HD RAID1 LVM2

Discussion in 'Technical' started by BrainyForge, Aug 2, 2012.

  1. BrainyForge

    BrainyForge New Member

    Hello, I need your help to recover a virtual machine from what's left of a Proxmox installation.
    Everything was set up on two 1 TB drives in software RAID 1.
    The virtual machine is called vm-101.

    Booting from SystemRescueCd, LVM2 is currently recognized; this is the situation:
    Code:
     fdisk -l
    
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00083042
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1              63  1953520064   976760001   fd  Linux raid autodetect
    
    Disk /dev/md127: 1000.2 GB, 1000202174464 bytes
    2 heads, 4 sectors/track, 244189984 cylinders, total 1953519872 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md127 doesn't contain a valid partition table
    
    Disk /dev/mapper/xen-vm--101--disk--1: 343.6 GB, 343597383680 bytes
    255 heads, 63 sectors/track, 41773 cylinders, total 671088640 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0008bf81
    
                                Device Boot      Start         End      Blocks   Id  System
    /dev/mapper/xen-vm--101--disk--1p1   *          63      208844      104391   fd  Linux raid autodetect
    /dev/mapper/xen-vm--101--disk--1p2          208845   668994794   334392975   fd  Linux raid autodetect
    /dev/mapper/xen-vm--101--disk--1p3       668994795   671083244     1044225   fd  Linux raid autodetect
    
    Ideally I would like to recover the virtual machine's disk so I can restart it on another Proxmox server, or at least access the partition data and restore the contents.

    Unfortunately I am unsure how to proceed and would be grateful for your help.

    Thank you in advance for your attention.
    Sorry for my English.
     
  2. Mark_NL

    Mark_NL New Member

    Hey,

    Does the system see the LVM setup?

    What's the output of:
    Code:
    pvdisplay; vgdisplay; lvdisplay
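If those commands show nothing, the volume group may simply not be activated yet from the rescue environment. A minimal sketch (the VG name "xen" is taken from the outputs later in this thread; these commands need the actual disks present, so treat this as a hint rather than a verified recipe):

```shell
vgscan            # rescan block devices for LVM volume groups
vgchange -ay xen  # activate all logical volumes in VG "xen"
lvdisplay         # the LVs should now appear with their device paths
```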
     
  3. BrainyForge

    BrainyForge New Member

    Thanks for your attention, :):):)

    pvdisplay
    Code:
     pvdisplay
      --- Physical volume ---
      PV Name               /dev/md127
      VG Name               xen
      PV Size               931.51 GiB / not usable 3.12 MiB
      Allocatable           yes
      PE Size               4.00 MiB
      Total PE              238466
      Free PE               156546
      Allocated PE          81920
      PV UUID               Pdbauu-ZWjr-teaD-avSY-yk8t-tdp1-2NSk0m
    vgdisplay
    Code:
     vgdisplay
      --- Volume group ---
      VG Name               xen
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  10
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               931.51 GiB
      PE Size               4.00 MiB
      Total PE              238466
      Alloc PE / Size       81920 / 320.00 GiB
      Free  PE / Size       156546 / 611.51 GiB
      VG UUID               Jjxorc-S1H4-K4Nq-AV32-8KvQ-Oqlz-dbK2b0
    lvdisplay
    Code:
    lvdisplay
      --- Logical volume ---
      LV Path                /dev/xen/vm-101-disk-1
      LV Name                vm-101-disk-1
      VG Name                xen
      LV UUID                gmtmyz-WveX-ZdHA-0rKp-MS5p-l5uk-nz9iJJ
      LV Write Access        read/write
      LV Creation host, time ,
      LV Status              available
      # open                 0
      LV Size                320.00 GiB
      Current LE             81920
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0
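As a quick sanity check that the mapped device really is the whole VM disk, the sizes can be cross-checked arithmetically: lvdisplay reports 81920 logical extents at 4 MiB each, which should equal the 343597383680 bytes fdisk reports for /dev/mapper/xen-vm--101--disk--1:

```shell
# Arithmetic only, safe to run anywhere: 81920 extents * 4 MiB per extent
lv_bytes=$((81920 * 4 * 1024 * 1024))
echo "$lv_bytes"   # should print 343597383680, matching fdisk
```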
     
  4. Mark_NL

    Mark_NL New Member

    Hmm, okay, so the logical volume itself is partitioned into three Linux RAID partitions (the guest apparently also used software RAID internally).

    You need to use mdadm to create an md* device which can then be mounted.

    It should all be auto-detectable, so try running /etc/init.d/mdadm start
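    If autodetection doesn't pick up the partitions inside the LV, they can be exposed and assembled by hand. A hedged sketch, not verified on this system: the p2 device name comes from the fdisk listing above, /dev/md1 and /mnt/recovered are arbitrary choices, and --run is needed because the guest's RAID1 arrays will assemble degraded (one member only):

    ```shell
    kpartx -av /dev/xen/vm-101-disk-1      # maps the LV's partitions as /dev/mapper/xen-vm--101--disk--1p1..p3
    mdadm --assemble --run /dev/md1 /dev/mapper/xen-vm--101--disk--1p2   # p2 is the big data partition
    mkdir -p /mnt/recovered
    mount -o ro /dev/md1 /mnt/recovered    # read-only, to avoid any further damage
    ```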
     
  5. BrainyForge

    BrainyForge New Member

    Starting mdadm monitor ... [ ok ]

    Code:
    root@sysresccd /root % mdadm --examine --scan
    ARRAY /dev/md127 UUID=cb2edada:f56c2836:9822ee23:9b948649
    root@sysresccd /root % mdadm --query --detail /dev/md0
    mdadm: md device /dev/md0 does not appear to be active.
    root@sysresccd /root % mdadm --query --detail /dev/md127
    /dev/md127:
            Version : 0.90
      Creation Time : Sat Mar 19 21:26:50 2011
         Raid Level : raid1
         Array Size : 976759936 (931.51 GiB 1000.20 GB)
      Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
       Raid Devices : 2
      Total Devices : 1
    Preferred Minor : 127
        Persistence : Superblock is persistent
    
        Update Time : Thu Aug  2 08:17:38 2012
              State : clean, degraded
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0
    
               UUID : cb2edada:f56c2836:9822ee23:9b948649
             Events : 0.44802
    
        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       0        0        1      removed
    
     
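    For the original goal of moving the VM to another Proxmox server: once the LV is readable, the whole raw disk can be streamed across. A sketch only; "newhost" and the destination path are placeholders, and the target VM must be created on the new server first:

    ```shell
    # stream the raw LV over SSH to a raw image on the new Proxmox host
    dd if=/dev/xen/vm-101-disk-1 bs=4M \
      | ssh root@newhost 'dd of=/var/lib/vz/images/101/vm-101-disk-1.raw bs=4M'
    ```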
    Last edited: Aug 2, 2012
