Recovering a Synology SHR-RAID 1 array with only one disk

We’ve been looking at an issue with a two-bay Synology NAS where the second drive has been fried and the first one seems to be on the blink. We’ve removed the first drive from the Synology and plugged it into a PC running an Ubuntu Live CD to try to recover the data.

Synology utilises a combination of mdadm and LVM to create a flexible partition arrangement.

Basically we want to get to the LVM Logical Volumes, and to do that we need to work through every layer in between! (Hard disk -> physical partitions -> RAID arrays -> LVM Volume Groups -> LVM Logical Volumes)
Physical hard disk we can see – tick – /dev/sdb
Physical partitions we can see – tick – /dev/sdb5
RAID array – not yet…
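A quick sketch of checking the first two layers (the device names are from this recovery — /dev/sdb will likely differ on your machine):

```shell
DISK=/dev/sdb   # the Synology member disk; adjust for your machine

# List the device and its partitions (no root needed).
lsblk -o NAME,SIZE,TYPE,FSTYPE "$DISK"

# Show the partition table; the big data partition
# (here /dev/sdb5) should have type "Linux RAID".
sudo fdisk -l "$DISK"
```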

Let's have a look at the software RAID instance:

root@ubuntu:/dev# mdadm --examine /dev/sdb5
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 1ec21baa:ad871faa:8775b362:fa5ab5e4
Name : DiskStation:2
Creation Time : Sat Mar 26 06:41:30 2011
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
Array Size : 2925531648 (2790.00 GiB 2995.74 GB)
Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : e3c4380e:05918049:c43ff76d:1b2b75c8

Update Time : Fri Oct 23 19:48:51 2015
Checksum : ca734edd - correct
Events : 27340
Device Role : Active device 32769
Array State : .. ('A' == active, '.' == missing)

Awesome, we can see the RAID metadata, so… let's create a read-only loop device over the partition (skipping past the RAID superblock)…

losetup --find --show --read-only --offset $((2048*512)) /dev/sdb5

Now we have the read-only partition on /dev/loop1.
Let's check Physical Volumes…
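For reference, the $((2048*512)) in the losetup call comes straight from the mdadm output above: the Data Offset is 2048 sectors of 512 bytes, so the LVM Physical Volume starts 1048576 bytes into the partition. A generic version of that step might look like this (take the Data Offset from your own mdadm --examine output):

```shell
# Data Offset reported by `mdadm --examine`, in 512-byte sectors.
DATA_OFFSET_SECTORS=2048

# losetup wants the offset in bytes.
OFFSET_BYTES=$((DATA_OFFSET_SECTORS * 512))
echo "$OFFSET_BYTES"   # 1048576

# Read-only loop device over the partition, skipping the RAID metadata.
sudo losetup --find --show --read-only --offset "$OFFSET_BYTES" /dev/sdb5
```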

root@ubuntu:~# pvdisplay
--- Physical volume ---
PV Name /dev/loop1
VG Name vg1000
PV Size 2.72 TiB / not usable 4.50 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 714240
Free PE 0
Allocated PE 714240
PV UUID 1lferB-uxUj-aBx2-w4bI-0AcY-mgog-ZyhuJa

Let's check Volume Groups…

root@ubuntu:~# vgdisplay
--- Volume group ---
VG Name vg1000
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.72 TiB
PE Size 4.00 MiB
Total PE 714240
Alloc PE / Size 714240 / 2.72 TiB
Free PE / Size 0 / 0
VG UUID q1jgmZ-7Rg5-n6o6-Q8b4-gDfR-J3ce-1zZR6O

Let's check Logical Volumes…

root@ubuntu:~# lvdisplay
--- Logical volume ---
LV Path /dev/vg1000/lv
LV Name lv
VG Name vg1000
LV UUID qQB4MX-z29D-FMF3-mICy-mn9t-WcA8-a0cs9H
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 0
LV Size 2.72 TiB
Current LE 714240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

All looking good, and now that we know where the Logical Volume lives we can mount it:

mount /dev/vg1000/lv /mnt/test
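A belt-and-braces version of that last step, assuming the same vg1000/lv names as this recovery — several commenters found the LV needed activating before the device node appeared:

```shell
# Rescan for LVM metadata on the new loop device.
sudo pvscan
sudo vgscan

# If /dev/vg1000/lv is missing, activate the volume group first.
sudo vgchange -ay vg1000

# Mount read-only: we're recovering data, not writing it.
sudo mkdir -p /mnt/test
sudo mount -o ro /dev/vg1000/lv /mnt/test
```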

job done!

Apologies for the format of this post; I've pulled it together from notes I made as I went down various blind alleys while attempting to recover the data.


10 thoughts on “Recovering a Synology SHR-RAID 1 array with only one disk”

  1. Thanks a lot! The missing bit for me was the losetup with the offset. However, I did not get a /dev/vg1000/lv; instead I needed a
    sudo vgchange -ay vg1000
    to get a device /dev/dm-0 that I could mount and examine

  2. It's all OK until the mounting point. mount: /mnt/test: wrong fs type, bad option, bad superblock on /dev/mapper/vg1000-lv, missing codepage or helper program, or other error. Please help. I am using Ubuntu on a USB drive. Thank you a lot.

    1. It sounds like the partition superblock is corrupt. Luckily the superblock gets backed up, and *usually* it's possible to restore a previous superblock to recover the data. I've done it before but can't find my notes. Have a look at the two pages linked below. I'd suggest you start by taking an image of the whole disk you're trying to recover, and work off that, as restoring the superblock modifies the file system and could make it better or worse.
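    If the volume is ext4 (DSM's default), the backup-superblock route looks roughly like this; the image path and backup block number here are illustrative, and everything should be done against a copy, never the original disk:

    ```shell
    # 1. Image the failing disk first and work on the copy.
    sudo dd if=/dev/sdb of=disk.img bs=1M conv=noerror,sync

    # 2. Dry-run mke2fs (-n changes nothing) to list where the
    #    backup superblocks live for this filesystem geometry.
    sudo mke2fs -n /dev/vg1000/lv

    # 3. Run fsck against one of the reported backups (32768 is
    #    typical for a 4K-block ext4 filesystem).
    sudo e2fsck -b 32768 /dev/vg1000/lv
    ```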

  3. Made it to the very end and got a “mount point does not exist” notice; under lvscan the LV status is “NOT available”

    Any thoughts? My Linux-Fu is weak…

  4. Thank you so much for this article!! Was a big big help!! The only thing that was missing for being able to mount the disk was the command “vgchange -a y”, because my LV was in status “NOT available”

  5. Thank you for this. I’d have never thought to create a loop device to skip the MDADM RAID – and now I’m in the process of recovering my data.

  6. Great and explicative post!!!

    However, I got stuck on losetup. It gave me a warning like this:

    losetup: /dev/sda1: warning: file smaller than 512 bytes, the loop device may be useless or invisible for system tools.

    And it prevented me from doing the next step (pvdisplay),

    but the cause WAS NOT losetup; rather, I needed to run pvscan first.

    After that:

    Everything was OK, but lvscan said the logical volumes were inactive, so I had to do

    lvchange -a y /dev/vg1/myvolume

    lvscan (now shows it as active)

    After that, you can mount /dev/vg1/myvolume /mnt/my_mount_point

    Hope it helps as it helped me

  7. Hi,
    Good playbook, however this doesn't seem to work any more.
    I tested with a good HD removed from a working Syno 216+ II with RAID 1 (my remote backup).
    I upgraded the HD 3 times, therefore I got 3 partitions, so I needed to create 3 loops. This is fine, just something that needs updating in your playbook.
    However, after creating the 3 loops I get:
    root@ubuntu:/# mount /dev/vg1000/lv /mnt
    root@ubuntu:/# ls /mnt/
    ls: cannot access ‘/mnt/books’: Input/output error
    ls: cannot access ‘/mnt/@sharesnap’: Input/output error
    ls: cannot access ‘/mnt/music’: Input/output error
    ls: cannot access ‘/mnt/dropbox’: Input/output error
