Recovering a Synology SHR-RAID 1 array with only one disk

We’ve been looking at an issue with a two-bay Synology NAS where the second drive has been fried and the first one seems to be on the blink. We’ve removed the first drive from the Synology and plugged it into a PC running an Ubuntu Live CD to try to recover the data.

Synology utilises a combination of mdadm and LVM to create a flexible partition arrangement.

Basically we want to get to the LVM Logical Volumes, and to do that we need to work through everything else! (Hard disk -> physical partitions -> RAID arrays -> LVM Volume Groups -> LVM Logical Volumes)
Physical hard disk we can see – tick – /dev/sdb
Physical partitions we can see – tick – /dev/sdb5
RAID array – not yet…
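A quick way to check which of those layers the live system can already see (assuming, as here, the Synology disk came up as /dev/sdb):

```shell
lsblk                    # disks and partitions the kernel knows about
cat /proc/mdstat         # md RAID arrays, if any have been assembled
sudo fdisk -l /dev/sdb   # partition table of the Synology disk
```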

Let’s have a look at the software RAID instance:

root@ubuntu:/dev# mdadm --examine /dev/sdb5
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 1ec21baa:ad871faa:8775b362:fa5ab5e4
Name : DiskStation:2
Creation Time : Sat Mar 26 06:41:30 2011
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
Array Size : 2925531648 (2790.00 GiB 2995.74 GB)
Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : e3c4380e:05918049:c43ff76d:1b2b75c8

Update Time : Fri Oct 23 19:48:51 2015
Checksum : ca734edd - correct
Events : 27340
Device Role : Active device 32769
Array State : .. ('A' == active, '.' == missing)

Awesome, we can see the partition, so… let’s create a read-only loop device over the partition (skipping past the RAID metadata)…

losetup --find --show --read-only --offset $((2048*512)) /dev/sdb5
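The --offset there is just the mdadm “Data Offset : 2048 sectors” converted to bytes (512 bytes per sector), so the loop device starts where the array’s data begins:

```shell
# 2048 sectors x 512 bytes/sector = 1 MiB of md metadata to skip
offset=$((2048 * 512))
echo "$offset"   # prints 1048576
```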

Now we have the read-only partition on /dev/loop1.
Let’s check the LVM Physical Volumes...

root@ubuntu:~# pvdisplay
--- Physical volume ---
PV Name /dev/loop1
VG Name vg1000
PV Size 2.72 TiB / not usable 4.50 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 714240
Free PE 0
Allocated PE 714240
PV UUID 1lferB-uxUj-aBx2-w4bI-0AcY-mgog-ZyhuJa

Let’s check the Volume Groups:

root@ubuntu:~# vgdisplay
--- Volume group ---
VG Name vg1000
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.72 TiB
PE Size 4.00 MiB
Total PE 714240
Alloc PE / Size 714240 / 2.72 TiB
Free PE / Size 0 / 0
VG UUID q1jgmZ-7Rg5-n6o6-Q8b4-gDfR-J3ce-1zZR6O

Let’s check the Logical Volumes:

root@ubuntu:~# lvdisplay
--- Logical volume ---
LV Path /dev/vg1000/lv
LV Name lv
VG Name vg1000
LV UUID qQB4MX-z29D-FMF3-mICy-mn9t-WcA8-a0cs9H
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 0
LV Size 2.72 TiB
Current LE 714240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
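Here everything came up automatically; if lvdisplay had instead shown “LV Status NOT available”, the Volume Group would need activating first. A sketch, assuming the usual Synology volume group name vg1000:

```shell
pvscan               # rescan block devices for LVM physical volumes
vgchange -ay vg1000  # activate all logical volumes in vg1000
```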

All looking good, and now that we know where the Logical Volume lives we can mount it:

mount /dev/vg1000/lv /mnt/test
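If /mnt/test doesn’t exist yet it needs creating first, and since the disk is suspect a read-only mount is a sensible extra precaution (a variant of the command above):

```shell
mkdir -p /mnt/test                     # create the mount point if missing
mount -o ro /dev/vg1000/lv /mnt/test   # read-only: avoid writing to a failing disk
```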

job done!

Apologies for the format of this post; I’ve pulled it together from notes I made as I went down various blind alleys while attempting to recover the data.

Docker and Ubuntu UFW

I was playing with Docker last week and wanted to add a firewall in front of it. This was to allow me to expose some Docker containers so they could only be accessed from a restricted set of IP addresses. There are quite a few posts around that outline some of the deficiencies of Docker with UFW and make partial suggestions for working around them.

The main issue is that by default Docker will add iptables rules to allow external access to any ports that are exposed on Docker containers. Most information I found focuses on adding --iptables=false to the Docker daemon, which stops Docker from allowing access to all ports through iptables. This is only half the story, as there is then no information on how to enable access on a case-by-case basis. The most useful piece of information I found is a comment in an issue raised against Docker, which is what the instructions below are based on.


Goals:

  • Secure Docker behind Ubuntu’s Uncomplicated Firewall (UFW)
  • Avoid directly using iptables (iptables makes my head hurt)

Prerequisites:

  • Basic understanding of Ubuntu UFW (this link is useful too)
  • Basic understanding of Docker (all info here relates to Docker 1.8)
  • Basic Linux skills
  • Docker containers live on the 172.17.0.0/16 subnet (the Docker default)


1. Start Docker with --iptables=false (on Ubuntu, edit DOCKER_OPTS within /etc/default/docker and add --iptables=false)
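For reference, the edited /etc/default/docker ends up looking something like this (a sketch; keep whatever other options you already have in DOCKER_OPTS):

```shell
# /etc/default/docker (Ubuntu, Docker 1.8 era)
# stop the Docker daemon managing iptables rules itself
DOCKER_OPTS="--iptables=false"
```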
2. reboot the server to pick up the Docker changes
3. Enable UFW and configure rules:

ufw allow ssh/tcp
ufw allow from 172.17.0.0/16 # allow containers to talk to each other through exposed ports on the main server
ufw enable
sudo ufw status

4. Add the following iptables rules to UFW and change the default forward policy for UFW:

# if running at the command line
iptables -P FORWARD ACCEPT # allow containers to talk to each other directly
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE # allow outbound connections to the internet from containers

# otherwise
## add the following to /etc/ufw/after.rules (this file is in iptables-restore format, so the *nat table header and COMMIT line are required):
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# allow outbound connections to the internet from containers
-A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
COMMIT

## and in /etc/default/ufw change the default forward policy from DROP to ACCEPT:
DEFAULT_FORWARD_POLICY="ACCEPT"

5. Reload UFW to pick up the iptables rules (ufw reload)
6. Start any Docker containers
7. Add rules to UFW to allow access to their ports where appropriate (ufw allow 8080; ufw reload)
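Since the original aim was to expose containers only to a restricted set of IP addresses, step 7 can be narrowed by source address; 203.0.113.0/24 below is a documentation placeholder, so substitute your own trusted range:

```shell
ufw allow from 203.0.113.0/24 to any port 8080 proto tcp  # trusted range only
ufw reload
ufw status   # confirm the rule is in place
```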

That’s it: your firewall configuration will persist across restarts, keeping your Docker host and containers secure.