Recently we’ve been chewing over what a successful patch/upgrade cycle looks like for hardware appliances. We’re anchoring around Synology DSM here because it underpins day-to-day operations for plenty of SMEs, but the ideas are transferable to other platforms.

Done well, patching takes more TLC than many teams can spare. Having discovered this, we’ve found that approaching updates as an ‘artform, not a science’ helps us balance out the competing forces behind our patching strategy. Swinging like a pendulum between a ‘ticking time bomb’ of known security holes and a ‘want the latest feature set yesterday’ mentality isn’t really sustainable – especially when you consider that most appliances support virtualised estates, which want consistency from their underlying storage.

So, how frequently should you update DSM? What about minor & build updates? And all those wonderful new features?

It really depends on the services a particular DS is running, and what’s likely to be affected by the upgrades. This can be hard to forecast, particularly with a vendor as focused on marketplace trends as Syno. Reading the release notes thoroughly ahead of time really helps (these are now published for individual packages as well as for DSM itself).

From experience we’ve developed the following guidance. For a major version upgrade (e.g. DSM 5.2.x -> 6.x) it looks something like this (there’s a rough code sketch of the schedule after the list):

  1. Dedicated test appliances: upgrade these immediately if you have them. Even a 4-year-old single-bay DS is worth sticking v.next on if you’ve got nothing else 😄
  2. Home or residential appliances: 1-4 months from general availability. Your own home NAS, a friend’s, or a co-worker’s home Syno all fit in this category.
  3. Secondary appliances (non-mission-critical): 6-9 months from RTM, by which point there’s probably been a point-version DSM increase as well. A typical candidate here would be a DS which just stores backups or mirror downloads. NFPs, which tend to be more flexible than enterprises, are also good candidates to upgrade at this stage.
  4. Primary appliances (mission-critical storage): 12 months+ from RTM, inside a maintenance window or a commercially quiet period. An example would be a RackStation backing a main VM repository or a production LUN/NFS datastore.
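
To make that cadence concrete, here’s a minimal sketch of the schedule as code. It’s nothing Synology-specific: the tier names and month figures simply mirror the list above, and the release date in the example is a placeholder rather than an actual DSM GA date.

```python
from datetime import date, timedelta

# Rough encoding of the deferral guidance above; adjust the figures to taste.
DEFERRAL_MONTHS = {
    "test": 0,        # dedicated test appliances: upgrade immediately
    "home": 1,        # home/residential: roughly 1-4 months from GA
    "secondary": 6,   # non-mission-critical: roughly 6-9 months from RTM
    "primary": 12,    # mission-critical storage: 12 months+ from RTM
}

def upgrade_due(tier, release_date, today=None):
    """True once a major DSM release has aged past the tier's minimum deferral."""
    today = today or date.today()
    min_wait = timedelta(days=30 * DEFERRAL_MONTHS[tier])
    return today - release_date >= min_wait

# Placeholder GA date for v.next: a primary RackStation wouldn't be considered
# until a year on, while a home box is fair game within a few months.
ga = date(2016, 3, 25)
print(upgrade_due("primary", ga, today=date(2016, 12, 1)))  # False
print(upgrade_due("home", ga, today=date(2016, 6, 1)))      # True
```

In practice the decision is rarely this mechanical, but writing the tiers down somewhere (even as a script) stops them drifting over time.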

These timelines are flexible, and they emphasise the value of treating Synology products as an ecosystem of devices. The approach lets you collect trickle-down feedback: see how things go on the smaller boxes before going for the big boys, which helps de-risk the process. Since the codebase is described as the same for each appliance, it’s a decent test methodology, with the proviso that the feature set at the edges varies slightly as you move up the range with Syno (e.g. you don’t tend to find SSD cache or iSCSI in production use below the +/xs series).

The other factor which influences the speed of patch deployment is whether the box faces the public Internet. That’s a biggie, as it shrinks the window for deferring updates to around 3-6 months at most. With Synology, this can mean a major version upgrade rather than a build update, sooner than you’d prefer; for example, if security updates have been mothballed for DSM v.previous, which you are still running, you might be forced to go straight to v.next for Internet-connected boxes.
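
In code terms it’s just a cap on the tier’s normal deferral; the six-month ceiling below is our own working figure, not a vendor recommendation.

```python
# Cap on deferral for anything exposed to the public Internet (our figure, not Synology's).
INTERNET_FACING_CAP_MONTHS = 6

def effective_deferral_months(tier_months, internet_facing):
    """Shorten the normal tier deferral when the appliance is Internet-facing."""
    return min(tier_months, INTERNET_FACING_CAP_MONTHS) if internet_facing else tier_months

print(effective_deferral_months(12, internet_facing=True))   # 6 months, not 12
print(effective_deferral_months(12, internet_facing=False))  # 12 months
```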

The situation has become more complex since the release of DSM 6.x, with breakout packages for modules that were previously part of the core OS. There are pluses and minuses here, but either way there’s now an additional relationship between the core DSM version and the individual package versions, which is (largely) undocumented and needs to be ‘managed’.
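
For us, ‘managing’ it means keeping a hand-rolled record of which package versions we’ve already validated against which DSM release. A minimal sketch follows; the package names are real DSM packages, but the version strings are made up for illustration and aren’t an official compatibility matrix.

```python
# Hand-maintained record of package versions we've validated per DSM release.
# Version strings below are illustrative only.
KNOWN_GOOD = {
    # (package, DSM major.minor): last package version validated on that DSM
    ("Hyper Backup", "6.2"): "2.2.4-1213",
    ("iSCSI Manager", "6.2"): "1.2.3-0500",
}

def validated(package, package_version, dsm_version):
    """True if this package version has already been tested against this DSM release."""
    return KNOWN_GOOD.get((package, dsm_version)) == package_version

print(validated("Hyper Backup", "2.2.4-1213", "6.2"))  # True
print(validated("Hyper Backup", "3.0.0-0000", "6.2"))  # False: re-test before rolling out
```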

In an age of metrics, all this patching intuition feels slightly odd, but I’d suggest these ideas make for a more stable environment overall. There are plenty of other ways of approaching it, though; it’s not one-size-fits-all.

I should add this is based on our experience of DSM since version 4.0 went RTM in 2012. Most of the time the updates and upgrades have been reliable, but we’ve had our fingers burnt before, even with ‘minor’ dot-version upgrades: DSM 4.3.x destabilised iSCSI in our environment.