The latest DSM update makes btrfs drives unavailable on budget Synology models (synology.com)
254 points by dsego on April 13, 2021 | hide | past | favorite | 159 comments


Issue: drives become inaccessible on upgrade.

Issue closed: intended behaviour, won't fix.

I can't think of a worse act by a company that makes data storage devices than to intentionally and deliberately break data stored by their customers.

What a stupid and nasty action by a terrible company. They deserve to be sued into oblivion.


The only thing worse is the tone-deaf customer service responses, such as:

- Wanting root SSH access to someone's device to investigate
- Suggesting that getting a higher-spec Synology device would fix the problem

I use a DS918+ at home, which I got after having deployed clustered RS units for VM storage in a previous role. The company's recent moves (not certifying their devices with non-first-party hard disks, pushing more user-licensed features like VPN and email) and a feeling that software quality is in decline (an acknowledged bug in the Apple TV app hasn't been fixed in at least 8 months) have put me off.

It's a real shame that Synology seems to be following the Ubiquiti route of smashing their reputation to increase revenue.


Oh, I hate that "grant access" so much. At least to date they have never pushed back when I've said no thank you, and it's not been required.


You must have lucked out. I tried getting them to fix a Hyper Backup issue and sent them detailed logs of exactly what was breaking and how. It wasn't honoring the port setting, from what I recall.

They refused to even look at the logs if I wasn't willing to give them root access to the primary system. I told them that wasn't an option but I was happy to provide any log they wanted and they just never responded to the case again.


At the same time, the security advisor in Synology DSM spams you with notifications to disable the admin user account after creating another account with admin rights first.


I’ve used synology for 10 years and really liked them. But there’s quite a few annoying things popping up more frequently.

This is an example of a stupid error that I can’t turn off.

They’re starting to assume that all devices are internet connectable and subject to brute force. That seems insane to me and my synology is on a private network with no inbound connectivity. I don’t want to change from admin.


Your Maginot line is only providing imaginary security.

Actual IT security is in depth, and the simple task of having an admin account that is not the default user ID makes sense to me.

What zero days are out there that rely on a known user ID, and a known process ID for that user’s first process? You could defeat that by just following the advice from people that know better.


I’m not trying to provide absolute security as it’s for my home network.

If there’s a zero day ravaging my kids iPad then that’s the risk I take. And it’s acceptable given my situation.

“People that know better” for my situation seems pretty hard to qualify. I’m glad it makes sense to you, but since you have zero insight into my config and the changes required to implement it, I think it says more about the quality of your recommendations that you can give one without understanding the usability trade-offs.

Maginot lines suck for protecting countries, but simple, clear defenses are pretty darn handy for protecting home networks. I’ll take a disconnected NAS over a wonderfully synced, fully compliant NAS following every recommendation (that also includes making it network accessible).


Isn’t there a config page somewhere that lets you pick what warnings to show?


Not that I’ve found through searching their docs and walking through the UI, but I haven’t really dug deeply into the internals.


If Synology doesn’t resolve this PR drama soon, affected customers should indeed try the legal route.

Not sure if Synology has their ass covered via terms of service. Even if they did, it’s their fault people lost access to their data.

Interesting to see how this unfolds.


What kind of legal route is there though? The NASes were explicitly sold without support for btrfs (when I was looking at options, they were clear about that). They explicitly did not allow creation of btrfs arrays and did not expose any settings related to them.

To get btrfs, you had to use another btrfs capable NAS and then move the drives over to use that unsupported functionality (which was, again not documented and clearly marked as unsupported) and this patch has now removed a module that allowed for accidental use of that unsupported functionality.

What legal grounds do you have here?


> They explicitly did not allow creation of btrfs arrays and did not expose any settings related to them.

The problem here is that they actually did allow it, and did expose those settings.

Seeing that the GUI allows btrfs, even if the device was supposedly sold without support for it, might lead people to believe that the specs or documentation were outdated, especially since the current documentation is actually outdated on many other matters and cannot be relied on to 100% accurately reflect the current behaviour of the product.


Oh, so all those car commercials of a normal econobox showing high performance, zooming around hairpin turns - I should expect that from my own car because it was in the commercial?

Yeah, assume documentation is outdated. That makes sense.


I don't know what your issue is apart from maybe being a Synology shill.

Features change in software all the time.

What's so outlandish about someone getting a new unit, having it upgrade the software automatically on first run, and then finding the btrfs filesystem available for them to use for the array?

You'd either not think twice about it or thank Synology for opening up the new feature for these lower end units.

Taking the feature away like this is the absolute trashiest behaviour.


Your analogy is all wrong, since they advertised these as NOT supporting BTRFS, but they did anyway.

So your analogy should be "Car commercial showed a slow but practical car, you buy the car and it turns out to have high performance and can zoom around corners, then a year later a software update forbids you from driving on the road to your house that has a high speed limit so now you can't get home"


I still don’t think the analogy captures the reality of the situation. I’d argue it’s more like buying a car that advertised room for five passengers, but when it’s delivered you discover a third row giving you enough room for your entire family of seven. Until you take it in for an oil change, when they lock your kids in the trunk and try to sell you an SUV.


There's an underlying assumption that btrfs support on those devices was actually production level stable and ready for use.

(I know btrfs on ARM wasn't really all that great at times.)


How horrible it would be if commercials had to be truthful.


There's a theory that all adverts that are not simply informational are an economic loss. This presumes that vanity has no economic value. How can you legislate when people want to be lied to?


A few people seem to have confirmed this isn't actually true - at some point, it was possible to set up at least some of the affected NAS devices using btrfs via the standard GUI on the device. It's possible there were some intermediate software releases which disabled that but still allowed existing btrfs volumes to be used.


The way they have gone out of their way to deliberately remove the functionality could be argued to cause damages to people who are using the unsupported functionality.

There is no "downgrade" path from btrfs to ext4. Users have to move all their data off the NAS to some other storage, delete the btrfs volume, create a new ext4 volume and then copy the data back on.

These alleged damages are measurable in both time and the cost of storage rental.

Even worse, if you apply this upgrade without knowing that you need to do this procedure, your data is inaccessible until you contact customer support.
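The time cost above can be roughed out with simple arithmetic. A minimal sketch; the 8 TB volume size and gigabit link speed are illustrative assumptions, not figures from the thread:

```python
# Back-of-the-envelope cost of the forced migration: every byte has to
# leave the NAS and then come back over the network. Sizes and speeds
# below are assumptions for illustration.

def transfer_hours(size_bytes: float, link_bits_per_sec: float) -> float:
    """Hours to move size_bytes over a link, ignoring protocol overhead."""
    return size_bytes * 8 / link_bits_per_sec / 3600

one_way = transfer_hours(8e12, 1e9)   # 8 TB over gigabit ethernet
round_trip = 2 * one_way              # off the NAS, then back again
# one_way is roughly 17.8 hours, so the round trip alone is the better
# part of two days of transfer time, before rebuilding the ext4 volume.
```

In practice real-world throughput over SMB or rsync is lower than line rate, so these numbers are a floor, not an estimate.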


Wasn’t supported. There’s no standing here. If I find out my Tesla can change traffic lights for me due to a bug, and then Tesla fixed the bug, it doesn’t mean I get to sue Tesla for that.


Nope, this is like if higher models of Teslas had a more in-depth mode on the display for a couple of years. When you first buy your lower model, during the on-boarding process with the dealer, they walk you through the setup process and ask if you'd like the standard view or the more detailed one. You say you want the more detailed one.

Then six months later Tesla pushes out a software update overnight that removes that view and offers no way to restore the old one, or start your car, or unlock it.


Sue them for damages and see what sticks. Causing data loss like this is inexcusable.

As a customer in Germany I sure hope the consumer protection agency (Verbraucherzentrale) could make them sing.


Way too technical, will never get anywhere. Have better backups and don’t play cute games with important data storage.


No, the outcome is really simple. You had your DS420j humming along yesterday, then this DSM update comes around and installs overnight, and this morning you have no access to your files.

Synology provides no recourse, effectively bricking the device you own. See how the consumer protection agency likes that.

Being unable to access your files, you miss out on some important work or a deadline. See how the courts like that if you decide to bring a civil suit against Synology.


Judge: Where's your backup?


Is there any technical reason why it is not supported?

Also, just for curiosity, and without condoning what the company is doing, is there a good reason to use btrfs for a device like that? I was thinking that good filesystem repair facilities are important once you store a lot of data, and btrfs is inferior to ext4 on this? I find btrfs is fantastic if you have, say, a bunch of Yocto images, or a large video you are working on, and you want to quickly create and erase snapshots of many gigabytes of data, but what is the advantage here?


These "j" devices have like 512 MB of RAM and an ARM CPU. I believe the devices they do support BTRFS on all have 1-2 GB+ RAM and x86 CPUs.


Then how come it did work before? BTRFS as used in DSM is not particularly more memory intensive than EXT4 (which is why it did work before).


It works, but there is a risk (exacerbated if you use other features) that the device will become unstable. Depending on the btrfs features used, it can use considerably more RAM for caching filesystem structures. While this cache can be freed, performance drops as the system needs to retrieve the data from disk again. Considering reliability, reputation and performance, it makes sense not to allow it as a supported configuration. However, many vendors do this softly rather than as a hard lock: we won't let you create it (or we will, with a warning), but if you complain or log a support ticket it will be ignored until you can reproduce the issue on a supported configuration.

Synology can easily allow access to the volume. I would think the smartest decision when removing a feature (even a feature that was never intended to exist) would be to at least make the unit mount the volumes read-only and allow the data to be exported to the network, the cloud, or a USB drive.


I think it's RAM, not CPU. My DS418 has 2GB of RAM and an ARM CPU and supports BTRFS.


Compare that to the 4GB of a Raspberry Pi 400.


We already store a lot of data at home, and SSD and HDD storage sometimes has read errors. With terabytes of data we come across a handful of errors per year, most of which nobody would notice. I use btrfs everywhere I can and do see occasional errors, most often on heavily written files like databases (SQLite files) or virtual hard disks, sometimes even pictures or log files. Even without RAID you can restore the file from a backup (and if you try to back up a broken file, the computer won't let you).
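The detection half of what btrfs does with per-block checksums can be sketched by hand at the file level. A minimal, illustrative example (without RAID there is no self-heal; a mismatch only tells you which file to restore from backup):

```python
# Minimal sketch of file-level bit-rot detection: record a SHA-256 digest
# per file, then later flag any file whose current digest differs.
import hashlib
import os

def sha256_of(path: str) -> str:
    """SHA-256 of a file, read in 1 MiB chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict:
    """Map each regular file under root (relative path) to its digest."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = sha256_of(path)
    return manifest

def verify(root: str, manifest: dict) -> list:
    """Return the files whose current digest no longer matches."""
    return [rel for rel, digest in manifest.items()
            if sha256_of(os.path.join(root, rel)) != digest]
```

Run `build_manifest` periodically and diff against a stored copy; btrfs does the equivalent per block, on every read, automatically.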


> Is there any technical reason why it is not supported?

No familiarity with these devices, but there is at least a slim chance that, if it was an embedded device, they needed the flash or RAM for something else. I've seen this happen elsewhere before.


Not enough RAM, possibly the CPU they chose wasn’t fully tested.


Is it? They explicitly never supported the filesystem on the mentioned devices, and it was only possible because of a bug in their implementation.

I don't want to be on any side. But is it wrong to fix a bug?


Seems like it was a marketing bug (spec says: "only provide this feature to people who paid us more money"; implementation says: "everyone gets this feature"), but the "fix" is for the affected people to move everything off the affected devices, reset them, and copy the data back.


Legal ransomware. Nice.


I've had a less than stellar experience with their support too. I'm the author of a reasonably popular WireGuard plugin (https://github.com/runfalk/synology-wireguard) and in the next DSM version they are basically breaking all kernel module plugins without a migration path. A kind user has found a workaround and made a PR, but Synology have no interest in helping out with this (https://github.com/runfalk/synology-wireguard/issues/66#issu...).


Do I understand this right, DSM v7 intends to take away root access in some way? Ugh.

Another negative to add to the list. For me Synology has mostly been a disappointment, I'm sorry to say. Seems their software has been generally in decline for a while. DSMv7 was delayed over a year ago [1], and now they don't expect to ship until summer 2021. v7 doesn't seem to bring anything interesting to the table IMO, and now they want to take away root? Meanwhile the only thing I saw from v6 updates was a request for spying/telemetry [2]. Wanted to like Syno but it's too locked down, too expensive, too annoying.

My position [3] hasn't changed from last year - I don't see myself buying a Syno product again, nor recommending them to most of my technical friends.

[1] https://www.snbforums.com/threads/dsm-7-preview-delayed-unti...

[2] https://imgur.com/a/Yzgw7hq

[3] https://news.ycombinator.com/item?id=23738360


Reading that, it's pretty obvious they're blocking modules to try to kill off XPEnology. Sometimes Synology baffles me: the users that are part of the XPEnology project were never going to be paying customers anyway, and yet they go to extraordinary efforts to try to stop them.

The "this is for security reasons" line is just silly: none of the exploits they've been hit by to date, that I'm aware of, utilize or require inserting a kernel module into the system. And if they have the infrastructure to sign modules to allow insertion, they could extend that signing to you.

It's not like a nation state actor couldn't hack Syno corporate and get access to signing keys if they were after a targeted attack...


I'm terrified reading this, as I set up your wonderful plugin not long ago and it is working flawlessly. Very bad, Synology, very sad...


Glad you like it! Don't worry too much, as we currently have a workaround. Their SDK still allows us to build kernel modules, but the installation experience now requires manually switching the package post-installation to allow root access.

This of course assumes that Synology doesn't remove this option before DSM 7 is released.


The title is hugely misleading though - the "j" model documentation was always very specific that they can't support btrfs, and you could never initialize the array into that mode. (At least that was the case when I was buying my Synology, and I opted for a "+" model, which specifically supports it. This is the list of devices Synology offers with btrfs support: https://www.synology.com/en-global/knowledgebase/DSM/tutoria...)

The RAID array had to be moved from another device to work. Running your RAID array on a device that explicitly doesn't support it feels like really risky behaviour.


They still should've handled it differently. Show a warning to the user encouraging them to migrate, give a reasonable window before the feature is actually disabled, and don't let the users upgrade to a version which will break their storage.

Synology made a mistake which they admit:

> DSM incorrectly allowed administrators to migrate from a BTRFS-capable device and utilize previously created BTRFS volumes on the affected devices

And they're punishing users for it.


Yeah, that's the crappy part. My intuition says that they didn't deliberately plan to remove it and it was a side effect of some firmware update cleanup.


What is the reason it cannot be 'supported' anyway? It clearly was possible. BTRFS has been mainlined for nearly as long as EXT4, and incurs no particular overheads compared to EXT4.


Whatever reason they chose to not support btrfs on their cheap ARM models. Might be stability, might just be pure market segmentation.


Cheapest models have insufficient RAM for btrfs.


If that were the case, then why did btrfs work perfectly on them all until this update?


It does not require more RAM than ext4 in the configurations that DSM provides.


Source? How about cpu capabilities? Were they tested?


You're making the claim ;) A filesystem is, after all, mainly a layout for data on disk; no mainline filesystem does this in a way that is particularly consumptive of resources.

Specific features like RAID and deduplication are known to add to RAM and CPU requirements. Synology uses BTRFS for neither, so any performance differences are almost certainly not due to the specific fs.

I also have not seen any evidence of perf diffs.


> the j model documentation was always very specific that they can't support btrfs and you could never initialize the array into that mode

The problem was that the GUI didn't reflect that, which was the bug. And then they "fixed the bug" the same way that they "fixed the glitch" with Milton in Office Space


The GUI never allowed you to create such an array in the first place. If a user goes out of their way to put their storage array into an unsupported configuration, I get a lot less upset when the vendor makes that harder to do.


Even if it’s true that the GUI never allowed it (there are people disputing that), migrating disks between different products from the same NAS manufacturer isn’t what I’d call going out of your way. It’s one of the reasons to stay loyal to the brand when you upgrade. If it’s incompatible, it should be rejected at the start, or changed later with tons of communication and proper checks (like blocking you from further software upgrades until you migrate your array). Randomly breaking it is Synology going out of their way to hurt users.


What a stupid change. They claim support can help temporarily, but still.

Let's assume for a moment they want to go ahead and disable this accidentally enabled feature. OK, but they should have changed it such that:

- On existing installs, you could mount existing volumes but not create new ones
- They start showing a warning to users asking them to migrate
- If they are desperate, on new installs they could disable mounting existing volumes, or at least add a stern warning. (The downside: if your NAS died, you got a new one and migrated the drives, you can't access them again. So this is a hit-and-miss idea, depending on how keen they are to push disabling the feature.)

The good and bad news is that in all likelihood they didn't intentionally brick existing users; instead, some braindead engineer went "oh, that's not supposed to be enabled, let's disable it" and didn't think about or realise how it would affect existing users until it was rolled out. Good because less malice, bad because they should ideally have smarter engineers reviewing such things.
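The "don't let users upgrade into a broken state" idea above amounts to a simple pre-upgrade check. A hypothetical sketch; none of these names are Synology's actual APIs, and the volume list and supported-filesystem set are illustrative:

```python
# Hypothetical pre-upgrade check: refuse to install an update that would
# drop support for a filesystem any existing volume depends on. Names
# and data structures are illustrative, not Synology's real interfaces.

def upgrade_allowed(volumes, fs_supported_after_upgrade):
    """Return (ok, offending): ok is False if any existing volume uses a
    filesystem the new firmware would no longer mount."""
    offending = [v["name"] for v in volumes
                 if v["fs"] not in fs_supported_after_upgrade]
    return (not offending, offending)

volumes = [
    {"name": "volume1", "fs": "btrfs"},  # e.g. migrated from a +-series unit
    {"name": "volume2", "fs": "ext4"},
]
ok, offending = upgrade_allowed(volumes, fs_supported_after_upgrade={"ext4"})
# ok is False and offending is ["volume1"]: the updater should warn and
# refuse to proceed rather than install and strand the data.
```

A check like this costs a few lines in the updater and would have turned a data-availability incident into a warning dialog.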


Ouch, this is terrible. I have been happy with my Synology NASes over the last decade or so, and their tech support, but this really, really, rubs me the wrong way. I am not affected by this act of deliberate vandalism, but Synology's reputation has been very seriously tarnished; this is close to the worst possible thing you can do as a storage provider. I cannot in good conscience recommend their products after this. Very disappointing.


[flagged]


That's not true. I have a j and remember seeing the btrfs option when creating my volume. I remember it because I was not familiar with that file system and took the time to read up on it. Decided on ext4 anyway because it's what I knew. So at least from my POV, the UI did allow creating a volume and let you get into this state.


That's awful behaviour! I've been researching NAS devices and until now, Synology has been on my preferred list. Reconsidering that position now - I've never heard of a company breaking something so significant with a minor version upgrade and without warning.


I’ve been thinking it was about time to replace my current Synology device, and until seeing this was planning on sticking with them. Now I’m hoping to see recommendations for an equally low maintenance replacement in the comments.


Wow, this is beyond unacceptable and borderline illegal. Even though I'm a very happy owner of a Synology, my next NAS won't be one, for sure. What are good alternatives for home usage? QNAP?


I built a DIY NAS in a small cube case (Fractal Node 304), which can house a mini-ITX motherboard and six 3.5" drives, and has pretty good (and quiet!) cooling for a case that size (two 92mm fans in front and a 140mm in back). The drive bays are not hot swappable, but are very easy to replace when the top is taken off the case.

For the OS, I put openSUSE on it, set up a btrfs pool and configured SMB, NFS, FTP and SSH/SCP manually. It also functions as my HTPC, with Kodi as the frontend.

The Celeron J4105 is passively cooled and rarely sees 50C even under full load, plus it has hardware video decoding, so it can play back 4K video all day with no strain.

It's not a turnkey solution like a Synology or QNAP box, but it's a lot more flexible.


Interesting. What is roughly the total price, the average power consumption, and what is the main advantage compared to ARM systems? Am I correct to assume the Intel chips still draw a lot more power than the single-digit Watt numbers of ARM systems, which would become a substantial accumulated cost over the years?


The price for the hardware was DKK 2,500 or $400. Synology doesn't have a 3.5" 6-bay NAS, but their cheapest 4-bay is the DS420j at DKK 2,300, or around $370. So while that is slightly less expensive and probably uses less power, it also has less room for disks and is significantly less flexible. If you have modest demands and just need something to store your files, it'll work.

Or you could go for the DS1612+ to have room for 6 disks like in my build, which will set you back DKK 6,900 or $1,100. And that uses an AMD Ryzen CPU, not ARM. On the upside, it also has transcoding and media playback functionality like my build, and some features my build doesn't have, like an upgrade to 10gbit ethernet.

I'm not counting disks in these prices, since I used disks I already had, which I would also have done if I had bought a prebuilt NAS.

The TDP for the Celeron J4105 in my build is 10W, representing "the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload". It very rarely runs at anything approaching a high-complexity load, even when transferring files over gigabit ethernet. It idles at around 30-35C passively cooled aside from the case fans, which run inaudibly at 500rpm.

Obviously the chipset and RAM also draw some power, as do the disks and fans and so on, but it compares very favorably to the quoted 51W under load/25W idle for the DS1612+. That is why I specifically chose a motherboard with that class of CPU.

Let's say the 51W figure is the same for my build and assume a worst-case load, running at full power 24/7. With the current electricity prices here, that's DKK 773 or $125/year.

Compare to the DS420j, which is quoted at 22W, DKK 335 or $54/year, for a device that is significantly less capable.
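The electricity figures above are internally consistent, which is easy to check. A quick sanity check; the DKK/kWh rate is backed out of the commenter's own numbers (DKK 773/year at 51 W), not an official tariff:

```python
# Verify the running-cost arithmetic: watts -> kWh/year, then price out
# the 22 W DS420j using the rate implied by the 51 W / DKK 773 figure.

def yearly_kwh(watts: float) -> float:
    """Energy consumed per year at a constant draw, in kWh."""
    return watts * 24 * 365 / 1000

rate = 773 / yearly_kwh(51)      # implied price, about DKK 1.73 per kWh
ds420j = yearly_kwh(22) * rate   # DS420j quoted at 22 W
# yearly_kwh(51) is about 447 kWh, and ds420j lands close to the quoted
# DKK 335/year, so the two figures agree.
```

The worst-case assumption (full load 24/7) overstates real consumption considerably for a mostly idle NAS, so actual costs would be lower for both machines.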

And my build is not just a NAS. It also serves as my HTPC and couch gaming machine for retro and emulated games, among other things. Plus it's 100% under my control, I get to decide what's installed and which features are enabled/disabled. For me, the choice is obvious.


> It's not a turnkey solution like a Synology or QNAP box, but it's a lot more flexible.

That is not even the biggest advantage. The biggest advantage, in my opinion, is that you will always get updates and can run that for an unlimited time, without breaking your setup.


Building your own is a cool thing, and gives you many benefits. But updates and Linux are very prone to breakages: there are too many hardware and software combinations to expect a solid experience.


That is why I chose very commodity hardware and run openSUSE Leap 15.2 on it. I wanted a solid distro with a reasonable update policy, not something that pushes 100+ new updates every week. Debian Stable or perhaps Testing would also have been good choices, but I like YaST.

In the interest of full disclosure, I have had one issue with my build. The motherboard has four SATA connectors: two from the Intel chipset, which work flawlessly, and two from an additional ASM1061 chipset, which don't. They seem to work initially, but when you start pushing larger amounts of data through them, errors start to pop up.

As far as I can tell, it's either a driver issue or a BIOS issue; I can't remember whether I got the same errors on an ASM1061-based 2-port SATA PCIe expansion card or not. Either way, I replaced it with a Marvell-based 4-port card instead, and just avoid using the ASM1061 ports for now.

Next time I add a disk to the system, I'll connect it to the potentially troublesome ports and run some tests, to see whether it was a driver issue that has been cleared up.

So yeah, not perfect, but good enough for me :-)


My home "NAS" running openSUSE has been continually upgraded since something like openSUSE 12. I have data drives and I have an OS SSD that's only 50GB. It's rock solid.


> But updates and Linux are very prone to breakages - there’s too much hw and sw combinations to expect solid experience.

That seems like FUD to me. I have been running Linux for over 20 years. A NAS's hardware interfaces are mostly HDD and network drivers, and I have not seen those break in many years.

That goes especially for PC-platform hardware, or ARM SoCs where Linux is the dominant OS. I could understand NAS vendors saying that in their advertisements, but what is the substance of claims like this? Especially since these things run Linux as well.


There are way more things that can go wrong, not just drivers.

I've run a ThinkServer machine with CentOS and ZFS for several years. Kernel and ZFS updates would break it regularly, requiring tinkering to make it work again (my favorite: zfs#8885).

Nowadays I prefer when somebody else did the integration work.


Well, I see better what point of view this is coming from.

However, ZFS is not part of the standard kernel; IIRC there are still license issues and it needs to be built separately, which is not necessary with any normal file system. I enjoy some amount of tinkering too, but I believe it is expecting too much that everything works out of the box with non-standard kernel modules.

In the same vein, I also do not understand entirely why people want to have something still experimental like btrfs precisely on their NAS, and apparently use that without an extra load of backups. btrfs has many qualities, but it is still not considered as robust as ext4 (the current standard Linux file system), and for a long time it had practically no recovery option if file system data became corrupt. For me, it is very clearly not the first choice for a NAS or server which should run completely hassle-free. IMO, everything one needs for that is on Debian stable, so why not use that? And why in heaven should one make oneself dependent on what a specific hardware vendor does?


The nice thing about RHEL/CentOS is that their kernels have stable ABIs (kABI). That allows third parties to provide binary kernel modules, and the OpenZFS project actually provides the kABI-tracking kmod repo, so you can have an installation without any developer tools installed (which I prefer). So the problem wasn't really with kernel modules; the problem was the integration with the system, and nobody really cared: the above-mentioned issue is still open, 2 years later.

Another nice thing about CentOS is that it was supported for 10 years. Debian is supported for only 3. It takes 5 years for Synology to get a new release done, so they also have kinda-long-term support.


Btrfs is stable and has been widely used in production for years. Facebook uses it, SUSE CaaS uses it, the officially recommended platform for SAP is SLES, which has defaulted to btrfs for the root filesystem since 2014.

Btrfs brings snapshots, rollback, deduplication, storage pools, checksumming, send/receive and a whole host of other features. It's a completely different type of filesystem to ext4.

Calling btrfs "experimental" at this point is like calling electric cars "experimental".


I wish I could do the same with a CPU that supports ECC RAM. Unfortunately, a fanless CPU with ECC support (the Atom x6000 series, for example) is almost a chimera, or really, really expensive.


I built a custom home server using mini-ITX form factor parts and I would have to agree with this conclusion. ECC RAM in that form factor is very expensive. The biggest issue I had was that fanless ATX power supplies are not reliable. There was also a hardware bug that was bricking Atom C2xxx processors.

I think that if form factor is not an issue, one of the i3s that support ECC RAM has a low enough TDP to be cooled passively, especially if it is not expected to run at full load for long periods. However, I simply switched to using a big case with low-RPM, good-quality FDB fans. It hasn't been a problem since, and is much less cramped when I do want to tinker with it.


I would absolutely go for a micro-ATX or even full ATX board if that allows you better choices in hardware to get ECC or other features.

My goal was to build something not too much larger than a 4-6 disk NAS, that would also function as a HTPC, sitting under my TV. That meant small size and quiet operation were the main priorities. As none of the popular consumer NAS devices use ECC RAM, I reasoned that I would do without it in my build, too.

The fans are running on the lowest variable setting controlled by the BIOS, it's whisper-quiet even with your ear next to it, but I've got some even quieter fans from be quiet, that I'm planning to swap in. The PSU is also from be quiet!, and with the placement deep in the case, with the 120mm fan oriented downwards, it's completely inaudible.

Had I not been constrained by space and noise concerns in my small apartment, I would have built a larger, more powerful and ECC-equipped machine to be hidden away in an equipment rack in the basement.


Bingo, this is what I recommend. It's much more flexible.


I switched from Synology to QNAP about 5 years ago, after my Synology NAS decided to stop reading disks using their proprietary RAID and Synology support said "sorry, we don't know how to fix it". Lost 8 TB of data that day.

Two lessons learned: 1) Don't use proprietary raid, 2) Don't ever use Synology devices.

QNAP has been great though, I've been very happy so far and zero issues.


Synology uses bog-standard mdraid. You can read the volumes on any linux machine.


Maybe I'll have to take another look at it when I have some free time, although I vaguely remember something not working right with the standard mdraid tools. According to this post, it seems to be a slightly customized version: https://serverfault.com/questions/568166/how-to-recover-an-m...

Thinking back on this situation made me remember the terrible thermals in that old NAS. I had to replace the (fanless) PSU 2 times because it was always blowing up and creating sparks/smoke. Lol good times.


They have it customized; they have also integrated btrfs with md, which is not in the stock kernel, but these are runtime things. Not to say there aren't ways to shoot yourself in the foot (e.g. be careful when creating an ext4 volume with an external machine: the target NAS might have an older kernel that doesn't support half of the newer ext4 features, including 64-bit support).

I never had any problem with thermals or psu, but I've heard other people had psu problems.


On their older models, the web UI did something funky with LUKS encryption passwords, meaning you couldn't actually just mount the disks in another Linux computer, as you could never unlock them. I have memories of us having to do the encryption setup and unlock via ssh so we could mount the drives in something else in the event of a failure (though we retired that box over 5 years ago so my memory might be rusty).

edit: my memory _is_ rusty; it was a QNAP box we had to do that with not Synology.


Last I looked into it, QNAP had some serious outstanding security issues that they weren't acknowledging and weren't fixing; that turned me off of them fast. If Synology isn't to be trusted (this plus starting to lock down part support, like allowing only their own HDDs), the only thing that remains seems to be building your own and running TrueNAS or Unraid.


I’ve come to the same conclusion. Once my DS415+ dies, I’ll build one on my own. Synology won’t see another dime from me.


DIY is the best route. Anything proprietary is going to end in disaster when they just decide they don't feel like supporting it anymore.

It can be cheaper than off the shelf NAS units too.


QNAP isn't as popular as Synology, but I manage several, from 2 to ~20 disks, and they have been great.


I have always used Linux PCs. The tricky thing is finding a good small case that can support enough drives without breaking the bank. Currently I have an old Haswell i3 in a MiniITX Fractal Design node 304 case - it's fine, but it only supports 4 drives.


I'm also running a Debian-based NAS at home. If you want to expand beyond 4 drives I can recommend the 8-bay U-NAS chassis: https://www.u-nas.com/xcart/cart.php?target=product&product_...

Same Mini-ITX form factor, 8 hot-swappable drives, pretty solid build quality in my experience.


So did I, until I decided I'm too old for dealing with it. I've been working professionally with Linux systems for 20 years and counting, and finally decided that at home I just want an appliance that just works, even if it doesn't give me all the features/flexibility I'd want.


I have been working professionally with Linux for 22 years, and I have tried the same with a NAS, and found that it has just so many annoying quirks and shortcomings that I am definitively far better off, and waste far less time, if I just use standard Debian on standard hardware. No need to work around the shortcomings of a prehistoric NFS implementation or things like that... no need to set up things from scratch because some hardware breaks or there are no updates for the system it runs.


Doesn't the Node 304 support up to six 3.5" drives? I have one myself.


Unraid seems to be popular; it's where I'd look, but I'm not quite ready to jump off the Syno ship yet.


I've been very happy with my Asustor so far, although I don't have anything to compare it to.


TrueNAS?


I built a 6 disk TrueNAS Raid-Z2 box two years ago and though it was inconvenient to build vs buying something off the shelf, TrueNAS has been excellent. Does what it says on the tin and continually improving with each release.

Their cloud sync and UPS functionality have been great (flaky power around here). Haven't dug into the TrueNAS' ZFS pool/snapshot management and jails management (FreeBSD's containers) functionality, but both are compelling once I get time and a good project for them. The NAS performing so well has me itching to upgrade to 10 gigabit ethernet around the house once prices come down a bit.

Also I ran the OS off mirrored thumb drives until recently. That was pretty nifty.


I've been thinking about going down the TrueNAS route (the TrueNAS Mini series looks really nice), but it seems a little expensive. Is the extra cost worth the convenience of having a machine that you know will stay compatible with FreeNAS?


Here's a detailed review that answers that question: https://www.servethehome.com/ixsystems-truenas-mini-x-zfs-na... (the video linked there is also good)

On page 5 they tally it up as a $400 difference vs building your own.


A custom case, labeled drive cages (with software identification), an integrated and tested component BOM, a 1-year system warranty... for a $400 premium (that's 8 hours if I value my time at $50/hr). Worth it imo.


You can use a Raspberry Pi (perhaps a Pi 400) if you work around the fact that the internal flash memory is not made for constant writing. Thus, you have to arrange the system so that it is read-only in its normal state, which is not difficult. The Pi 400 also has the advantage of a built-in keyboard, so you can cleanly shut it down without logging in, which makes a lot of sense for a storage device.

Also, for RAID1 you need an extra powered USB hub. The normal power supply of the Pi 400 is not sufficient for more than one disk.

Then just install software RAID, NFS, and possibly rsync and Samba. A simple NAS built this way comes to about €240, depending on disk size, and will be mostly equivalent to a commercial one at twice the price. As an advantage, you run well-known, standard software which is completely under your control. And since Raspbian is Debian-based, it can be upgraded easily without breaking stuff.
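The steps above might look something like this (a sketch only, not meant to be run verbatim; /dev/sda1 and /dev/sdb1 are assumptions, so check your actual device names with lsblk before running anything destructive):

```shell
# Mirror the two USB disks (RAID1) -- destructive, double-check device names!
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Journaling filesystem on top, mounted at an export point
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/nas
sudo mount /dev/md0 /srv/nas

# A minimal Samba share would then go into /etc/samba/smb.conf:
# [nas]
#     path = /srv/nas
#     read only = no
```

After editing smb.conf, `sudo systemctl restart smbd` picks up the share.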


Whatever you do, don't do this.

The Pi 400 is a brilliant machine, but this is not a way to make networked storage for data you want to keep.

If you want to use a Raspberry Pi for network storage, and you want to keep your data, make sure you have n+1 pi & disk machine.

Raid isn't going to help you here, you need redundant nodes. For the same €240 you could get two minimum-spec Pi 4s, PSUs and USB HDDs.


> make sure you have n+1 pi & disk machine.

Why? The things that could fail are the disks. So, one can use a RAID1 with two disks, and this should, for normal applications, be enough. I had a DNS-232 before as a NAS; it had 32 MB RAM, a tiny ARM CPU, and two large disks. It was working nicely for about ten years, and one could even run a web server plus a (slow) wiki on it (I had to replace the casing once, after a botched firmware update). The main problem I had with it was that it was impossible to update (I was running Alt-F firmware, which was developed by the community after the original firmware had constant problems with unneeded RAID synchronization, and the vendor had to release its code due to being bound by the GPL).

The Pi 400 is wildly more powerful, has a much faster CPU which draws only a few watts, and can easily be connected to several USB disks.

> Raid isn't going to help you here, you need redundant nodes.

Why exactly? My set-up is working well.... did I miss it needs to be somehow lighter than air (because heavier-than air flying machines are not possible, y'know)?


> Why exactly? My set-up is working well

congratulations I am very happy for you!

The issue is not speed, it's the USB disks. Firstly, you are reliant on the disks presenting themselves in the same order at the same time (historically that was challenging). ZFS can cope with this quite well; mdadm, not so much.

You can use device IDs, which gives better results. Another issue I've bumped into is that mdadm has come up before the USB disk was initialised, causing all sorts of issues. In short, RAID over USB is just plain fragile.

> Raid isn't going to help you here, you need redundant nodes.

This setup will have four common failure modes:

o power issue leading to corrupt "/" or any other SD card failure

o usb disk breaking

o OS crash borking everything

o Accidental deletion

Raid might protect against a USB disk breaking, assuming you spot it in time (most people don't have monitoring). It's not going to give you a performance boost either; it's USB 3.

This is why n+1 is better. Assuming you're not keeping the nodes next to each other, you are much less likely to experience a simultaneous outage.


> firstly you are reliant on the disks presenting themselves in the same order at the same time.

In Linux, which is what Raspbian is, it is standard nowadays to use UUID disk labels.
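For example, an /etc/fstab entry pinned to a filesystem UUID (the UUID below is a placeholder; `blkid` prints the real ones) mounts correctly no matter which /dev/sdX name the disk happens to get:

```
# /etc/fstab -- replace the UUID with the output of blkid
UUID=0c5e1a2b-3d4f-4a5b-8c6d-7e8f9a0b1c2d  /srv/nas  ext4  defaults,nofail  0  2
```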

> Another issue I've bumped into is that mdadm has come up before the USB disk is initialised, causing all sorts of issues. in short raid over USB is just plain fragile.

No issue with that. I guess systemd handles this. Only thing I found is that the hub needs to provide enough power on booting up.

> power issue leading to corrupt "/" or any other SD card failure

For that, I have a backup of the SD card. I am using an "industrial" SD card, by the way, which costs a few bucks more. Also, the card is running in read-only mode, using unionfs. Logs are written to a partition on the RAID disks.

> usb disk breaking

That's handled by RAID1 and backups.

> OS crash borking everything

I've never seen that on Linux, in 22 years using it. The closest I came was botched EFI parameters from an Ubuntu install gone wrong, on a desktop PC. Wait, I also had a problem around 2003 with a broken ISA disk controller. Ah, and I am using ext3, which is a journaling file system.

> Raid might protect against a usbdisk breaking

Yeah, that is what it is for. It is not a substitute for backups (but nobody said something like that, I think). In fact, it is easier to make backups from a Pi, using the standard Linux tools like tar, ssh and rsync.

> its not even going to give you a performance boost either, its USB3

For my purposes, it does not need to be faster (and it is much, much faster than the old DNS-232). The Pi 400 has Gigabit Ethernet and plenty of RAM. I am running a MoinMoin wiki instance on it, and it is more than fast enough for personal information management.

It may well be that a small thing like that is not fast enough for a medium-scale business with, say, a dozen clients. But that was not my requirement. I wanted a solution which is very low-power, and the Pi 400 probably draws one or two watts when idle with the disks spun down:

https://en.wikipedia.org/wiki/Raspberry_Pi#Specifications

https://www.raspberrypi-spy.co.uk/2018/11/raspberry-pi-power...

> This is why n+1 is better

So, what kind of set-up are you exactly talking about, a distributed system of some kind? A storage cluster? Ceph? DynamoDB?


Just to complete this:

As said, it is not a good idea to leave the root file system of the Raspberry, which is on an SD card, writable for heavy-duty, long-term tasks where reliability is important. It is not made for that. Among other things, this includes logging.

Instead, I am using this script:

https://github.com/fitu996/overlayRoot.sh

which mounts an overlay filesystem with the bottom layer being the root fs and the upper layer a new fs on writable media (a RAID partition in my case), and swaps the overlayfs with the root fs. That means that from then on, the SD card of the Raspberry is not modified, and all changes go to the RAID disk. If you want to modify the system configuration, you need to reboot with a different kernel parameter, which is detailed on the linked page.
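Conceptually, the end result is an ordinary overlayfs mount; a simplified sketch (the paths are illustrative, and the linked script performs this pivot during early boot rather than as a single command):

```shell
# lowerdir = the read-only SD-card root, upper/work = writable RAID partition
mount -t overlay overlay \
    -o lowerdir=/media/sdroot,upperdir=/media/raid/upper,workdir=/media/raid/work \
    /new_root
# Writes land in upperdir on the RAID; the SD card underneath is never modified.
```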


> > OS crash borking everything

> I've never seen that on Linux, in 22 years using it

It's surprisingly common, especially when a machine is under load and spending ages flushing to disk. A kernel oops happens and bam, broken partition.

> So, what kind of set-up are you exactly talking about, a distributed system of some kind? A storage cluster? Ceph? DynamoDB?

Noooooope!

just a second machine, with a copy of whatever you want to keep. when you are doing backups, have each one as a round robin target.

Ceph is not something you'd want to run for redundancy/reliability. I've spent many years supporting "HA" filesystems. The simpler, the better (unless it's GPFS, that's actually good.)


> It's surprisingly common, especially when a machine is under load and spending ages flushing to disk. A kernel oops happens and bam, broken partition.

1. That is what journaling file systems like ext4 are for. They exist exactly to prevent unfinished writes of metadata from corrupting the file system.

2. Of course, it is possible that the Linux kernel goes all bonkers, writing random stuff to your disk and firing up nuclear missiles. But I have never seen that. I also have never seen a kernel oops on a NAS. Kernel crashes are usually caused by faulty device drivers.

3. Also, in this aspect, there is no technical difference at all from a commercial NAS, since they use Linux as well. They also use ARM CPUs, like the Raspberry, since these save power and do not need a fan (which in turn makes them less prone to failures).

> just a second machine, with a copy of whatever you want to keep.

Keeping backups is something I would seriously recommend for any kind of NAS or server.

I really do not see how your argument applies in any way to a NAS based on a Raspberry Pi. The main difference I see is that it is substantially cheaper, and better suited to people who want to install extra software and occasionally like to tinker a bit. And I am obviously not recommending that the Central Bank of Canada or an institution like that uses it.


> The issue is not speed, it's the USB disks. Firstly, you are reliant on the disks presenting themselves in the same order at the same time (historically that was challenging). ZFS can cope with this quite well; mdadm, not so much.

I have checked, and mdadm uses UUIDs to identify partitions on USB disks:

https://serverfault.com/questions/739580/mdadm-can-i-reconne...

https://serverfault.com/questions/460138/mdadm-disk-configur...

https://www.linuxquestions.org/questions/linux-software-2/md...

Here is how it looks on my array:

   root@arau:~# mdadm --detail /dev/md1
   /dev/md1:
              Version : 1.2
        Creation Time : Thu Dec 10 08:56:22 2020
           Raid Level : raid1
           Array Size : 4821105600 (4597.76 GiB 4936.81 GB)
        Used Dev Size : 4821105600 (4597.76 GiB 4936.81 GB)
         Raid Devices : 2
        Total Devices : 2
          Persistence : Superblock is persistent
   
        Intent Bitmap : Internal
   
          Update Time : Thu Apr 15 11:18:54 2021
                State : clean 
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
   
   Consistency Policy : bitmap
   
                 Name : arau2:1
                 UUID : 7c1e57ea:1f7ebff2:415c1b0a:8f959031
               Events : 21463
   
       Number   Major   Minor   RaidDevice State
          0       8        2        0      active sync      /dev/sda2
          1       8       18        1      active sync     /dev/sdb2
   
   root@arau:~#

   root@arau:~# mdadm --examine /dev/sda2
   /dev/sda2:
             Magic : a92b4efc
           Version : 1.2
       Feature Map : 0x1
        Array UUID : 7c1e57ea:1f7ebff2:415c1b0a:8f959031
              Name : arau2:1
        Creation Time : Thu Dec 10 08:56:22 2020
        Raid Level : raid1
      Raid Devices : 2
   
    Avail Dev Size : 9642249512 (4597.78 GiB 4936.83 GB)
        Array Size : 4821105600 (4597.76 GiB 4936.81 GB)
     Used Dev Size : 9642211200 (4597.76 GiB 4936.81 GB)
       Data Offset : 264192 sectors
      Super Offset : 8 sectors
      Unused Space : before=264112 sectors, after=38312 sectors
             State : clean
       Device UUID : 52594d2c:77f9c0cc:0b4243f6:6a8e6bcb
   
   Internal Bitmap : 8 sectors from superblock
       Update Time : Thu Apr 15 11:18:54 2021
     Bad Block Log : 512 entries available at offset 32 sectors
          Checksum : ca81c1ad - correct
            Events : 21463
   
   
      Device Role : Active device 0
      Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
   root@arau:~# 
You see the second UUID entry? It is a USB disk.

And I looked into the mdadm changelog and git log, and I did not find any reference to problems like the ones you describe since 2014. I am a bit confused why you refer to hypothetical problems, or ones which have not existed for years.
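Pinning the array by that UUID in /etc/mdadm/mdadm.conf (using the Array UUID from the --detail output above) makes assembly independent of enumeration order:

```
# /etc/mdadm/mdadm.conf
ARRAY /dev/md1 UUID=7c1e57ea:1f7ebff2:415c1b0a:8f959031
```

On Debian-based systems, run `update-initramfs -u` afterwards so the initramfs picks up the change.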


Clean shutdowns are indeed preferred, but synology devices will do a clean shutdown (delayed as required) if you press and hold the power button until it beeps and starts flashing, then release.


Yeah. That is a problem with older Raspberry Pis, which do not have a keyboard. It is possible to use a GPIO port for this and shut it down with an attached button, but one has to be careful not to use the port unless it is configured properly, because that could cause a short circuit, and the R-Pi is not protected against that. Having a keyboard on which you can type Ctrl-Alt-Del is much easier and will work in most relevant circumstances. In modern Raspbian editions, this can be configured in systemd.


I'd be really curious to know why people are downvoting this. I can't see any technical reason against it. Of course, for some people this might be too complicated or too much hassle in comparison to buying a more expensive commercial thing. But for other people, the cost savings might be worth the work. It is also easier to keep an almost completely open source solution fully up-to-date.


I bought a DSXXX+ instead of a DSXXXj last year specifically to _get_ btrfs based on this

https://www.synology.com/en-us/products/compare/DS220+/DS220...

I guess at some point they forgot to turn the feature flag for btrfs off on "j" models and now they are "fixing" that?

Seems doable but with a lot of communication & control for customers and at a very minimum it should go read-only. Super bad for their brand and their customers.

Synology users, consider disabling automatic DSM updates. I have mine set to email me when there's a new version. Works fine.


No, it was never possible to create btrfs volumes on the j-models.

What was possible was to put the drives into a plus model, create the volume, and migrate the drives into a j model. Even migrations that are not supported are pretty much possible. Or were; with the update, not anymore.


Wow, the bots are out in full force on this thread.

I've seen the proof on twitter and elsewhere it was 100% an option in the GUI. Enough of your gaslighting, Synology.


Yeah, bots, everyone who doesn't subscribe to hivemind or doesn't join the mob is a bot, sigh...

I can show you a similar screenshot. That does not make it genuine. I used to own a value-series Synology NAS, and it was not possible to create a btrfs volume. For extra fun, it was also possible to create only 32-bit ext volumes (i.e. limited to 16 TB; if your pool had more, you had to create separate volumes), and it didn't support some ext4 features, so you had to be very careful when creating a volume on an external device.


Generally, you should always disable auto updates.


Seems a rather flagrant disregard for users/customers. The number one priority for any NAS is to never lose a user's files.

So what's a good alternative, preferably with ZFS support?


TrueNAS (formerly FreeNAS but got a rebrand). Built around ZFS and FreeBSD, it's a really solid and more focused NAS product. DSM has more general features like their whole surveillance software and such, but with a touch of irony NAS for important data is the main thing I didn't ever trust Synology for given the lack of ZFS. TrueNAS can do some other stuff but primarily it's about doing NAS right and fast, and if you want something dedicated to that role and more turnkey (obviously you could just set up any system with Linux and ZoL or FreeBSD naked and NFS yourself) it's well worth a look.

It's BSD-licensed, and iXsystems is the primary developer behind it. They do offer some of their own hardware, so you can get a similar "commercially supported buy and turn on" experience to Synology if you'd like, though I don't think the value proposition from them right now is the best for home users or certain SMBs. But it's there. And unlike DSM there are no workarounds needed to run it wherever, which is another huge tick in its favor IMO. With something as critical as a NAS I don't mind paying a company for support or hardware, but the lack of official easy exit options makes me nervous. For reasons like, well, this?


Wow, OK, I'll never buy anything from Synology.


Stupid question:

Does a dedicated NAS have any advantages for a power user? Buy some cheap Ryzen box, stuff it with hard drives, install CentOS and forget about it...


For engineers with enough time (and able to do troubleshooting), no, no advantages.

For everyone else, they do have advantages: most things you want are one click away, including things that are tedious for the average "power user", like "using btrfs safely with a RAID-5-like disk array" (so that you can take a snapshot of your drive), mentioned here.


Electrical power consumption is a point worth looking at.


It's actually very user-friendly. You can buy a DSM license to install on a normal machine.


> You can buy a DSM license to install on a normal machine

Do you have a link to a licence sales page?


I'm quite positive I've seen it for custom hardware.. but it seems VDSM is only available (officially) for Synology devices.

https://www.synology.com/en-global/products/VDSM_License_Pac...

I know xpenology uses the official software, but I'm not sure if that is legal.


What's my best option if I want to buy a compact, energy-efficient NAS on which I can install Linux? I just want the hardware, and not pay for the vendor's crappy, inflexible software.


I got a Helios64. It's a little rough around the edges but it works. Also runs FreeBSD.


That looks nice! I see the HDD trays are secured with screws though, does it support hot-swapping disks?


Yes, I've certainly done it.

The slots are pretty stiff so you don't massively need the screws - I didn't put them in until I was sure everything was assembled and working correctly.


I don't think there are many options left. I still have a QNAP TS-419P here running a mainline 5.10 kernel (it's no longer Debian since they dropped support for it), but I don't think newer models are as easily reinstalled with a custom distro.

A custom-built mini-ITX machine or a Proliant Microserver would be my next recommendations. But they're not nearly as energy efficient as a consumer-grade NAS.


> But they're not nearly as energy efficient as a consumer-grade NAS.

I agree that over multiple years, electrical power becomes a substantial part of the cost (and also the environmental footprint).

Small ARM systems with sufficient memory and fast interfaces are perfect for that.


Why do you need a Web GUI or even a NAS OS?

I got a NAS but found it easier to simply mount the remote system via a Samba share or SSH and process data on the local machine. For RAID, I use an HDD backup. For snapshots, backup software already has a snapshot feature (e.g., Borg).

With this use case, any thin client works.


> Why do you need a Web GUI or even a NAS OS?

Who said anything about Web GUIs? I can't make sense of the rest of your comment. What does that have to do with my original question?


In that case, please ignore my comment!

(NAS could be DIY or a prepared solution. I assumed the latter, one selling point of which is pretty GUI).


> For RAID, I use a HDD backup.

A reliable backup is way more important than RAID. Unless you run some kind of emergency service.


I already discussed that a bit here:

https://news.ycombinator.com/item?id=26804794

So, some NAS vendors seem to spread a bit of FUD that their hardware is purportedly better or more robust than a self-built Linux system. It isn't; they are very likely using exactly the same chips as you. What you get when you buy a ready-made NAS is that it is quicker to set up initially and less work to configure; that is what you pay a good chunk of money for. On the other hand, you may well later end up with a system without security fixes or upgrades, or one which can't be moved to newer hardware, as was my experience. Also, the easy initial configuration is nice, but it won't help much if you need troubleshooting or a more tailored set-up.

I had similar requirements as you, specifically:

- not too expensive

- plenty of space

- very energy-efficient, so I needed an ARM CPU

- fast Gigabit Ethernet

- needs Windows file sharing for my partners' Mac

- flexible enough to set up some home services

- needs to have updates and standard Linux software support in the long run.

- easy to back up

- some protection against sudden disk failure, using a RAID

Here is my solution:

- a Raspberry Pi 400 with keyboard: https://www.raspberrypi.org/products/raspberry-pi-400/ . It has a quad-core Cortex-A72 and four GB of RAM - it is likely much better than a low-cost NAS. Also, it runs from an SD card, which makes it much easier to install, update, or revert the system, you do not need to be afraid of bricking your device because that can't happen. It also comes with a nice and comprehensive printed manual if you buy the kit.

- Two external 2.5" HDDs, 5 TB each, with USB 3 interfaces (I took a Western Digital one and an Intenso one, as it is traditional to use different makes in a RAID, in order to avoid correlated failures)

- One powered USB 3 Hub with sufficient power for all of them. These can be relatively expensive, so leave room for it on your budget, you need it.

- an industrial-grade SD card for the Raspberry's system

- you also might want some kind of organizer to keep the devices together while providing for good passive cooling, like that: https://www.amazon.com/DecoBros-Cabinet-Basket-Organizer-Sil...

That might sum up to 250 - 300 €. If you want it cheaper, you could skip the RAID, allowing to omit the second HDD and the USB hub.

My set-up includes:

- standard Raspbian for the software

- I use the root-fs overlay provided here https://github.com/fitu996/overlayRoot.sh , in order to protect the SD card from constant writing

- /var (and with it the system logs) go to a different partition of the RAID

- RAID is set up with mdadm . There are good instructions on the web.

- I use ext3 or ext4 file systems for data. btrfs is in theory easier to extend, but it has worse robustness characteristics than ext4, and for a long time had no well-working repair tool for corrupted metadata. Some people might use ZFS, but I think it is overkill and overcomplicated for this case; it is not part of the standard kernel, and it might make kernel updates more difficult or impossible.

- also I set up openssh-server, Samba, Linux NFS server, a MoinMoin Wiki, rsync.

- I configured systemd so that Ctrl-Alt-Del on the keyboard will shut down the system, even if it is disconnected.

- I configured the disks to shut down when unused for some hours, using hdparm - that saves power, too.
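Sketches of those last two items (the device name and timeout are assumptions; check your disk's hdparm support first):

```shell
# Spin the disk down after 2 hours idle (-S 244 = (244 - 240) * 30 min)
sudo hdparm -S 244 /dev/sda

# Or persistently, in /etc/hdparm.conf:
# /dev/sda {
#     spindown_time = 244
# }

# Make Ctrl-Alt-Del power off instead of rebooting:
sudo ln -sf /lib/systemd/system/poweroff.target \
    /etc/systemd/system/ctrl-alt-del.target
sudo systemctl daemon-reload
```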

The overlayfs needs to be activated when everything is finished. It has the effect that changes and logs are stored on the RAID. If you update the system, it needs to be deactivated.

Of course, back-up your SD card.

Also, please always back up your data - RAID is never a substitute for backups. What RAID is good for is that it increases availability of your device in case one of your disks fails. But there are many errors which it cannot help against, so you need regular full backups (like you need for any Synology or whatever NAS as well!!).


Since it seems like some folks are responding to the headline here, not the actual article:

Btrfs was never supported nor advertised to be supported on these lower end devices. But, it seems like if you created the btrfs drives on another device that did officially support them, you could then import them onto the unsupported hardware platform.

Still, it's a bad look for Synology. It seems like the lack of btrfs on the lower-end devices was merely artificial product segmentation, not an actual hardware limitation.


From comments:

> I set up a new device and the DSM software itself gave me the possibility to create a BTRFS volume from scratch.

> (...)

> It was there, for over a year, and now my data is kept ransom by Synology because they decided I need to buy an upgraded model.


Crap like this is why I don't apply updates to my own laptop or the power stations I work on. If a system is working properly, the last thing I want is changes I can't audit. Keep them off the internet and it is amazing how long things can keep working the same way.

I would have way more outages, breakage, and missing features due to updates than from skipping "security patches" and getting targeted by hackers.


This is absurd. How did nobody in charge realize that a storage/backup device which intentionally renders data unavailable after an update (any update, not just a patch) is absolutely crazy? And now I'm anxious to update my DS920+, which it seems is not affected by this change, but am I willing to risk _any_ updates at this point?


For anyone else confused, in this case DSM is DiskStation Manager. (I initially thought it was diagnostic criteria.)


NAS users do things that are explicitly not supported, in some cases working around built-in enforcement. Then, when Synology blocks the unsupported actions to avoid having even more things to check and test, users complain. It may be no more nefarious than this: something they did want to add would have created more problems had they left these holes in the code.

These actions were always unsupported. It's like complaining using unpublished APIs does not work in future versions.

It's also surprising how many people, likely not even Synology NAS users, misread the article and jump into full outrage mode.


>These actions were always unsupported.

That's not true. At one time, their DSM gave you the option of creating a btrfs filesystem when setting up from scratch. It may have been "unsupported" but their own setup process allowed you to continue.


Are you sure? Sounds like on here and on Reddit people were creating volumes outside of the affected devices or DSM and mounting them on the affected devices. For example, [1] "From my reading, you could never take new drives, stick them in a budget Synology NAS and get BTRFS. The thing is, you could set up those new drives in a higher end Synology NAS, then stick them into a budget Synology NAS and it would work with BTRFS enabled."

This type of comment seems repeated in various outlets.

Have you personally created a btrfs filesystem on one of these 5 affected devices?

I've bought Synology NAS for a long time. Each time I buy one for myself or others, I read the spec sheet and specifically buy one that supports btrfs. It's clearly listed for each model what it supports.

If it is as they state, and this unintentional bug was allowing people to do things that are not validated to run reliably, then leaving the bug unfixed is also a bad idea.

Relying on bugs to do your work is a bad plan. This shows why.

[1] https://www.reddit.com/r/synology/comments/mqks2q/the_latest...


The most constrained thing on these models seems to be RAM. The listed models all have either 512 MB or 1 GB of RAM. Is that the limitation being cited as a problem for btrfs?


Meh, between this and all processes running as root, I might just nuke the native OS and use a standard Linux distro. Any suggestions?


Although it looks bad, I'm actually curious how many people turned this on. I have barely any experience with Synology; when you start using one, does it actively ask you to use btrfs?


Not on the models listed, since they did not support btrfs on j models at any time.


I wonder, can they be sued for this...?


[flagged]


The problem has nothing to do with btrfs itself; Synology intentionally disabled its usage on specific models in a firmware update (supposedly because it wasn't supposed to be enabled in the first place, but they had allowed people to use it for some time).


And there was never a problem with ZFS apart from the kernel team intentionally re-licensing symbols to block ZFS once they saw the ZFS team using them ;)


Yes, but they disabled btrfs in a firmware upgrade, without providing any way to migrate the data... That is not a good customer experience, don't you agree?


That is not the context I have in my head for DSM.


Right? I was thinking, “What does the DSM, a medical manual, have to do with hard drives?”


diamond star motors?


I would hypothesize that the poster, like me, thinks of the "Diagnostic and Statistical Manual of Mental Disorders"


You can get an idea of what people think of by Googling. All but one of the first page of results for DSM are the same thing. None of them are what this article is about.



