Hacker News

This is a pretty poor article. Consumers haven't bought SLC in years, and MLC is much less reliable (although the actual reliability of flash is still higher than the rating). It doesn't take write amplification or any kind of real workload into account. And it may feed the very alarmism it claims to dispel: at worst your SSD will last 172 days? Yikes!
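A back-of-the-envelope endurance estimate shows why "days until worn out" depends entirely on the workload and write amplification the article ignores. All the figures below are illustrative assumptions, not numbers from the article:

```python
# Rough SSD endurance sketch -- every figure here is an assumption for
# illustration, not a spec from the article or any particular drive.
capacity_gb = 256          # usable capacity
pe_cycles = 3000           # rated program/erase cycles per cell (typical MLC)
write_amplification = 2    # flash sees more writes than the host issues
host_writes_gb_per_day = 50

# Total host data the flash can absorb before hitting its rated cycles.
total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_days = total_host_writes_gb / host_writes_gb_per_day

print(f"{lifetime_days:.0f} days (~{lifetime_days / 365:.1f} years)")
```

Halve the write amplification or the daily write volume and the estimate doubles, which is exactly why a single "172 days" number is meaningless without a stated workload.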


Not like magnetic is necessarily immune. WD greens used to have astoundingly bad spin down logic. I killed a bunch of them with default Linux settings and near idle servers in about that time frame.


Greens are not enterprise drives and shouldn't be used in RAID arrays.

Their logic makes perfect sense for a single disk and "green" (power-saving) use. It's just incredible what people will do to save a little cash on servers. Your company's data depends on them and you put in the cheapest Best Buy inventory you can find?


I'm not sure who you are replying to. Your fictitious account does not match my reality.

A drive that is accessed a couple of times a day seems like the perfect use case for a Green. Sadly, it parks its heads after 8 seconds of idle time, and Linux was doing one IO every 30 seconds, I believe as part of the S.M.A.R.T. monitoring (it's been a while). The drive is rated for 300,000 head parks in its lifetime, so in about 105 days it thrashed away its specified lifetime because you were monitoring the drive's health. And head parks wasn't one of the parameters tracked, so monitoring didn't even help!
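The arithmetic above can be checked directly: one idle-triggered park every 30 seconds against a 300,000-park rating works out to just over 104 days, in line with the ~105 days cited.

```python
# Head-park burn rate: one park per 30-second I/O against the rated count.
rated_parks = 300_000    # published load/unload cycle rating for the drive
seconds_per_park = 30    # one idle-triggered park per periodic S.M.A.R.T. poll

lifetime_seconds = rated_parks * seconds_per_park
lifetime_days = lifetime_seconds / 86_400  # seconds per day

print(f"{lifetime_days:.1f} days")
```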

This is not because the Green is not a Black. It is not the grade of the bearings or the balance tolerance of the platters. It is because WD wrote ridiculously silly firmware for their drives, and they self-destructed on one of the most common computing platforms.

Back to the article topic: the similar-workload machines I deployed 14 months ago have an SSD for their root drive and newer Greens (tweaked not to load-cycle) for their bulk storage. None of the SSDs have even reached 1% of their media lifetime count in their S.M.A.R.T. data. I'm not worried about wearing out SSDs.


AFAIK the RE-GP series have the same issue and are advertised as server/enterprise grade.


I have a Scorpio Blue that parked its heads incessantly until I deployed a utility to force better APM (power-management) settings.

Not write endurance, per se. But you only get to park the heads so many times, on average...


Yeah, statistics work. I had greens fail at 30k parks and survive over 1M parks. 300k was the published number for parks. I think they screwed up by allowing the firmware to thrash them to death, but I think they got the 300k number right.


Do you have any concrete/verified descriptions of the firmware problems? I ran across plenty of people commenting on the problem, and my own intuition (or deduction, based on listening to the "click... click... click..." and eventually noting the escalating number of parks reported) plus the fix that worked for me strengthened this intuition. But I never saw a documented analysis let alone an admission on WD's part. (Admittedly, I may have missed one or both.)


While I completely agree, I think it's important to point out that SLC is still used pretty widely in enterprise-grade SSDs. The standard configuration on these drives is 100% over-provisioning, so the lifespan is likely 20+ years.
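The 20+ year figure is plausible even under a heavy workload, given SLC's much higher endurance and the way heavy over-provisioning keeps write amplification near 1. The numbers below are illustrative assumptions, not vendor specs:

```python
# Illustrative SLC enterprise-drive lifespan -- assumed figures for a sketch,
# not specs for any real product.
pe_cycles = 100_000        # typical SLC endurance rating (vs ~3k-10k for MLC)
drive_writes_per_day = 10  # heavy workload: 10 full-drive writes every day
write_amplification = 1.1  # 100% over-provisioning keeps WA close to 1

# Each full-drive write consumes roughly (write_amplification) P/E cycles
# per cell, so lifetime in days is cycles / (DWPD * WA).
lifetime_days = pe_cycles / (drive_writes_per_day * write_amplification)
print(f"~{lifetime_days / 365:.0f} years")
```

Even at ten full-drive writes per day, the sketch lands well past 20 years; a lighter workload pushes it out further still.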




