Misleading headline: after testing eight more drives, no additional drives failed.
2/12 is not nearly as dramatic as "half", and the ones that lost data are the cheap brands, as one would expect.
There is a flood of fake SSDs currently, mostly of big brands. I've recently purchased a counterfeit 1TB drive. It passes all the tests, performance is okay, it works... except it has episodes where ioping latency is anywhere between 0.7 ms and 15 seconds, and that's under zero load. And these are quality fakes from a physical-appearance perspective. The only way I could tell mine was fake is that the official Kingston firmware update tool would not recognize the drive.
Under long-term heavy duty, I've routinely seen cheap modern platter drives outperform cheap brand-name NVMe.
There's some cost cutting somewhere. The NVMe drives can't seem to sustain throughput.
It's been pretty disappointing to move I/O-bound workloads over and not see notable improvements. The magnitude of data I'm talking about is 500 GB to ~3,000 GB.
I've only got two NVMe machines for what I'm doing, so I'll gladly accept that it's coincidentally flaky bus hardware on two machines, but I haven't been impressed except for the first few seconds.
I know everyone says otherwise, which is why I brought it up. Someone tell me why I'm crazy.
Edit: no, I'm not crazy. https://htwingnut.com/2022/03/06/review-leven-2tb-2-5-sata-s... This is similar to what I'm seeing with Crucial and ADATA hardware: almost binary performance.
Writes are completed to the host when they land on the SSD controller, not when written to Flash. The SSD controller has to accumulate enough data to fill its write unit to Flash (the absolute minimum would be a Flash page, typically 16 kB).

If it waited for the write to Flash before sending a completion, the latency would be unbearable. If it wrote every write to Flash as quickly as possible, it could waste much of the drive's capacity padding Flash pages. If a host tried to flush after every write to force the latter behavior, it would end up with the same problem.

Non-consumer drives solve the problem with backup capacitance. Consumer drives do not have this.

Also, if the author repeated this test 10 or 100 times on each drive, I suspect that he would uncover a failure rate for each consumer drive. It's a game of chance.
Twitter, yuck. Can somebody just post the names of the four tested drives and which passed/failed, please?
Does advertising a product as adhering to some standard, while secretly knowing that it doesn't fully comply, count as e.g. fraud? I.e., is there any established case law on the matter?
I'm thinking of this example, but also more generally USB devices, Bluetooth devices, etc.
Previous Discussion: https://news.ycombinator.com/item?id=30419618
This is (2022).
Wondering if anything changed since the original tests...
Meanwhile I'm over here jamming Micron 7450 pros into my work laptop for better sync write performance.
I have very little trust in consumer flash these days after seeing the firmware shortcuts and stealth hardware replacements manufacturers resort to in order to cut costs.
Losing flushes is obviously bad.
I wonder how much perf is on the table in various scenarios when we can give up needing to flush. If you know the drive has some resilience, say, 0.5s of time it can safely writeback during, maybe you can give up flushes (in some cases). How much faster is the app then?
It'd be neat to see some low-cost improvements here. Obviously in most cases, just get an enterprise drive with supercapacitors or batteries onboard. But an ATX power rail with extra resilience from the supply, or an add-in/pass-through SATA power supercap... that could be useful too.
I guess it's time for `fsync_but_really_actually_sync_it_please(2)` (and the lower level equivalents in SATA, NVMe etc.)?
Flushing in this case is from the SSD's internal DRAM cache to the actual NAND flash?
It'd be nice if there were a database of known-bad/known-good hardware to reference. I know there have been some spreadsheets and special-purpose efforts like the USB-C cables Benson Leung tested.
Especially for consumer hardware on Linux--there's a lot of stuff that "works" but is not necessarily stable long term, or that required a lot of hacking on the kernel side to work around issues.
Well, yes, but which were those 2 out of 4 vendors?
The model I'd be interested in would be the SK Hynix/Solidigm P44 Pro, as that model competes with the Samsung 9xx EVO and Pro models.
I am a bit annoyed that everyone here takes this at face value. There's zero evidence given; not even the vendors and models are named so the results could be confirmed.
On a related note I tested 4 DDR5 Ram kits from major vendors - half of them corrupt data when exposed to UV light.
This has always been the case? At least it was a lesson from our course when we wrote our own device drivers for Minix: even the controllers on spinning metal fib about flush.
At this point, any storage vendor should be required to pass the SQLite test suite before they can sell their product.
Also…would modern journaling file systems protect against this sort of data loss?
If you need PLP use an enterprise drive. That's what they're for.
Cheap drives don't include large dram caches, lack fast SLC areas, and leave off super-capacitors that allow chips to drain buffers during a power-failure.
"Buy cheap, buy twice" as they say... =)
Without any more information this post is just bullshit. For example, it's not documented how the flushing was done. On Linux, even issuing 'sync' is not enough: https://unix.stackexchange.com/questions/98568/difference-be...
The bottom answer in particular states that "blockdev --flushbufs may still be required if there is a large write cache and you're disconnecting the device immediately after".
The hdparm utility has a parameter for syncing and flushing the device's own buffers. It seems like all three should be done for a complete flush at all levels.
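Putting those three together, a sketch of the full sequence (my reading of the comments above, not a verified test procedure; /dev/sdX is a placeholder device name, and all of this needs root):

```shell
# Flush at every level before cutting power or disconnecting the device.
sync                            # flush dirty pages from the kernel page cache
blockdev --flushbufs /dev/sdX   # flush the block layer's buffers for this device
hdparm -F /dev/sdX              # ask the drive itself to flush its on-board cache
```

Even then, the drive's firmware can still acknowledge the flush without actually committing to NAND, which is exactly what this thread is about.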
That's what PLP is for.
Don't use home-grade SSDs for storing anything that is considered critical.
The rule is not that hard to remember.
2022
Brands please. It’s time they have some pressure to fix these data corruption issues
this is from Feb 2022
Name the offenders please.
It might actually be easy to spot visually: the lack of a substantial capacitor on the board would indicate a high likelihood of data loss.
That is unfortunate, but I guess those SSDs performed really well and outclassed all others in performance benchmarks? lol
The posting is from Feb 2022, nearly 2 years ago. How is this suddenly trending on Hacker News?
We shipped a shader cache in the latest release of OBS and quickly had reports come in that the cached data was invalid. After investigating, we found the cache files were the correct size on disk but their contents were all zeros. On a journaled file system this seems like it should be impossible, so the current guess is that some users have SSDs that ignore flushes and experience data corruption on crash / power loss.
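A common defense against exactly that right-sized-but-zeroed failure mode is the write-temp-fsync-rename pattern. A minimal sketch (my own illustration, not OBS's actual code):

```python
import os
import tempfile


def atomic_write(path, data: bytes):
    """Write a file so that, after a crash, readers see either the old
    contents or the complete new contents -- never a truncated or
    zero-filled file. Still depends on the drive honoring the flush."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)   # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # flush the temp file's data to the device
        os.replace(tmp, path)      # atomic rename over the target (POSIX)
        tmp = None                 # rename succeeded; nothing to clean up
    finally:
        if tmp is not None and os.path.exists(tmp):
            os.remove(tmp)         # rename failed; drop the partial temp file
```

If the drive drops the flush on power loss anyway, the rename can land while the temp file's data never did, which would reproduce the zero-filled cache files described above.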