Imaging a Hard Drive with Non-ECC Memory – What Could Go Wrong?

  • Bit flips are totally real; at scale you will definitely see them on large queries. There was a fun talk at DEFCON on bitsquatting, the practice of registering domain names one bit off from popular ones and then accepting all incoming connections (a quick sketch of the idea follows the links). Attacks like Rowhammer similarly abuse erroneous bit flips. Supposedly Microsoft can detect solar activity from the number of Windows crash logs they receive.

    DEFCON Talk: https://www.youtube.com/watch?v=aT7mnSstKGs

    https://en.wikipedia.org/wiki/Bitsquatting

    https://en.wikipedia.org/wiki/Row_hammer
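
    A minimal sketch of the idea (not from the talk; "example" is just a placeholder label): list every .com domain that is exactly one flipped bit away from a target.

      #!/usr/bin/env bash
      # list every .com domain one flipped bit away from the target label
      label=example
      alphabet=abcdefghijklmnopqrstuvwxyz0123456789-
      for (( i = 0; i < ${#label}; i++ )); do
        orig=$(printf '%d' "'${label:i:1}")        # ASCII code of character i
        for (( j = 0; j < ${#alphabet}; j++ )); do
          repl=$(printf '%d' "'${alphabet:j:1}")
          x=$(( orig ^ repl ))
          # a power of two means exactly one bit differs
          if (( x != 0 && (x & (x - 1)) == 0 )); then
            echo "${label:0:i}${alphabet:j:1}${label:i+1}.com"
          fi
        done
      done

    Register a few of the outputs, point them at a server that logs every connection, and the idea is that the traffic you see comes from machines whose memory flipped that bit somewhere along the way.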

  • ECC is good, and I genuinely wish it were more common. Thankfully, Ryzen CPUs support ECC by default (except for pre-7000-series parts with integrated graphics that aren't "Pro" versions), as long as the motherboard supports it too (like every ASRock board I've seen). I'm running several Ryzen servers with ECC.

    On the other hand, there are many, many systems out there that don't have ECC, nor any option to add it. While every video on YouTube wants us to believe that the difference between 580 and 585 frames per second in some silly game or another makes all the difference in the world, for me the difference between a system that runs 10% slower and one that crashes in the middle of the night is actually significant. I test all my systems at a certain memory frequency, then back off to the next slower frequency just to be sure.

    That doesn't stop memory errors from happening, but most of my systems have lived their entire lives without random crashes or random segfaults. I consider that worthwhile.

  • A bit over 20 years ago I had a PC with a memory stick that had gone bad, but not bad enough to crash all the time ... it crashed just often enough running Windows 98 apps that I attributed every crash to software nonsense.

    Back then it was recommended to run a defragger every so often, so I set up a cron job to run it every Saturday night or something like that. The net result was that every file block that got moved made a trip through memory, with a small probability of getting corrupted on the way. Often the errors landed in files I rarely used, so I didn't immediately notice. After many months of this, I started noticing corrupted PDF files, or MP3 files that would hiccup in the middle even though they had played perfectly before. Sadly, I had ripped my 500-ish CD collection and then gotten rid of the physical CDs.

  • That reminds me of how I accidentally tracked a memory issue down to a failing power supply.

    I noticed (after some Windows bluescreens) that memtest was showing memory errors. I ordered another 16 GB pair, replaced it, and... the problem persisted.

    Suspecting the motherboard, I chalked it up to the mobo and pretty much said, "Well, I'm not replacing the mobo now; it will have to wait for the next hardware refresh." It's a gaming PC, so no big deal. And now I had 32 GB of RAM in the PC.

    Weirdly enough, the problem only happened when running the multi-core memory test.

    Cue ~1 year later and my power supply just... died. Guessing bad caps, I just ordered another and thought nothing of it. On a whim I ran memtest again and...

    Nothing. All fixed. I repeated it a few times and it was just fine; no bluescreens for ~2 years now, either.

    I definitely want my next machine to have ECC, but the DDR5 consumer ECC situation looks... weird. I'm not sure whether I should be happy with on-die ECC; I'd really prefer the whole CPU-to-memory path to be ECC-protected.

  • Two things. Firstly, I don't think any conclusions can be drawn about whether dd or ddrescue is more susceptible to bit flips. It could be that both allocated a buffer, and ddrescue just happened to be handed the area of memory with the fault in it, which it reused multiple times, whereas when dd was run that area of memory was used by something else. Memory mapping and usage in a real operating system is highly non-deterministic because of the sheer number of things that affect it.

    Secondly, once memtest has produced a good list of known faulty memory addresses, you can tell the operating system not to use them (sketched below). Then you can keep using your old hardware without the reliability problems. It's still possible that further areas of memory will subsequently fail, though, and without ECC you'll remain vulnerable to random (cosmic-ray-induced) bit flips.
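
    A minimal sketch of what that looks like on Linux with GRUB, assuming memtest86+ reported a bad region (the address and mask below are made up):

      # /etc/default/grub
      # Either reserve the range outright with the kernel's memmap parameter
      # (the $ usually needs escaping so it survives GRUB's config generation):
      GRUB_CMDLINE_LINUX_DEFAULT="quiet memmap=64K\\\$0x36a00000"
      # ...or use GRUB's BadRAM filter, which takes the address,mask pairs that
      # memtest86+ can print in its BadRAM-patterns mode:
      GRUB_BADRAM="0x36a00000,0xfffff000"

      # then regenerate the config and reboot
      sudo update-grub   # or: grub2-mkconfig -o /boot/grub2/grub.cfg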

  • I ran a cluster of ~30k blade-based computers booting entirely off iPXE. They didn't have any onboard SSD/disk storage or ECC memory. Every day a few of them would randomly lock up; they'd reboot with a fresh network image and keep on humming.

  • I've had a lot of really strange bugs and data loss with my current build (Ryzen with G.Skill memory). After running memtest for 24 h, I finally saw that two of the four RAM sticks were faulty (two bit flips on each, only rarely and only on a specific test). The company replaced them, but now, a year later without any issues, another one has failed in exactly the same way. This is the last time I build a non-ECC system for myself.

  • Amazing technical write-up. But if there's no cause for alarm based on SMART, I would just run the memtest right then, because that's always my go-to for weird undiagnosed problems. I find it's usually not the problem, although when it has been, I've ended up wasting a silly amount of time on it (just like in this case!).

    And if there was cause for alarm, I would think long and hard about imaging from the original computer at all. With certain failure modes in drives, just reading could cause more corruption; each failed attempt could lose data.

    But yeah, happy you did it this way in the end, because I learned a ton from the resulting blog post!

  • AFAICT, no current Mac comes with ECC - do they have the same issues? If so, one doesn't hear about them too often.

    https://youtu.be/aPd8MaCyw5E ("ShmooCon 2014: You Don't Have The Evidence - Forensic Imaging Tools") was quite an eye-opening talk about common tools, like the `dd` mentioned in the article (and its cousin `ddrescue`), and how they deal with I/O errors (a quick comparison of the two is sketched further down).

    To be clear, I do not believe that the tools are at fault - rather, the SATA/SAS/IDE controllers have a different design goal, and software tools can only do so much.

    Tools like DeepSpar (HW+SW) and PC-3000 (also HW+SW) allow a scary level of nitty-gritty access to the hardware, including flashing SSD/HDD controller firmware in case it went pear-shaped. For data recovery, be it in a forensic context or for retrieving important, irreplaceable data, I have always had a nerd-lust for those tools. I used them at a previous job, but can't ever justify the price for personal and very infrequent use. :)
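
    For contrast with the hardware tools, a sketch of how the two software approaches from the talk are typically invoked (device and file names are placeholders):

      # dd: conv=noerror pushes past read errors and sync zero-pads the failed
      # block, but nothing records what was skipped; with a large bs a single
      # bad sector zeroes out the whole block
      dd if=/dev/sdX of=disk.img bs=64K conv=noerror,sync status=progress

      # GNU ddrescue: reads the easy areas first, retries the bad spots later,
      # and keeps a mapfile of exactly which sectors were never recovered
      ddrescue -d /dev/sdX disk.img disk.map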

  • >Does increased heat increase the likelihood of memory errors? I think it does.

    I just got through a round of overclocking my memory. Yes, heat does.

    >tRFC is the number of cycles for which the DRAM capacitors are "recharged" or refreshed. Because capacitor charge loss is proportional to temperature, RAM operating at higher temperatures may need substantially higher tRFC values.

    https://github.com/integralfx/MemTestHelper/blob/oc-guide/DD...
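
    As a rough back-of-the-envelope version of the quote above, assuming DDR4-3200 (a 1600 MHz memory clock) and an IC that needs 350 ns per refresh (both numbers are just illustrative):

      # tRFC in clock cycles = tRFC in ns * memory clock in MHz / 1000
      echo $(( 350 * 1600 / 1000 ))   # => 560 cycles; hotter DIMMs may need more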

  • This reminds me of a bug in Google Chrome that was attributed to a flipped bit.

    If anyone has the link, it's missing from my collection...

  • This wasn't run with a large enough sample size to be statistically valid.

  • Moral of the story?

    Upgrade to DDR5 RAM, the latest standard, which has on-die ECC. It's still not as good at spotting bit flips as proper ECC memory with a separate extra correction chip, because the on-die ECC only covers errors inside the DRAM die, not on the path to the CPU.

    https://en.wikipedia.org/wiki/DDR5_SDRAM#:~:text=Unlike%20DD....

    Whilst proper ECC RAM and motherboards exist, I'm surprised that a cheaper but equally good solution doesn't, although I know some would argue that DDR5 is a step in the right direction of a marathon.

    I guess the markets know best and chase the numbers, assuming they are also using proper ECC memory and binary-coded decimal rather than floating-point arithmetic, which introduces rounding errors (quick demo after the link), something central banks have understood for decades?

    https://en.wikipedia.org/wiki/Floating-point_error_mitigatio...
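
    A quick demo of the floating-point point (awk does its arithmetic in IEEE-754 doubles):

      awk 'BEGIN { printf "%.17f\n", 0.1 + 0.2 }'   # 0.30000000000000004, not 0.3

    That's why money tends to live in integer cents, BCD, or decimal types rather than binary floats.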

  • > To even detect this, I needed the patience and discipline to verify the checksum on a 500GB file! Imagine how much more time I could have wasted if I didn't bother to verify the checksum and made use of an important business document that contained one of the 14 bit flips?

    Unpopular-opinion counterpoint: the odds of this actually happening are vanishingly small. Many file formats have built-in integrity checks and tons of redundancy and waste. I wouldn't want to risk handling extremely valuable private keys or conducting high-value cryptocurrency transactions on a machine without ECC memory, I suppose, but that just doesn't really come up in most knowledge-worker or end-consumer scenarios.

    The odds of actually getting bit by this in a way that matters to you are really low, which is why nobody cares.
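
    Still, for anyone who does want the end-to-end check the quoted article describes, it's cheap to script (device and file names are placeholders):

      # hash the source device and the image; any single flipped bit anywhere in
      # the 500 GB changes the digest
      sha256sum /dev/sdX disk.img

      # or record a checksum once and re-verify the copy later
      sha256sum disk.img > disk.img.sha256
      sha256sum -c disk.img.sha256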