Note: I do not in any way believe this. It’s just a “what if” crackpot thought I had when I woke up this morning to a dead, unrecognizable SSD in my “beater” laptop and filled out the warranty claim.
Cheap SSDs that have generous 3-5 year warranties (3 years in my case) are designed to “fail” sometime within that period so that you’ll send them in for RMA. They don’t fail due to any component failure, just a pseudo-randomly timed soft brick that only the manufacturer can undo. When you RMA it, they reset the killswitch and harvest your data. The replacement you get is just one that someone else has RMA’d that they have un-bricked.
That’s why I would rather spend on a pricier one. Also I have no idea what an RMA is and I would rather blowtorch the SSD than do that.
Was this beater laptop ever dropped? I have 9-year-old SSDs going strong.
My HP branded SSD died exactly at 5 years. No warnings. I attribute it to backing up the drive in prep for installing into a new machine. It got hot during prolonged data transfer and cooked itself.
Nope. Lived on the coffee table and was mostly (almost exclusively) used for IMDB lookups when we’re watching a movie or something and one of us is like “is that…?”
I’ve got other SSDs that are 10+ years also fine. And I’ve had some last less than a month (note: never buy Silicon Power brand drives).
Woke up the laptop this morning and there were a bunch of kernel messages about the root volume being inaccessible. Power off and back on: BIOS doesn’t even detect the drive. Pulled the drive, and a USB->NVMe adapter on my main laptop doesn’t recognize it either.
This SSD was bought in July and had otherwise been performing great. Luckily still had the old one (it didn’t fail, just upgraded from 256 to 500 GB) and threw it back in and re-installed Ubuntu.
:shrug: You win some you lose some lol.
Bricking is pretty much how SSDs fail IME. There’s no warning. There might be errors in the system while it fails, but then it’ll just be completely dead and not detected anymore.
I recommend always using full disk encryption. Returns or no. I also generally never return or allow data storage to go in for repair as a policy.
Scary. Do you have any protocols for HDD or to prolong their longevity, like a minimum or maximum powering frequency?
Not the person you asked, but heat is the killer of SSDs. Leaving one unplugged (unpowered) in a hot room also invites bit-rot data loss.
Edit, just realized you said HDD. In that case avoid power cycles. Those are the major contributor to mechanical failure, and use a filesystem that repairs itself from bit-rot so you don’t lose data.
I still have a 14 year old HDD that survived spinning in a NAS for 10 years. SMART is still OK, but I am seeing bit-rot crop up in files, so it now sits as an extra backup should my good backup drives fail.
Interesting, I didn’t know that; what temp should dormant SSDs be kept in? And by not power-cycling, do you mean to just keep HDDs on 24/7?
There is a white paper on SSDs and heat popping the electrons out of their trapped memory spots. I will have to search for it. But an unpowered SSD at 40°C will start showing signs of data loss in a week; at hotter temps your data could be gone in a month.
HDDs, yes: run them continuously instead of stop-start.
I’ve done a quick search but have not located the science article yet; I will check later. However, the SSD standards cover this. The device should meet that, but as you can see, if you had a hot closet in India your data could be gone in 3 weeks.
Per JESD218, a client class SSD must maintain its data integrity at the defined BER for only 500 hours at 52°C (less than 21 days) or 96 hours at 66°C (only four days).
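To put those JESD218 numbers side by side, here’s a quick back-of-envelope sketch in Python. The 500 h / 96 h figures are from the quote above; the implied “acceleration factor” between the two temperatures is just my own arithmetic, not something the spec states directly.

```python
# JESD218 client-class retention requirements quoted above:
#   500 hours at 52 °C, 96 hours at 66 °C (at the defined BER).

HOURS_AT_52C = 500
HOURS_AT_66C = 96

days_52c = HOURS_AT_52C / 24  # ~20.8 days
days_66c = HOURS_AT_66C / 24  # exactly 4 days

# Rough implied speed-up of charge loss for the +14 °C step
# (my own illustration, not a number from the standard):
accel = HOURS_AT_52C / HOURS_AT_66C  # ~5.2x

print(f"52 °C: {days_52c:.1f} days of required retention")
print(f"66 °C: {days_66c:.1f} days of required retention")
print(f"~{accel:.1f}x faster data loss for +14 °C")
```

So a hot closet sitting in the 50–60 °C range really does put you in the “weeks, not years” regime for an unpowered client drive.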


