2.5PB = 2500TB = 2,500,000 GB
2,500,000 GB / (80 MB/s, a typical HDD speed) = 31,250,000 seconds = 8,680 hours = 361 days.
It will take an HDD 361 days to write 2.5PB at 80MB/s.
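For anyone who wants to check the arithmetic, a quick Python sketch (the 80MB/s sustained write speed is just the assumption above, using decimal units where 1PB = 1000TB = 1,000,000GB):

    # Back-of-the-envelope check of the figures above.
    total_gb = 2.5 * 1000 * 1000       # 2.5 PB in GB (decimal units)
    throughput_mb_s = 80               # assumed sustained HDD write speed

    seconds = total_gb * 1000 / throughput_mb_s   # GB -> MB, then MB / (MB/s)
    hours = seconds / 3600
    days = hours / 24

    print(f"{seconds:,.0f} s = {hours:,.1f} h = {days:.1f} days")
    # 31,250,000 s = 8,680.6 h = 361.7 days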
I wonder how many HDDs can survive 361 days of 80MB/s non-stop?
E.g., datacentre-grade SATA and near-line SAS drives like the WD RE (https://www.wdc.com/en-um/products/business-internal-storage...) and Seagate Enterprise Capacity (http://www.seagate.com/au/en/enterprise-storage/hard-disk-dr...) are rated for 550TB/year.
Lower-end drives (NAS, cold-storage, desktop models) are rated less.
Seagate's overall Enterprise/Datacentre lineup (http://www.seagate.com/au/en/enterprise-storage/hard-disk-dr...) ranges from 180TB/year to 550TB/year, and elsewhere on Seagate's site they indicate that a 550TB/year rating is "10x more than desktop drives".
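To put those ratings in perspective, here's a rough Python sketch converting a TB/year workload rating into the equivalent continuous throughput (assuming writes spread evenly over a 365-day year; the 2500TB/year figure is just the 2.5PB workload from upthread):

    # What a TB/year workload rating works out to as a continuous average.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def rating_to_mb_s(tb_per_year):
        return tb_per_year * 1000 * 1000 / SECONDS_PER_YEAR   # TB -> MB

    for rating in (180, 550, 2500):   # 2500 TB/year ~= writing 2.5PB in a year
        print(f"{rating:>5} TB/year ~= {rating_to_mb_s(rating):5.1f} MB/s sustained")
    # Output:
    #   180 TB/year ~=   5.7 MB/s sustained
    #   550 TB/year ~=  17.4 MB/s sustained
    #  2500 TB/year ~=  79.3 MB/s sustained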
These are all just ratings though. The theory is that over a population of drives, you'll see a higher failure rate than predicted if you exceed the rated workload per year. WDC used to have a whitepaper on it called "Why Specify Workload", but it's no longer on their site.
I have in some cases seen enterprise SATA drives pushed to the kind of workload you're talking about - 2.5PB in a year - and seen on the order of 10% fail over that time, with a drive that normally has a ~0.5% AFR.
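Just to make the magnitude concrete, a tiny illustrative sketch (the fleet size of 100 is a made-up number, not from my data):

    # Expected failures over one year in a hypothetical 100-drive fleet.
    fleet_size = 100           # hypothetical, purely for illustration
    nominal_afr = 0.005        # ~0.5% annualised failure rate
    observed_rate = 0.10       # ~10% failed at ~2.5PB/year of writes

    print(f"expected at nominal AFR:  ~{fleet_size * nominal_afr:.1f} drives")
    print(f"observed at 2.5PB/year:   ~{fleet_size * observed_rate:.0f} drives")
    # expected at nominal AFR:  ~0.5 drives
    # observed at 2.5PB/year:   ~10 drives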
I don't understand why Intel wouldn't just configure these drives to go into read-only mode permanently. If I realized my hard drive had become read-only and didn't suspect hard drive failure, my first inclination would be to reboot my computer, not immediately back up all data.
I have a lot of experience with long-running Intel SSDs of various models, including pushing them to the same kinds of extreme that the SSD endurance experiment did, and I have never observed them to self-brick simply because they reached their flash endurance point.
What I have observed is a number of firmware bugs (or possibly just the supernova feature) that caused the drive to brick on power cycle, even for drives in perfect health.
I liked the SSD endurance articles, because they went a long way to allaying fears about SSDs, but I think it's a shame they've left this point in.
SSDs, though, just disappear from the bus when they fail, so I haven't been able to look at a dead one and find anything that looks like a useful predictor. I have seen some SSDs reallocating a big block, which kills performance while it's going on...
This isn't always true, and actually shouldn't ever be true - it's a particular failure mode you're seeing, and while it appears to be one common across a number of SSD controllers, it's still a pretty sorry fact that it happens.
All SSDs (at least all not-complete-rubbish ones) report some kind of flash/media wearout indicator via SMART, which isn't necessarily an imminent failure indicator (SSDs will generally continue to work long past the technical wearout point), but is a very strong indicator that you should replace it soon and should probably buy a better one next time.
SSDs do suffer from sector reallocations in the normal way, and the same kind of metric monitoring can be done. It's pretty vendor-specific as to what SMART attributes they report, but attributes like available reserved space, total flash writes, flash erase and flash write failure counts and so on are pretty common.
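As a rough illustration of that kind of monitoring, here's a hedged Python sketch that shells out to smartctl (from smartmontools, using its JSON output from version 7.0+) and picks out a handful of wear-related ATA attributes. The attribute names and the /dev/sda device path are just examples - names differ between vendors, and NVMe drives report wear through a separate health log rather than these attributes.

    # Sketch, not a monitoring tool: pull a few wear-related SMART attributes.
    import json
    import subprocess

    WEAR_HINTS = (
        "Media_Wearout_Indicator",   # Intel-style wear gauge
        "Wear_Leveling_Count",       # common on Samsung SSDs
        "Reallocated_Sector_Ct",
        "Available_Reservd_Space",
        "Total_LBAs_Written",
    )

    def wear_attributes(device="/dev/sda"):
        # smartctl -j emits JSON (smartmontools 7.0+); usually needs root.
        out = subprocess.run(
            ["smartctl", "-j", "-A", device],
            capture_output=True, text=True, check=False,
        )
        if not out.stdout.strip():
            return {}
        data = json.loads(out.stdout)
        table = data.get("ata_smart_attributes", {}).get("table", [])
        return {a["name"]: a["raw"]["value"] for a in table if a["name"] in WEAR_HINTS}

    if __name__ == "__main__":
        for name, raw in wear_attributes().items():
            print(f"{name}: {raw}")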
I get it, he called it a "spy-chip", but the BMC really did have an exploitable attack surface.