SSD speeds are nothing short of miraculous in my mind. I come from the old days of striping 16 HDDs together (at a minimum) to get 1GB/s throughput. Depending on the chassis, that was two 8-drive enclosures in the "desktop" version, or the large 4RU enclosures with redundant PSUs and fans loud enough to overpower arena rock concerts. Now we can get 5+GB/s of throughput from a tiny stick that runs externally over a single cable for data and power and is absolutely silent. I edit 4K+ video as well, and can now edit directly from the same device the camera recorded to during production. I'm skipping over the part about still making backups, but there's no more multi-hour copy from source media to edit media during a DIT step. I've spent many a shoot as a DIT wishing the 1s and 0s would travel across devices much faster while everyone else on the production had already left, so this is much appreciated. Oh, and those 16-drive units only came close to 4TB around the time I finally dropped spinning rust.
The first enclosure I ever dealt with was a 7-bay RAID-0 that could just barely handle AVR75 encoding from Avid. Just barely to the point that only video was saved to the array. The audio throughput would put it over the top, so audio was saved to a separate external drive.
Using SSD feels like a well deserved power up from those days.
The latency of modern NVMe is what really blows my mind (as low as 20-30 µs). NVMe is about an order of magnitude quicker than SAS and SATA.
This is why I always recommend developers try SQLite on top of NVMe storage. The performance is incredible. I don't think you would see query times anywhere near 20 µs with a hosted SQL solution, even if it's on the same machine using named pipes or another IPC mechanism.
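To put a rough number on that, here's a minimal sketch of the kind of measurement being described, assuming Python's built-in sqlite3 module and a database file on local NVMe. The results depend heavily on the hardware and on how much of the database is already sitting in the OS page cache, so treat it as a ballpark rather than a benchmark.

```python
import sqlite3, time, random

# Hypothetical scratch path; point it at an NVMe-backed filesystem.
conn = sqlite3.connect("/tmp/latency_probe.db")
conn.execute("PRAGMA journal_mode=WAL")  # keep readers cheap
conn.execute("CREATE TABLE IF NOT EXISTS kv (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO kv (val) VALUES (?)",
                 (("x" * 64,) for _ in range(100_000)))
conn.commit()

# Time indexed point lookups; on fast local storage these land in the
# tens of microseconds once the relevant pages are warm.
N = 10_000
start = time.perf_counter()
for _ in range(N):
    conn.execute("SELECT val FROM kv WHERE id = ?",
                 (random.randint(1, 100_000),)).fetchone()
elapsed = time.perf_counter() - start
print(f"avg point query: {elapsed / N * 1e6:.1f} us")
conn.close()
```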
Meanwhile, a job recently told me they are on IBM AS/400 boxes “because Postgres and other SQL databases can’t keep up with the number of transactions we have”… for a company that has a few thousand inserts per day…
Obviously not true that they’d overwhelm modern databases but feels like that place has had the same opinions since the 1960s.
Then there's Optane, which got down to ~10 µs. The newest controllers and NAND are inching closer on random I/O, but Optane is still the most miraculous SSD tech that's normally obtainable.
It's not really the SSDs themselves that are incredibly fast (though they still are, somewhat); it's mostly the RAM cache and clever tricks to make TLC feel like SLC.
Most (cheap) SSDs see their performance go off a cliff once you hit the boundary of these tricks.
Tell me more. When do I hit the boundary? What is perf before/after said boundary? What are the tricks?
Tell me something actionable. Educate me.
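For what it's worth, the usual tricks are a DRAM (or host-memory) write buffer plus a slice of the TLC/QLC flash run in pseudo-SLC mode as a write cache, and the boundary is however many gigabytes of sustained writes it takes to exhaust both. A rough way to watch it happen, sketched here under the assumption of a throwaway file on the drive you want to test and plenty of free space:

```python
import os, time

TEST_FILE = "/mnt/scratch/throughput_probe.bin"  # assumption: a path on the SSD under test
CHUNK = 256 * 1024 * 1024                        # 256 MiB per sample
TOTAL_GIB = 64                                   # enough to blow past a typical pSLC cache
buf = os.urandom(CHUNK)                          # incompressible data defeats compression tricks

with open(TEST_FILE, "wb") as f:
    for i in range((TOTAL_GIB * 1024**3) // CHUNK):
        t0 = time.perf_counter()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())                     # push it to the device, not just the page cache
        rate = CHUNK / (time.perf_counter() - t0) / 1024**2
        print(f"{(i + 1) * CHUNK / 1024**3:6.1f} GiB written: {rate:7.1f} MiB/s")

os.remove(TEST_FILE)
```

On a budget drive the per-chunk rate usually falls off hard somewhere in the middle of a run like this; that drop is the cache boundary being described above.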
This hits home even more since I started restoring some vintage Macs.
For the ones new enough to take an SSD upgrade, the difference is night and day (even a Power Mac G4 can feel fresh and fast just from swapping out the drive). For older Macs like PowerBooks and classic Macs, there are so many SD/CF card to IDE/SCSI/etc. adapters now that they also get a significant boost.
But part of the nostalgia of sitting there listening to the rumble of the little hard drive is gone.
> But part of the nostalgia of sitting there listening to the rumble of the little hard drive is gone.
I remember this being a key troubleshooting step. Listen/feel for the hum of the hard drive OR the telltale click clack, grinding, etc that foretold doom.
I've just finished CF swapping a PowerBook 1400cs/117. It's a base model with 12MB RAM, so there are other bottlenecks, but OS 8.1 takes about 90 seconds from power to desktop and that's pretty good for a low-end machine with a fairly heavy OS.
Somehow the 750MB HDD from 1996 is still working, but I admit that the crunch and rumble of HDDs is a nostalgia I'm happy to leave in the past.
My 1.67 PowerBook G4 screams with a 256GB mSATA SSD-to-IDE adapter. Until you start compiling code or web surfing, it still feels like a pretty modern machine. I kind of wish I hadn't tried the same upgrade on an iBook G3, though...
Would those be bandwidth limited by the adapter/card or the CPU? Can you get throughput higher than, say, a cheap 2.5" SSD over SATA 3/4?
I had a 2011 MBP that I kept running by replacing the HDD with an SSD, and then swapping the DVD-ROM drive for a second SSD. The second SSD had throughput limits because that bay was designed for a shiny round disc, so it had a slower interface. I kept that machine until the third GPU replacement died, and eventually switched to a second-gen butterfly keyboard model. The only reason it was tolerable was because of the SSDs, oh, and the RAM upgrades.
While these are geared toward retrocomputing, there are things that attempt to simulate the sound based on activity LEDs: https://www.serdashop.com/HDDClicker
I'm running 12 of them for ZFS cache/log/special, and they are fast/tough enough to make a large array on a slow link feel fast. I shake my fist at Intel and Micron for taking away one of the best memory technologies to ever exist.
I have (stupidly) used a too small Samsung EVO drive as a caching drive, and that is probably the first computer part that I've worn out (bar a mouse & keyboard).
Totally. I spent a lot of time 15-20 years ago building out large email systems.
I recently bought a $17 SSD for my son’s middle school project that was spec'd to deliver about 3x what I needed in those days. From a storage perspective, I was probably spending $50 per GB per month all-in to deploy a multi-million-dollar storage solution. TBH… you’d probably smoke that system with used laptops today.
Woah, how long would that last before you'd start having to replace the drives?
If you're interested in some hard data, Backblaze publishes their HD failure numbers[1]. These disks are storage optimized, not performance optimized like the parent comment, but they have a pretty large collection of various hard drives, and it's pretty interesting to see how reliability can vary dramatically across brand and model.
1. https://www.backblaze.com/cloud-storage/resources/hard-drive...
Depending on the HDD vendor/model. We had hot spares and cold spares. On one build, we had a bad batch of drives. We built the array on a Friday and left it running for burn-in over the weekend. On Monday, we came in to a bunch of alarms and a >50% failure rate. At least they died during burn-in, so no data loss, but it was an extreme example. That was across multiple 16-bay rack-mount chassis. It was an infamous case, though; we were not alone.
More typically, you'd have a drive die much less frequently, but it was something you absolutely had to be prepared for. With RAID-6 and a hot spare, you could be okay with a single drive failure. Theoretically, you could lose two, but it would be a very nervy day getting the array to rebuild without issue.
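To see why that rebuild window is the nervy part, here's a back-of-the-envelope sketch. It assumes independent failures and a constant annualized failure rate, which, as the bad-batch story above shows, real drives are happy to violate; the AFR values and the 3-day rebuild window are made up for illustration.

```python
from math import comb

def p_at_least_k_failures(n_drives, k, afr, window_days):
    """P(at least k of n drives fail within the window), assuming
    independent failures at a constant annualized failure rate."""
    p = 1 - (1 - afr) ** (window_days / 365)  # per-drive failure prob over the window
    return sum(comb(n_drives, i) * p**i * (1 - p)**(n_drives - i)
               for i in range(k, n_drives + 1))

# A 16-drive RAID-6 array loses data roughly when a third drive dies
# before the first two failures have been rebuilt (say a 3-day window).
print(p_at_least_k_failures(16, 3, afr=0.05, window_days=3))  # healthy drives: ~4e-8
print(p_at_least_k_failures(16, 3, afr=0.50, window_days=3))  # a bad batch: ~1e-4
```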
The article speculates on why Apple integrates the SSD controller onto the SOC for their A and M series chips, but misses one big reason, data integrity.
About a decade and a half ago, Apple paid half a billion dollars to acquire the patents of a company making enterprise SSD controllers.
> Anobit appears to be applying a lot of signal processing techniques in addition to ECC to address the issue of NAND reliability and data retention. In its patents there are mentions of periodically refreshing cells whose voltages may have drifted, exploiting some of the behaviors of adjacent cells and generally trying to deal with the things that happen to NAND once it's been worn considerably.
> Through all of these efforts, Anobit is promising significant improvements in NAND longevity and reliability.
https://www.anandtech.com/show/5258/apple-acquires-anobit-br...
> The article speculates on why Apple integrates the SSD controller onto the SOC for their A and M series chips, but misses one big reason, data integrity.
If they're really interested in data integrity, they should add checksums to APFS.
If you don't have RAID you can't rebuild corrupted data, but at least you know there's a problem and can perhaps restore from Time Machine.
For metadata you may have multiple copies, so you can use a known-good one (this is how ZFS works: some things have multiple copies 'inherently' because they're so important).
Edit:
> Apple File System uses checksums to ensure data integrity for metadata but not for the actual user data, relying instead on error-correcting code (ECC) mechanisms in the storage hardware.[18]
* https://en.wikipedia.org/wiki/Apple_File_System#Data_integri...
> If they're really interested in data integrity, they should add checksums to APFS.
Or you can spend half a billion dollars to solve the issue in hardware.
As one of the creators of ZFS wrote when APFS was announced:
> Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both NAND flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The Apple engineers contend that Apple devices basically don't return bogus data.
https://arstechnica.com/gadgets/2016/06/a-zfs-developers-ana...
APFS keeps redundant copies and checksums for metadata, but doesn't constantly checksum files looking for changes any more than NTFS does.
I use zfs where I can (it has content checksums) but it sucks bad on macOS, so I wrote attrsum (https://git.eeqj.de/sneak/attrsum). It keeps the file content checksum in an xattr (which APFS (and ext3/4) supports).
I use it to protect my photo library on a huge external SSD formatted with APFS (encrypted, natch) because I need to mount it on a mac laptop for Lightroom.
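The idea is simple enough to sketch. This isn't attrsum itself, just an illustration of the approach, assuming Python on a Linux filesystem with user xattrs enabled (on macOS you'd go through the third-party xattr package instead of os.setxattr), and the attribute name here is made up:

```python
import hashlib, os, sys

XATTR_KEY = "user.sha256"  # hypothetical attribute name, not attrsum's actual key

def file_sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def seal(path):
    """Record the file's current content hash in an extended attribute."""
    os.setxattr(path, XATTR_KEY, file_sha256(path).encode())

def verify(path):
    """Recompute the hash and compare it against the stored xattr."""
    try:
        stored = os.getxattr(path, XATTR_KEY).decode()
    except OSError:
        return None  # never sealed
    return stored == file_sha256(path)

if __name__ == "__main__":
    mode, files = sys.argv[1], sys.argv[2:]  # usage: script.py seal|verify FILE...
    for p in files:
        seal(p) if mode == "seal" else print(p, verify(p))
```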
Odds are very good that totally different people work on the architecture of APFS and SoC design.
Note that this isn't too long after Apple abandoned efforts to bring ZFS into Mac OS X as a potential default filesystem. Patents were probably a good reason, given the Oracle buyout of Sun, but also a bit of "skating to where the puck will be" and realizing that the spinning rust ZFS was built for probably wasn't going to be in their computers for much longer.
> Patents were probably a good reason, given the Oracle buyout of Sun
There is no reason to speculate, as the reason is known (as stated by Jeff Bonwick, one of the co-inventors of ZFS):
>> Apple can currently just take the ZFS CDDL code and incorporate it (like they did with DTrace), but it may be that they wanted a "private license" from Sun (with appropriate technical support and indemnification), and the two entities couldn't come to mutually agreeable terms.
> I cannot disclose details, but that is the essence of it.
* https://archive.is/http://mail.opensolaris.org/pipermail/zfs...
* https://web.archive.org/web/*/http://mail.opensolaris.org/pi...
More evidence they thought HDDs were on their way out was the unibody MacBook keynote. They made a big deal about how the user could access the HDD from the latch on the bottom without any tools, since, as they said, SSDs were on the horizon.
Do Apple SSDs actually have much better longevity and reliability? I've not looked at the specific patents, nor am I an expert on signal processing, but I've worked on SSD controllers and with NAND manufacturers in the past, and they had their own similar ideas.
From my experience working on Mac laptops, yeah. SSD failures are incredibly rare but on the flip side when they do go out repairs are very costly.
Every flash controller does this. Modern NAND is just math on a stick. Lots and lots of math.
Still sucks that you can’t use standard parts.
In my previous job at a large hard drive manufacturer, we had special Apple drives that ran different parts and firmware than the regular PC drives. Their specs and tolerances were much different from the PC market as a whole.
Interestingly, when the M4 Mac mini went on sale, the version with 32GB RAM / 1TB drive was priced at exactly 2x the 16GB RAM / 512GB drive version. This kinda implies that Apple sells only RAM and storage, and gives away the rest for free.
There is someone on YT (Doctor Feng, or similar, though I can't find it) who will literally have people ship him entry-level iPhones/iPads/MBPs, etc., and he'll upgrade them to 4 and 8 TB SSDs. And create ASMR videos of the process.
DirectorFeng: https://www.youtube.com/channel/UCbzzMQ1mNKjAaDwbELsVYcQ
Even with upgradable memory:
When I bought my "cheesegrater" Mac Pro, I wanted 8TB of SSD.
Except Apple wanted $3,000 for the extra 7TB of SSD (the sticker price included a baseline of 1TB).
I bought a 4xM.2 card and 4x2TB Samsung Pro SSDs, which cost me $1,300. I kept the 1TB "system" SSD, though the new array was actually faster: 6.8 GB/s versus the system drive's 5.5 GB/s.
Similar to memory. OWC literally sells the same memory as Apple (same manufacturer, same specifications). Apple also wanted $3,000 for 160GB of memory (going from 32 to 192). I paid $1,000.
It makes no sense for nonvolatile storage, where the power consumption, bandwidth limit, and latency of socketed interconnects are trivial, compared to the speed, latency, and power consumption of the drive itself.
For RAM, it's an entirely different ball game. The closer you can have it to the processor die, the higher the bandwidth, the lower the latency, and the lower the power consumption.
On the one hand, I get you. On the other, I’m just not sure we live in that world anymore for most people.
My daily driver is a base config 16” M1 MacBook Pro from 2021 and I have no inclinations to upgrade at all. Even the battery is still good.
I run CAD, and compile and run large C++ projects. I do tons of heavy stuff in MATLAB, plus various visualizations of simulations. My laptop just isn’t the slow thing anymore. I’m sure workloads exist that would push this machine, but how many people are actually doing that? (Ok fine, Chrome exists.)
Even the smaller SSD isn’t an issue for me in practice because iCloud Drive and Box automatically move things I don’t often access off disk freeing local space.
Frankly, if smaller memory footprints and smaller SSDs translate to lower base-config prices and longer battery life, it was the right choice, for me anyway.
I bought one during their preorder period. The first SSD started to fail due to overheating. I just received and installed the replacement this week. Fingers crossed that it will be okay.
Important note: the seller provides no warranty for the SSDs. I was fortunate that they offered a 1-year warranty when I bought mine, but that is no longer the case now. $700 is a pretty big risk when there's no warranty.
FWIW, the non-Pro-compatible SSDs were overpriced initially as well, but they came down in price as they became more prevalent. Wait a few months, and we'll probably see the same with Pro-compatible SSDs.
> I was provided the $699 M4 Pro 4TB SSD upgrade by M4-SSD. It's quite expensive (especially compared to normal 4TB NVMe SSDs, which range from $200-400)...
Yes, that kind of culture is why, while I appreciate many of Apple's technologies, I'd rather let customers or employers provide the hardware if they feel so inclined for me to use Apple.
Privately it is all about Linux/Windows/Android.
Very good insights,
https://en.m.wikipedia.org/wiki/The_Cult_of_Mac
Looks like you also have to do the upgrade yourself (so it’s not all just cash money being forked over).
Saving $500 for 30 min of actual work that’s also easily reversible if needed for a support case is too good to ignore.
I've been traveling for business with this as my sole machine for 3 months straight and it has proven to be an excellent system.
Mother's Mac mini 2014? Slow as a dog, 30-second pauses, became unusable. It's extremely tricky to reach the 5400rpm hard disc, but I found a third-party adapter that could bodge an NVMe drive in under the easily removable base flap. That suddenly transformed it into a fast, nippy, usable machine. (She paid up for the 8GB RAM originally.) But I'm still rather annoyed that Apple essentially crippled their own product and that it could only be fixed by chance. It wasn't a cheap PC...
> Fix the cables in place. This can be very fiddly. It helps greatly to have a fine pointed set of tweezers to assist with placement, bending and the application of pressure whilst screw-down is underway. Take your time and try to get all the cable core under the screw or at least a fair amount.
If you do this mod, you should really use crimped ring connectors instead of just hooking the power cables around the screws. It greatly reduces the risk of pull-out since the screw retains the connector, which also means less chance of shorts and a much easier install. Also since the terminals are uniform and flat, you get much more even clamping. I would also add heat shrink over the crimp.
I don't have a Mini so can't comment on the right size to buy, but you can buy ring terminals in practically any diameter for next to nothing:
https://www.digikey.com/en/products/filter/terminals/ring-co...
Are they populated from the existing PSU input or just there in case anyone wanted to mod it?
They probably use them in production as test-jig connects for passing power. They are vertical inter-board rails. When making physical connections for high-current contacts, it pays to have a larger surface area in case there is a poor connection, as substantial draw may occur for short periods. Also, such surfaces may degrade over time, so extra surface area is desirable.
The linked repo has a pretty good rundown of possible reasons:
> If non-square screens on Macbook Pros make your blood boil with rage
> If you can't afford or don't want to pay for a Macbook Pro (smart choice)
> If you have ergonomics concerns with shrinking laptops and one size fits all keyboards
> If you like your systems to be repairable and modular rather than comprised of proprietary parts shoehorned in to a closed source design available only from a single vendor for a limited time
> If you are blind (and don't want to carry a screen around)
> If you want to use AR instead of a screen and therefore prefer to be untethered
> If you are on a sailing ship, submarine, mobile home, campervan, paraglider, recumbent touring bicycle, or otherwise off-grid
> If you want a capable unix system to power a mobile mechatronic system
I'd add in not having to deal with a Macbook in clamshell mode doing stupid crap like forcing you to double-tap the touchID button sometimes, refusing to connect to external keyboards and mice on wake, and some of the other annoyances I have dealt with.
Also, a Mac Mini is small, and a MacBook is not, at least as a function of "desk area" vs "area consumed".
1. cheaper
2. different form factor
3. more choice of battery/kb/mouse/screen/camera
4. not landfill when you have to replace battery/kb/mouse/screen/camera
5. doesn't have an annoying chunk out of the screen
6. doesn't have a video camera pointed at you all the time
7. keyboard that suits large hands
8. keyboard in preferred layout
9. not subject to apple tax on most components/upgrades