I've now ordered a Beelink GTR9 Pro, which, unlike the Framework, has dual 10G Ethernet rather than the weird 5G flavor. We'll see how that goes.
This is the real story, not the conspiracy-tinged market-segmentation one. The latter is silly because at the levels where high-end consumer/enthusiast Ryzen (say, a 9950X3D) and the lowest-end Threadripper/EPYC (most likely a previous-gen chip) genuinely overlap in performance, the former will generally cost you more!
However, Apple will let you upgrade to the Pro (double the bandwidth), Max (4x the bandwidth), or Ultra (8x the bandwidth). The M4 Max is still efficient and gives decent battery life in a thin, light laptop. Even the Ultra stays pretty quiet and cool in a tiny Mac Studio, MUCH smaller than any Threadripper Pro build I've seen.
It does mystify me that x86 has a hard time matching even a Mac mini Pro on bandwidth, let alone the models with 2x or 4x the memory bandwidth.
We need a bigger memory controller.
To get more traces to the memory controller, we need more pins on the CPU.
Now we need a bigger CPU package to accommodate those pins.
Now we need a motherboard with more traces, which requires more layers, which means a more expensive motherboard.
We need a bigger motherboard to accommodate the 6 or 8 DIMM sockets.
The additional traces, longer trace runs, and extra motherboard layers make the signalling harder, which likely pushes you to ECC or even registered ECC.
So we need a more expensive CPU, a more expensive motherboard, more power, more cooling, and a larger system. Congratulations, you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). All of them are larger and more expensive, draw more than 2x the power, and are likely to live in a $5k-$15k workstation/server, not a $2k Framework Desktop about the size of a liter of milk. A rough bandwidth calculation is sketched below.
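To make the scaling concrete, here's a back-of-the-envelope in Python: peak bandwidth is roughly bus width times transfer rate. The bus widths and transfer rates below are typical assumed figures, not measured numbers for any specific SKU.

```python
# Rough peak-bandwidth arithmetic: GB/s ~= (bus width in bits / 8) * MT/s / 1000.
# Bus widths and transfer rates are assumed/typical values, not measurements.
def peak_gb_s(bus_bits: int, mt_s: int) -> float:
    return bus_bits / 8 * mt_s / 1000

configs = {
    "Dual-channel DDR5-6000 desktop (128-bit)":  (128, 6000),
    "Strix Halo LPDDR5X-8000 (256-bit)":         (256, 8000),
    "Apple M4 Pro-class LPDDR5X-8533 (256-bit)": (256, 8533),
    "Apple M4 Max-class LPDDR5X-8533 (512-bit)": (512, 8533),
    "Threadripper Pro, 8ch DDR5-6400 (512-bit)": (512, 6400),
    "EPYC, 12ch DDR5-6000 (768-bit)":            (768, 6000),
}

for name, (bits, mt_s) in configs.items():
    print(f"{name:45s} ~{peak_gb_s(bits, mt_s):4.0f} GB/s")
```

The only two levers are a wider bus (more channels, hence more pins, traces, and layers, as above) or faster transfers, which is exactly why the socketed high-bandwidth options all end up in workstation/server territory.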
And even more curious, the Framework Desktop is deliberately less repairable than their laptops: the RAM is soldered on. That makes it a very strange entry for a brand marketing itself as the DIY dream manufacturer. They threw away their user-repairable mantra when they made the Desktop; it's less user-repairable than most other desktops you could go out and buy today.
If you want the high memory bandwidth, get the Strix Halo; if not, get any normal PC. Sure, Apple has the bandwidth as well, but it also comes with soldered memory.
If you want DIMMs and you want the memory bandwidth, get a Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). But be prepared to double your power (at least), quadruple your space, double your cost, and still not have a decent GPU.
So it's not full ECC like servers have, where DIMMs with a multiple of 9 chips provide ECC protection for everything from the DIMMs to the CPU.
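For anyone wondering where the multiple-of-9 figure comes from, here's a simplified sketch (DDR4-style x8 DIMMs assumed; DDR5 splits each DIMM into subchannels with their own ECC bits, so the exact chip counts differ):

```python
# Simplified sketch of why ECC server DIMMs come in multiples of 9 chips (x8 parts assumed).
data_bits = 64    # data width of the memory channel
ecc_bits = 8      # extra side-band ECC bits carried alongside the data
chip_width = 8    # each "x8" DRAM chip supplies 8 bits

chips_per_rank = (data_bits + ecc_bits) // chip_width
print(chips_per_rank)  # 9: eight chips of data plus one of ECC
```

The point is that the ECC bits travel over the bus with the data, which is what protects the whole DIMM-to-CPU path rather than just the bits sitting inside the DRAM array.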
Keep in mind the RAM has to be soldered right next to the Strix Halo package (LPDDR5X has no socketed option), so it's not something HP has control over.
It seems like Threadripper is too low-volume to be price-competitive with EPYC, and there's a relatively price-insensitive workstation market out there.
I think the Xeon systems should have worked and that it was actually a motherboard BIOS issue, but I had seen a photo of it running in a Threadripper and prayed I wasn't digging an even deeper hole.
If you need the memory bandwidth, the Strix Halo looks good; if your workload is cache-friendly and you don't mind using almost double the power, the 9950X is a better deal.
The logistics of this archive are quite crazy; most 2-4U JBODs I've worked with hold something like 24 or 45 SFF SAS disks.
The standard size (unless things have changed) for 10k SFF SAS disks seems to be about 1.2TB, so you'd need roughly 544 of them to build a RAIDZ big enough. So we're talking 12 4U JBODs, well over a full rack.
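As a sanity check on those numbers, a quick calculation under the same assumptions (650TB target, 1.2TB disks, 45-bay 4U JBODs), before any RAID parity or spares:

```python
# Raw disk and chassis count for the archive; parity and spares would push this higher.
import math

archive_tb = 650
disk_tb = 1.2
bays_per_jbod = 45

disks = math.ceil(archive_tb / disk_tb)   # ~542 disks before redundancy
jbods = disks / bays_per_jbod             # ~12 chassis, i.e. ~48U of JBODs alone
print(disks, round(jbods, 1))
```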
I guess I can just hope some rich techie with a volcano lair / private datacenter somewhere is keeping a copy..
Buy a Data60 (60-disk chassis) and add 60 drives. Buy a 1U server (2 for redundancy). I'd recommend 5 stripes of 11 drives (55 total) with 5 global spares. Use RAIDZ3, so 8 disks of data per 11 drives.
Total storage should be around 8 * 24 * 5 = 960TB, likely 10% less because drives are marketed in 10^12 bytes rather than binary units. Take off another 10% because ZFS doesn't like to get very full. So something like 777TB usable, which easily fits 650TB.
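The arithmetic behind that, using the same assumptions as above (5 RAIDZ3 vdevs of 11 x 24TB drives, ~10% haircuts each for marketed-vs-binary sizing and ZFS free space):

```python
# Usable-capacity estimate for 5 x 11-wide RAIDZ3 vdevs of 24TB drives.
data_per_vdev = 11 - 3              # RAIDZ3: 3 parity drives per 11-wide vdev
raw_tb = data_per_vdev * 24 * 5     # 960 "marketing" TB
after_units = raw_tb * 0.9          # ~10% off for 10**12-byte TB vs binary units
usable = after_units * 0.9          # keep ~10% free so ZFS stays happy
print(raw_tb, round(usable, 1))     # 960, ~777.6 -> comfortably above 650TB
```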
I'd recommend a pair of 2TB NVMe drives with a high DWPD rating as a cache.
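For a sense of what the DWPD rating buys on a cache device, a small illustrative calculation (the capacity, DWPD values, and warranty length are assumptions, not a specific product):

```python
# Endurance over the warranty period = capacity * DWPD * days. Values are illustrative.
capacity_tb = 2
high_dwpd, consumer_dwpd = 3, 0.3
warranty_days = 5 * 365

print(capacity_tb * high_dwpd * warranty_days)      # ~10950 TB written for a 3 DWPD part
print(capacity_tb * consumer_dwpd * warranty_days)  # ~1095 TB for a typical 0.3 DWPD drive
```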
The disks will cost about $18k, the Data60 is 4U, and the server to connect it is 1U. If you want more space, upgrade to 30TB drives ($550 each) or buy another Data60 full of drives.
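Under the prices quoted here (the per-drive figure for the 24TB disks is implied by $18k / 60 drives), the cost per TB works out roughly like this:

```python
# $/TB under the assumed prices above; the 30TB option buys density, not cheaper capacity.
price_24tb = 18_000 / 60        # ~$300 per 24TB drive
price_30tb = 550

print(round(price_24tb / 24, 2))   # ~$12.5 per TB
print(round(price_30tb / 30, 2))   # ~$18.33 per TB
```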