There's no way buying a bunch of minis could be as efficient as much denser GPU racks. You have to consider all the logistics and power draw, and high-end Nvidia hardware, and probably even AMD's, is faster than M-series GPUs.
What this does offer is a good alternative to GPUs for smaller scale use and research. At small scale it’s probably competitive.
Apple wants to dominate the pro and serious-amateur niches. Feels like they're realizing that local LLMs and AI research are part of that: the kind of thing end users would want big machines for.
I use the third-party app Harmonic to browse HN and it has a dark mode. It's quite nice, actually. Though I do think your tone is unusually inappropriate for a feature request.
I used it and it was good. The thing about the compression, for me, was that because it wasn't for broadcast or sharing, it didn't have to be as rigorous or detailed. Plus, it's a desktop: stuff doesn't change as often as in a movie.
Intrigued. Still looking, though, for robust multilingual end-to-end PDF-to-PDF archival OCR on consumer hardware that can replace OCRmyPDF without excessive customization for each run.
After trying a dozen or more paid and open-source options, I keep going back to MacWhisper. The dictation feature is in advanced beta but works well. The one thing I want that it doesn't have is the ability to assign different models to different tasks at the same time: one model for each drop folder, a different one for dictation, and another general-purpose one for drag-and-drop. I have the memory for it, and MacWhisper can flush a model after a period of disuse anyway.