mhitza · 7 months ago
I've run a comparison benchmark for the smaller models https://gist.github.com/mhitza/f5a8eeb298feb239de10f9f60f841...

I compared it against the RTX 4000 SFF Ada (20GB), which is around $1.2k (if you believe the original price on the Nvidia website https://marketplace.nvidia.com/en-us/enterprise/laptops-work...) and which I have access to on a Hetzner GEX44.

I'd ballpark the RTX 4000 at 2.5-3x faster than the Framework Desktop, except for the tg128 test, where the difference is "minimal" (but I didn't do the math).

yencabulator · 7 months ago
The whole point of these integrated memory designs is to go beyond that 20 GB VRAM.
throwdbaaway · 7 months ago
Actually, you can combine them. Compared to a Mac Studio, the main advantage of these Strix Halo boxes is that you can still add a bunch of eGPUs over USB4/OCuLink for better PP (prompt processing).
Tsiklon · 7 months ago
I see Wendell of Level1Techs combines the two in his video on this system.

Theoretically you can have the best of both worlds if you don’t mind running an OCuLink eGPU enclosure

https://youtu.be/L-xgMQ-7lW0

reissbaker · 7 months ago
Thanks for the excellent writeup. I'm pleasantly surprised that ROCm worked as well as it did — for the price these aren't bad for LLM workloads and some moderate gaming. (Apple is probably still the king of affordable at-home inference, but for games... macOS gaming is amazing these days, but Linux is so much better.)
mulmen · 7 months ago
I switched to Fedora Sway as my daily driver nearly two years ago. A Windows title wasn’t working on my brand new PC. I switched to Steam+Proton+Fedora and it worked immediately. Valve now offers a more stable and complete Windows API through Proton than Microsoft does through Windows itself.
jeffbee · 7 months ago
I had been hoping that these would be a bit faster than the 9950X because of the different memory architecture, but it appears that due to the lower power design point the AI Max+ 395 loses across the board, by large margins. So I guess these really are niche products for ML users only, and people with generic workloads that want more than the 9950X offers are shopping for a Threadripper.
dijit · 7 months ago
Sounds about right.

I’m struggling to justify the cost of a Threadripper (let alone pro!) for an AAA game studio though.

I wonder who can justify these machines. High-frequency trading? Data science? Shouldn’t that be done on servers?

kadoban · 7 months ago
Threadripper very rarely seems to make any sense. The only times it seems like you want it are for huge memory support/bandwidth and/or a huge number of PCIe slots. But it's not cheap or supported enough compared to Epyc to really make sense to me any time I've been speccing out a system along those lines.
jeffbee · 7 months ago
Yeah I don't get it either. To get marginally more resources than the 9950X you have to make a significant leap in price to a $1500+ CPU on a $1000 motherboard.
sliken · 7 months ago
Across the board, by a large margin? Phoronix ran 200 benchmarks on the 9950X vs. the AI Max+ 395 and found a difference of less than 5%. Not bad considering the average power use was 91 watts vs. 154 watts.

If you need the memory bandwidth, the Strix Halo looks good; if you are cache-friendly and don't care about using almost double the power, then the 9950X is a better deal.

jeffbee · 7 months ago
The way Phoronix weights an average score for a machine is ridiculous, because there aren't any users who do all of machine learning, fluid dynamics, video compression, database hosting, software development, and gaming on the same machine. I looked at the applications that matter to me and the 9950X wins by 40% in those.
rtkwe · 7 months ago
It also seems like the tools aren't there to fully utilize them. Unless I misunderstood, he was running off the CPU only for all the tests, so there's still iGPU and NPU performance that hasn't been utilized in these tests.
geerlingguy · 7 months ago
No, only a couple initial tests with Ollama used CPU. I ran most tests on Vulkan / iGPU, and some on ROCm (read further down the thread).

I found it difficult to install ROCm on Fedora 42 but after upgrading to Rawhide it was easy, so I re-tested everything with ROCm vs Vulkan.

Ollama, for some silly reason, doesn't support Vulkan even though I've used a fork many times to get full GPU acceleration with it on Pi, Ampere, and even this AMD system... (moral of the story: just stick with llama.cpp).

adolph · 7 months ago
The Framework Desktop has at least two M.2 connectors for NVMe. I wonder if an interconnect with higher performance than Ethernet or Thunderbolt could be established by using one of the M.2 slots to connect to PCIe via OCuLink?
nrp · 7 months ago
There is also a PCIe x4 slot that you can use for other high throughput network options.
adolph · 7 months ago
I missed that. Too bad it is under the power cables. It’d be hard to fit something in there using the stock case.
_joel · 7 months ago
> usually resulting in one word repeating ad infinitum

I've had that using Gemini (via Windsurf). It doesn't seem to happen with other models. No idea if there's any correlation, but it's an interesting failure mode.

mattnewton · 7 months ago
This is usually a symptom of greedy sampling (always picking the most probable token) on smaller models. It's possible that configuration had different sampling defaults, i.e. was not using top-p or temperature. I'm not familiar with distributed-llama, but from searching the git repo it looks like it at least takes a --temperature flag and probably has one for top-p.

I'd recommend rerunning the benchmarks with the sampling methods explicitly configured the same in each tool. It's tempting to benchmark with all the nondeterminism turned off, but I think it's less useful, since in practice, for any model you're self-hosting for real work, you're probably going to want top-p sampling or something like it, and you want to benchmark the implementation of that too.
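
To make the distinction concrete, here's a minimal Python sketch of greedy vs. temperature/top-p decoding over a toy logits vector. This is just the general technique, not how distributed-llama or llama.cpp actually implement their samplers:

```python
import numpy as np

def greedy(logits):
    # Always pick the single most probable token: deterministic, and the
    # classic way small models get stuck repeating one word forever.
    return int(np.argmax(logits))

def sample_top_p(logits, temperature=0.8, top_p=0.95):
    # Temperature-scale the logits, then softmax into probabilities.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Keep the smallest set of tokens whose cumulative probability >= top_p,
    # renormalize, and draw from that set.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cumulative, top_p)) + 1]
    kept_probs = probs[keep] / probs[keep].sum()
    return int(np.random.default_rng().choice(keep, p=kept_probs))

# Toy 5-token vocabulary: greedy always answers 0, top-p mixes in 1 (rarely 2).
logits = [2.0, 1.9, 0.5, -1.0, -3.0]
print(greedy(logits), [sample_top_p(logits) for _ in range(10)])
```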

I've never seen Gemini do this though; that'd be kinda wild if they shipped something that samples that way. I wonder if Windsurf was sending a different config over the API or if this was a different bug.

justinclift · 7 months ago
I've seen that occasionally with one of the deepseek models when using the default Ollama context size of 4096, rather than whatever the model's preferred context size was.

After having that happen, I switched my stuff to check the model's preferred context size, then set the context size to match, before using any given model.
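
In case it's useful to anyone else, here's a rough sketch of that flow against Ollama's REST API (assuming a local server on the default port; the model tag and the exact model_info key names are from memory, so treat them as assumptions and check against your Ollama version):

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"
MODEL = "llama3.1:8b"  # hypothetical example tag; use whatever you actually run

def post(path, payload):
    req = urllib.request.Request(
        f"{OLLAMA}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# /api/show returns model metadata; the trained context length sits in
# model_info under an architecture-prefixed key like "llama.context_length".
info = post("/api/show", {"model": MODEL})
ctx = next(int(v) for k, v in info.get("model_info", {}).items()
           if k.endswith(".context_length"))

# Pass the model's own context length instead of Ollama's small default.
# (On a RAM-limited box you may want to cap this well below the maximum.)
reply = post("/api/generate", {
    "model": MODEL,
    "prompt": "Hello",
    "stream": False,
    "options": {"num_ctx": ctx},
})
print(f"num_ctx={ctx}", reply["response"][:80])
```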

mrbungie · 7 months ago
Yep, sometimes Gemini for some reason ends up in what I call "ergodic self-flagellation".

Here are some examples: https://www.reddit.com/r/GeminiAI/comments/1lxqbxa/i_am_actu...

iamtheworstdev · 7 months ago
For those who are already in the field and doing these things: if I wanted to start running my own local LLM, should I find an Nvidia 5080 GPU for my current desktop, or is it worth trying one of these Framework AMD desktops?
loudmax · 7 months ago
The short answer is that the best value is a used RTX 3090 (the long answer being, naturally, it depends). Most of the time, the bottleneck for running LLMs on consumer grade equipment is memory and memory bandwidth. A 3090 has 24GB of VRAM, while a 5080 only has 16GB of VRAM. For models that can fit inside 16GB of VRAM, the 5080 will certainly be faster than the 3090, but the 3090 can run models that simply won't fit on a 5080. You can offload part of the model onto the CPU and system RAM, but running a model on a desktop CPU is an enormous drag, even when only partially offloaded.

Obviously an RTX 5090 with 32GB of VRAM is even better, but they cost around $2000, if you can find one.

What's interesting about this Strix Halo system is that it has 128GB of RAM that is accessible (or mostly accessible) to the CPU/GPU/APU. This means that you can run much larger models on this system than you possibly could on a 3090, or even a 5090. The performance tests tend to show that the Strix Halo's memory bandwidth is a significant bottleneck though. This system might be the most affordable way of running 100GB+ models, but it won't be fast.
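
As a rough back-of-envelope illustration of why the VRAM number dominates (weights only, ignoring KV cache and runtime overhead; the bytes-per-weight ratios are approximate assumptions for common llama.cpp quants):

```python
# Back-of-envelope weight sizes: params * bytes-per-weight for a given quant.
# These ratios are approximate, and KV cache/overhead add a few more GB.
BYTES_PER_WEIGHT = {"fp16": 2.0, "q8_0": 1.06, "q4_k_m": 0.59}

def weights_gb(params_billion: float, quant: str) -> float:
    return params_billion * 1e9 * BYTES_PER_WEIGHT[quant] / 1024**3

for params in (8, 14, 32, 70, 109):
    gb = weights_gb(params, "q4_k_m")
    fits = "fits" if gb < 24 else "needs offload / bigger memory"
    print(f"{params:>3}B @ Q4_K_M ~ {gb:5.1f} GB -> {fits} on a 24 GB 3090")
```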

behohippy · 7 months ago
Used 3090s have been getting expensive in some markets. Another option is dual 5060ti 16 gig. Mine are lower powered, single 8 pin power, so they max out around 180W. With that I'm getting 80t/s on the new qwen 3 30b a3b models, and around 21t/s on Gemma 27b with vision. Cheap and cheerful setup if you can find the cards at MSRP.
cpburns2009 · 7 months ago
Just a point of clarification. I believe the 128GB Strix Halo can only allocate up to 96GB of RAM to the GPU.
wmf · 7 months ago
If you think the future is small models (27B) get Nvidia; if you think larger models (70-120B) are worth it then you need AMD or Apple.
yencabulator · 7 months ago
I wonder how much MoE will disrupt this. qwen3:30b-a3b is pretty good even on pure CPU, but a lot smarter than a 3B parameter model. If the CPU-GPU bottleneck isn't too tight, a large model might be able to sustainably cache the currently active experts in GPU RAM.
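
Rough arithmetic on why that works (the bits-per-weight and bandwidth figures below are ballpark assumptions): a 30B-A3B MoE has to keep all 30B weights resident, but only reads the ~3B active parameters per generated token, so decode costs roughly a tenth of the memory bandwidth of a dense 30B model.

```python
# A 30B-A3B MoE keeps all weights resident but only reads the active
# parameters per token, so decode bandwidth is ~10x lower than dense 30B.
bytes_per_weight = 4.8 / 8        # ~Q4-class quantization (assumption)
total_params, active_params = 30e9, 3e9
dram_bw_gb_s = 100                # ballpark dual-channel DDR5 (assumption)

resident_gb = total_params * bytes_per_weight / 1024**3
per_token_gb = active_params * bytes_per_weight / 1024**3

print(f"resident weights ~ {resident_gb:.0f} GB")
print(f"read per token   ~ {per_token_gb:.1f} GB")
print(f"bandwidth-bound ceiling ~ {dram_bw_gb_s / per_token_gb:.0f} tok/s")
```
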
jauntywundrkind · 7 months ago
> For networking, I expected more out of the Thunderbolt / USB4 ports, but could only get 10 Gbps.

I really wish we saw more testing of USB subsystems! With PCIe being so limited, there's such allure to having two USB4 ports! But will they work?

IIRC we saw similar very low bandwidth on Apple's ARM chips too. This was during M1 or so; dunno if things got better with that chip or future ones! Presumably so, or I feel like we'd be hearing about it, but also, these things can just stay hidden!

It was really cool back in the Ryzen 1 era seeing their CPUs get some USB on the CPU itself, not having to go through the I/O/peripheral hub (southbridge?), with its limited connection to the CPU. There's a great breakout chart here, showing both the 1800X and the various chipsets available (relishable data): https://www.techpowerup.com/cpu-specs/ryzen-7-1800x.c1879

I feel like there have been some recent improvements to USB4/Thunderbolt in the kernel, to really ensure all lanes get used, but I'm struggling to find a reference/link. What kernel was this tested against? If nothing else, it'd be great to poke around in debugfs, to make sure all the lanes are getting configured. https://www.phoronix.com/news/Linux-6.13-USB-Changes
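
For a quick sanity check without digging through debugfs, the kernel's thunderbolt driver also exposes per-device link attributes in sysfs. A small sketch (attribute names are from memory and availability varies by kernel version and device, so treat this as best-effort):

```python
from pathlib import Path

# The thunderbolt driver exposes per-device link attributes in sysfs.
# rx_speed/tx_speed report per-lane speed, rx_lanes/tx_lanes the lane count;
# which attributes exist varies by kernel and device.
for dev in sorted(Path("/sys/bus/thunderbolt/devices").glob("*")):
    attrs = {}
    for name in ("device_name", "rx_speed", "rx_lanes", "tx_speed", "tx_lanes"):
        path = dev / name
        if path.is_file():
            attrs[name] = path.read_text().strip()
    if attrs:
        print(dev.name, attrs)
```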

Havoc · 7 months ago
Jeff - check out the distributed-llama project... you should be able to distribute over the entire cluster
geerlingguy · 7 months ago
I've been testing Exo (seems dead), llama.cpp RPC (has a lot of performance limitations) and distributed-llama (faster but has some Vulkan quirks and only works with a few models).

See my AI cluster automation setup here: https://github.com/geerlingguy/beowulf-ai-cluster

I was building that through the course of making this video, because it's insane how much manual labor people put into building home AI clusters :D

burnte · 7 months ago
He mentioned that in the video.