c_o_n_v_e_x · 2 years ago
For anyone running locally, could you please describe your hardware setup? CPU only? CPU+GPU(s)? How much memory? What type of CPU? Particularly interested in larger models (say >30b params).

For transparency, I work for an x86 motherboard manufacturer and the LLM-on-local-hw space is very interesting. If you're having trouble finding the right HW, would love to hear those pain points.

brucethemoose2 · 2 years ago
The most popular performant desktop LLM runtimes are pure GPU (ExLlama) or GPU + CPU (llama.cpp). People stuff the biggest models that will fit into the collective RAM + VRAM pool, up to ~48GB for Llama 70B. Sometimes users will split models across two 24GB CUDA GPUs.
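For concreteness, a minimal sketch of that RAM + VRAM split using the llama-cpp-python bindings; the model path and layer count are placeholders you'd tune to your own VRAM:

```python
# Sketch only: offload part of a quantized model to the GPU, keep the rest in system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-70b.ggmlv3.q4_K_M.bin",  # placeholder path to a quantized model
    n_gpu_layers=40,  # how many transformer layers to offload to VRAM; 0 = CPU only
    n_ctx=4096,       # context window
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```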

Inference for me is bottlenecked by my GPU and CPU RAM bandwidth. TBH the biggest frustration is that OEMs like y'all can't double up VRAM like you could in the old days, or sell platforms with a beefy IGP, and that quad channel+ CPUs are too expensive.

Vulkan is a popular target that runtimes seem to be heading toward. IGPs with access to lots of memory capacity and bandwidth will be very desirable, and I hear Intel/AMD are cooking up quad-channel IGPs.

On the server side, everyone is running Nvidia boxes, I guess. But I had a dream about an affordable llama.cpp host: The cheapest Sapphire Rapids HBM SKUs, with no DIMM slots, on a tiny, dirt cheap motherboard you can pack into a rack like sardines. Llama.cpp is bottlenecked by bandwidth, and ~64GB is perfect.

mk_stjames · 2 years ago
If you're at a motherboard manufacturer, I have some definite pain points for you to hear.

One is: there are essentially zero motherboards that space out x16 PCIe slots so that you can properly use more than two triple-slot GPUs. 3090s and 4090s are all triple-slot cards, but motherboards usually put x16 slots two slots apart, with x8 or smaller slots in between. There may be a few that let you fit two such cards, but I don't think any support three, and definitely none support four. Obviously that would result in a non-standard-length (much taller) motherboard. But in the ML world it would be appreciated, because it would make quad-card builds possible without watercooling the cards or using A5000s/A6000s or other dual-slot, expensive datacenter cards.

And then, even for dual-slot cards like the A5000/A6000, there are very few motherboards with the x16 slots spaced appropriately. The Supermicro H12SSL-i is about the only one that has four x16 slots at proper double-slot spacing, arranged so you could run four blower or watercooled cards without overlapping anything else. And even when you do, the pin headers on the bottom of the motherboard interfere with the last card. That location for the pin headers is archaic and annoying and just needs to die.

Remember those mining-rig specialty motherboards, with all the wide-spaced PCIe slots for like 8 GPUs at once? We need that, but with x16-bandwidth slots. Those mining boards typically had only x1-bandwidth slots (even if they were x16 length), because for mining, bandwidth between the cards and the CPU isn't a problem, but for ML it is.

Sure, these won't fit the typical ATX case standards. But if you build it, they will come.

brucethemoose2 · 2 years ago
This would have to be a server board, or at least a HEDT board.

And yeah, as said below, 4x 4090s would trip most circuit breakers and require some contortions to power with a regular PSU. And it would be so expensive that you might as well buy 2x A6000s.

Really, the problem is no one will sell fast, sanely priced 32GB+ GPUs. I am hoping Intel will disrupt this awful status quo with Battlemage.

ohgodplsno · 2 years ago
The thought of the power draw of four 4090s going through a commercial PSU, combined with insisting on air cooling in a case that is now going to be horribly cramped with no airflow, probably keeps a firefighter awake at night sometimes.

There's no reasonable case for consumer motherboards having the space for that. Even SLI is more or less abandoned. Needless to say:

* Making a new motherboard form factor, incompatible with _everything in the market_

* Having therefore to make new cases, incompatible with _everything in the market_ (or, well, it'll be compatible; it'll just be extremely empty)

* Having to probably make your own PSUs because I wouldn't trust a constant 1200W draw from just GPUs on your average Seasonic PSU

If you build it, not only will no one come, but they also won't have the money to pay what you'd charge even just to offset costs.

Tepix · 2 years ago
I totally agree!

There are a few more boards with 4 x16 slots at 2 slot spacing:

GIGABYTE MU92-TU0

ASRock Rack SPC621D8 (3 variants)

ASRock C621A WS

Supermicro X12SPA-TF

ImprobableTruth · 2 years ago
I use two 3090s to run the 70B model at a good speed. It takes 32 GB of VRAM, more depending on context. I tried CPU+GPU (5900X + 3090), but with extended context it's slow enough that I wouldn't recommend it (~1 token/s). CPU only gets "let it run overnight" slow. It works ok-ish with a small context though (even if it's still "non-interactive" slow).
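Not the commenter's exact stack, but one way to spread a 4-bit 70B across two 24GB cards is Hugging Face transformers with accelerate + bitsandbytes; the model ID and memory caps below are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-chat-hf"  # gated repo; requires access approval
tok = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" (accelerate) shards layers across both GPUs;
# load_in_4bit (bitsandbytes) keeps the 70B weights around ~35 GB total.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,
    max_memory={0: "22GiB", 1: "22GiB"},
)

inputs = tok("Hello, my name is", return_tensors="pt").to("cuda:0")
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```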
josephg · 2 years ago
What’s the difference in output quality between that and the 33b parameter model? That would fit entirely in vram, right?
Tepix · 2 years ago
I'm running two RTX 3090s at PCIe 4.0 x8 on an X570 board w/ 128GB DDR4 @ 3200.

Going beyond that is very expensive right now.

The AMD X670 platform offers 28 PCIe 5.0 lanes from the CPU; can't you make a mainboard with three x16 PCIe 4.0 slots out of that? Ideally two models: one with 2-slot spacing (for watercooling) and another (oversized) board with 3-slot spacing for cases like the Fractal Design Meshify 2 XL.

c_o_n_v_e_x · 2 years ago
Slot spacing on motherboards is a challenge due to high frequency signal attenuation. You have finite limits on how far your slots can be from the CPU. Your signal budget/allowable distances are decreasing as each successive PCIe generation runs at a higher frequency.

Yes, you can space out slots widely, however this means you have to use PCIe redrivers/retimers which adds cost to the board. You can also use different materials for the motherboard but again, this adds cost.

We'd love to provide better slot spacing configs, but there are technical and commercial tradeoffs to be made.


sandGorgon · 2 years ago
In EdgeChains, we run models using DeepJavaLibrary (DJL). Preferably CPU only; we're more focused on edge + embedding use cases.
thevania · 2 years ago
I would love to see an AMD MI300A board for hobbyists :D
rrherr · 2 years ago
brucethemoose2 · 2 years ago
Very unfavorably. Mostly because the ONNX models are FP32/FP16 (so ~3-4x the RAM use), but also because llama.cpp is well optimized with many features (like prompt caching, grammar, device splitting, context extending, cfg...)

MLC's Apache TVM implementation is also excellent. The autotuning in particular is like black magic.

Havoc · 2 years ago
Speaking of MLC: I recently discovered they have an iPhone app that can run Llama 7B locally on high-end iPhones at a decent pace. It's a bit hard to find in the store given the ocean of API front-end apps; it's called MLC Chat.
skeletoncrew · 2 years ago
I tried quite a few of these and the ONNX one seems the most elegantly put together of all. I’m impressed.

Speed can be improved. As for the quick-and-dirty / hype solutions, I'm not sure.

I really hope ONNX gets the traction it deserves.

version_five · 2 years ago
GGML / llama.cpp has a lot of hardware optimizations built in now: CPU, GPU, and specific instruction sets like the ones for Apple silicon (I'm not familiar with the names). I would want to know how many of those are also present in ONNX and available to this model.

There are currently also more quantization options available, as mentioned. Those do incur a quality loss (they make the model faster but worse), so it depends on what you're optimizing for.
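The memory math alone shows why people accept that tradeoff; rough back-of-the-envelope numbers for a 13B model, ignoring KV cache and quantization overhead:

```python
# Approximate weight sizes for a 13B-parameter model at different precisions.
params = 13e9

fp32_gb = params * 4 / 2**30    # ~48 GB at 4 bytes per weight
fp16_gb = params * 2 / 2**30    # ~24 GB at 2 bytes per weight
q4_gb   = params * 0.5 / 2**30  # ~6 GB at ~4 bits per weight

print(f"FP32: {fp32_gb:.0f} GB, FP16: {fp16_gb:.0f} GB, 4-bit: {q4_gb:.0f} GB")
```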

brucethemoose2 · 2 years ago
ONNX is a format. There are different runtimes for different devices... But I can't speak for any of them.

> specific instruction sets like for apple silicon

You are thinking of the Accelerate framework support, which is basically Apple's ARM CPU SIMD library.

But Llama.cpp also has a Metal GPU backend, which is the de facto backend for Apple devices now.
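To illustrate the format-vs-runtime split: with ONNX Runtime you pick an execution provider per device (a sketch, assuming a build with the CoreML provider; the model path is a placeholder):

```python
import onnxruntime as ort

# ONNX is just the file format; the runtime picks a hardware backend
# ("execution provider") from this list, falling back left to right.
sess = ort.InferenceSession(
    "llama-2-7b.onnx",  # placeholder path
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # shows which providers were actually loaded
```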

moffkalast · 2 years ago
These are still FP16/32 models, almost certainly a few times slower and larger than the latest N bit quantized GGMLs.
abrookewood · 2 years ago
For anyone unsure what ONNX actually is: "ONNX is an open format built to represent machine learning models ... [which] defines a common set of operators ... a common file format ... [and should make] it easier to access hardware optimizations".

[0] https://onnx.ai/
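The "common file format" part in practice: you export once from your training framework and any ONNX-capable runtime can load the result. A minimal PyTorch sketch with a toy model:

```python
import torch

# Toy network standing in for a real model; the point is the export step.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)
example_input = torch.randn(1, 16)

torch.onnx.export(
    model,
    example_input,
    "toy_model.onnx",  # any ONNX runtime can now load this file
    input_names=["input"],
    output_names=["logits"],
)
```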

hashtag-til · 2 years ago
This is very cool! I really hope the ONNX project gets much more adoption in the next months and years and help reduce the fragmentation in the ML ecosystem.
brucethemoose2 · 2 years ago
Eh... I have seen ONNX demos for years, and they tend to stay barebones and slow, kinda like this.

NCNN, MLIR and TVM based ports have been far more impressive.

mathisfun123 · 2 years ago
Lololol, show me an "MLIR" port. Do you mean a TensorFlow port or a JAX port or a Torch port (that uses torch-mlir)? Or do you really mean llama implemented in linalg/tosa/tensor?
lostmsu · 2 years ago
Have you used TVM much? Is it only good for inference?
claytonjy · 2 years ago
I'm not sure there's much chance of that happening. ONNX seems to be the broadest in coverage, but for basically any model ONNX supports, there's a faster alternative.

For the latest generative/transformer stuff (Whisper, Llama, etc.) it's often specialized C(++) stuff, but torch 2.0 compilation keeps getting better, plus BetterTransformer, TensorRT, etc.

esperent · 2 years ago
How does Llama 2 compare to GPT-4? I see a lot of discussion about it but not much comparison. I don't have the hardware to run the 13b or 30b model locally so I'd be running it in the cloud anyway. In that case, should I stick with GPT-4?
ImprobableTruth · 2 years ago
GPT-4 trounces everything else available. I'd say that 70B is about GPT-3.5 level, unless you're doing something where finetuning greatly benefits you.

13B is alright for toy applications, but the difference from GPT-3.5 (let alone GPT-4) is huge.

chpatrick · 2 years ago
Vicuna 1.5 13B seems comparable to GPT-3.5 to me, and the fact that you can run it locally on last-gen commodity hardware is incredible.

I bought a dodgy used mining 3090 last year and now my computer is writing poetry in the terminal in real time.

spacebanana7 · 2 years ago
Llama 2 has uncensored versions. For many applications that alone makes it superior.
turnsout · 2 years ago
Does anyone know the feasibility of converting the ONNX model to CoreML for accelerated inference on Apple devices?
kiratp · 2 years ago
If you’re working with LLMs, just use this - https://github.com/ggerganov/llama.cpp

It has Metal support.

refulgentis · 2 years ago
That's sort of a non sequitur; so does ONNX. Conversely, $X.cpp is great for local hobbyist stuff but not at all for deployment to iOS.
mchiang · 2 years ago
They used to have this: https://github.com/onnx/onnx-coreml
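If it still works for your model, the conversion itself was only a couple of lines with that package (a sketch; paths are placeholders, and newer coremltools releases may have superseded this API):

```python
# Sketch using the onnx-coreml converter linked above; paths are placeholders.
from onnx_coreml import convert

mlmodel = convert("llama-small.onnx")  # returns a coremltools MLModel
mlmodel.save("llama-small.mlmodel")    # consume from Swift via Core ML
```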
refulgentis · 2 years ago
They still do. HN is way behind on ONNX and I'd go so far as to say it's the "Plastics."[1] of 2023.

[1] https://www.youtube.com/watch?v=PSxihhBzCjk

refulgentis · 2 years ago
"Not even wrong" question, ONNX is a runtime that can use/uses CoreML.
turnsout · 2 years ago
I didn't realize that! I wonder how performant a small Llama would be on iOS.
brucethemoose2 · 2 years ago
MLC's Apache TVM implementation can also compile to Metal.

Not sure if they made an autotuning profile for it yet.

glitchc · 2 years ago
How was this allowed? I was under the impression that companies the size of Microsoft needed to contact Meta to negotiate a license.

Excerpt from the license:

Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

thadk · 2 years ago
> Meta and Microsoft have been longtime partners on AI, starting with a collaboration to integrate ONNX Runtime with PyTorch to create a great developer experience for PyTorch on Azure, and Meta’s choice of Azure as a strategic cloud provider. (sic)

https://blogs.microsoft.com/blog/2023/07/18/microsoft-and-me...

amelius · 2 years ago
> To get access permissions to the Llama 2 model, please fill out the Llama 2 access request form. If allowable, you will receive GitHub access in the next 48 hours, but usually much sooner.

I guess they send the form to Meta?

Anyway, I hope this is not what Open Source will be like from now on.

stu2b50 · 2 years ago
So they negotiated a license? Meta partnered with Azure for the Llama 2 launch, there’s no reason to think that they’re antagonistic towards each other.
hint23 · 2 years ago
For best performance on x86 CPU and Nvidia GPU, ts_server is interesting ( https://bellard.org/ts_server ).