For anyone running locally, could you please describe your hardware setup? CPU only? CPU+GPU(s)? How much memory? What type of CPU? Particularly interested in larger models (say >30b params).
For transparency, I work for an x86 motherboard manufacturer and the LLM-on-local-hw space is very interesting. If you're having trouble finding the right HW, would love to hear those pain points.
The most popular performant desktop LLM runtimes are pure GPU (ExLlama) or GPU + CPU (llama.cpp). People stuff the biggest models that will fit into the collective RAM + VRAM pool, up to ~48GB for Llama 70B. Sometimes users will split models across two 24GB CUDA GPUs.
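For anyone curious what that RAM + VRAM split looks like in practice, here's a minimal sketch using the llama-cpp-python bindings; the model path and layer count are illustrative assumptions, not a recommendation:

```python
# Sketch: load a quantized model and offload part of it to VRAM, keeping
# the rest in system RAM. Path and n_gpu_layers are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-70b.Q4_K_M.gguf",  # hypothetical GGUF file
    n_gpu_layers=40,   # layers pushed to the GPU; remaining layers run on CPU
    n_ctx=4096,        # context window
)

out = llm("Q: How much VRAM does a 70B model need?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```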
Inference for me is bottlenecked by my GPU and CPU RAM bandwidth. TBH the biggest frustration is that OEMs like y'all can't double up VRAM like you could in the old days, or sell platforms with a beefy IGP, and that quad channel+ CPUs are too expensive.
Vulkan is a popular target that runtimes seem to be heading toward. IGPs with access to lots of memory capacity and bandwidth will be very desirable, and I hear Intel/AMD are cooking up quad-channel IGPs.
On the server side, everyone is running Nvidia boxes, I guess. But I had a dream about an affordable llama.cpp host: The cheapest Sapphire Rapids HBM SKUs, with no DIMM slots, on a tiny, dirt cheap motherboard you can pack into a rack like sardines. Llama.cpp is bottlenecked by bandwidth, and ~64GB is perfect.
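A rough back-of-envelope for why bandwidth is the bottleneck: every generated token has to stream essentially all of the (quantized) weights through memory, so bandwidth divided by model size gives an optimistic ceiling on tokens per second. The figures below are illustrative assumptions, not benchmarks:

```python
# Upper bound: tokens/s <= memory bandwidth / bytes of weights read per token.
# All numbers are rough, assumed figures for illustration only.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_gb = 40              # ~70B params at 4-bit quantization, roughly
dual_channel_ddr5 = 90     # GB/s, typical dual-channel desktop
hbm_part = 1000            # GB/s, order of magnitude for an HBM-equipped CPU

print(max_tokens_per_sec(dual_channel_ddr5, model_gb))  # ~2 tok/s ceiling
print(max_tokens_per_sec(hbm_part, model_gb))           # ~25 tok/s ceiling
```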
If you're at a motherboard manufacturer, I have some definite points for you to hear.
One is: there are essentially zero motherboards that space out x16 PCIe slots so that you can actually use more than 2 triple-slot GPUs. 3090s and 4090s are all triple-slot cards, but motherboards are often putting x16 slots spaced 2 apart, with x8 or smaller slots in between. There may be a few that let you fit 2 cards, but none that would support 3, I don't think, and definitely none that do 4. Obviously that would result in a non-standard-length motherboard (much taller). But in the ML world it would be appreciated, because it would make it possible to build quad-card systems without watercooling the cards or using A5000s/A6000s or other dual-slot, expensive datacenter cards.
And then, even for dual-slot cards like the A5000/A6000, there are very few motherboards where you can get the x16 slots spaced appropriately. The Supermicro H12SSL-i is about the only one that gets 4 x16 slots at double-slot spacing in a way that lets you run 4 blower or watercooled cards without overlapping something else. And even when you do, you have the problem of the pin headers on the bottom of the motherboard interfering with the last card. That location for the pin headers is archaic and annoying and just needs to die.
Remember those mining-rig specialty motherboards, with all the wide-spaced PCIe slots for like 8 GPUs at once? We need that, but with x16-bandwidth slots. Those mining boards typically only had x1-bandwidth slots (even if they were x16 length), because for mining, bandwidth between the cards and the CPU isn't a problem, but for ML it is.
Sure, these won't fit the typical ATX case standards. But if you build it, they will come.
This would have to be a server board, or at least a HEDT board.
And yeah, as said below, 4x 4090s would trip most circuit breakers and require some contortions to power with a regular PSU. And it would be so expensive that you might as well buy 2x A6000s.
Really, the problem is no one will sell fast, sanely priced 32GB+ GPUs. I am hoping Intel will disrupt this awful status quo with Battlemage.
The thought of the power draw of four 4090s going through a commercial PSU, combined with insisting on air cooling in a case that is now going to be horribly cramped with no airflow, probably keeps a firefighter awake at night sometimes.
There's no reasonable use case for consumer motherboards having the space for that. Even SLI is more or less abandoned. Needless to say:
* Making a new motherboard form factor, incompatible with _everything in the market_
* Having to therefore make new cases, incompatible with _everything in the market_ (or, well, they'll be compatible. They'll just be extremely empty.)
* Having to probably make your own PSUs because I wouldn't trust a constant 1200W draw from just GPUs on your average Seasonic PSU
If you build it, not only will no one come, but they also won't have the money to pay what you'd charge to even just offset costs.
I use two 3090s to run the 70B model at a good speed. Takes 32 gigs of VRAM, more depending on context. I tried CPU+GPU (5900X + 3090), but with extended context it's slow enough that I wouldn't recommend it (~1 token/s). CPU only gets "let it run overnight" slow. Works OK-ish with a small context though (even if it's still "non-interactive" slow).
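For reference, a two-GPU split like that looks roughly like the following with the llama-cpp-python bindings; the model path and even split ratio are assumptions for illustration:

```python
# Sketch: offload all layers and split the weights across two 24GB cards.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-70b.Q4_K_M.gguf",  # hypothetical GGUF file
    n_gpu_layers=-1,          # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # divide the weights evenly between both cards
    n_ctx=4096,
)
```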
I'm running two RTX 3090s at PCIe 4.0 x8 on an X570 board w/ 128GB DDR4 @ 3200.
Going beyond that is very expensive right now.
The AMD X670 chipset offers 28 PCIe 5.0 lanes, can't you make a mainboard with three x16 PCIe 4.0 slots out of that?
Ideally two models:
One with 2-slot spacing (for watercooling) and another (oversized) board with 3-slot spacing for cases like the Fractal Design Meshify 2 XL.
Slot spacing on motherboards is a challenge due to high frequency signal attenuation. You have finite limits on how far your slots can be from the CPU. Your signal budget/allowable distances are decreasing as each successive PCIe generation runs at a higher frequency.
Yes, you can space out slots widely, however this means you have to use PCIe redrivers/retimers which adds cost to the board. You can also use different materials for the motherboard but again, this adds cost.
We'd love to provide better slot spacing configs, but there are technical and commercial tradeoffs to be made.
Very unfavorably. Mostly because the ONNX models are FP32/FP16 (so ~3-4x the RAM use), but also because llama.cpp is well optimized with many features (like prompt caching, grammar, device splitting, context extending, cfg...)
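The ~3-4x figure is just the weight-precision arithmetic; a quick sketch of the math (parameter count and bits-per-weight figures are approximate):

```python
# Approximate weight memory per precision; ignores KV cache and runtime overhead.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits, label in [(32, "FP32"), (16, "FP16"), (4, "4-bit (e.g. llama.cpp Q4)")]:
    print(f"13B @ {label}: ~{weight_gb(13, bits):.0f} GB")
# -> ~52 GB, ~26 GB, ~7 GB: hence the ~3-4x gap versus a 4-bit quant.
```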
MLC's Apache TVM implementation is also excellent. The autotuning in particular is like black magic.
Speaking of MLC - recently discovered they have an iPhone app that can run Llama 7B locally on high-end iPhones at a decent pace. Bit hard to find in the store given the ocean of API front-end apps - it's called MLC Chat.
GGML / llama.cpp has a lot of hardware optimizations built in now: CPU, GPU, and specific instruction sets like for Apple silicon (I'm not familiar with the names). I would want to know how many of those are also present in ONNX and available to this model.
There are currently also more quantization options available, as mentioned. Though those incur a quality loss (they make the model faster but worse), so it depends on what you're optimizing for.
For anyone unsure what ONNX actually is: "ONNX is an open format built to represent machine learning models ... [which] defines a common set of operators ... a common file format ... [and should make] it easier to access hardware optimizations".
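As a concrete illustration of that "common file format", here's a minimal export of a toy PyTorch model to ONNX (the model is a stand-in, not anything from this thread):

```python
# Export a tiny model to ONNX; any ONNX-compatible runtime can then load it.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
dummy_input = torch.randn(1, 16)

torch.onnx.export(model, dummy_input, "toy_model.onnx",
                  input_names=["x"], output_names=["y"])
# e.g. load it with: onnxruntime.InferenceSession("toy_model.onnx")
```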
This is very cool! I really hope the ONNX project gets much more adoption in the coming months and years and helps reduce the fragmentation in the ML ecosystem.
Lololol, show me an "MLIR" port. Do you mean a TensorFlow port or a JAX port or a Torch port (that uses torch-mlir)? Or do you really mean llama implemented in linalg/tosa/tensor?
I'm not sure there's much chance of that happening. ONNX seems to be the broadest in coverage, but for basically any model ONNX supports, there's a faster alternative.
For the latest generative/transformer stuff (Whisper, Llama, etc.) it's often specialized C(++) stuff, but torch 2.0 compilation keeps getting better, plus BetterTransformer, TensorRT, etc.
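For what it's worth, the torch 2.0 compilation path mentioned above is essentially a one-liner; this is a toy sketch, and any speedup depends heavily on the model and hardware:

```python
# torch.compile JIT-compiles the model on first call (PyTorch >= 2.0).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 512))
compiled = torch.compile(model)

x = torch.randn(8, 512)
with torch.no_grad():
    y = compiled(x)  # first call triggers compilation; later calls reuse it
```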
How does Llama 2 compare to GPT-4? I see a lot of discussion about it but not much comparison. I don't have the hardware to run the 13b or 30b model locally so I'd be running it in the cloud anyway. In that case, should I stick with GPT-4?
How was this allowed? I was under the impression that companies the size of Microsoft needed to contact Meta to negotiate a license.
Excerpt from the license:
Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for Licensee,
or Licensee's affiliates, is greater than 700 million monthly active users in the
preceding calendar month, you must request a license from Meta, which Meta may
grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you
such rights.
> Meta and Microsoft have been longtime partners on AI, starting with a collaboration to integrate ONNX Runtime with PyTorch to create a great developer experience for PyTorch on Azure, and Meta’s choice of Azure as a strategic cloud provider. (sic)
> To get access permissions to the Llama 2 model, please fill out the Llama 2 access request form. If allowable, you will receive GitHub access in the next 48 hours, but usually much sooner.
I guess they send the form to Meta?
Anyway, I hope this is not what Open Source will be like from now on.
So they negotiated a license? Meta partnered with Azure for the Llama 2 launch, there’s no reason to think that they’re antagonistic towards each other.
There are a few more boards with 4 x16 slots at 2 slot spacing:
GIGABYTE MU92-TU0
ASRock Rack SPC621D8 (3 variants)
ASRock C621A WS
Supermicro X12SPA-TF
Speed can be improved. As for the quick-and-dirty/hype solutions, I'm not sure.
I really hope ONNX gets the traction it deserves.
> specific instruction sets like for apple silicon
You are thinking of the Accelerate framework support, which is basically Apple's ARM CPU SIMD library.
But llama.cpp also has a Metal GPU backend, which is the de facto backend for Apple devices now.
[0] https://onnx.ai/
NCNN, MLIR and TVM based ports have been far more impressive.
13B is alright for toy applications, but the difference to GPT-3.5 (let alone GPT-4) is huge.
I bought a dodgy used mining 3090 last year and now my computer is writing poetry in the terminal in real time.
It has Metal support.
[1] https://www.youtube.com/watch?v=PSxihhBzCjk
Not sure if they made an autotuning profile for it yet.
https://blogs.microsoft.com/blog/2023/07/18/microsoft-and-me...