If I were to buy the cheapest hardware (e.g. a PC) to run Llama 2 70B for personal use at a reasonable speed, what would that hardware be? Any experience or recommendations?
Anything with 64GB of memory will run a quantized 70B model. Beyond that, what you need depends on what speed is acceptable to you. With a decent CPU but without any GPU assistance, expect output on the order of 1 token per second, plus excruciatingly slow prompt ingestion. Any decent Nvidia GPU will dramatically speed up ingestion, but for fast generation you need 48GB of VRAM to fit the entire model, which means 2x RTX 3090 or better. That should generate faster than you can read.
Edit: the above is about PCs. Macs are much faster at CPU generation, but not nearly as fast as big GPUs, and their prompt ingestion is still slow.
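Rough napkin math on why 64GB of RAM / 48GB of VRAM is the threshold, assuming roughly 0.5-0.6 bytes per parameter for a 4-bit-ish quantization (exact size depends on the quant format):

  70B params x ~0.55 bytes/param ≈ 36-41 GB for the weights
  + a few GB for the KV cache / context window
  ≈ low-to-mid 40s GB total

That squeezes into 48GB of VRAM across two 24GB cards, and fits comfortably in 64GB of system RAM with room left over for the OS.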
Do these large models need the equivalent of SLI to take advantage of multiple GPUs? Nvidia removed SLI from consumer cards a few years ago, so I'm curious whether it's even an option these days.
I built a DIY PC with used GPUs (2x RTX 3090) for around 2300€ earlier this year. You can probably do it for slightly less now (I also added 128GB of RAM and NVLink). You can generate text at >10 tok/s with that setup.
Make sure to get a PSU with more than 1000W.
Air cooling is a challenge, but it's possible.
Almost everything was used; the GPUs were around 720€ each. You can now buy them for as low as 600€. Make sure to get two identical ones if you plan to connect them with NVLink.
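To the SLI question above: no SLI (or even NVLink) is strictly required for inference; the runtimes split the layers and tensors across cards themselves, and NVLink mostly just speeds up inter-GPU transfers. A rough sketch of what that looks like with llama.cpp; the model file name is illustrative and the exact flags can vary by version:

  # build with CUDA support
  make LLAMA_CUBLAS=1
  # offload everything (-ngl higher than the layer count) and split roughly evenly across both GPUs
  ./main -m ./models/llama-2-70b.ggmlv3.q4_K_M.bin \
    -ngl 99 --tensor-split 1,1 \
    -p "Hello" -n 256

Other runners have equivalent options for splitting a model across GPUs.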
In this video from June, George Hotz says to go with "3090s over 4090s. 3090s have NVLink... 4090s are $1600 and 3090s are 750. RAM bandwidth is about the same." Has he changed his recommendations since then?
https://youtu.be/Mr0rWJhv9jU?t=1157
Right. I opted for a used 3090 myself and plan to get a second one soon. At current market prices, 2x 3090s are cheaper than a single 4090 and provide double the VRAM with more performance. If fine-tuning/LoRA work and energy efficiency are concerns, though, I would opt for a 4090, since it is both far faster and far more efficient.
We bought an A6000 48GB (as mentioned by someone else) and it works great for $3,800. The power requirements are modest as well compared to consumer GPUs. We looked at the Ada version, but even used they cost a lot more, and you're buying speed, not usability. I would rather buy another A6000 and have 96GB of VRAM to fine-tune with. That's just me, though, and everyone needs to rank their needs against what they can afford.
A 192GB Mac Studio should be able to run an unquantized 70B, and I think it would cost less than a multi-GPU setup made up of Nvidia cards. I haven't actually done the math, though.
If you factor in electricity costs over a certain time period it might make the Mac even cheaper!
A Mac Studio will "run" the model as a glorified chatbot, but it'll be unusable for anything interesting at 5-6 t/s. With a couple of high-end consumer GPUs you're going to get closer to 20 t/s. You'll also be able to realistically fine-tune models and run other interesting things besides an LLM.
I have a $5000 128GB M2 Ultra Mac Studio that I got for LLMs due to speculation like GP here on HN. I get 7.7 tok/s with LLaMA2 70B q6_K ggml (llama.cpp).
It has some upsides in that I can run quantizations larger than 48GB with extended context, or run multiple models at once, but overall I wouldn't strongly recommend it for LLMs over an Intel + 2x4090 setup. It's competitive, but has significant tradeoffs.
Inference would probably be ~10x slower than tiling the model across equivalently priced Nvidia hardware. The highest-end M2 Mac chip you can buy today struggles to compete with last-gen laptop cards from Nvidia. Once you factor in the value of CUDA in this space and the quality of the ML drivers Nvidia offers, I don't see why Macs are even considered in the "cheapest hardware" discussion.
> If you factor in electricity costs over a certain time period it might make the Mac even cheaper!
I dunno about that. The M2 Max will happily pull over 200W in GPU-heavy tasks, and if we're comparing a 40-series card with CUDA optimizations against PyTorch with Metal Performance Shaders, my performance-per-watt money is on Nvidia's hardware.
Well, to be fair, running an unquantized 70B model is going to take somewhere in the area of 160GB of VRAM (if my quick back-of-the-napkin math is OK). I'm not quite sure of the state of GPUs these days, but a 2x A100 80GB (or 4x 40GB) setup is probably going to cost more than a Mac Studio with maxed-out RAM.
If we are talking quantized, I am currently running LLaMA v1 30B at 4 bits on a MacBook Air with 24GB of RAM, which is only a little more expensive than what a 24GB 4090 retails for. The 4090 would crush the MacBook Air in tokens/sec, I am sure. It is, however, completely usable on my MacBook (4 tokens/second, IIRC? I might be off on that).
A 4-bit 70B model should take about 36-40GB of RAM, so a 64GB Mac Studio might still be price-competitive with a dual-4090 or 4090/3090 split setup. The cheapest Studio with 64GB of RAM is $2,399 (USD).
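Quick check on the unquantized figure above, assuming ~2 bytes per parameter at fp16:

  70B params x 2 bytes ≈ 140 GB for the weights alone
  + activations and KV cache for the context window

So ~160GB is a generous but reasonable ballpark, i.e. 2x A100 80GB territory, and the 36-40GB estimate for 4-bit (70B x ~0.5 bytes) checks out too.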
The only info I can provide is the table I've seen on: https://github.com/jmorganca/ollama where it states one needs "32 GB to run the 13B models." I would assume you may need a GPU for this.
Related: could someone please point me in the right direction on how to run Wizard Vicuna Uncensored or Llama 2 13B locally on Linux? I've been searching for a guide and have not found what I need for a beginner like me. In the GitHub repo I referenced, the download is only for Mac at the time. I have a MacBook Pro M1 I can use, though it's running Debian.
Thank you.
You can run `ollama run wizard-vicuna-uncensored:13b` and it should pull and run it. For llama2 13b, it's `ollama run llama2:13b`. I haven't seen the 13b uncensored version yet.
Do you have a guide that you followed and could link me to, or was it just from prior knowledge? Also, do you know if I could run Wizard Vicuna on it? That model isn't listed on the above page.
There's a complete list of models at https://gist.github.com/mchiang0610/b959e3c189ec1e948e4f6a1f...
We'll have a better way to browse these soon.
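If the ollama download is Mac-only for now, the llama.cpp route works fine on Linux (it builds on ARM, so Debian on an M1 should be OK) in the meantime. A minimal sketch; the model file name is just an example, so substitute whichever quantized GGML file of Wizard Vicuna or Llama 2 13B you download:

  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp && make
  # drop a quantized .bin into ./models, then:
  ./main -m ./models/wizard-vicuna-13b.ggmlv3.q4_0.bin \
    -p "Explain NVLink in one paragraph." -n 256
  # with an Nvidia GPU, rebuild with `make LLAMA_CUBLAS=1` and add -ngl <layers> to offload

That's CPU-only by default, which is slow but usable for a 13B model on a decent machine.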
I've been able to run it fine using llama.cpp on my 2019 iMac with 128GB of RAM. It's not super fast, but it works fine for "send it a prompt, look at the reply a few minutes later", and all it cost me was a few extra sticks of RAM.
You can run on CPU and regular RAM, but a GPU is quite a bit faster.
You need about a gig of RAM/VRAM per billion parameters (plus some headroom for the context window). Lower precision doesn't really affect quality.
When Ethereum flipped from proof of work to proof of stake, a lot of used high-end cards hit the market.
Four of them in a cheap server would do the trick. It would be a great business model for some cheap colo to stand up a crap-ton of those and rent whole servers to everyone here.
In the meantime, if you're interested in a cheap server as described above, post in this thread.
Recommended reading: Tim Dettmers' guide https://timdettmers.com/2023/01/30/which-gpu-for-deep-learni...