geertj · a year ago
This runs the 671B model in Q4 quantization at 3.5-4.25 TPS for $2K on a single socket Epyc server motherboard using 512GB of RAM.

This [1] X thread runs the 671B model in the original Q8 at 6-8 TPS for $6K on a dual socket Epyc server motherboard with 768GB of RAM. I think this could be made cheaper by getting slower RAM, but since this is RAM-bandwidth limited, that would likely reduce TPS. I’d be curious if this would just be a linear slowdown proportional to the RAM MHz or whether CAS latency plays into it as well.

[1] https://x.com/carrigmat/status/1884244369907278106?s=46&t=5D...
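
Rough back-of-envelope on why the bandwidth dominates (my own assumed numbers, e.g. 24 channels of DDR5-4800 for the dual-socket build; nothing here is measured): the token rate of a memory-bound MoE is roughly bandwidth divided by the bytes of active weights read per token.

    # Memory-bound MoE decoding: ceiling ~ bandwidth / bytes of active weights per token
    active_params = 37e9          # DeepSeek R1 activates roughly 37B params per token
    bytes_per_param = 1.0         # Q8 ~ 1 byte/param (Q4 would be ~0.5)

    # Assumed dual-socket Epyc: 2 sockets x 12 channels of DDR5-4800, 8 bytes per transfer
    bandwidth = 24 * 4800e6 * 8   # ~920 GB/s theoretical peak

    ceiling = bandwidth / (active_params * bytes_per_param)
    print(f"~{ceiling:.0f} tok/s ceiling")   # ~25 tok/s; seeing 6-8 in practice implies ~25-30% of peak

In that picture CAS latency mostly hides behind long streaming reads, which matches the GEMM-locality point further down.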

nielsole · a year ago
I've been running the unsloth 200GB dynamic quantisation with 8k context on my 64GB Ryzen 7 5800G. CPU and iGPU utilization were super low, because it basically has to read the entire model from disk. (Looks like it needs ~40GB of actual memory that it cannot easily mmap from disk.) With a Samsung 970 Evo Plus that gave me 2.5GB/s read speed, that came out at 0.15 tps. Not bad for completely underspecced hardware.

Given the model has so few active parameters per token (~40B), it is likely that just being able to hold it in memory would remove the largest bottleneck. I guess with a single consumer PCIe 4.0 x16 graphics card you could get at most ~1 tps just because of the PCIe transfer speed? Maybe CPU processing can be faster simply because DDR transfer is faster than transfer to the graphics card.
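
Back-of-envelope for the disk-streaming case (my own rough figures; the ~0.3 bytes/param is just 200GB divided by 671B params):

    # When the active experts don't fit in RAM they get re-read from storage every token,
    # so the token rate is roughly link_bandwidth / bytes_streamed_per_token.
    active_params   = 40e9           # ~40B active per token
    bytes_per_param = 200 / 671      # ~0.3 bytes/param for the 200GB dynamic quant

    bytes_per_token = active_params * bytes_per_param   # ~12 GB
    ssd  = 2.5e9                     # 970 Evo Plus sequential reads
    pcie = 32e9                      # PCIe 4.0 x16 theoretical

    print(f"SSD-bound:  {ssd / bytes_per_token:.2f} tok/s")    # ~0.2, near the observed 0.15
    print(f"PCIe-bound: {pcie / bytes_per_token:.2f} tok/s")   # low single digits at best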

TeMPOraL · a year ago
To add another datapoint, I've been running the 131GB (140GB on disk) 1.58-bit dynamic quant from Unsloth with 4k context on my 32GB Ryzen 7 2700X (8 cores, 3.70 GHz), and achieved exactly the same speed - around 0.15 tps on average, sometimes dropping to 0.11 tps, occasionally going up to 0.16 tps. Roughly 1/2 of your specs, roughly 1/2 smaller quant, same tps.

I had to disable the overload safeties in LM Studio and tweak some loader parameters to get the model to run mostly from disk (NVMe SSD), but once it did, it also used very little CPU!

I tried offloading to GPU, but my RTX 4070 Ti (12GB VRAM) can take at most 4 layers, and it turned out to make no difference in tps.

My RAM is DDR4, maybe switching to DDR5 would improve things? Testing that would require replacing everything but the GPU, though, as my motherboard is too old :/.
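
For the DDR4 vs DDR5 question, a rough theoretical dual-channel comparison (assumed speeds, not measured):

    ddr4_3200 = 2 * 3200e6 * 8    # ~51 GB/s dual-channel DDR4-3200
    ddr5_6000 = 2 * 6000e6 * 8    # ~96 GB/s dual-channel DDR5-6000
    print(ddr5_6000 / ddr4_3200)  # ~1.9x

So roughly 2x on paper - but only once the quant actually fits in RAM; while it streams from the SSD, the disk stays the bottleneck either way.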

conor_mc · a year ago
I wonder if the now-abandoned Intel Optane drives could help with this. They had very low latency, high IOPS, and decent throughput. They made RAM modules as well. A RAM disk made of them might be faster.
smcleod · a year ago
I get around 4-5 t/s with the unsloth 1.58-bit quant on my home server, which has 2x 3090s and a Ryzen 9 with 192GB of DDR5. Usable but slow.
baobun · a year ago
I imagine you can get more by striping drives. Depending on what chipset you have, the CPU should handle at least 4. Sucks that no AM4 APU supports PCIe 4 while the platform otherwise does.

geertj · a year ago
> I’d be curious if this would just be a linear slowdown proportional to the RAM MHz or whether CAS latency plays into it as well.

Per o3-mini, the blocked GEMM (matrix multiply) operations have very good locality, and therefore MT/s should matter much more than CAS latency.

iwontberude · a year ago
I have been doing this with an Epyc 7402 and 512GB of DDR4 and it's been fairly performant; you don't have to wait very long to get pretty good results. It's still LLM levels of bad, but at least I don't have to pay $20/mo to OpenAI.
whatevaa · a year ago
I don't think the cost of such a machine will ever be better than $20/mo, though. Capital costs are too high.
3abiton · a year ago
3x the price for less than 2x the speed increase. I don't think the price justifies the upgrade.
phonon · a year ago
Q4 vs Q8.
bee_rider · a year ago
I mean, nothing ever actually scales linearly, right?
TacticalCoder · a year ago
TFA says you can bump the spec to 768 GB, but then it's more like $2500 than $2000. At 768 GB that'd be the full 8-bit model.

Seems indeed like a good price compared to $6000 for someone who wants to hack a build.

I mean: $6K is doable, but I take it many who'd want to build such a machine for fun would prefer to only fork out $2.5K.

plagiarist · a year ago
Is there a source that unrolls that without creating an account?
isoprophlex · a year ago
Online, R1 costs what, $2/MTok?

This rig does >4 tok/s, which is ~15-20 ktok/hr, or $0.04/hr when purchased through a provider.

You're probably spending $0.20/hr on power (1 kW) alone.
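
Rough numbers behind that comparison (the 1 kW draw is my assumption; the builder reports ~260 W under load further down, which narrows the gap):

    api_price  = 2.00     # $/MTok hosted (rough)
    tps        = 4        # this rig
    price_kwh  = 0.20     # $/kWh

    tokens_per_hour = tps * 3600                          # ~14,400
    api_cost_hr     = api_price * tokens_per_hour / 1e6   # ~$0.03-0.04/hr for the same output
    for kw in (1.0, 0.26):                                # assumed vs reported draw
        print(f"{kw} kW -> ${kw * price_kwh:.2f}/hr power vs ${api_cost_hr:.2f}/hr hosted")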

Cool achievement, but to me it doesn't make a lot of sense (besides privacy...)

rightbyte · a year ago
> Cool achievement, but to me it doesn't make a lot of sense (besides privacy...)

I would argue that is enough and that this is awesome. It's been a long time since I've wanted to do a tech hack this much.

isoprophlex · a year ago
Well thinking about it a bit more, it would be so cool if you could

A) somehow continuously interact with the running model, ambient-computing style. Say have the thing observe you as you work, letting it store memories.

B) allowing it to process those memories when it chooses to/whenever it's not getting any external input/when it is "sleeping" and

C) (this is probably very difficult) have it change its own weights somehow based on whatever it does in A+B.

THAT, in a privacy-friendly self-hosted package, I'd pay serious money for.

codetrotter · a year ago
> doesn't make a lot of sense (besides privacy...)

Privacy is worth very much though.

onlyrealcuzzo · a year ago
What privacy benefit do you get running this locally vs renting a baremetal GPU and running it there?

Wouldn't that be much more cost-effective?

Especially when you inevitably want to run a better / different model in the near future that would benefit from different hardware?

You can get similar tok/sec on a single RTX 4090, which you can rent for <$1/hr.

infecto · a year ago
Definitely, but when you can run this in places like Azure with tight contracts, it makes little sense except for the ultra-paranoid.
jpc0 · a year ago
You could absolutely install 2kW of solar for probably around $2-4k, and then at worst it turns your daytime usage into $0. I also would be surprised if this was pulling 1kW in reality; I would want to see an actual measurement of what it is realistically pulling at the wall.

I believe it was an 850W PSU on the spec sheet?

dboreham · a year ago
Quick note that solar power doesn't have zero cost.
killingtime74 · a year ago
Marginal cost $0, but 2kW of solar + inverter + battery + install is worth more than this rig.

rufus_foreman · a year ago
Privacy, for me, is a necessary feature for something like this.

And I think your math is off: $0.20 per kWh at 1 kW is about $145 a month. I pay $0.06 per kWh. I've got what, 7 or 8 computers running right now, and my electric bill for that and everything else is around $100 a month, at least until I start using AC. I don't think the power usage of something like this would be significant enough for me to even shut it off when I wasn't using it.

Anyway, we'll find out, just ordered the motherboard.

michpoch · a year ago
> I pay $0.06 per kWh

That is like, insanely cheap. In Europe I'd expect prices between $0.15 - 0.25 per kWh. $0.06 sounds like you live next to some solar farm or large hydro installation? Is that a total price, with transfer?

rodonn · a year ago
Depends on where you live. The average in San Francisco is $0.29 per kWh.
magic_hamster · a year ago
This gets you the (arguably) most powerful AI in the world running completely privately, under your control, for around $2000. There are many use cases where you wouldn't want to send your prompts and data to a 3rd party. A lot of businesses have a data export policy where you are just not allowed to use company data anywhere but internal services. This is actually insanely useful.
api · a year ago
How is it that cloud LLMs can be so much cheaper? Especially given that local compute, RAM, and storage are often orders of magnitude cheaper than cloud.

Is it possible that this is an AI bubble subsidy where we are actually getting it below cost?

Of course for conventional compute cloud markup is ludicrous, so maybe this is just cloud economy of scale with a much smaller markup.

NeutralCrane · a year ago
My guess is two things:

1. Economies of scale. Cloud providers are using clusters in the tens of thousands of GPUs. I think they are able to run inference much more efficiently than you would be able to in a single cluster built just for your needs.

2. As you mentioned, they are selling at a loss. OpenAI is hugely unprofitable, and they reportedly lose money on every query.

thijson · a year ago
I think batch processing of many requests is cheaper. As each layer of the model is loaded into cache, you can put through many prompts. Running it locally you don't have that benefit.
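
A toy model of that effect (illustrative numbers only, not any provider's real figures):

    # The weights are streamed from memory once per forward pass whether the pass
    # serves 1 prompt or 64, so batching amortizes the bandwidth cost per token.
    active_bytes = 37e9          # ~37 GB of active weights at ~1 byte/param
    bandwidth    = 1e12          # ~1 TB/s of HBM-ish bandwidth, rough

    for batch in (1, 16, 64):
        print(f"batch {batch:>2}: ~{batch * bandwidth / active_bytes:,.0f} tok/s aggregate")
    # Per-user latency stays similar, but cost per token drops with batch size
    # until the hardware becomes compute-bound. A home rig serving one user can't do this.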
michpoch · a year ago
> Especially given that local compute, RAM, and storage are often orders of magnitude cheaper than cloud

He uses old, much less efficient GPUs.

He also did not select his living location based on electricity prices, unlike the cloud providers.

realusername · a year ago
It's cheaper because you are unlikely to run your local AI at top capacity 24/7, so you have unused capacity that you are paying for.
octacat · a year ago
It is shared between users and better utilized and optimized.
agieocean · a year ago
Isn't that just because they can get massive discounts on hardware buying in bulk (for lack of a proper term) + absorb losses?
matja · a year ago
How would it use 1kW? Socket SP3 tops out at 280W, and the system in the article has an 850W PSU, so I'm not sure what I'm missing.
falcor84 · a year ago
I assume that the parent just rounded 850W up to 1kW, no?
topbanana · a year ago
The point is running locally, not efficiently
onlyrealcuzzo · a year ago
> You're probably spending $0.20/hr on power (1 kW) alone.

For those that aren't following - that means you're spending ~$10/MTok on power alone (compared to $2/MTok hosted).

bloomingkales · a year ago
"besides privacy"

lol.

Yeah, just besides that one little thing. We really are a beaten down society aren't we.

Aurornis · a year ago
Most people value privacy, but they’re practical about it.

The odds of a cloud server leaking my information are non-zero, but very small. A government entity could theoretically get to it, but they would be bored to tears because I have nothing of interest to them. So practically speaking, the threat surface of cloud hosting is an acceptable tradeoff for the speed and ease of use.

Running things at home is fun, but the hosted solutions are so much faster when you actually want to get work done. If you’re doing some secret sensitive work or have contract obligations then I could understand running it locally. For most people, trying to secure your LLM interactions from the government isn’t a priority because the government isn’t even interested.

Legally, the government could come and take your home server too. People like to have fantasies about destroying the server during a raid or encrypting things, but practically speaking they’ll get to it or lock you up if they want it.

7thpower · a year ago
There is something about this comment that is so petty that I had to re-read it. Nice dunk, I guess.
brookst · a year ago
Privacy is a relatively new concept, and the idea that individuals are entitled to complete privacy is a very new and radical concept.

I am as pro-privacy as they come, but let’s not pretend that government and corporate surveillance is some wild new thing that just appeared. Read Horace’s Satires for insight into how non-private private correspondence often was in Ancient Rome.

WiSaGaN · a year ago
I think the main point of a local model is privacy, setting aside hobby and tinkering.
carbonbioxide · a year ago
I think privacy should be the whole point. There's always a price to pay. I'm optimistic that soon you'll be able to get better speeds with less hardware.
ekianjo · a year ago
> (besides privacy...)

that's the whole point of local models

spaceport · a year ago
The system idles at 60W, and running it hits 260W.
jaggs · a year ago
I think you may be underestimating future enshittification? (e.g. it's going to be trivially easy for the cloud suppliers to cram ads into all the chat responses at will).
huijzer · a year ago
What is a bit weird about AI currently is that you basically always want to run the best model, but the price of the hardware is a bit ridiculous. In the 1990s, it was possible to run Linux on scrappy hardware. You could also always run other “building blocks” like Python, Docker, or C++ easily.

But the newest AI models require an order of magnitude more RAM than my system or the systems I typically rent have.

So I'm curious, for the people here: has this happened before in the history of software? Maybe computer games are a good example - there, people would also have to upgrade their system to run the latest games.

spamizbad · a year ago
Like AI, there were exciting classes of applications in the 70s, 80s and 90s that mandated pricier hardware. Anything 3D related, running multi-user systems, higher end CAD/EDA tooling, and running any server that actually got put under “real” load (more than 20 users).

If anything this isn't so bad: $4K in 2025 dollars is about what an affordable desktop computer cost in the 90s.

lukeschlather · a year ago
The thing is I'm not that interested in running something that will run on a $4K rig. I'm a little frustrated by articles like this, because they claim to be running "R1" but it's a quantized version and/or it has a small context window... it's not meaningfully R1. I think to actually run R1 properly you need more like $250k.

But it's hard to tell, because most of the stuff posted is people trying to do duct-tape-and-baling-wire solutions.

handzhiev · a year ago
Indeed, even design and prepress required quite expensive hardware. There was a time when very expensive Silicon Graphics workstations were a thing.
Keyframe · a year ago
Of course it has. Coughs in SGI and advanced 3D and video software like PowerAnimator, Softimage, Flame. Hardware + software combos started around 60k in 90's dollars, but to do something really useful with it you'd have to enter the 100-250k (90's dollars) range.
tarruda · a year ago
> What is a bit weird about AI currently is that you basically always want to run the best model,

I think the problem is thinking that you always need to use the best LLM. Consider this:

- When you don't need correct output (such as when writing a blog post, there's no right/wrong answer), "best" can be subjective.

- When you need correct output (such as when coding), you always need to review the result, no matter how good the model is.

IMO you can get 70% of the value of high-end proprietary models by just using something like Llama 8b, which is runnable on most commodity hardware. That should increase to something like 80-90% when using bigger open models such as the newly released Mistral Small 3.
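
And "runnable on commodity hardware" really is a few lines these days - a minimal sketch assuming a local Ollama install with an 8B model already pulled (the model tag is just an example):

    import requests

    # Ollama exposes a local HTTP API; this asks a pulled 8B model for a completion.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1:8b",   # example tag; any pulled model works
            "prompt": "Summarize what a mixture-of-experts model is in two sentences.",
            "stream": False,
        },
    )
    print(resp.json()["response"])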

lukeschlather · a year ago
With o1 I had a hairy mathematical problem recently related to video transcoding. I explained my flawed reasoning to o1, and it was kind of funny in that it took roughly the same amount of time to figure out the flaw in my reasoning, but it did, and it also provided detailed reasoning with correct math to correct me. Something like Llama 8b would've been worse than useless. I ran the same prompt by ChatGPT and Gemini, and both gave me sycophantic confirmation of my flawed reasoning.

> When you don't need correct output (such as when writing a blog post, there's no right/wrong answer), "best" can be subjective.

This is like, everything that is wrong with the Internet in a single sentence. If you are writing a blog post, please write the best blog post you can, if you don't have a strong opinion on "best," don't write.

lurking_swe · a year ago
for coding insights / suggestions as you type, similar to copilot, i agree.

for rapidly developing prototypes or working on side projects, i find llama 8b useless. it might take 5-6 iterations to generate something truly useful. compared to say 1-shot with claude sonnet 3.5 or open ai gpt-4o. that’s a lot less typing and time wasted.

NegativeK · a year ago
I'm not sure Linux is the best comparison; it was specifically created to run on standard PC hardware. We have user access to AI models for little or no monetary cost, but they can be insanely expensive to run.

Maybe a better comparison would be weather simulations in the 90s? We had access to their outputs in the 90s but running the comparable calculations as a regular Joe might've actually been impossible without a huge bankroll.

bee_rider · a year ago
Or 3D rendering, or even particularly intense graphic design-y stuff I think, right? In the 90’s… I mean, computers in the $1k-$2k range were pretty much entry level, right?
detourdog · a year ago
The early 90's and digital graphics production. Computer upgrades could make intensive alterations interactive. This was true of Photoshop and Excel. There were many bottlenecks to speed. Upgrading a network of graphics machines from 10mbit networking to 100mbit did wonders for server-based workflows.
evilduck · a year ago
Adjusting for inflation, $2000 is about the same price as the first iMac, an entry level consumer PC at the time. Local AI is still pretty accessible to hobbyist level spending.
diffeomorphism · a year ago
Not adjusting at all, this is not "entry level" but rather "enthusiast"

https://www.logicalincrements.com/

Still accessible but only for dedicated hobbyists with deeper pockets.

svilen_dobrev · a year ago
Well, if there was e.g. a model trained for coding - i.e. specialization as such, having models trained mostly for this or that - instead of everything incl. Shakespeare, the kitchen sink and the cockroach biology under it, that would make those runnable on much lower-end hardware. But there is only one, The-Big-Deal... in many incarnations.
ant6n · a year ago
Read "Masters of Doom"; they go into quite some detail on how Carmack got himself a very expensive workstation to develop Doom/Quake.
holoduke · a year ago
We are finally entering an era where the demand for more memory is real. Small local AI models will be used for many things in the near future, requiring lots of memory. Even phones will need terabytes of fast memory in the future.
Loic · a year ago
In the 90's it was really expensive to run 3D Studio or POV-Ray. It could take days to render a single image. Silicon Graphics workstations could do it faster but were out of the budget of non-professionals.
qingcharles · a year ago
Raytracing decent scenes was a big CPU hog in the 80s/90s for me. I'd have to leave single frames running overnight.
cactusplant7374 · a year ago
How were you running Docker in the 1990s?
mdp2021 · a year ago
> you basically always want to run the best model, but the price of the hardware is a bit ridiculous. In the 1990s, it was possible to run Linux on scrappy hardware. You could also always run other “building blocks” like Python, Docker, or C++ easily

= "When you needed to run common «building blocks» (such as, in other times, «Python, Docker, or C++» - normal fundamental software you may have needed), even scrappy hardware would suffice in the '90s"

As a matter of fact, people would upgrade foremost for performance.

buescher · a year ago
Heh. I caught that too, and was going to say "I totally remember running Docker on Slackware on my 386DX40. I had to upgrade to 8MB of RAM. Good times."
notsylver · a year ago
I think it would be more interesting to do this with smaller models (33b-70b) and see if you could get 5-10 tokens/sec on a budget. I've desperately wanted something local that's around the same level as 4o, but I'm not in a hurry to spend $3k on an overpriced GPU or $2k on this.
gliptic · a year ago
Your best bet for 33B is already having a computer and buying a used RTX 3090 for <$1k. I don't think there are currently any cheap options for 70B that would give you >5 tok/s. High memory bandwidth is just too expensive. Strix Halo might give you >5 once it comes out, but it will probably be significantly more than $1k for 64 GB of RAM.
ants_everywhere · a year ago
With used GPUs do you have to be concerned that they're close to EOL due to high utilization in a Bitcoin or AI rig?
pmarreck · a year ago
M4 Mac with unified GPU RAM

Not very cheap though! But you get a quite usable personal computer with it...

jjallen · a year ago
How does inference happen on a GPU with such limited memory compared with the full requirements of the model? This is something I’ve been wondering for a while
ynniv · a year ago
Umm, two 3090's? Additional cards scale as long as you have enough PCIe channels.
api · a year ago
Apple M chips with their unified GPU memory are not terrible. I have one of the first M1 Max laptops with 64G and it can run up to 70B models at very useful speeds. Newer M series are going to be faster and they offer more RAM now.

Are there any other laptops around other than the larger M series Macs that can run 30-70B LLMs at usable speeds that also have useful battery life and don’t sound like a jet taxiing to the runway?

For non-portables I bet a huge desktop or server CPU with fast RAM beats the Mac Mini and Studio for price performance, but I’d be curious to see benchmarks comparing fast many core CPU performance to a large M series GPU with unified RAM.

jenny91 · a year ago
As a data point: you can get an RTX 3090 for ~$1.2k, and it runs deepseek-r1:32b perfectly fine via Ollama + Open WebUI at ~35 tok/s in an OpenAI-like web app, basically as fast as 4o.
kevinak · a year ago
You mean Qwen 32b fine-tuned on DeepSeek :)

There is only one model of DeepSeek R1 (671b); all others are fine-tunes of other models.

driverdan · a year ago
> you can get an RTX 3090 for ~$1.2k

If you're paying that much you're being ripped off. They're $800-900 on eBay and IMO are still overpriced.

bick_nyers · a year ago
It will be slower for a 70b model, since DeepSeek is an MoE that only activates 37b at a time. That's what makes CPU inference remotely feasible here.
firtoz · a year ago
Would it be something like this?

> OpenAI's nightmare: DeepSeek R1 on a Raspberry Pi

https://x.com/geerlingguy/status/1884994878477623485

I haven't tried it myself or verified the creds, but it seems exciting at least.

gliptic · a year ago
That's 1.2 t/s for the 14B Qwen finetune, not the real R1. Unless you go with the GPU with the extra cost, but hardly anyone but Jeff Geerling is going to run a dedicated GPU on a Pi.
etra0 · a year ago
It's using a Raspberry Pi with a... USD $1k GPU, which kinda defeats the purpose of using the RPi in the first place imo.

or well, I guess you save a bit on power usage.

spaceport · a year ago
I put together a $350 build with a 3060 12GB and it's still my favorite build. I run llama 3.2 11b q4 on it, and it's a really efficient way to get started; the tps is great.
Svoka · a year ago
You can run smaller models on a MacBook Pro with Ollama at those speeds. Even with several $3k GPUs it won't come close to 4o level.
spaceport · a year ago
Hi HN, garage YouTuber here. Wanted to add some stats on the wattages/RAM.

Idle wattage: 60W (well below what I expected; this is w/o GPUs plugged in)

Loaded wattage: 260W

RAM speed I am running currently: 2400 (very likely 3200 has a decent perf impact)
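
For context, theoretical peaks for the 8 memory channels on this board (just multiplying out the numbers, not measured):

    channels, bytes_per_transfer = 8, 8
    for mts in (2400, 3200):
        print(f"DDR4-{mts}: ~{channels * mts * 1e6 * bytes_per_transfer / 1e9:.0f} GB/s")
    # 2400 -> ~154 GB/s, 3200 -> ~205 GB/s, so ~33% more headroom for a bandwidth-bound model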

brunohaid · a year ago
Still surprised that the $3000 NVIDIA Digits doesn't come up more often in this, and also in the gung-ho market cap discussion.

I was an AI sceptic until 6 months ago, but that’s probably going to be my dev setup from spring onwards - running DeepSeek on it locally, with a nice RAG to pull in local documentation and datasheets, plus a curl plugin.

https://www.nvidia.com/en-us/project-digits/

fake-name · a year ago
It'll probably be more relevant when you can actually buy the things.

It's just vaporware until then.

brunohaid · a year ago
Call me naive, but I somehow trust them to deliver in time/specs?

It's also a more general comment around "AI desktop appliance" vs homebuilts. I'd rather give NVIDIA/AMD $3k for a well-adjusted local box than tinker too much or feed the next tech moloch, and I have a hunch I'm not the only one feeling that way. Once it's possible, of course.

fulafel · a year ago
Also, LPDDR memory, and no published bandwidth numbers.
ganoushoreilly · a year ago
And people are missing the "Starting at" price. I suspect the advertised specs will end up costing more than $3k. If it comes out at that price, I'm in for 2. But I'm not holding my breath, given Nvidia and all.
ranguna · a year ago
I'm not sure you can fit a decent quant of R1 in Digits. 128 GB of memory is not enough for 8-bit, and I'm not sure about 4-bit, but I have my doubts. So you might have to go down to something around 1 bit, which has a significant quality loss.
Cane_P · a year ago
You can connect two and get 256 GB. But it will still not be enough to run it in its native format. You will still need to use a lower quant.
diffeomorphism · a year ago
The webpage does not say $3000, but "starting at $3000". I am not so optimistic that the base model will actually be capable of this.
Cane_P · a year ago
They won't have different models, other than how much storage you want (up to 4 TB; we don't know the lowest they will sell) and the cabling necessary for connecting two DIGITS (which won't be included in the box).

We already know that it is going to be one single CPU and GPU and fixed memory. The GPU is most likely the RTX 5070 Ti laptop model (992 TFLOPS, clocked 1% higher to get 1 PFLOP).

yapyap · a year ago
Probably because NVIDIA Digits is just a concept rn.
christophilus · a year ago
Aside: it’s pretty amazing what $2K will buy. It’s been a minute since I built my desktop, and this has given me the itch to upgrade.

Any suggestions on building a low-power desktop that still yields decent performance?

Havoc · a year ago
>Any suggestions on building a low-power desktop that still yields decent performance?

You don't, for now. The bottleneck is memory throughput. That's why people using CPUs for LLMs are running Xeon-ish/Epyc setups... lots of memory channels.

The APU-class gear along the lines of Strix Halo is probably the path closest to lower power, but it's not going to do 500GB of RAM and still doesn't have enough throughput for big models.

spaceport · a year ago
Not to be that YouTuber that shills my videos all over, but you did ask for a low-powered desktop build, and this $350 one I put together is still my favorite. The 3060 12GB with llama 3.2 vision 11b is a very fun box with low idle power (Intel rules) to leave on 24/7 and have it run some additional services like HA.

https://youtu.be/iflTQFn0jx4

baobun · a year ago
Hard to know what ranges you have in mind with "decent performance" and "low-power".

I think your best bet might be a Ryzen U-series mini PC, or perhaps an APU barebone. The ATX platform is not ideal from a power-efficiency perspective (whether inherently, or from laziness or conspiracy by mobo and PSU makers, I do not know). If you want the flexibility or scale, you pay the price of course, but first make sure it's what you want. I wouldn't look at discrete graphics unless you have specific needs (really high-end gaming, workstation, LLMs, etc.) - the integrated graphics of the last few years can both drive your 4k monitors and play recent games at 1080p smoothly, albeit perhaps not simultaneously ;)

Lenovo Tiny mq has some really impressive flavors (ECC support at the cost of CPU vendor lock-in on PRO models), and there's the whole roster of Chinese competitors and up-and-comers if you're feeling adventurous. Believe me, you can still get creative if you want to scratch the builder itch - thermals are generally what keeps these systems from really roaring (:

jbritton · a year ago
Does it make any sense to have specialized models, which could possibly be a lot smaller? Say a model that just translates between English and Spanish, or maybe a model that just understands Unix utilities and bash. I don't know if limiting the training content affects the ultimate output quality or model size.
walterbell · a year ago
Some enterprises have trained small specialized models based on proprietary data.

https://www.maginative.com/article/nvidia-leverages-ai-to-as...

> NVIDIA researchers customized LLaMA by training it on 24 billion tokens derived from internal documents, code, and other textual data related to chip design. This advanced “pretraining” tuned the model to understand the nuances of hardware engineering. The team then “fine-tuned” ChipNeMo on over 1,000 real-world examples of potential assistance applications collected from NVIDIA’s designers.

2023 paper, https://research.nvidia.com/publication/2023-10_chipnemo-dom...

> Our results show that these domain adaptation techniques enable significant LLM performance improvements over general-purpose base models across the three evaluated applications, enabling up to 5x model size reduction with similar or better performance on a range of design tasks.

2024 paper, https://developer.nvidia.com/blog/streamlining-data-processi...

> Domain-adaptive pretraining (DAPT) of large language models (LLMs) is an important step towards building domain-specific models. These models demonstrate greater capabilities in domain-specific tasks compared to their off-the-shelf open or commercial counterparts.
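
For a sense of what the lightweight end of this looks like, here is a minimal domain-adaptation sketch using the Hugging Face stack with LoRA. This is not the ChipNeMo recipe; the base model, data file, and hyperparameters are placeholders, and it is continued pretraining on raw text, heavily simplified:

    # Hypothetical names throughout: base model, data file, and hyperparameters
    # are placeholders, not anything from the ChipNeMo papers.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)
    from peft import LoraConfig, get_peft_model
    from datasets import load_dataset

    base = "meta-llama/Llama-2-7b-hf"                 # placeholder base model
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token                     # Llama tokenizers ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA adapters keep the trainable parameter count tiny, so "domain-adaptive
    # pretraining lite" fits on a workstation GPU instead of a training cluster.
    model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
                                             target_modules=["q_proj", "v_proj"]))

    # Continued pretraining on raw in-house text (docs, code, runbooks, ...).
    ds = load_dataset("text", data_files="internal_docs.txt")["train"]
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="dapt-out", per_device_train_batch_size=1,
                               num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()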