Readit News
bubaumba commented on How I run LLMs locally   abishekmuthian.com/how-i-... · Posted by u/Abishek_Muthian
navbaker · 8 months ago
At the 48GB level, L40S are great cards and very cost effective. If you aren’t aiming for constant uptime on several >70B models at once, they’re for sure the way to go!
bubaumba · 8 months ago
> L40S are great cards and very cost effective

from https://www.asacomputers.com/nvidia-l40s-48gb-graphics-card....

NVIDIA L40S 48GB graphics card. Our price: $7,569.10*

Not arguing against 'great', but the cost efficiency is questionable: for 10% of that price you can get two used 3090s. The good thing about LLMs is that they are sequential and should be easy to parallelize. The model can be split into several sub-models, one per GPU. Then 2, 3, 4... GPUs should improve performance proportionally on big batches, and make it possible to run bigger models on low-end hardware.
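The splitting idea in the comment above is essentially pipeline parallelism. A minimal sketch of the partitioning step, with made-up layer counts (real implementations live in libraries like DeepSpeed or vLLM; `split_layers` is a hypothetical name):

```python
# Toy sketch: split a model's transformer layers into contiguous
# "stages", one per GPU, so a large model fits on several small cards.

def split_layers(n_layers: int, n_gpus: int) -> list[range]:
    """Assign a contiguous range of layer indices to each GPU."""
    base, extra = divmod(n_layers, n_gpus)
    stages, start = [], 0
    for gpu in range(n_gpus):
        size = base + (1 if gpu < extra else 0)
        stages.append(range(start, start + size))
        start += size
    return stages

# An 80-layer model across two 24 GB cards: layers 0-39 on GPU 0,
# layers 40-79 on GPU 1. Activations are handed from stage to stage,
# so big batches are what keep all the GPUs busy at once.
print(split_layers(80, 2))
```

With small batches the stages mostly sit idle waiting on each other, which is why the proportional speedup only shows up on big batches.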

bubaumba commented on How I run LLMs locally   abishekmuthian.com/how-i-... · Posted by u/Abishek_Muthian
nickpsecurity · 8 months ago
People shared them regularly on r/LocalLlama:

https://www.reddit.com/r/LocalLLaMA/

Search for terms like hardware build, running large models, multiple GPUs, etc. Many people there have multiple consumer GPUs. There's probably something about running multiple A100s.

HuggingFace might have tutorials, too.

Warning: if it's A100s, most people say to just rent them from the cloud as needed because they're very costly upfront. If they're normally idle, it's not as cost effective to own them. Some were using services like vast.ai to get cheaper rentals.

bubaumba · 8 months ago
It may be even cheaper to use big models through an API: Claude, GPT, whatever. Rental is efficient only for big batches, while an API is priced per call/size and is cheap at small volumes.
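The rental-vs-API trade-off above is back-of-the-envelope arithmetic. A sketch with entirely made-up prices and throughput (plug in real numbers from the providers' pricing pages):

```python
# Hypothetical numbers only -- illustrative, not real pricing.
API_PRICE_PER_1M_TOKENS = 3.00   # $/1M tokens via an API
GPU_RENTAL_PER_HOUR     = 2.00   # $/hour for one rented GPU
GPU_TOKENS_PER_SECOND   = 1000   # throughput at full batch saturation

def api_cost(tokens: int) -> float:
    """API pricing scales linearly with usage, no idle cost."""
    return tokens / 1_000_000 * API_PRICE_PER_1M_TOKENS

def rental_cost(tokens: int, utilization: float) -> float:
    """Rental is billed per hour; low utilization wastes the rate."""
    hours = tokens / (GPU_TOKENS_PER_SECOND * utilization) / 3600
    return hours * GPU_RENTAL_PER_HOUR

# Small, bursty workload (GPU mostly idle): API wins.
print(api_cost(100_000), rental_cost(100_000, utilization=0.01))
# Huge saturated batch: the rented GPU wins.
print(api_cost(1_000_000_000), rental_cost(1_000_000_000, utilization=1.0))
```

The crossover point is entirely driven by how well you can keep the rented hardware saturated, which is the comment's point about big batches.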
bubaumba commented on Why OpenAI's Structure Must Evolve to Advance Our Mission   openai.com/index/why-our-... · Posted by u/meetpateltech
rvba · 8 months ago
How can one slash a non-existent donation from taxes?
bubaumba · 8 months ago
It's a trick. If you donate something to charity you can write it off on your taxes. After their IPOs most new billionaires did it, including Zuckerberg. It's enough to just promise it publicly: there is no time limit, so the actual 'donation' can take years, but the write-off is immediate. To optimize it even further, they create their own dummy charities full of friends and relatives.
bubaumba commented on Why OpenAI's Structure Must Evolve to Advance Our Mission   openai.com/index/why-our-... · Posted by u/meetpateltech
whamlastxmas · 8 months ago
I would guess they’re going to put as many expenses as possible on the nonprofit. For example, all the compute used for free tiers of ChatGPT will be charged to the nonprofit despite being a massive benefit to the for-profit. They may even charge the training costs, which will be in the billions, to the nonprofit as well
bubaumba · 8 months ago
Simple tax optimization, like new billionaires promising significant donations: a) they don't have to actually donate; b) they can immediately deduct those promised donations from their taxes.
bubaumba commented on Show HN: I made a website to semantically search ArXiv papers   papermatch.mitanshu.tech/... · Posted by u/Quizzical4230
Quizzical4230 · 8 months ago
I definitely was thinking about something like this for PaperMatch itself. Where anyone can pull a docker image and search through the articles locally! Do you think this idea is worthwhile pursuing?
bubaumba · 8 months ago
Absolutely worth doing. Here is an interesting related video on local RAG:

https://www.youtube.com/watch?v=bq1Plo2RhYI

I'm not an expert, but I'll do it for the learning experience, then open source it if it works. As far as I understand, this approach requires a vector database and an LLM, which doesn't have to be big. Technically it can be implemented as a local web server. It should be easy to use: just type a query and get a list sorted by relevance.
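The embed-then-rank core of the pipeline described above can be sketched in a few lines. A real setup would use a sentence-embedding model and a vector database (e.g. FAISS); plain bag-of-words vectors stand in here just to show the shape of it:

```python
# Toy semantic search: embed documents into vectors, then rank them
# by cosine similarity to the query vector.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: word-count vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> list[str]:
    """Return docs sorted by relevance to the query, best first."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "retrieval augmented generation with a local llm",
    "a recipe for sourdough bread",
    "vector databases for semantic search",
]
print(search("local semantic search", docs))
```

Swapping `embed` for a real embedding model and the `sorted` call for a vector index is what turns this toy into the local web server described above.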

bubaumba commented on Show HN: I made a website to semantically search ArXiv papers   papermatch.mitanshu.tech/... · Posted by u/Quizzical4230
namanyayg · 8 months ago
What are other good areas where semantic search can be useful? I've been toying with the idea for a while to play around and make such a webapp.

Some of the current ideas I had:

1. Online ads search for marketers: embed and index video + image ads, allow natural language search to find marketing inspiration.

2. Multi e-commerce platform search for shopping: find products across Sephora, Zara, H&M, etc.

I don't know if either is a good enough business problem worth solving, though.

bubaumba · 8 months ago
3. Quick lookup into internal documents. Almost any company needs it; navigating a file-system-like hierarchy is slow and limited. That was the old way.

4. Quick lookup into code, to find relevant parts even when the wording in the comments is different.

bubaumba commented on Show HN: I made a website to semantically search ArXiv papers   papermatch.mitanshu.tech/... · Posted by u/Quizzical4230
bubaumba · 8 months ago
This is cool, but how about local semantic search through tens of thousands of articles and books? Surely I'm not the first; there should be some tools for this already.
bubaumba commented on Adversarial policies beat superhuman Go AIs (2023)   arxiv.org/abs/2211.00241... · Posted by u/amichail
voidfunc · 8 months ago
I don't think I see them as a win, but they're easily dealt with. AI will need analysts at the latter stage to evaluate the outputs but that will be a relatively short-lived problem.
bubaumba · 8 months ago
> I don't think I see them as a win

Unavoidable, probably

> but they're easily dealt with. AI will need analysts at the latter stage to evaluate the outputs but that will be a relatively short-lived problem

That only solves it to some degree. Hallucinations can happen at the evaluation stage too: then either a correct answer gets rejected or a false one passes through.
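The rejected-correct / accepted-false point above is just conditional probability. A rough sketch with hypothetical rates (a model right with probability `p`, a reviewer with false-negative rate `fn` and false-positive rate `fp`):

```python
# All rates are made-up placeholders for illustration.
p, fn, fp = 0.90, 0.05, 0.10

accepted_correct = p * (1 - fn)      # correct answer that passed review
accepted_wrong   = (1 - p) * fp      # hallucination that slipped through
rejected_correct = p * fn            # correct answer wrongly rejected

# Precision of what survives review: better than p, but still not 1.0.
precision = accepted_correct / (accepted_correct + accepted_wrong)
print(round(precision, 4), round(rejected_correct, 4))
```

A reviewer stage shrinks the error rate but cannot zero it out, and it introduces its own losses via the wrongly rejected correct answers.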

bubaumba commented on Adversarial policies beat superhuman Go AIs (2023)   arxiv.org/abs/2211.00241... · Posted by u/amichail
thiago_fm · 8 months ago
I bet that there's a similarity between this and what happens to LLM hallucinations.

At some point we will realize that AI will never be perfect, it will just have much better precision than us.

bubaumba · 8 months ago
> it will just have much better precision than us.

and much faster, with the right hardware. And that's enough if AI can do in seconds what takes humans years. With o3, it looks like price is the only limit.

bubaumba commented on 7-year-old boy undergoes heart surgery after getting hit by falling drone   cbsnews.com/news/florida-... · Posted by u/perihelions
bubaumba · 8 months ago
After a collision, a drone should shut itself down and drop like a rock. It looks like that wasn't the case here. And of course there should be nobody on the ground below; flying over that lake would have been safe.
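The failsafe suggested above amounts to a simple rule in the flight controller: on an accelerometer spike consistent with an impact, cut motor power so the drone falls instead of flying on with damaged props. A toy sketch with a hypothetical threshold and interface:

```python
# Hypothetical impact-cutoff logic; the threshold and the sensor
# interface are invented for illustration, not from any real firmware.
IMPACT_THRESHOLD_G = 8.0  # normal flight loads stay well below this

def should_disarm(accel_g: float, armed: bool) -> bool:
    """Return True if the flight controller should cut motor power."""
    return armed and abs(accel_g) > IMPACT_THRESHOLD_G

assert should_disarm(12.5, armed=True)       # hard impact: cut power
assert not should_disarm(1.2, armed=True)    # normal hover: keep flying
assert not should_disarm(12.5, armed=False)  # already disarmed: no-op
```

Real firmware would debounce the signal and combine it with attitude data, but the dead-simple version already covers the "drop like a rock" behavior.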

u/bubaumba

Karma: 20 · Cake day: August 26, 2024