Readit News
derbaum commented on Puerto Rico's Solar Microgrids Beat Blackout   spectrum.ieee.org/puerto-... · Posted by u/ohjeez
nandomrumber · 2 months ago
You’d be surprised how few watts a fridge and a TV draw: around 500 watts combined, and that’s only while the compressor in the fridge is running. If you don’t open the fridge very often, or keep a lot of thermal mass in there in the form of filled water bottles, the compressor will spend most of its time not running.
derbaum · 2 months ago
Now I'm curious... Is your last suggestion correct? Wouldn't the cool-down time between pause intervals be proportionally longer due to the higher thermal mass, cancelling out any savings gained by the longer pause? Maybe the overall energy draw is even higher, because heat losses are larger when you spend a longer time at a high dT.
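
A toy lumped-capacitance simulation can test this intuition. Every constant below (heat-leak coefficient, compressor power, thermostat band) is an illustrative assumption, not a measurement:

```python
# Toy fridge model: a thermostat keeps the contents between 3 and 5 degC.
# Question: does adding thermal mass (water bottles) change the average draw?

def simulate(C, hours=336, dt=10.0):
    """C: heat capacity of contents [J/K]. Returns (avg_watts, compressor_starts)."""
    k = 1.0          # heat-leak coefficient [W/K] (assumed)
    T_amb = 22.0     # kitchen temperature [degC] (assumed)
    P_cool = 150.0   # heat removed while compressor runs [W] (assumed)
    P_elec = 60.0    # electrical draw while running [W] (assumed)
    T, lo, hi = 4.0, 3.0, 5.0
    running, energy_j, starts = False, 0.0, 0
    for _ in range(int(hours * 3600 / dt)):
        leak_in = k * (T_amb - T)              # heat leaking in [W]
        cooling = P_cool if running else 0.0
        T += (leak_in - cooling) * dt / C      # lumped heat balance
        if running and T <= lo:
            running = False
        elif not running and T >= hi:
            running, starts = True, starts + 1
        energy_j += (P_elec if running else 0.0) * dt
    return energy_j / (hours * 3600), starts

small = simulate(C=2e5)   # mostly air and shelves
big = simulate(C=2e6)     # loaded with water bottles
print(small, big)
```

In this model the average draw comes out nearly identical: in steady state the compressor only has to remove the heat that leaks in, which depends on the temperature difference and the insulation, not on thermal mass. What changes is that the loaded fridge cycles roughly ten times less often, so the bottles mainly buy longer, gentler compressor cycles and better ride-through during an outage, not lower consumption.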
derbaum commented on 0.9999 ≊ 1   lcamtuf.substack.com/p/09... · Posted by u/zoidb
hinkley · 3 months ago
I don’t like any of his examples at the top. Look, it’s not that hard:

    x = 0.999…

    2x = 1.999…

    2x - x = 1.999… - 0.999… = 1

    x = 1
Multiplying by ten just confuses things, and for most people the result doesn’t follow.

derbaum · 3 months ago
Whether you multiply by 10 or by 2, the same "counter" argument from the article stands: only now, instead of a trailing zero after infinitely many nines, you have a trailing 8.
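
Made explicit for a finite truncation x_n with n nines, the same leftover digit shows up either way:

    x_n = 0.99…9        (n nines)

    10·x_n = 9.99…90    (n−1 nines, then a 0)

    2·x_n = 1.99…98     (n−1 nines, then an 8)

Only in the limit does the leftover digit disappear, which is exactly what the article's counterargument pushes on.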
derbaum commented on Qwen3: Think deeper, act faster   qwenlm.github.io/blog/qwe... · Posted by u/synthwave
thierrydamiba · 4 months ago
How do people typically do napkin math to figure out if their machine can “handle” a model?
derbaum · 4 months ago
Very rough (!) napkin math: for a q8 model (almost lossless), parameter count in billions ≈ VRAM requirement in GB, since q8 uses about one byte per parameter. For q4, with some performance loss, it's roughly half that. Then add a little for the context window and overhead. So a 32B model at q4 should run comfortably on 20-24 GB.

Again, very rough numbers; there are calculators online.
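
That rule of thumb can be written down directly. The overhead term here is an assumed placeholder; real context/KV-cache cost grows with context length and model architecture:

```python
def vram_gb(params_billion, bits=4, overhead_gb=2.0):
    """Weights take bits/8 bytes per parameter; overhead_gb is a rough
    assumed allowance for context, KV cache, and runtime buffers."""
    return params_billion * bits / 8 + overhead_gb

print(vram_gb(32, bits=8))  # q8: ~34 GB of VRAM
print(vram_gb(32, bits=4))  # q4: ~18 GB, fits a 20-24 GB card with room to spare
```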

derbaum commented on Gemma 3 Technical Report [pdf]   storage.googleapis.com/de... · Posted by u/meetpateltech
meetpateltech · 6 months ago
Gemma 3 is out! Multimodal (image + text), 128K context, supports 140+ languages, and comes in 1B, 4B, 12B, and 27B sizes with open weights & commercial use.

Gemma 3 model overview: https://ai.google.dev/gemma/docs/core

Huggingface collection: https://huggingface.co/collections/google/gemma-3-release-67...

ollama: https://ollama.com/library/gemma3

derbaum · 6 months ago
The ollama page shows Gemma 27B beating DeepSeek V3 and o3-mini on lmarena. I'm very excited to try it out.
derbaum commented on Has LLM killed traditional NLP?   medium.com/altitudehq/is-... · Posted by u/vietthangif
thaumasiotes · 8 months ago
Take two documents.

Feed one through an LLM, one word at a time, and keep track of words that experience greatly inflated probabilities of occurrence, compared to baseline English. "For" is probably going to maintain a level of likelihood close to baseline. "Engine" is not.

Do the same thing for the other one.

See how much overlap you get.

derbaum · 8 months ago
Wouldn't a simple comparison of the word frequencies in my text against a list of typical English word frequencies do the trick here, without an LLM? Sort of like BM25?
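
That LLM-free version is easy to sketch. The baseline frequency table below is made up for illustration; in practice you would use real corpus statistics, and TF-IDF/BM25 libraries package the same idea:

```python
from collections import Counter

def overrepresented(text, baseline_freq, ratio=10.0):
    """Words whose in-document frequency exceeds ratio x their baseline."""
    words = text.lower().split()
    counts, total = Counter(words), len(words)
    return {w for w, c in counts.items()
            if c / total > ratio * baseline_freq.get(w, 1e-6)}

# Toy baseline: rough relative frequencies in everyday English (made up).
baseline = {"the": 0.05, "for": 0.02, "engine": 1e-5, "fuel": 1e-5}

doc_a = "the engine burns fuel and the engine turns the shaft"
doc_b = "the engine needs fuel for the long trip"

overlap = overrepresented(doc_a, baseline) & overrepresented(doc_b, baseline)
print(sorted(overlap))  # ['engine', 'fuel'] -- "the" and "for" stay near baseline
```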
derbaum commented on Has LLM killed traditional NLP?   medium.com/altitudehq/is-... · Posted by u/vietthangif
macNchz · 8 months ago
I've used embeddings to define clusters, then passed sampled documents from each cluster to an LLM to create labels for each grouping. I had pretty impressive results from this approach when creating category/subcategory labels for a collection of texts I worked on recently.
derbaum · 8 months ago
That's interesting, it sounds a bit like those cluster graph visualisation techniques. Unfortunately, my texts seem to fall into clusters that really don't match the ones that I had hoped to get out of these methods. I guess it's just a matter of fine-tuning now.
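
The pipeline described above (embed, cluster, then let an LLM name each cluster) can be sketched end to end. Here the "embeddings" are toy 2-D points and the LLM labeling step is stubbed with a most-frequent-word heuristic; in a real run you would prompt an LLM with sampled documents from each cluster:

```python
import numpy as np
from collections import Counter

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means, fine for a sketch; use a real library in practice."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its members (keep it if empty)
        centers = np.array([X[labels == i].mean(0) if np.any(labels == i)
                            else centers[i] for i in range(k)])
    return labels

docs = ["cat purrs", "cat naps", "kitten plays",
        "stock rises", "stock falls", "market dips"]
# Toy stand-in embeddings: animal texts near (0,0), finance near (10,10).
X = np.array([[0.0, 0.0], [0.5, 0.2], [0.1, 0.6],
              [10.0, 10.0], [10.2, 9.8], [9.7, 10.1]])

labels = kmeans(X, k=2)

def label_cluster(cluster_docs):
    # Stand-in for the LLM call: just picks the most common word.
    words = Counter(w for d in cluster_docs for w in d.split())
    return words.most_common(1)[0][0]

for i in range(2):
    members = [d for d, lab in zip(docs, labels) if lab == i]
    print(i, label_cluster(members), members)
```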
derbaum commented on Has LLM killed traditional NLP?   medium.com/altitudehq/is-... · Posted by u/vietthangif
derbaum · 8 months ago
One of the things I'm still struggling with when using LLMs over NLP is classification against a large corpus of data. If I get a new text and I want to find the most similar text out of a million others, semantically speaking, how would I do this with an LLM? Apart from choosing certain pre-defined categories (such as "friendly", "political", ...) and letting the LLM rate each text on each one, I can't see a simple solution yet except using embeddings (which I think could just as well come from BERT, so arguably doesn't count as LLM usage?).
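
The usual embedding-based answer can be sketched as follows. The corpus here is random unit vectors standing in for real sentence embeddings (e.g. from a BERT-style encoder), and the sizes are arbitrary:

```python
import numpy as np

# Stand-in corpus: 100k random "embeddings", normalized once up front.
rng = np.random.default_rng(42)
corpus = rng.standard_normal((100_000, 64)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def most_similar(query_vec, k=5):
    """Top-k corpus indices by cosine similarity (dot of unit vectors)."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = corpus @ q                       # one matrix-vector product
    top = np.argpartition(-sims, k)[:k]     # O(n) partial selection
    return top[np.argsort(-sims[top])]      # exact order within the top-k

query = corpus[123]  # pretend this is the embedding of the new text
print(most_similar(query))  # index 123 comes back first (similarity 1.0)
```

At the scale of a million texts, an approximate-nearest-neighbor index (e.g. FAISS or HNSW) replaces the brute-force dot product, but the structure of the solution is the same: embed once, search per query.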
derbaum commented on Fluid Simulation Pendant   mitxela.com/projects/flui... · Posted by u/sschueller
adriand · 8 months ago
Fascinating video. I watched almost the whole thing without planning to; I got sucked in.

This is one of those pieces of software that reminds me of my struggle to understand how LLMs are passing code evaluations that culminate in people declaring them better than even the best human coders. I have tried to get LLMs (specifically Claude and ChatGPT, across various models) to assist with niche problems, and it's been a terrible experience: fantastic with CRUD or common algorithms, terrible with anything novel or unusual.

The author creates his own version of a "FLIP simulation". I'm going to go out on a limb and posit that even OpenAI's unreleased o3 model would not be up to the task of writing the software that powers this pendant. Is this incorrect? I realize my comment is a little off-topic, given that this is not an AI project. However, this project seems like an excellent example of the sort of thing I am quite skeptical the supposedly "world-class" artificial software engineers could pull off.

derbaum · 8 months ago
The "issue" with saying an LLM can't do this is that CFD simulations are not actually that niche. Many university courses ask students to write these kinds of algorithms as their course project, and all of that knowledge is freely available on the internet (as is evident from the YouTube videos the author mentions), so an LLM can learn it. The article is of course still very impressive.
derbaum commented on Nvidia's Project Digits is a 'personal AI supercomputer'   techcrunch.com/2025/01/06... · Posted by u/magicalhippo
derbaum · 8 months ago
I'm a bit surprised by the amount of comments comparing the cost to (often cheap) cloud solutions. Nvidia's value proposition is completely different in my opinion. Say I have a startup in the EU that handles personal data or some company secrets and wants to use an LLM to analyse it (like using RAG). Having that data never leave your basement sure can be worth more than $3000 if performance is not a bottleneck.
derbaum commented on A Replacement for BERT   huggingface.co/blog/moder... · Posted by u/cubie
jph00 · 9 months ago
Hi gang, Jeremy from Answer.AI here. Nice to see this on HN! :) We're very excited about this model release -- it feels like it could be the basis of all kinds of interesting new startups and projects.

In fact, the stuff mentioned in the blog post is only the tip of the iceberg. There are a lot of opportunities to fine-tune the model in all kinds of ways, which I expect will go far beyond what we've managed to achieve in our limited exploration so far.

Anyhoo, if anyone has any questions, feel free to ask!

derbaum · 9 months ago
Hey Jeremy, very exciting release! I'm currently building my first product with RoBERTa as one central component, and I'm very curious to see how ModernBERT compares. Quick question: when do you think the first multilingual versions will show up? Any plans to train your own?

u/derbaum

Karma: 59 · Cake day: December 19, 2024