GaggiX commented on Making games in Go: 3 months without LLMs vs. 3 days with LLMs   marianogappa.github.io/so... · Posted by u/maloga
dingnuts · 13 hours ago
As an LLM hater, I have to say, this is exactly the use case I want code generation for. If I need to figure out the problem as I develop, which is the case for new code, the model can kindly get out of my way. But if I've already written a bunch of code and can explain the problem with the understanding I've gained from my implementation, and have the bot redo the grunt work? Fine with me.
GaggiX · 11 hours ago
>As an LLM hater

I thought this was the start of a joke or something. I guess if you use LLMs you are an "LLM lover" then.

GaggiX commented on The AI vibe shift is upon us   cnn.com/2025/08/22/busine... · Posted by u/lelele
GaggiX · a day ago
>technology that has never proven its worth outside of specious hype

Reading stuff like this makes me question the entirety of the article.

GaggiX commented on Europe's crusade against air conditioning is insane   noahpinion.blog/p/europes... · Posted by u/paulpauper
nailer · 2 days ago
Energy costs are high because of the European culture of degrowth - Germany shut down all its nuclear plants in favour of buying Russian gas and oil.
GaggiX · 2 days ago
Germany was also the biggest country for solar energy, which essentially makes up a huge chunk of Europe's energy production in summer, and which kick-started the industrialization of solar panels, making them dirt cheap. (Also, Europe is not just Germany making one dumb decision in the past.)
GaggiX commented on Europe's crusade against air conditioning is insane   noahpinion.blog/p/europes... · Posted by u/paulpauper
GaggiX · 2 days ago
This article is so stupid for not mentioning energy costs. It's as if Europeans don't use AC because they want degrowth or something.
GaggiX commented on In a first, Google has released data on how much energy an AI prompt uses   technologyreview.com/2025... · Posted by u/jeffbee
michaelt · 4 days ago
The original press release and report are at [1]; I couldn't find a link to them in the article.

> In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity

If they're running on, say, two RTX 6000s for a total draw of ~600 watts, that would be a response time of 1.44 seconds. So obviously the median prompt doesn't go to some high-end thinking model users have to pay for.

It's a very low number; for comparison, an electric vehicle might consume 82kWh to travel 363 miles. So that 0.24 watt-hours of energy is equivalent to driving 5.6 feet (1.7 meters) in such an EV.
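
For reference, a quick check of both numbers (a Python scratchpad using only the figures quoted in this comment):

    # Response time if one prompt had exclusive use of ~600 W of GPUs
    prompt_wh = 0.24
    gpu_watts = 600
    print(prompt_wh / gpu_watts * 3600)        # 1.44 seconds

    # Equivalent EV driving distance
    ev_wh_per_mile = 82_000 / 363              # ~226 Wh per mile
    miles = prompt_wh / ev_wh_per_mile
    print(miles * 5280, miles * 1609.34)       # ~5.6 ft, ~1.7 m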

When I hear reports that AI power demand is overloading electricity infrastructure, it always makes me think: Even before the AI boom, shouldn't we have a bunch of extra capacity under construction, ready for EV driving, induction stoves and heat-pump heating?

[1] https://cloud.google.com/blog/products/infrastructure/measur...

GaggiX · 4 days ago
>If they're running on, say, two RTX 6000s for a total draw of ~600 watts, that would be a response time of 1.44 seconds. So obviously the median prompt doesn't go to some high-end thinking model users have to pay for.

You're not accounting for batching for optimal GPU utilization: maybe a batch takes 30 seconds, but it completes 30 requests.
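
To put rough numbers on that (the power draw, batch duration, and batch size here are assumptions for illustration, not measured values):

    power_w = 600          # assumed total GPU draw
    batch_seconds = 30     # assumed wall-clock time to serve one batch
    batch_size = 30        # assumed number of prompts served together

    batch_wh = power_w * batch_seconds / 3600   # 5.0 Wh for the whole batch
    print(batch_wh / batch_size)                # ~0.17 Wh per prompt, near the 0.24 Wh figure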

GaggiX commented on Gemma 3 270M re-implemented in pure PyTorch for local tinkering   github.com/rasbt/LLMs-fro... · Posted by u/ModelForge
canyon289 · 5 days ago
Hey all, I created this model with a top-notch team. I answered many questions last week when this hit the front page, and I'm happy to answer more here as well.

https://news.ycombinator.com/item?id=44902148

Personally I'm excited that you all have access to this model now, and I hope you get value out of using it.

GaggiX · 5 days ago
I imagine you and your team have fine-tuned the model on different tasks; can you share some results? (I have only seen the alien NPC fine-tuning.)
GaggiX commented on What could have been   coppolaemilio.com/entries... · Posted by u/coppolaemilio
gavmor · 6 days ago
Unlike "value demand," which is genuine demand arising from customer needs, failure demand is demand caused by failures such as errors, defects, inefficiencies, or poor service delivery. For example, if a service does not fulfill a customer's need properly, the customer must come back, creating more demand that is essentially avoidable. Failure demand leads to inefficiency, additional costs, and deteriorated customer and employee experiences.
GaggiX · 6 days ago
So you're implying that ChatGPT and similar are so popular because of "errors, defects, inefficiencies, or poor service delivery"? How does that make any sense in this context?
GaggiX commented on What could have been   coppolaemilio.com/entries... · Posted by u/coppolaemilio
GaggiX · 6 days ago
>"while delivering absolutely nothing of value"

Well, maybe for you, but not for the millions of people who use this technology daily.

GaggiX commented on Llama-Scan: Convert PDFs to Text W Local LLMs   github.com/ngafar/llama-s... · Posted by u/nawazgafar
KnuthIsGod · 7 days ago
Sub-2010 level OCR using LLM.

It is hype-compatible so it is good.

It is AI so it is good.

It is blockchain so it is good.

It is cloud so it is good.

It is virtual so it is good.

It is UML so it is good.

It is RPN so it is good.

It is a steam engine so it is good.

Yawn...

GaggiX · 7 days ago
>Sub-2010 level OCR

It's not.

GaggiX commented on Llama-Scan: Convert PDFs to Text W Local LLMs   github.com/ngafar/llama-s... · Posted by u/nawazgafar
ggnore7452 · 7 days ago
I’ve done a similar PDF → Markdown workflow.

For each page:

- Extract text as usual.

- Capture the whole page as an image (~200 DPI).

- Optionally extract images/graphs within the page and include them in the same LLM call.

- Optionally add a bit of context from neighboring pages.

Then wrap everything with a clear prompt (structured output + how you want graphs handled), and you’re set.

At this point, models like GPT-5-nano/mini or Gemini 2.5 Flash are cheap and strong enough to make this practical.

Yeah, it’s a bit like using a rocket launcher on a mosquito, but it’s actually very easy to implement and quite flexible and powerful: it works across almost any format, Markdown is both AI- and human-friendly, and the result is surprisingly maintainable.
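
A minimal sketch of that per-page loop, assuming PyMuPDF for rendering and the OpenAI Python client; the model name and prompt wording are placeholders, not the exact setup described above:

    import base64
    import fitz  # PyMuPDF
    from openai import OpenAI

    client = OpenAI()

    def page_to_markdown(page, extracted_text):
        # Render the whole page as a ~200 DPI PNG and base64-encode it for the API.
        png = page.get_pixmap(dpi=200).tobytes("png")
        data_url = "data:image/png;base64," + base64.b64encode(png).decode()
        response = client.chat.completions.create(
            model="gpt-5-mini",  # placeholder; a Gemini Flash model via its own SDK works too
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Convert this page to Markdown. Use the extracted "
                        "text below to correct OCR errors and describe any charts in prose.\n\n"
                        + extracted_text},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }],
        )
        return response.choices[0].message.content

    doc = fitz.open("input.pdf")
    markdown = "\n\n".join(page_to_markdown(p, p.get_text()) for p in doc)

The optional steps (cropping out figures within a page, adding context from neighboring pages) would just add more content items to the same call.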

GaggiX · 7 days ago
>are cheap and strong enough to make this practical.

It all depends on the scale at which you need them; with the API it's easy to generate millions of tokens without thinking about it.
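
As a rough illustration of how that adds up (the page count, tokens per page, and price are assumptions, not quoted rates):

    pages = 1_000_000
    tokens_per_page = 1_500          # assumed: page image + extracted text in, Markdown out
    usd_per_million_tokens = 0.50    # assumed blended rate for a small model

    total_tokens = pages * tokens_per_page
    print(total_tokens / 1e6 * usd_per_million_tokens)   # ~$750 for a million pages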
