nahnahno commented on Making games in Go: 3 months without LLMs vs. 3 days with LLMs   marianogappa.github.io/so... · Posted by u/maloga
starchild3001 · 17 hours ago
What I like about this post is that it highlights something a lot of devs gloss over: the coding part of game development was never really the bottleneck. A solo developer can crank out mechanics pretty quickly, with or without AI. The real grind is in all the invisible layers on top: balancing the loop, tuning difficulty, creating assets that don’t look uncanny, and building enough polish to hold someone’s attention for more than 5 minutes.

That’s why we’re not suddenly drowning in brilliant Steam releases post-LLMs. The tech has lowered one wall, but the taller walls remain. It’s like the rise of Unity in the 2010s: the engine democratized making games, but we didn’t see a proportional explosion of good games, just more attempts. LLMs are doing the same thing for code, and image models are starting to do it for art, but neither can tell you if your game is actually fun.

The interesting question to me is: what happens when AI can not only implement but also playtest -- running thousands of iterations of your loop, surfacing which mechanics keep simulated players engaged? That’s when we start moving beyond "AI as productivity hack" into "AI as collaborator in design." We’re not there yet, but this article feels like an early data point along that trajectory.
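A minimal sketch of what that automated playtesting might look like. Everything here is invented for illustration: a one-parameter game loop, bot "players" of varying skill, and turns survived as a crude engagement proxy.

```python
import random

def simulate_session(difficulty: float, skill: float, rng: random.Random) -> int:
    """Play one hypothetical session; return turns survived before the player quits."""
    turns = 0
    while turns < 200:  # cap session length so the sweep terminates
        if rng.random() * difficulty > skill:
            break  # too hard: the player loses and quits
        if rng.random() < 0.02:
            break  # flat churn: players drift away even from a beatable game
        turns += 1
    return turns

def sweep_difficulty(n_players: int = 1000) -> None:
    """Run thousands of simulated sessions per setting and report engagement."""
    rng = random.Random(42)
    for difficulty in (0.2, 0.4, 0.6, 0.8, 1.0):
        total = sum(
            simulate_session(difficulty, rng.uniform(0.1, 0.9), rng)
            for _ in range(n_players)
        )
        print(f"difficulty={difficulty:.1f}  avg turns survived={total / n_players:.1f}")

if __name__ == "__main__":
    sweep_difficulty()
```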

nahnahno · 16 hours ago
This is not true in my experience. Cranking out code is obviously the bottleneck, unless you have the luxury of working on a very narrow problem. The author describes a multi-modal project that does not afford this luxury.
nahnahno commented on US reportedly forcing TSMC to buy 49% stake in Intel to secure tariff relief   notebookcheck.net/Despera... · Posted by u/voxadam
robertjpayne · 19 days ago
iPhone performance and battery life would likely slide back 5-10 years if Apple was forced to use Intel chips instead of TSMC today.

Not just that: there’s also the raft of features that might have to be disabled until performance and performance per watt get back to where they are today.

nahnahno · 19 days ago
Complete nonsense. Intel 18A, were yields good enough, is competitive with TSMC N2.
nahnahno commented on U.S. bombs Iranian nuclear sites   bbc.co.uk/news/live/ckg3r... · Posted by u/mattcollins
mdhb · 2 months ago
Then you know… there’s the whole crimes against humanity thing from the ICC too…
nahnahno · 2 months ago
Based on their heavily biased view of the Gaza conflict, rooted in their Arab affiliations and in the Hamas-run Gaza government’s reporting.
nahnahno commented on When Fine-Tuning Makes Sense: A Developer's Guide   getkiln.ai/blog/why_fine_... · Posted by u/scosman
storus · 3 months ago
I thought that fine-tuning is no longer done in industry, and that transformer adapters like LoRA are used instead? Keeping 1000 fine-tuned models, one per customer, seems too heavy when one can instead keep 1000 adapters and swap them in during inference for each batch.

I mean, there are tricks like Q-GaLore that allow training LLaMA-7B on a single 16GB GPU, but LoRA still seems better for production to me.

nahnahno · 3 months ago
LoRA and QLoRA are still fine-tuning, I thought? They just update a subset of parameters. You are still training a base model that was pre-trained (and possibly fine-tuned after).
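For reference, the adapter-swapping setup described above looks roughly like this with Hugging Face’s peft library. The base model ID is a real checkpoint, but the adapter paths and names are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the shared, frozen base model once.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach one LoRA adapter per customer on top of the same base weights
# (adapter paths are hypothetical).
model = PeftModel.from_pretrained(base, "adapters/customer_a", adapter_name="customer_a")
model.load_adapter("adapters/customer_b", adapter_name="customer_b")

# Route each batch by activating that customer's adapter before inference.
model.set_adapter("customer_a")
# ... run customer A's batch ...
model.set_adapter("customer_b")
# ... run customer B's batch ...
```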
nahnahno commented on Cheap blood test detects pancreatic cancer before it spreads   nature.com/articles/d4158... · Posted by u/rbanffy
caycep · 6 months ago
https://europepmc.org/article/MED/39937880

Sadly, the group lists funding sources as: National Cancer Institute: P30CA069533

So the group's activities are likely on pause, with a good likelihood of closure due to the lack of NIH indirects from the current administration.

nahnahno · 6 months ago
79% accuracy. Useless for screening: at pancreatic cancer’s low prevalence, false positives would swamp the true positives.
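Rough arithmetic on that point, purely illustrative: treat the reported 79% as both sensitivity and specificity, and assume a screening prevalence of about 1 in 10,000 (neither figure comes from the paper).

```python
# Hypothetical numbers, not from the study.
sensitivity = 0.79       # assume the 79% figure applies here
specificity = 0.79       # ...and here
prevalence = 1 / 10_000  # assumed screening prevalence

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)

# ~0.04%: thousands of false alarms for every real case detected.
print(f"Positive predictive value: {ppv:.4%}")
```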
nahnahno commented on Ask HN: Anyone else find LLM related posts causing them to lose interest in HN    · Posted by u/3vidence
nahnahno · 7 months ago
Yes, likewise.
nahnahno commented on Nvidia announces next-gen RTX 5090 and RTX 5080 GPUs   theverge.com/2025/1/6/243... · Posted by u/somebee
vonneumannstan · 8 months ago
Totally unreasonable expectation. Sry. The cards are literally built for gamers for gaming. That they work for ML is a happy coincidence.
nahnahno · 7 months ago
You can’t possibly be naive enough to believe that Nvidia’s Titan class cards were designed exclusively for gamers.
nahnahno commented on Mark Zuckerberg blamed Sheryl Sandberg for Meta 'inclusivity' push: report   msn.com/en-us/money/execu... · Posted by u/impish9208
add-sub-mul-div · 7 months ago
School integration was woke. Women voting was woke. Interracial marriage was woke. Gay marriage was woke. Over the long run, progress only goes in the right direction. There's no debate, just noise. The year to year skirmishes aren't terribly important in the big picture.
nahnahno · 7 months ago
Moving away from color- and race-blind hiring to discriminatory hiring was not woke, by your definition.
nahnahno commented on Nvidia announces next-gen RTX 5090 and RTX 5080 GPUs   theverge.com/2025/1/6/243... · Posted by u/somebee
vonneumannstan · 8 months ago
If you want to run LLMs buy their H100/GB100/etc grade cards. There should be no expectation that consumer grade gaming cards will be optimal for ML use.
nahnahno · 8 months ago
Yes there should be. We don’t want to pay a literal 10x markup because the card is suddenly “enterprise”.
nahnahno commented on Can LLMs write better code if you keep asking them to “write better code”?   minimaxir.com/2025/01/wri... · Posted by u/rcarmo
scosman · 8 months ago
I have to disagree. Naive algorithms are absolutely fine if they aren’t causing performance issues.

The comment you are replying to is making the point that “better” is context dependent. Simple is often better.

> There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. - Donald Knuth

nahnahno · 8 months ago
> Writing naive algorithms

Depends on the circumstance, and how difficult an appropriate algorithm is to write, but in my experience, if code performance is important, this tends to yield large, painful rewrites down the road.
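As a toy illustration of the trade-off (invented example, not from the thread): the naive version below is perfectly fine at small n, and here the faster rewrite is trivial, but in real systems the equivalent rewrite is often the large, painful one.

```python
from itertools import combinations

def has_duplicate_naive(items: list[int]) -> bool:
    """O(n^2) pairwise scan: simple, and fine while inputs stay small."""
    return any(a == b for a, b in combinations(items, 2))

def has_duplicate(items: list[int]) -> bool:
    """O(n) with a set: the appropriate algorithm once n grows."""
    seen: set[int] = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```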

u/nahnahno · Karma: 132 · Cake day: June 24, 2021