Readit News
starchild3001 commented on MIT Viral Study Debunked [video]   youtube.com/watch?v=X6O21... · Posted by u/starchild3001
starchild3001 · a day ago
An MIT study concluded that only about 5% of news articles show genuine subject expertise and report facts responsibly. The other 95% are primarily driven by clickbait—optimizing for attention and engagement at the cost of accuracy—leaving readers misinformed. Forbes, for example, ranks among the worst offenders, with its truth-to-clickbait ratio barely scraping above 5%. Unfortunately, most outlets fare little better, functioning as copycats of dubious sources like Forbes. Ironically, the authors of this study admitted their “research” drew mainly from their own imagination and an in-house LLM.
starchild3001 commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
starchild3001 · 2 days ago
This feels like a real inflection point for image editing models. What stood out to me isn’t just the raw generative quality, but the consistency across edits and the ability to blend multiple references without falling apart. That’s something people have been hacking around with pipelines (Midjourney → Photoshop → Inpainting tool), but seeing it consolidated in one model/API makes workflows dramatically simpler.

That said, I think we’re still in the “GPT-3.5” phase of image editing: amazing compared to what came before, but still tripping over repeated patterns (keyboards, clocks, Go boards, hands) and sometimes refusing edits due to safety policies. The gap between hype demos and reproducible results is also very real; I’ve seen outputs veer from flawless to poor with just a tiny prompt tweak.

starchild3001 commented on Claim: GPT-5-pro can prove new interesting mathematics   twitter.com/SebastienBube... · Posted by u/marcuschong
starchild3001 · 4 days ago
Hypothesis: If I had ~$1M to burn, I'd try setting up an AI agent to explore and invent new mathematics. It turns out an agent can achieve IMO gold using only the production Gemini 2.5 Pro model. So I suspect a swarm of agents burning through tokens like there's no tomorrow could invent new math.

Reference: https://arxiv.org/abs/2507.15855

Alternative: If the Gemini Deep Think or GPT-5 Pro people are listening, I think they should give free access to their models, with potential scaffolding (i.e., an agentic workflow), to, say, ~100 researchers to see if any of them can prove new math with the technology.

starchild3001 commented on DeepConf: Scaling LLM reasoning with confidence, not just compute   arxiviq.substack.com/p/de... · Posted by u/che_shr_cat
starchild3001 · 4 days ago
I love this direction of research.

Reducing the cost of reasoning is a huge ongoing challenge for LLMs. We're spending so much energy and compute on reasoning that today's consumption rates would have been hard to believe (to me) just a year ago. We're literally burning forests and the atmosphere, and making electricity more expensive for everyone.

DeepSeek v3.1 made a significant leap in this direction recently -- markedly shorter thinking traces at the same quality. GPT-5's router was also one (important) attempt to reduce reasoning costs and make o3-level quality available in the free tier without breaking the bank. This is also why Claude 4 is winning the coding wars against its reasoning peers -- it delivers great quality without all the added reasoning tokens.

Drawing inspiration from the AlphaGo and MCMC literature -- applying tree weighting, prioritization, and pruning -- feels extremely appropriate (to improve the quality of Deep Think, as offered by Gemini and GPT-5 Pro today).
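The core idea -- spend compute on high-confidence reasoning paths and prune the rest -- can be sketched in a few lines. This is a purely illustrative toy, not DeepConf's actual algorithm; the threshold and scoring scheme here are my own made-up assumptions:

```python
from collections import defaultdict

def vote_with_confidence(traces, min_conf=0.5):
    """Each trace is an (answer, confidence) pair from one sampled
    reasoning chain. Prune low-confidence traces, then pick the answer
    with the highest total confidence mass."""
    kept = [(ans, c) for ans, c in traces if c >= min_conf]
    if not kept:  # everything pruned: fall back to all traces
        kept = traces
    scores = defaultdict(float)
    for ans, c in kept:
        scores[ans] += c
    return max(scores, key=scores.get)

# Four sampled traces; the lone high-confidence "17" loses to the
# two moderately confident "42"s once the 0.3 trace is pruned.
traces = [("42", 0.9), ("42", 0.8), ("17", 0.3), ("17", 0.95)]
print(vote_with_confidence(traces))  # → 42
```

The interesting engineering question is where the confidence signal comes from (token logprobs, self-evaluation, a verifier model) -- that choice does most of the work.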

So, yes, more of this please. Totally the right direction.

starchild3001 commented on Making games in Go: 3 months without LLMs vs. 3 days with LLMs   marianogappa.github.io/so... · Posted by u/maloga
starchild3001 · 4 days ago
What I like about this post is that it highlights something a lot of devs gloss over: the coding part of game development was never really the bottleneck. A solo developer can crank out mechanics pretty quickly, with or without AI. The real grind is in all the invisible layers on top: balancing the loop, tuning difficulty, creating assets that don't look uncanny, and building enough polish to hold someone's attention for more than 5 minutes.

That’s why we’re not suddenly drowning in brilliant Steam releases post-LLMs. The tech has lowered one wall, but the taller walls remain. It’s like the rise of Unity in the 2010s: the engine democratized making games, but we didn’t see a proportional explosion of good games, just more attempts. LLMs are doing the same thing for code, and image models are starting to do it for art, but neither can tell you if your game is actually fun.

The interesting question to me is: what happens when AI can not only implement but also playtest -- running thousands of iterations of your loop, surfacing which mechanics keep simulated players engaged? That’s when we start moving beyond "AI as productivity hack" into "AI as collaborator in design." We’re not there yet, but this article feels like an early data point along that trajectory.
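To make the playtesting idea concrete, here's a toy sketch: sweep a difficulty parameter, simulate a population of bot "players" with varying skill, and score each setting by how long players stay engaged before quitting. Every name and number here is invented for illustration; a real system would use learned player models, not this quit heuristic:

```python
import random

def simulate_session(difficulty, skill, rng, max_turns=100):
    """One bot player: each turn, quit with probability proportional to
    the mismatch between the game's difficulty and the player's skill."""
    turns = 0
    for _ in range(max_turns):
        challenge = abs(difficulty - skill)
        if rng.random() < challenge:
            break  # too hard or too easy: player walks away
        turns += 1
    return turns

def playtest(difficulty, n_players=1000, seed=0):
    """Average session length over many simulated players."""
    rng = random.Random(seed)
    return sum(simulate_session(difficulty, rng.random(), rng)
               for _ in range(n_players)) / n_players

# Surface the most "engaging" difficulty setting from a candidate sweep.
best = max([0.1, 0.3, 0.5, 0.7, 0.9], key=playtest)
print(f"most engaging difficulty: {best}")
```

Even this toy surfaces the shape of the real problem: the simulated players are only as good as the engagement model you hand them, which is exactly the part no AI can currently get right.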

starchild3001 commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
starchild3001 · 5 days ago
I think this essay lands on a useful framing, even if you don’t buy every one of its prescriptions. If we zoom out, history shows two things happening in parallel: (1) brute-force scaling driving surprising leaps, and (2) system-level engineering figuring out how to harness those leaps reliably. GPUs themselves are a good analogy: Moore’s Law gave us the raw FLOPs, but CUDA, memory hierarchies, and driver stacks are what made them usable at scale.

Right now, LLMs feel like they’re at the same stage as raw FLOPs: impressive, but unwieldy. You can already see the beginnings of "systems thinking" in products like Claude Code, tool-augmented agents, and memory-augmented frameworks. They’re crude, but they point toward a future where orchestration matters as much as parameter count.

I don’t think the "bitter lesson" and the "engineering problem" thesis are mutually exclusive. The bitter lesson tells us that compute + general methods win out over handcrafted rules. The engineering thesis is about how to wrap those general methods in scaffolding that gives them persistence, reliability, and composability. Without that scaffolding, we’ll keep getting flashy demos that break when you push them past a few turns of reasoning.

So maybe the real path forward is not "bigger vs. smarter," but bigger + engineered smarter. Scaling gives you raw capability; engineering decides whether that capability can be used in a way that looks like general intelligence instead of memoryless autocomplete.

starchild3001 commented on The Amiga games and demo scene collection   amiga.vision/... · Posted by u/doener
striking · 5 days ago
The joy of the demoscene is inextricable from the human and physical nature of it.

Yes, you can have AI tools vibe code up "new" 68k assembly for old machines, but you're never going to see it find genuinely new techniques for pushing the limits of the hardware until you give it access to actual hardware. The demoscene pushes the limits so hard that emulators have to be updated after demos are published. That makes it prohibitively expensive and difficult to employ AI to do this work in the manner you describe.

Don't mistake productivity for progress. There is joy in solving hard problems yourself, especially when you're the one who chose the limitations... And remember to sit back and enjoy yourself once in a while.

Speaking of, here's a demo you can sit back and enjoy: https://youtu.be/3aJzSySfCZM

starchild3001 · 5 days ago
Re: AI. I believe this will still be a human operation, as far as I can see.

Awesome demo! It's a little bit of a midlife crisis :), but superbly done! Thank you.

starchild3001 commented on The Amiga games and demo scene collection   amiga.vision/... · Posted by u/doener
egypturnash · 5 days ago
seriously, has Starchild3001 never looked at the modern indie game scene? Half of it is flooded with people choosing restrictions based on old machines. More consoles than computers, games trying to look like an NES or a PSX are a dime a dozen.
starchild3001 · 5 days ago
I mostly follow Amiga and C64 (a little bit). I don't follow the platforms you're talking about.
starchild3001 commented on The Amiga games and demo scene collection   amiga.vision/... · Posted by u/doener
acherion · 5 days ago
> An amiga museum with all the games, artwork, coding technology, music technology etc. Perhaps an AI can be tasked to produce all of this soon. Youtube videos might be an engaging delivery mechanism. A physical museum too can be considered, perhaps as part of Computer History Museum and similar.

Sounds like you haven't been in touch with the Amiga scene in quite a while, if you think the above is something new. Perhaps Amiga / retro museums haven't been set up in your location, but there are heaps of them in Europe, for example. Youtube videos are a dime a dozen, just search 'amiga' on youtube and you will find literally hundreds of channels dedicated to the Amiga and/or Commodore in general. I subscribe to many of them already, and they all provide excellent in depth content for the Amiga, from hardware, to software, to games, to demos.

> AI coding might unlock mass creation of new software, games, demos, music etc. What was once conceived impossible will be very possible and likely abundant soon

Why would game writing / music creation / demos / software be "once conceived impossible"? Kids were doing the very thing in their bedrooms in the 80s and 90s, without AI. What would AI bring to the table nowadays that couldn't be done in the 80s/90s when the Amiga was popular?

People developing for the Amiga were putting their heart and soul into their creations. AI can't replicate that, and it definitely can't improve it, in any sense of the word.

starchild3001 · 5 days ago
Re: Online content.

I'm well aware of what's available out there as online content (it's no further than a Google or YouTube search).

Do you think what's out there as online content is what's truly possible if we had a million more Amiga enthusiasts?

That's my vision of what's to come in, say, 10-20 yrs. Imagine every Amiga game played and recorded by many (AI) users from start to finish. Every tactic explored, and cool strategies figured out. I for one would watch this.

Imagine vibe coding becoming more and more possible with 68k assembly, and having 1000x the number of Amiga (AI) developers producing cool demo, intro, and game material. New material. Novel and cutting-edge material. At massive scale.

I believe this is the future we're headed toward. I for one am very excited about it.

----------

Re: A physical museum.

No, an Amiga or Commodore focus can't be found anywhere in Silicon Valley or the United States. Even the Computer History Museum (CHM) in Silicon Valley has very little Commodore content.

I live <1 mile away from the original Amiga offices in Los Gatos. It's a bit of a shame that there's so little Amiga or Commodore at the CHM.

u/starchild3001

Karma: 427 · Cake day: December 9, 2015