Readit News
Aditya_Garg commented on The Gervais Principle, or the Office According to “The Office” (2009)   ribbonfarm.com/2009/10/07... · Posted by u/janandonly
adamesque · 2 days ago
Okay, so then — who gets squeezed more by AI: the clueless or the losers?
Aditya_Garg · 2 days ago
Losers. The clueless never had to be productive, just scapegoats. But now losers don't get that buffer window to try and become sociopaths; they just don't get hired at all.
Aditya_Garg commented on Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs   dnhkng.github.io/posts/ry... · Posted by u/dnhkng
Aditya_Garg · 2 days ago
Wild stuff and great read

Do you think Karpathy's autoresearch would be useful here?

Aditya_Garg commented on OpenAI raises $110B on $730B pre-money valuation   techcrunch.com/2026/02/27... · Posted by u/zlatkov
dangus · 13 days ago
I think you’re just describing how it’s circular.

It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.

But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.

Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.

Because it’s circular like this, it lends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into their supply chains lose both their investment and customer.

Aditya_Garg · 12 days ago
This is a common misconception

OpenAI and others are already profitable on inference (inference is really really cheap)

They are just heavily investing into the latest frontier

The biggest risk is whether they can stay cutting edge, or if open source or others will catch up quickly.

Aditya_Garg commented on Show HN: Sgai – Goal-driven multi-agent software dev (GOAL.md → working code)   github.com/sandgardenhq/s... · Posted by u/sandgardenhq
Aditya_Garg · 14 days ago
Pretty cool

Your goal.md examples are all features for the existing codebase. Any largish goal.md examples where your system is able to one-shot a pretty large app?

The goal.md is what makes this thing either amazing or terrible for the user, so any guidelines or clear examples on writing a good one would go a long way.

Aditya_Garg commented on Show HN: Rendering 18,000 videos in real-time with Python   madebymohammed.com/pysaic... · Posted by u/mbmproductions
Aditya_Garg · 17 days ago
This is insane man, great work. Video within video blew my mind

Could you push it even further? Infini video zoom?

I keep zooming and it's just videos all the way down

Aditya_Garg commented on Claws are now a new layer on top of LLM agents   twitter.com/karpathy/stat... · Posted by u/Cyphase
dainiusse · 19 days ago
I don't understand the mac mini hype. Why can it not be a vm?
Aditya_Garg · 19 days ago
It absolutely can be a VM. Someone even got it running on a $2 ESP32. It's just making API calls.
Aditya_Garg commented on Cord: Coordinating Trees of AI Agents   june.kim/cord... · Posted by u/gfortaine
Aditya_Garg · 19 days ago
^ AI slop
Aditya_Garg commented on Claude Code's compaction discards data that's still on disk   github.com/anthropics/cla... · Posted by u/aciccarelli2
CjHuber · 19 days ago
I honestly still don't see the point of compaction. I mean, it would be great if it did work, but I do my best to minimize any potential for hallucination, and a lossy summary is the most counterproductive thing for that.

If you have it write down every important piece of information and finding along a plan that it keeps updated, why would you even want compaction and not just start a blank session by reading that md?

I'm kind of surprised that anyone even thinks that compaction is currently in any way useful at all. I'm working on something that tries to achieve lossless compaction, but it is incredibly expensive: the process needs around 5 to 10 times as many tokens to compact as the conversation it is compacting.

Aditya_Garg · 19 days ago
You just described the Ralph loop, and it's incredibly effective. Compaction is on the way out.
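For context, the Ralph-loop pattern being described is: keep all durable state in a plan file, and instead of compacting a long conversation, repeatedly start a brand-new agent session that reads the plan, does one task, and updates the plan. The sketch below is a hedged illustration of that loop structure only; `run_fresh_session` is a stand-in for actually shelling out to an agent CLI (e.g. `claude -p`), not a real API.

```python
from pathlib import Path

PLAN = Path("PLAN.md")

def run_fresh_session(plan_text: str) -> str:
    """Stand-in for spawning a brand-new agent session.
    A real loop would shell out to an agent CLI here; this stub just
    checks off one task per call so the loop structure is visible."""
    lines = plan_text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("- [ ]"):
            lines[i] = line.replace("- [ ]", "- [x]", 1)
            break
    return "\n".join(lines) + "\n"

def ralph_loop(max_iters: int = 20) -> int:
    """Each iteration starts from a blank context: the ONLY state the
    session sees is PLAN.md. No compaction, no lossy summaries.
    Returns the number of sessions spawned."""
    for i in range(max_iters):
        plan = PLAN.read_text()
        if "- [ ]" not in plan:  # nothing left to do
            return i
        PLAN.write_text(run_fresh_session(plan))
    return max_iters

PLAN.write_text("- [ ] task one\n- [ ] task two\n")
print(ralph_loop())  # two sessions finish the work; the loop then exits
```

The point of the pattern is that context never grows past one task's worth, so there is nothing to compact: the plan file is the memory.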
Aditya_Garg commented on Show HN: I spent 3 years reverse-engineering a 40 yo stock market sim from 1986   wallstreetraider.com/stor... · Posted by u/benstopics
Aditya_Garg · a month ago
Amazing read! Is it possible to do something like this but for wall street raider?

https://labs.ramp.com/rct

"claude code plays wall street raider" would be very very cool.

u/Aditya_Garg

Karma: 125 · Cake day: November 25, 2017