Readit News
overgard commented on Tiny C Compiler   bellard.org/tcc/... · Posted by u/guerrilla
kimixa · 2 days ago
Man I can't wait for tcc to be reposted for the 4th time this week with the license scrubbed and the comment of "The Latest AI just zero-shotted an entire C compiler in 5 minutes!"
overgard · 2 days ago
And the subsequent youtube hype videos of "COMPILER WRITING IS OVER!"
overgard commented on Claude Opus 4.6   anthropic.com/news/claude... · Posted by u/HellsMaddy
jama211 · 4 days ago
There’s nothing wrong with that, except it lets AI skeptics feel superior.
overgard · 4 days ago
I haven't looked at it directly, so I can't speak to quality, but it's a pretty weird way to write a terminal app.
overgard commented on Microsoft's Copilot chatbot is running into problems   wsj.com/tech/ai/microsoft... · Posted by u/fortran77
overgard · 4 days ago
I think the difference with Copilot vs other tools is how it's being pushed on us, and where it's being pushed. Because Copilot is bundled with software people need, it's going to get a lot more scrutiny compared to AI products that aren't built into the OS/built into Office, etc. In a way, being an "agentic OS" isn't that different from say Moltbot (conceptually), but one of those ideas seems to have gotten quite a bit of excitement, and the other a lot of anger, because Moltbot isn't being forced on you.
overgard commented on Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation   github.com/gavrielc/nanoc... · Posted by u/jimminyx
raincole · 7 days ago
> From what I've read, every major AI player is losing a (lot) of money on running LLMs, even just with inference.

> It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated)

Yeah, exactly. So how the hell do the bloggers you read know AI players are losing money? Are they whistleblowers? Or are they pulling numbers out of their asses? Your choice.

overgard · 7 days ago
Some of it's whistleblowers, some of it is pretty simple math and analysis. Some of it's just common sense. Constantly raising money isn't sustainable and just increases obligations dramatically. If these companies didn't need the cash to keep operating, they probably wouldn't be asking for tens of billions a year, because it creates profit expectations that simply can't be delivered on.
overgard commented on Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation   github.com/gavrielc/nanoc... · Posted by u/jimminyx
sothatsit · 7 days ago
Dario Amodei has said that their models actually have a good return, even when accounting for training costs [0]. They lose money because of R&D, training the next bigger models, and I assume also investment in other areas like data centers.

Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me believe them when they say they are profitable on API usage. Usage on the plans is a bit more unknown.

[0] https://youtu.be/GcqQ1ebBqkc?si=Vs2R4taIhj3uwIyj&t=1088

overgard · 7 days ago
> Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply.

Sam Altman got fired by his own board for dishonesty, and a lot of the original OpenAI people have left. I don't know the guy, but given his track record I'm not sure I'd just take his word for it.

As for Chinese models: https://www.wheresyoured.at/the-enshittifinancial-crisis/#th...

From the article:

> You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.

> Anyway, I’m sure these numbers are great-oh my GOD!

> In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.

overgard commented on Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation   github.com/gavrielc/nanoc... · Posted by u/jimminyx
spiderice · 7 days ago
> what if the current prices really are unsustainable and the thing goes 10x?

Where does this idea come from? We know how much it costs to run LLMs. It's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running at a profit? Claude Max might be a different story, but AI is going to get cheaper to run. Not randomly 10x for the same models.

overgard · 7 days ago
From what I've read, every major AI player is losing a (lot) of money on running LLMs, even just with inference. It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated), but if the screws start being turned on investment dollars, they not only have to increase the price of their current offerings (2x cost wouldn't shock me), but some of them also need a (massive) influx of capital to handle things like datacenter build obligations (tens of billions of dollars). So I don't think it's crazy to think that prices might go up quite a bit. We've already seen waves of it, like last summer when Cursor suddenly became a lot more expensive (or less functional, depending on your perspective).
overgard commented on 1-Click RCE to steal your Moltbot data and keys   depthfirst.com/post/1-cli... · Posted by u/arwt
Trufa · 8 days ago
It is very much fun! Chaotic and definitely dangerous but a fun little experiment of the boundaries.

It’s definitely not in its final form, but it’s showing potential.

overgard · 7 days ago
I can see how it could be fun, but I'm a bit skeptical that it's a practical path forward. The security problems it has (prompt injection, for example) don't seem solvable with LLMs in general.
overgard commented on 1-Click RCE to steal your Moltbot data and keys   depthfirst.com/post/1-cli... · Posted by u/arwt
amelius · 8 days ago
What you are missing: now people finally have a Siri that actually works.
overgard · 7 days ago
I guess that's one reason. If I'm perfectly honest, I always turn Siri off because I don't trust Siri either; but that's less of a "malicious actors" thing and more of an "it doesn't work well" thing. Although to be honest, outside of driving in a car I don't really want a voice interface. With a lot of things I feel like I need to overspecify if I have to do it verbally. Like "play this song, but play it from my Spotify Liked playlist so that when the song is over it transitions to something I want" (I've never tried that since I figure Siri can't do it -- just an example).
overgard commented on Ask HN: Any real OpenClaw (Clawd Bot/Molt Bot) users? What's your experience?    · Posted by u/cvhc
bobjordan · 9 days ago
Here is what I have my openclaw agent set up to do in my WSL environment on my 22-core development workstation in my office:

#1) I can chat with the openclaw agent (his name is "Patch") through a Telegram chat, and Patch can spawn a shared tmux instance on my 22-core development workstation.

#2) I can then use the `blink` app on my iPhone + Tailscale, which allows me to run `ssh dev` in blink and connect via SSH to my dev workstation in my office, from my iPhone `blink` app.

Meanwhile, my agent "Patch" has provided me a connection command string to use in my blink app, which is a `tmux <string> attach` command that allows me to attach to a SHARED tmux instance with Patch.

Why is this so fking cool and foundationally game changing?

Because now, my agent Patch and I can spin up MULTIPLE CLAUDE CODE instances, and work on any repository (or repositories) I want, with parallel agents.

Well, I could already spawn multiple agents through my iPhone connection without Patch, but the problem is that then I need to MANAGE each spawned agent, micromanaging each agent instance myself. But now I have a SUPERVISOR for all my agents: Patch is the SUPERVISOR of my multiple Claude Code instances.

This means I no longer have to context-switch my brain between five or 10 or 20 different tmux sessions on my own to command and control multiple different Claude Code instances. I can now just let my SUPERVISOR agent, Patch, command and control the multiple agents and then report back to me the status or any issues. All through a single Telegram chat with my supervisor agent, Patch.

This frees up my brain to only have to manage Patch the supervisor, instead of micro-managing all the different agents myself. Now I have a true management structure, which allows me to scale more easily. This is AWESOME.
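The phone-to-shared-tmux flow described above can be sketched roughly as follows. This is a hypothetical config/CLI fragment, not taken from the post: the host alias (`dev`), the Tailscale hostname, the username, and the tmux session name (`patch`) are all illustrative placeholders.

```shell
# Hypothetical sketch of the shared-tmux workflow above; all names are illustrative.

# ~/.ssh/config on the phone (blink) side, resolving the host over Tailscale:
#   Host dev
#     HostName workstation.tailnet.ts.net
#     User bob

# On the workstation, the agent starts a detached, named tmux session:
tmux new-session -d -s patch

# From blink: `ssh dev`, then attach to that SAME named session,
# so the human and the agent share one live terminal:
tmux attach -t patch
```

The key detail is that `tmux attach -t <name>` joins an existing session rather than creating a new one, which is what makes the screen shared between the agent and the person SSHing in.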

overgard · 8 days ago
Maybe this is just a skill issue on my part, but I'm still trying to wrap my head around the workflow of running multiple Claude agents at once. How do they not conflict with each other? Also, how do you have a project well specified enough that you can have these agents working heads-down for hours on end? My experience as a developer (even pre-AI) has mostly been that writing code fast has rarely been the progress limiter; usually the obstacles are more like underspecified projects, needing user testing, disagreements on the value of specific features, subtle hard-to-fix bugs, communication issues, dealing with other teams and their tech, etc. If I have days where I can just be heads down writing a ton of code, I'm very happy.
overgard commented on Code is cheap. Show me the talk   nadh.in/blog/code-is-chea... · Posted by u/ghostfoxgod
visarga · 9 days ago
> "write some unit tests for Redux today"

The equivalent of "draw me a dog" -> not a masterpiece!? Who would have thought? You need to come up with a testing methodology, write it down, and then ask the model to go through it. It likes to make assumptions about unspecified things, so you've got to be careful.

More fundamentally I think testing is becoming the core component we need to think about. We should not vibe-check AI code, we should code-check it. Of course it will write the actual test code, but your main priority is to think about "how do I test this?"

You can only know the value of code up to the level of its testing. You can't commit your eyes into the repo, so don't do "LGTM" vibe-testing of AI code; that's walking a motorcycle.
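The "code-check, don't vibe-check" idea above amounts to pinning the model's output with explicit tests instead of eyeballing it. A minimal sketch (in Python rather than the Redux/JS context of the comment; the reducer-style function and action names are hypothetical, invented for illustration):

```python
# Hypothetical example: a reducer-style function the model might write.
# The tests below are the "testing methodology, written down" -- they pin
# behavior for both the specified case and an unspecified one the model
# could otherwise fill in with its own assumptions.

def counter_reducer(state: dict, action: dict) -> dict:
    """Return a new state; never mutate the input."""
    if action["type"] == "increment":
        return {**state, "count": state["count"] + 1}
    # Unspecified actions must leave state unchanged.
    return state

# Specified behavior:
assert counter_reducer({"count": 0}, {"type": "increment"}) == {"count": 1}
# Pinned-down edge case (an unknown action is a no-op):
assert counter_reducer({"count": 5}, {"type": "noop"}) == {"count": 5}
# The original state was not mutated:
s = {"count": 2}
counter_reducer(s, {"type": "increment"})
assert s == {"count": 2}
print("all checks passed")
```

The point isn't this particular function; it's that the assertions, not a visual skim, are what establish how far the code can be trusted.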

overgard · 8 days ago
I think you're assuming a lot about my prompting. I know to be specific with LLMs.

u/overgard

Karma: 11201 · Cake day: November 16, 2009