itsalotoffun commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
Davidzheng · 10 days ago
It actually doesn't matter what we want. Because eliminating human labor will, in the long run, increase yield, capitalistic forces will automate humans away.
itsalotoffun · 10 days ago
This is correct. It will require non-market forces to engineer soft landings for humans. We may see a wave of "job-preserving" legislation in the coming years, but it will eventually be washed away in favor of taxing the AI economy.
itsalotoffun commented on Best Practices for Building Agentic AI Systems   userjot.com/blog/best-pra... · Posted by u/vinhnx
bashtoni · 10 days ago
Am I the only one who cannot stand this terrible AI generated writing style?

These awful three sentence abominations:

"Each subagent runs in complete isolation. The primary agent handles all the orchestration. Simple." "No conversation. No “remember what we talked about.” Just task in, result out." "No ambiguity. No interpretation. Just data."

AI is good at some things, but copywriting certainly isn't one of them. Whatever the author put into the model to get this output would have been better than what the AI shat out.

itsalotoffun · 10 days ago
I'm genuinely curious: is it a) the writing style you can't stand, or b) the fact that this piece tripped your "this is written by AI" detector, and it's AI-written stuff you can't stand? And what's the % split between the two?

(I find there's a growing push-back against being fed AI-anything, so when this is suspected it seems like it generates outsized reactions)

itsalotoffun commented on Best Practices for Building Agentic AI Systems   userjot.com/blog/best-pra... · Posted by u/vinhnx
imsh4yy · 10 days ago
Author of this post here.

For context, I'm a solo developer building UserJot. I've recently been looking into integrating AI into the product, but I've wanted to go a lot deeper than just wrapping a single API call and calling it a day.

So this blog post is mostly my experience from a stretch of reverse engineering other AI agents and experimenting with different approaches.

Happy to answer any questions.

itsalotoffun · 10 days ago
Also, regarding your agents (primary and sub):

- Did you build your own or are you farming out to, say, Opencode?
- If you built your own, did you roll from scratch or use a framework? Any comments either way on this?
- How "agentic" (or constrained, as the case may be) are your agents in terms of the tools you've provided them?

itsalotoffun · 10 days ago
When you discuss caching, are you talking about caching the LLM response on your side (what I presume), or actual prompt caching using the provider's cache[0]? Also curious why you'd invalidate static content?

[0]: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...
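For what it's worth, the two are mechanically very different: response caching memoizes the final output on your side, while provider prompt caching marks a large static prefix so the host reuses it across calls and bills the cached portion at a reduced rate. A minimal sketch using the Anthropic Python SDK (the model id and prompt contents are placeholders, not anything from the post):

```python
# Provider-side prompt caching: the static prefix is cached by Anthropic,
# so repeat calls sharing it don't pay full price to re-process it.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

STATIC_SYSTEM_PROMPT = "...your large, unchanging instructions and context..."

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": STATIC_SYSTEM_PROMPT,
            # Marks this block as cacheable on the provider side.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarize this feedback thread."}],
)
print(response.content[0].text)
```

Response caching, by contrast, would just be a dict or Redis lookup keyed on the prompt, entirely on your own infrastructure.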

itsalotoffun commented on AI is different   antirez.com/news/155... · Posted by u/grep_it
itsalotoffun · 10 days ago
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]

What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.

I'm seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative is wafer-thin on real analysis, packed with "the feels", and coated in what-ifs... it's sloppy thinking, and I hold you to a higher standard, antirez.

itsalotoffun commented on Claude Opus 4 and 4.1 can now end a rare subset of conversations   anthropic.com/research/en... · Posted by u/virgildotcodes
Davidzheng · 11 days ago
your argument assumes that they don't believe in model welfare when they explicitly hire people to work on model welfare?
itsalotoffun · 10 days ago
While I'm certain you'll find plenty of people who believe in the principle of model welfare (or aliens, or the tooth fairy), it'd be surprising to me if the brain-trust behind Anthropic truly _believed_ in model "welfare" (the concept alone is ludicrous). It makes for great cover though to do things that would be difficult to explain otherwise, per OP's comments.
itsalotoffun commented on Best Practices for Building Agentic AI Systems   userjot.com/blog/best-pra... · Posted by u/vinhnx
itsalotoffun · 11 days ago
Super practical, no-bullshit write-up, clearly coming from the trenches. Worth the read.
itsalotoffun commented on Are you willing to pay $100k a year per developer on AI?   theregister.com/2025/08/1... · Posted by u/rntn
JimDabell · 11 days ago
> In the early days of ride share it was an amazing advancement and very cheap, because it was highly subsidized.

This is not an analogous situation.

Inference APIs aren’t subsidised, and I’m not sure the monthly plans are any more either. AI startups burn a huge amount of money on providing free service to drive growth. That’s something they can reduce at any time without raising costs for their customers at all. Not to mention the fact that the cost of providing inference is plummeting by several orders of magnitude.

Uber weren’t providing free service to huge numbers of people, so when they wanted to turn a profit they couldn’t reduce there and had to raise prices for their customers. And the fees they pay to drivers didn’t drop a thousandfold so it wasn’t getting vastly cheaper to provide service.

itsalotoffun · 11 days ago
Lots of chat about this:

> Inference APIs aren’t subsidised

This is hard to pin down. There are plenty of bare-metal companies providing hosted inference at market rates (i.e., presumably profitably, even as prices head towards some commodity floor). The premise that every single one of these companies is operating at a loss is unlikely. The open question is the "off-book" training cost of the models running on those servers: are your unit economics positive once you factor in training costs? And if those training costs are truly off-book, it's not a meritless argument to say the model providers are "subsidizing" the inference industry. But it's not a clear-cut argument either.
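To make the "off-book training cost" question concrete, here's a back-of-the-envelope sketch. Every figure below is a made-up placeholder, not a claim about any actual provider:

```python
# Toy amortization of training cost into per-token inference economics.
# All numbers are hypothetical, purely for illustration.

training_cost = 1e8           # assume $100M to train the model
lifetime_tokens = 1e14        # assume 100T tokens served over its lifetime
serving_cost_per_mtok = 1.00  # assume $1/Mtok in pure serving cost
price_per_mtok = 3.00         # assume $3/Mtok charged to customers

amortized_training = training_cost / lifetime_tokens * 1e6  # $/Mtok
total_cost = serving_cost_per_mtok + amortized_training
margin = price_per_mtok - total_cost

print(f"amortized training: ${amortized_training:.2f}/Mtok")  # $1.00
print(f"all-in cost:        ${total_cost:.2f}/Mtok")          # $2.00
print(f"margin:             ${margin:.2f}/Mtok")              # $1.00
```

With these placeholders the unit economics stay positive; cut the lifetime token volume by 10x and the margin goes deeply negative. That sensitivity is exactly why the "subsidized or not" question is hard to settle from the outside.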

Anthropic and OpenAI are their own beasts. Are their unit economics negative? Depends on the time frame you're considering. In the mid-to-long run, they're staking everything on "most decidedly not negative". But what will the rest of us be paying on the day OpenAI posts 50% operating margins?

itsalotoffun commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
aethrum · 12 days ago
Huh, interesting. Though I do wonder if the best possible thing an AI could help code would be another AI tool.
itsalotoffun · 11 days ago
This way to the hard take-off.
itsalotoffun commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
andrewmutz · 12 days ago
The author does not understand what LLMs and coding tools are capable of today.

> LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing as to whether to fix the code or the tests; and when it gets frustrating, they just delete the whole lot and start over. This is exactly the opposite of what I am looking for. Software engineers test their work as they go. When tests fail, they can check in with their mental model to decide whether to fix the code or the tests, or just to gather more data before making a decision. When they get frustrated, they can reach for help by talking things through. And although sometimes they do delete it all and start over, they do so with a clearer understanding of the problem.

My experience comes from using Cline with Anthropic Sonnet 3.7 doing TDD on Rails, and it's very different. I instruct the model to write tests before any code and it does. It works in small enough chunks that I can review each one. When tests fail, it tends to reason very well about why and fixes the appropriate place. It is very common for the LLM to consult more code as it goes to learn more.

It's certainly not perfect, but it works about as well as, if not better than, a human junior engineer. Sometimes it can't solve a bug, but human junior engineers get in the same situation too.

itsalotoffun · 11 days ago
> It works in small ... chunks

Yup.

> I ... review each one

Yup.

These two practices are core to your success. GenAI reliably hangs itself given longer rope.
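To spell out why those two practices matter, here's the shape of the loop in sketch form. Every name is a stand-in of mine, not Cline's (or any tool's) actual internals:

```python
# Hypothetical review-gated TDD loop: small chunks, a human at each gate,
# bounded retries so the model never gets enough rope to hang itself.

def tdd_loop(tasks, llm, run_tests, human_review, commit, max_fixes=5):
    for task in tasks:                      # small chunks: one task at a time
        test_code = llm.write_test(task)    # tests before implementation
        if not human_review(test_code):     # gate 1: a human reads the tests
            continue
        impl = llm.write_impl(task, test_code)
        for _ in range(max_fixes):          # bounded iteration, not a long rope
            if run_tests(test_code, impl):
                break
            impl = llm.fix(task, test_code, impl)
        else:
            continue                        # chunk abandoned; human takes over
        if human_review(impl):              # gate 2: a human reads the diff
            commit(task, test_code, impl)
```

Drop either review gate or the retry bound and you're back to the failure mode the article describes.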
