Readit News


bubblyworld commented on What is going on right now?   catskull.net/what-the-hel... · Posted by u/todsacerdoti
grey-area · 2 days ago
Or maybe it's just not as good as it's been sold to be. I haven't seen any small teams doing very big things with it, which ones are you thinking of?
bubblyworld · 2 days ago
As always, two things can be true. Ignore both the hucksters and the people loudly denigrating everything LLM-related, and somewhere in between you find the reality.

I'm in a tiny team of 3 writing B2B software in the energy space, and Claude Code is a godsend for the fiddly-but-brain-dead parts of the job (config stuff, managing cloud infra, one-and-done scripts, little single-page dashboards, etc.).

We've had much less success with the more complex things, like maintaining various linear programming/neural net models we've written. It's really good at breaking stuff in subtle ways (like removing L2 regularisation from a VAE in a way that still looks implemented on a visual read). But personally I still think the juice is worth the squeeze; mainly I find it saves me mental energy I can use elsewhere.
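For illustration, a purely hypothetical sketch (not the actual model code) of the kind of subtle break described here: a regularisation term that is still computed, so the code looks regularised on a skim, but never actually reaches the loss the optimiser sees.

```python
import numpy as np

def vae_loss(recon_err, weights, l2_coeff=1e-4):
    # The L2 penalty is still computed, so a visual scan of the code
    # suggests regularisation is in place...
    l2_penalty = l2_coeff * sum(np.sum(w ** 2) for w in weights)
    # ...but the subtle bug is here: the penalty is dropped instead of
    # being added (should be recon_err + l2_penalty).
    total = recon_err
    return total, l2_penalty

weights = [np.ones((2, 2)), np.ones((3,))]
total, penalty = vae_loss(recon_err=1.0, weights=weights)
print(total)    # 1.0 -- the penalty never reaches the loss
print(penalty)  # small positive value, silently discarded
```

Nothing crashes and the penalty tensor even shows up in debugging output, which is exactly why this class of change is easy to miss in review.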

bubblyworld commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
brushfoot · 3 days ago
I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?

Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.

bubblyworld · 2 days ago
I think people react to AI with strong emotions, which can come from many places: anxiety and uncertainty about the future is a common one, strong dislike of change is another (especially amongst autists, who, judging by me and my friend circle, are quite common around here). Maybe that explains a lot of the spicy hot takes you see here and on lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore the extremist takes entirely, both positive and negative, to get a pulse on what's going on.

If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).

bubblyworld commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
Deestan · 2 days ago
I am now making an emotional reaction based on zero knowledge of the B2B codebase's environment, but to be honest I think it is relevant to the discussion on why people are "worlds apart".

200k lines of code is a failure state. At this point you have lost control and can only make changes to the codebase through immense effort, and not at a tolerable pace.

Agentic code writers are good at giving you this size of mess and at helping to shovel stuff around to make changes that are hard for humans due to the unusable state of the codebase.

If overgrown, barely manageable codebases are all a person's ever known, and they think it's normal that changes are hard and time-consuming and need reams of code, I understand that they believe AI agents are useful as code writers. I think they do not have the foundation to tell mediocre from good code.

I am extremely aware of the judgemental hubris of this comment. I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion.

bubblyworld · 2 days ago
> 200k lines of code is a failure state.

What on earth are you talking about? This is unavoidable for many use-cases, especially ones that involve interacting with the real world in complex ways. It's hardly a marker of failure (or success, for that matter) on its own.

bubblyworld commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
therobots927 · 2 days ago
This is pure sophistry and the use of formal mathematical notation just adds insult to injury here:

“Think about it: we’ve built a special kind of function F' that for all we know can now accept anything — compose poetry, translate messages, even debug code! — and we expect it to always reply with something reasonable.”

This forms the axiom from which the rest of this article builds its case. At each step further fuzzy reasoning is used. Take this for example:

“Can we solve hallucination? Well, we could train perfect systems to always try to reply correctly, but some questions simply don't have "correct" answers. What even is the "correct" when the question is "should I leave him?".”

Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

The most disturbing part of my tech career has been witnessing the ability that many highly intelligent and accomplished people have to apparently fool themselves with faulty yet complex reasoning. The fact that this article is written in defense of chatbots that ALSO have complex and flawed reasoning just drives home my point. We’re throwing away determinism just like that? I’m not saying future computing won’t be probabilistic but to say that LLMs are probabilistic, so they are the future of computing can only be said by someone with an incredibly strong prior on LLMs.

I’d recommend Baudrillards work on hyperreality. This AI conversation could not be a better example of the loss of meaning. I hope this dark age doesn’t last as long as the last one. I mean just read this conclusion:

“It's ontologically different. We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.”

I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

That’s called the scientific method. Which is a PRECURSOR to planning and engineering. That’s how we built the technology we have today. I’ll stop now because I need to keep my blood pressure low.

bubblyworld · 2 days ago
You seem to be having strong emotions about this stuff, so I'm a little nervous that I'm going to get flamed in response, but my best take at a well-intentioned response:

I don't think the author is arguing that all computing is going to become probabilistic. I don't get that message at all - in fact they point out many times that LLMs can't be trusted for problems with definite answers ("if you need to add 1+1 use a calculator"). Their opening paragraph was literally about not blindly trusting LLM output.

> I don’t actually think the above paragraph makes any sense, does anyone disagree with me?

Yes - it makes perfect sense to me. Working with LLMs requires a shift in perspective. There isn't a formal semantics you can use to understand what they are likely to do (unlike programming languages). You really do need to resort to observation and hypothesis testing, which yes, the scientific method is a good philosophy for! Two things can be true.

> the use of formal mathematical notation just adds insult to injury here

I don't get your issue with the use of a function symbol and an arrow. I'm a published mathematician - it seems fine to me? There's clearly no serious mathematics here, it's just an analogy.

> This AI conversation could not be a better example of the loss of meaning.

The "meaningless" sentence you quote after this is perfectly fine to me. It's heavy on philosophy jargon, but that's more a taste thing no? Words like "ontology" aren't that complicated or nonsensical - in this case it just refers to a set of concepts being used for some purpose (like understanding the behaviour of some code).

bubblyworld commented on Project to formalise a proof of Fermat’s Last Theorem in the Lean theorem prover   imperialcollegelondon.git... · Posted by u/ljlolel
Xcelerate · 3 days ago
I feel like there’s an interesting follow-up problem which is: what’s the shortest possible proof of FLT in ZFC (or perhaps even a weaker theory like PA or EFA since it’s a Π^0_1 sentence)?

Would love to know whether (in principle obviously) the shortest proof of FLT actually could fit in a notebook margin. Since we have an upper bound, only a finite number of proof candidates to check to find the lower bound :)

bubblyworld · 3 days ago
Even super simple results like the uniqueness of prime factorisation can take pages of foundational mathematics to formalise rigorously. The Principia Mathematica famously takes entire chapters to talk about natural numbers (although it's not ZFC, to be fair). For that reason I think it's pretty unlikely.
bubblyworld commented on Modern CI is too complex and misdirected (2021)   gregoryszorc.com/blog/202... · Posted by u/thundergolfer
thrown-0825 · 4 days ago
until you have to debug a GH action, especially if it only runs on main or is one of the handful of tasks that are only picked up when committed to main.

god help you, and don’t even bother with the local emulators / mocks.

bubblyworld · 4 days ago
I've had a great experience using `act` to debug GitHub Actions locally in containers. I guess your mileage, as usual, will vary depending on what you are doing in CI.
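For anyone who hasn't tried it, a minimal sketch of what this looks like with `act` (nektos/act); the job name `build` is illustrative, and this assumes Docker is running and your workflows live in `.github/workflows`:

```shell
# List the jobs act can find in .github/workflows
act -l

# Run the "build" job as if a push event had triggered it,
# inside a local container instead of on GitHub's runners
act push -j build
```

This lets you iterate on a workflow without pushing a commit for every attempt, though jobs that depend on GitHub-hosted secrets or services still need extra setup.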
bubblyworld commented on Architecting large software projects [video]   youtube.com/watch?v=sSpUL... · Posted by u/jackdoe
wellpast · 8 days ago
I’m just making the point that most software dev work is not novel.

You’re either making a productivity app where CRUD and UX are pretty well known patterns.

Or a scalable web system - also very well tried territory.

Or analytics and data processing - again well trod.

If you’re not a good pattern matcher you might think every UI framework, or your next API abstraction, is some next general theory of relativity.

But otherwise the major novelty in most software project pursuits is going to be the context, people, and industry you're building it into, not the tech.

bubblyworld · 5 days ago
Right, fair enough, I agree with that.
bubblyworld commented on AI vs. Professional Authors Results   mark---lawrence.blogspot.... · Posted by u/biffles
keiferski · 6 days ago
I don’t really find it that surprising - if you write generic low-quality stories, it’s difficult to differentiate your work from an AI writing generic low-quality stories. People have been selling slop stories on Amazon for a long time before AI tools, so describing them as professional authors is not exactly damning here.
bubblyworld · 5 days ago
Okay, but if your argument is that people couldn't differentiate them because these particular authors also write "slop" then you should just lead with that. I think many people would reasonably disagree with you.

u/bubblyworld

Karma: 844 · Cake day: May 30, 2020