Readit News

feastingonslop commented on Stop generating, start thinking   localghost.dev/blog/stop-... · Posted by u/frizlab
locknitpicker · 11 hours ago
> We need no AI for this one: If I could only maintain code I wrote, I'd have to work alone.

I think you missed the whole point. This is not about you understanding a particular change. It's about the person behind the code change not understanding the software they are tasked to maintain. It's akin to the discussion about the fundamental differences between script kiddies and hackers.

With LLMs and coding agents, there is clear pressure to turn developers into prompt kiddies: people who can deliver results when the problem is bounded, but are fundamentally unable to understand what they did or the system they are working in.

This is not about a sudden onset of incompetence. It's about a radical change in workflows that no longer favors, or even allows, the research needed to familiarize yourself with a project. You no longer need to pick through a directory tree to know where things are, or navigate through code to check where a function is called or which components are related. Having to manually open a file to read or write it is a learning moment that helps you recall and understand how and why things are done. With LLMs you don't even learn what is there.

Thus developers who lean heavily on LLMs don't get to learn what's happening. Everyone can treat the project as a black box, and focus on observable changes to the project's behavior.

feastingonslop · 11 hours ago
> Everyone can treat the project as a black box, and focus on observable changes to the project's behavior.

This is a good thing. I don’t need to focus on oil refineries when I fill my car with gas. I don’t know how to run a refinery, and don’t need to know.


feastingonslop commented on Software factories and the agentic moment   factory.strongdm.ai/... · Posted by u/mellosouls
varispeed · 2 days ago
How deep do you want to go? A reasonable person wouldn't expect to hand-hold an AI(ntelligence) to that level. Of course, after I pointed it out, it corrected itself. But that involved looking at the code and knowing the code was poor. If you don't look at the code, how would you know to state this requirement? Somehow you have to assess the level of intelligence you are dealing with.
feastingonslop · 2 days ago
Since the code does not matter, you wouldn't need or want to phrase it in terms of algorithmic complexity. You'd have a more real-world requirement, like: if the data set has X elements, it should be processed within Y milliseconds. The AI is free to implement that however it likes.
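A requirement like that can be written directly as an automated test. A minimal Python sketch (the element count, the time budget, and the `process` function are all hypothetical stand-ins, not from the thread):

```python
import time

def process(data):
    # Stand-in for the implementation under test; per the argument
    # above, the AI is free to implement it however it likes.
    return sorted(data)

def test_processing_time_budget():
    # Hypothetical requirement: 100,000 elements processed in under 200 ms.
    data = list(range(100_000, 0, -1))
    start = time.perf_counter()
    process(data)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 200, f"took {elapsed_ms:.1f} ms; budget is 200 ms"
```

A test like this checks the externally observable requirement (speed) without ever inspecting how the code achieves it.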
feastingonslop commented on Software factories and the agentic moment   factory.strongdm.ai/... · Posted by u/mellosouls
varispeed · 2 days ago
An example: it had a complete interface to a hash map. The task was to delete elements. Instead of using the hash map API, it iterated through the entire underlying array to remove a single entry. The expected solution was O(1), but it implemented O(n). These decisions compound. The software may technically work, but the user experience suffers.
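The anti-pattern described above looks roughly like this, as a Python sketch (function names and the (key, value)-pair layout are hypothetical illustrations, not the actual code being discussed):

```python
def delete_slow(entries, key):
    # O(n): scans the entire underlying array of (key, value)
    # pairs just to remove a single entry.
    entries[:] = [(k, v) for (k, v) in entries if k != key]

def delete_fast(table, key):
    # Expected O(1): uses the hash map API directly
    # (dict deletion is amortized constant time).
    table.pop(key, None)
```

Both remove the entry, which is why tests pass either way; only the asymptotic cost differs, and that cost is what compounds at scale.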
feastingonslop · 2 days ago
If you have particular performance requirements like that, then include them. Test for them. You still don't have to look at the code: either the software meets expectations or it doesn't, and you keep having the AI work at it until you're satisfied.
feastingonslop commented on Software factories and the agentic moment   factory.strongdm.ai/... · Posted by u/mellosouls
flyinglizard · 2 days ago
That assumes no human will ever go near the code, that it doesn't get out of hand over time (inference time and token limits are both real constraints), and that anti-patterns don't accumulate until the code is a logical mess that produces bugs through a webbing of specific behaviors instead of proper architecture.

However, I'd guess at least some of that can be mitigated by distilling out a system description and then running agents again to refactor the entire thing.

feastingonslop · 2 days ago
And that is the right assumption. Why would any human need (or even want) to look at code any more? That's like saying you want to manually inspect the oil refinery every time you fill your car up with gas. Absurd.
feastingonslop commented on Software factories and the agentic moment   factory.strongdm.ai/... · Posted by u/mellosouls
varispeed · 2 days ago
AI also quickly goes off the rails, even the Opus 2.6 I am testing today. The proposed code is very much rubbish, but it passes the tests. It wouldn't pass skilled human review. The worst thing is that, if you let it, it will just pile tech debt on top of tech debt.
feastingonslop · 2 days ago
The code itself does not matter. If the tests pass, and the tests are good, then who cares? AI will be maintaining the code.
feastingonslop commented on Coding agents have replaced every framework I used   blog.alaindichiappari.dev... · Posted by u/alainrk
harrisi · 2 days ago
The aspect of "potentially secure/stable code" is very interesting to me. There is already an enormous amount of code that isn't secure or stable (I'd argue virtually all code in existence).

This has already been a problem, and there are no real ramifications for it. Even Cloudflare stopping a significant amount of Internet traffic for a period of time is not (as far as I know) independently investigated; nobody potentially faces charges. In other engineering disciplines, there absolutely are consequences: regular checks, government agencies that audit systems, penalties for causing harm, and so on are expected in those fields.

LLM-generated code is the continuation of the bastardization of software "engineering." Now not only is nobody accountable, but a black-box cluster of computers can't even reasonably be held accountable. If someone makes a tragic mistake today, it can be understood who caused it. If a "Cloudflare2" comes about whose code is entirely (or significantly) generated, whoever is in charge can just throw up their hands and say "hey, I don't know why it did this, and the people who made the system that made this mistake don't know why either." It has been, and will continue to be, very concerning.

feastingonslop · 2 days ago
Nobody is saying to skip testing the software. Testing is still important. What the code itself looks like, isn’t.
feastingonslop commented on Coding agents have replaced every framework I used   blog.alaindichiappari.dev... · Posted by u/alainrk
ipsento606 · 2 days ago
> Software engineers are scared of designing things themselves.

When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.

Those beliefs aren't always true, but they're often true.

Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.

feastingonslop · 2 days ago
And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of far more experience than any single programmer, and can generate just the bit of code you actually need, without pulling in a whole framework.
