I had stumbled upon Kidlin’s Law—“If you can write down the problem clearly, you’re halfway to solving it”.
This is a powerful guiding principle in today’s AI-driven world. As natural language becomes our primary interface with technology, clearly articulating challenges not only enhances our communication but also maximizes the potential of AI.
The async approach to coding has been most fascinating, too.
I will add, I've been using Repl.it *a lot*, and it takes everything to another level. Getting to focus on problem solving, with less futzing with hosting (granted, hosting is easy in the early journey of a product), is an absolute game changer. Sparking joy.
I personally use the analogy of a Mario Kart mushroom or star; that's how I feel using these tools. It's funny though, because when it goes off the rails, it really goes off the rails lol. It's also sometimes necessary to intercept decisions it's about to take... babysitting can take a toll (because of the speed of execution). Having to deal with 1 stack was something... now we're dealing with potentially infinite stacks.
Because I can never focus on just one thing, I have a philosophy degree. I’ve worked with product teams and spent lots of time with stakeholders. I’ve written tons of docs because I was the only one on the team who enjoyed it.
I’ve always bemoaned my distractibility as an impediment to deep expertise, but at least it taught me to write well, for all kinds of audiences.
The challenge is that clearly stating things is, and always has been, the hard part. It's awesome that we have tools which can translate clear natural language instructions into code, but even if we get AGI you'll still have to do that. Maybe you can save some time in the process by not having to fight with code as much, but you're still going to have to create really clear specs, which, again, is the hard part.
Many years ago, in another millennium, before I even went to university but while I was still an apprentice (the German system, in a large factory), I wrote my first professional software, in assembler. I got stuck on a hard part. Fortunately there was another quite intelligent apprentice colleague with me (now a hard-science Ph.D.), and I delegated that task to him.
He still needed an explanation, since he didn't have any of my context, so I bit the bullet and explained the task to him as well as I could. When I was done, I noticed that I had just created exactly the algorithm I needed. After that, I easily wrote it down myself in less than half an hour.
In my experience, only a limited part of software can be built from really clear specs alone. At times in my career I have worked on things where what was really needed only became clear the longer we worked on them, and in those cases really clear up-front specs would have produced worse outcomes.
I think about this a lot. Early on, as a self-taught engineer, I spent a lot of time simply learning the vernacular of the software engineering world so that I could explain what it was that I wanted to do.
Repl.it is so hit or miss for me, and that is so frustrating. Like, it can knock out something in minutes that would have taken me an afternoon. That's amazing.
Then other times, I go to create something that is suggested _by them below the prompt box_ and it can't do it properly.
The fact that you think it was suggested _by_ them is I think where your mental model is misleading you.
LLMs can be thought of metaphorically as a process of decompression: if you can give it a compressed form of your request, as in your first scenario, it'll go great - you're actually doing a lot of mental work to arrive at that 'compressed' request, checking technical feasibility, thinking about interactions, hinting at solutions.
If you feed it back its own suggestion, it's not so guaranteed to work.
I've found LLMs to be a key tool in helping me articulate something clearly. I write down a few half-vague notes, maybe some hard rules, and my overall intent, ask it to articulate a spec, and then ask it for suggestions, feedback, and clarifying questions from a variety of perspectives. This gives me enough material to clarify my actual requirements and then ask for that to be broken down into a task list. All along the way I'm both refining my mental model and my written material to more clearly communicate my intent to both machines and humans.
Increasingly I've also just been YOLOing single-shot throwaway systems to explore the design space - it is easier to refine the ideas with partially working systems than with abstract prose alone.
I'm loving the new programming. I don't know where it goes either, but I like it for now.
I'm actually producing code right this moment, where I would normally just relax and do something else. Instead, I'm relaxing and coding.
It's great for a senior guy who has been in the business for a long time. Most of my edits nowadays are tedious. If I look at the code and decide I used the wrong pattern originally, I have to change a bunch of things to test my new idea. I can skim my code and see a bunch of things that would normally take me ages to fiddle. The fiddling is frustrating, because I feel like I know what the end result should be, but there's some minor BS in the way, which takes a few minutes each time. It used to take a whole stackoverflow search + think, recently it became a copilot hint, and now... Claude simply does it.
For instance, I wrote a mock stock exchange. It's the kind of thing you always want to have, but because the pressure is on to connect to the actual exchange, it is often a leftover task that nobody has done. Now, Claude has done it while I've been reading HN.
Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.
Now I know what you're all thinking. How does it not end up with spaghetti all over the place? Well. I actually do critique the changes. I actually do have discussions with Claude about what to do. The benefit here is he's a dev who knows where all the relevant code is. If I ask him whether there's a lock in a bad place, he finds it super fast. I guess you need experience, but I can smell when he's gone off track.
So for me, career-wise, it has come at the exact right time. A few years after I reached a level where the little things were getting tedious, a time when all the architectural elements had come together and been investigated manually.
What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.
> What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.
Exactly my thinking. Nearly 50, with more than 30 years of experience in nearly every kind of programming, like you, I can easily architect/control/adjust the agent to help me produce great code with a very robust architecture. But I do that out of my experience, both in modelling (science) and programming. I wonder how junior devs will be able to build experience if everything comes cooked by the agent. Time will tell.
I feel like we've been here before: there was a time when, if you were going to be an engineer, you needed to know core equations, take a lot of derivatives, perform mathematical analysis on paper, get results in an understandable form, and come up with solutions. That process may be analogous to what we used to think of as beginning with core data structures and algorithms, design patterns, architecture and infrastructure patterns, and analyzing them all together to create something nice. Yet today, much of the lower-level mathematics that was previously required no longer is. And although people are still trained in what is available and where it is used, that mathematics now forms the backbone of systems that automate the vast majority of the engineering process.
It might be as simple as creating awareness about how everything works underneath and creating graduates that understand how these things should work in a similar vein.
I don't know how seniors will cope. You seem to have a solid enough understanding that you can make use of AI, but most seniors on HN struggle with basic tasks using AI. Juniors are likely to outpace them quickly, but potentially without the experience or understanding.
Really well said; it's a large amount of directing in addition to anything else.
To continue this thought - what could have been different in the last 10-15 years to encourage junior developers to listen more, where they might not have, to those who were slightly ahead of them?
I also am enjoying LLMs, but I get no joy out of just prompting them again and again. I get so incredibly bored, with a little side of anxiety that I don’t really know how my program works.
I'll probably get over it, but I've been realizing how much fun I get out of building something as opposed to just having it be built. I used to think all I cared about was results, and now I know that's not true, so that's fun!
Of course for the monotonous stuff that I've done before or don't care a lick about, hell yeah I let 'em run wild. Boilerplate, CRUD, shell scripts, CSS. Had Claude make me a terminal-based version of Snake. So sick.
This is interesting. Maybe slow it down a bit? What I've found is I really need to be extremely involved. I approve every change (claude-code). I'm basically micromanaging an AI developer. I'm constantly reading and correcting. Sometimes I tell it to wait while I help it make some change it's hung up on.
There's no way I could hire someone who'd want me hovering over their shoulder like this.
This sounds tedious I guess, but it's actually quite zen, and faster than solo coding most of the time. It gives me a ton of confidence to try new things and new libraries, because I can ask it to explain why it's suggesting the changes or for an overview of an approach. At no point am I not aware of what it's doing. This isn't even close to what people think of as vibe coding. It's very involved.
I'm really looking forward to increasing context sizes. Sometimes it can spin its wheels during a refactor and want to start undoing changes it made earlier in the process, and I have to hard-correct it. Even twice the context size will be a game changer for me.
I've always felt building something was close to artistry. You create something out of your thoughts, you shape it how you want and you understand how it works to the most minute detail. The amount of times I've shown something seemingly simple to someone and went "but wait this is what is actually happening in the background!" and started explaining something I thought was cool or clever are great memories to me. AI is turning renaissance paintings into mass-market printing. There's no pride, no joy, just productivity. It's precisely those repetitive, annoying tasks that lead you to create a faster alternative, or to think outside the box and find different ways. I just don't get the hype.
My biggest problem with working with LLMs is that they don't understand negatives, and they also somehow fail to remember their previous instructions.
For example:
If I tell it to not use X, it will do X.
When I point it out, it fixes it.
Then a few prompts later, it will use X again.
Another issue is the hallucinations. Even if you provide it the entire schema (I did this for a toy app I was working with), it kept on making up "columns" that don't exist. My Invoice model has no STATUS column, why do you keep assuming it's there in the code?
I found them useful for generating the initial version of a new simple feature, but they are not very good at making changes to existing ones.
I've tried many models; Sonnet is the best one at coding - 3.7 at least, I am not impressed with 4.
> Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.
Are people implementing stuff from start to finish in one go? For me, it's always been iterative. Start from scaffolding, get one thing right, then the next. It's like drawing. You start with a few shapes, then connect them. After that you sketch on top, then do line art, and then you finish with values (this step is also an iterative refinement). With each step, you become more certain of what you want to do, while also investing the minimum possible effort.
So for me coding is more about refactoring. I always type the minimal amount of code to get something to work, and that usually means shortcuts, which I annotate with a TODO comment. Then I iterate over it, making it more flexible and making the code cleaner.
So I guess that's a good argument for replacing employees with a bespoke LLM for your business - they will never leave after they're trained. And they never ask for a raise. And they don't need benefits or carry other human risks.
> I would normally just relax and do something else. Instead, I'm relaxing and coding.
So more work gets to penetrate a part of your life that it formerly wouldn't. What's the value of “productivity gains”, when they don't improve your quality of life?
> So for me, career-wise, it has come at the exact right time. A few years after I reached a level where the little things were getting tedious, a time when all the architectural elements had come together and been investigated manually.
Wish I had your confidence in this. I can easily see how this nullifies my hard-earned experience and basically puts me in the same spot as a more mid-level or even junior engineer.
Right, I’ve been using it recently for writing a message queue -> database bridge with checkpointing and all kinds of stuff (I work for a timeseries database company).
I saw this as a chance to embrace AI, after a while of exploring I found Claude Code, and ended up with a pretty solid workflow.
But I say this as someone who has worked with distributed systems / data engineering for almost 2 decades, and spend most of my time reviewing PRs and writing specs anyway.
The trick is to embrace AI on all levels: learn how to use prompts. Learn how to use system prompts. Learn how to use AI to optimize these prompts. Learn how to first write a spec, and use a second AI (an "adversarial critic") to poke holes in that plan and find incompletenesses. Delegate the implementation to a cheaper model. Learn how to teach AI to debug problems properly, rather than trying to one-shot fixes in the hope it fixes things. Etc.
It’s an entirely different way of working.
I think juniors can learn this as well, but they need to work within very well-defined frameworks, and this probably needs to be part of the college curriculum as well.
Have you had the realization that you could never go back to dealing with all the minutiae again?
LLMs have changed me. I want to go outside while they are working and I am jealous of all the young engineers that won’t lose the years I did sitting in front of a screen for 12 hours a day while sometimes making no progress on connecting two black boxes.
Serious question: have you considered that dealing with all that minutiae and working through all that pain is what has made you capable of having the LLM write code?
Those young engineers, in 10 years, won't be able to fix what the LLM gave them, because they have not learned anything about programming.
They have all learned how to micromanage an LLM instead.
That's a pretty good take. I was actually looking for a good analogy recently.
I think if I was just starting out learning to program, I would find something fun to build and pick a very correct, typed, and compiled language like Haskell or Purescript or Elm, and have the agent explaining what it's doing and why and go very slow.
Hot take: Junior devs are going to be the ones who "know how to build with AI" better than current seniors.
They are entering the job market with sensibilities for a higher-level of abstraction. They will be the first generation of devs that went through high-school + college building with AI.
Where did they learn sensibility for a higher level of abstraction? AI is the opposite: it will do what you prompt and never stop to tell you it's a terrible idea. You will have to learn, yourself, all the way down into the details, that the big picture it chose for you was faulty from the start. Convert some convoluted bash script to run on Windows because that's what the office people run? Get strapped in for the AI PowerShell ride of your life.
Do you think that kids growing up now will be better artists than people who spent time learning how to paint because they can prompt an LLM to create a painting for them?
Do you think humanity will be better off because we'll have humans who don't know how to do anything themselves, but they're really good at asking the magical AI to do it for them?
I think this disregards the costs associated with using AI.
It used to be you could learn to program with a cheap old computer a majority of families can afford. It might have run slower, but you still had all the same tooling that's found on a professional's computer.
To use LLMs for coding, you either have to pay a third party for compute power (and access to models), or you have to provide it yourself (and use freely available ones). Both are (and IMO will remain) expensive.
I'm afraid this builds a moat around programming that will make it less accessible as a discipline. Kids won't just tinker their way into a programming career as they used to, if it takes asking for mom's credit card from minute 0.
As for HS + college providing a CS education using LLMs, spare me. They already don't do that when all it takes is a computer room with free software on it. And I'm not advocating for public funds to be diverted to LLM providers either.
On the one hand you get an insane productivity boost - something that could take maybe days, weeks or months to do, you can now do in a significantly shorter amount of time. But how much are you learning if you are at a junior level and not consciously being careful about how you use it? It feels like it can be dangerous without a critical mindset, where you eventually rely on it so much that you can't survive without it. Or maybe this is ok? Perhaps the way of programming in the future should be like this; since we have this technology now, why not use it?
Like, there's a mindset where you just want to get the job done: ok cool, just let the LLM do it for me (and it's not perfect atm), and I'll stitch everything together, fix the small stuff that it gets wrong, etc. It saves a lot of time, and sure, I might learn something in the process as well.
And then the other way of working is the traditional way: you google, look up on StackOverflow, read documentation, sit down, try to find out what you need and understand the problem, code a solution iteratively, and eventually you get it right and you get a learning experience out of it. The downside is this can take 100 years, or at the very least much longer than using an LLM in general. And you could argue that if you prompt the LLM in a certain way, it would be equivalent to doing all of this but faster, without taking away from your learning.
For seniors it might be another story; they already have the critical thinking, experience and creativity, built up through years of training, so they don't lose as much compared to a junior. It will be closer for them to treat this as a smarter tool than Google.
Personally, I look at it like you now have a smarter tool, and a very different one as well; if you use it wisely you can definitely do better than traditional googling and StackOverflow. It will depend on what you are after, and you should be able to adapt to that need. If you just want the job done, then who cares, let the LLM do it; if you want to learn, you can prompt it in a certain way to achieve that, so it shouldn't be a problem. But this way of working requires a conscious effort in how you are using it, and an awareness of the downsides of working with the LLM in a certain way, so that you can change how you interact with it. In reality I think most people don't go through the hoops of "limiting" the LLM so that they can get a better learning experience.
But also, what is a better learning experience? Perhaps you could argue that being able to see the solution, or a draft of it, can be a way of speeding up learning, because you have a quicker starting point to build upon. I dunno. My only gripe with using LLMs is that deep thinking and creativity can take a dip. You know, back in the day when you stumbled upon a really difficult problem, you had to sit down with it for hours, days, weeks, months until you could solve it. I feel like there are some steps there that are important to internalize, and that LLMs nowadays make you skip.
What would also be so interesting to me is to compare a senior who got their training prior to LLMs with a senior who gets their training now, in the new era of programming with AI, and see what kinds of differences one might find.
I would guess that the senior from before the LLM era would be way better at coding by hand in general, but critical thinking and creativity, given that they are both good seniors, maybe shouldn't be too different, honestly,
but it just depends on how that other senior, who is used to working with LLMs, interacts with them.
Also, I don't like how an LLM can sometimes influence your approach to solving something; perhaps you would have thought about a better or different way of solving a problem if you hadn't first asked the LLM. I think this could be true to a higher degree for juniors than seniors, due to the gap in experience:
when you are senior, you have already seen a lot of things, so you are aware of a lot of ways to solve something, whereas for a junior that "capability" is more limited.
What you miss is the constant need to refine and understand the bigger picture. AI makes everyone a lead architect. A non-coder can't do this or will definitely get lost in the weeds eventually.
I'm using AI assistants as an interactive search and coding assistant. I'm still driving the development and implementing the code.
What I use it for is:
1. Remembering what something is called -- in my case the Bootstrap pills class -- so I could locate it in the Bootstrap docs. Google search didn't help, as I couldn't recall the right name to enter into it. For the AI, I described what I wanted to do and it gave me the answer.
2. Working with a language/framework that I'm familiar with but don't know the specifics of what I'm trying to do. For example:
- In C#/.NET 8.0 how do I parse a JSON string?
- I have a C# application where I'm using `JsonSerializer.Deserialize` to convert a JSON string to a `record` class. The issue is that the names of the variables are capitalized -- e.g. `record Lorem(int Ipsum)` -- but the fields in the JSON are lowercase -- e.g. `{"ipsum": 123}`. How do I map the JSON fields to record properties?
   - In C# how do I convert a `JsonNode` to a `JsonElement`? (a rough sketch of this and the previous question follows below)
3. Understanding specific exceptions and how to solve them.
In each case I'm describing things in general terms, not "here's the code, please fix it" or "write the entire code for me". I'm doing the work of applying the answers to the code I'm working on.
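For concreteness, here is a minimal sketch of what the answers to those two System.Text.Json questions look like, reusing the placeholder `Lorem`/`Ipsum` names from above and assuming .NET 8 -- treat it as an illustration rather than a drop-in answer:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Nodes;

// Placeholder record from the question above.
record Lorem(int Ipsum);

class JsonExamples
{
    static void Main()
    {
        // Map lowercase JSON fields ("ipsum") onto PascalCase record
        // properties (Ipsum) by making property-name matching case-insensitive.
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        Lorem? parsed = JsonSerializer.Deserialize<Lorem>("{\"ipsum\": 123}", options);
        Console.WriteLine(parsed?.Ipsum); // 123

        // Convert a JsonNode to a JsonElement by round-tripping through the serializer.
        JsonNode node = JsonNode.Parse("{\"ipsum\": 123}")!;
        JsonElement element = JsonSerializer.SerializeToElement(node);
        Console.WriteLine(element.GetProperty("ipsum").GetInt32()); // 123
    }
}
```

Setting `PropertyNamingPolicy = JsonNamingPolicy.CamelCase` on the options is another common way to get the same lowercase-to-PascalCase mapping.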
He's still telling the AI what to code. Prompting, i.e. deciding the right thing to build then clearly specifying and communicating it in English, is a skill in itself. People who spend time developing that skill are going to be more employable than people who just devote all their time to coding, the thing at which LLMs are more cost effective.
My theory on AI is that it's the next iteration of Google search: a better, more conversational base layer over all the information that exists on the internet.
Of course some people will lose jobs, just like what happened to several industries when search became ubiquitous (newspapers, phone books, encyclopedias, travel agents).
But IMHO this isn't the existential crisis people think it is.
It's just a tool. Smart, clever people can do lots of cool stuff with tools.
But you still have to use it.
Search has just become Chat.
You used to have to search, now you chat and it does the searching, and more!
Yeah; there's still a massive chasm between "I spent hours precisely defining my requirements for this greenfield application with no users and the AI one-shot it" and "million line twenty team enterprise SaaS hellscale with ninety-seven stakeholders per line of code".
The fact that AI can actually handle the former case is, to be clear, awesome; but not surprising. Low-code tools have been doing it for years. Retool, even back in 2018, was way more productive than any LLMs I've seen today, at the things Retool could do. But its relative skill at these things, to me, does not conclusively determine that it is on the path toward being able to autonomously handle the latter.
The English language is simply a less formal programming language. Its informality means it requires less skill to master, but also means it may require more volume to achieve desired outcomes. At some level of granularity, it is necessarily the case that programming in English begins to look like programming in JavaScript, just with capital letters, exclamation points, and threats to fire the AI instead of asserts and conditionals. Are we really saving time, and thus generating higher levels of productivity? Or is its true benefit that it enables forays into languages and domains you might be unfamiliar with, unlocking software development for a wider range of people who couldn't muster it before? It's probably a bit of both.
Dario Amodei says we'll have the first billion-dollar solo company by 2026 [1]. I lean toward this not happening. I would put money on even $100M not happening, barring some level of hyperinflation which changes our established understanding of what a dollar even is. But here's what I will say: hitting levels of revenue like this, with a human count so low that the input of the AI has to overwhelm the input from the humans, is the only way to prove to me that, actually, these things might be more than freakin awesome tools. Blog posts from people making greenfield apps named after a furrsona DJ aren't moving the needle for me on this issue.
[1] https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-pr...
AI is still in an experimental phase for many teams, especially when it comes to handling complex, long-term projects. For PMs and EMs, the cost-benefit analysis of AI credits vs. manual tasks is a big concern before fully committing to AI adoption. Some teams have seen great success, particularly in areas where speed and flexibility are key, but others are still waiting for clearer ROI before diving in. It’ll be interesting to see how the balance of risk and reward evolves as AI tools mature.
There is certainly much innovation to come in this area.
I'm thinking about Personal Knowledge Systems and their innovative ideas regarding visual representations of data (mind maps, webs of interconnected notes, things like that). That could be useful for AI search. What these elements are doing, in a sense, is building a concept web, which would naturally fit quite well into a visualization.
The chatbot paradigm is quite centered around short, easily digestible narratives, and while humans are certainly narrative-generating and narrative-absorbing creatures to a large degree, things like having a visually mapped-out counter-argument can also be surprisingly useful. It's just not something that humans naturally do without effort outside of, say, a philosophy degree.
There is still the specter of the megacorp feed algo monster lurking though, in that there is a tendency to reduce the consumer facing tools to black-box algorithms that are optimized to boost engagement. Many of the more innovative approaches may involve giving users more control, like dynamic sliders for results, that sort of thing.
English and other languages come with lots of ambiguity and assumptions. A significant benefit of programming languages is they have explicit rules for how they will be converted into a running program. An LLM can take many paths from the same starting prompt and deliver vastly different output.
Famously complicated interface with a million buttons and menus.
Now there's more buttons for the AI tools.
Because at the end of the day, using a "brush" tool to paint over the area containing the thing you want it to remove or change in an image is MUCH simpler than trying to tell it that through chat. Some sort of prompt like "please remove the fifth person from the left standing on the brick path under the bus stop" vs "just explicitly select something with the GUI." The former could have a lot of value for casual amateur use; it's not going to replace the precise, high-functionality tool for professional use.
In software - would you rather chat with an LLM to see the contents of a proposed code change, or use a visual diff tool? "Let the agent run and then treat its stuff as a PR from a junior dev" has been said so many times recently - which is not suggesting just chatting with it to do the PR instead of using the GUI. I would imagine that this would get extended to something like the input not just being less of a free-form chat, but more of a submission of a Figma mockup + a link to a ticket with specs.
There’s an efficient way to serve the results, and there’s an efficient way for a human to consume them, and I find LLMs to be much more efficient in terms of cognitive work done to explore and understand something than a google search. The next thing will have to beat that level of personal mental effort, and I can’t imagine what that next step would look like yet.
Search wasn't just "search". It was "put a prompt in a form and then spend minutes or hours going through various websites until I get my answer". LLMs change that. I don't have to go through 20 different people's blog posts on "Which 12V 100Ah LiFePO4 battery tests for the highest watt-hours"; the LLM simply gives me the answer that is most relevant across those 20 blog posts. It distills what would have taken me an hour down to seconds or a couple of minutes.
A lot of modern entry-level jobs were filled by people who knew how to use Google and follow instructions.
I imagine the next generation will have a similar relationship with AI. What might seem "common sense" to the younger, more tech-savvy crowd will be difficult for older generations whose default behavior isn't to open up ChatGPT or Gemini and find the solution quickly.
I was a bit wary of trusting the AI summaries Google has been including in search results… but after a few checks it seems like it’s not crap at all, it’s pretty good!
I have systemic concerns with how Google is changing roles from "knowledge bridging" to "knowledge translating", but in terms of information: I find it very useful.
As search gives the answer rather than the path to it, the job of finding things out properly and writing it down for others is lost. If we let that be lost, then we will all be lost.
If we cannot find a way to redirect income from AI back to the creators of the information they rehash (such as good and honest journalism), a critical load-bearing pillar of democratic society will collapse.
The news industry has been in grave danger for years, and we've seen the consequences it brings (distrust, division, misinformation, foreign manipulation). AI may drive the last stake in its back.
It's not about some jobs being replaced; that is not even remotely the issue. The path we are on currently is a dark one, and dismissing it as "just some jobs being lost" is a naive dismissal of the danger we're in.
I agree that people are using it for things they would've googled, but I doubt that it's a good replacement.
To me it mostly comes with a feeling of uncertainty. As if someone tells you something they got told at a party. I need to google it, to find a trustworthy source for verification; otherwise it's just a hint.
So I use it if I want a quick hint. Not if I really want to have information worth remembering. So it's certainly not a replacement for me. It actually makes things worse for me because of all that AI slop atm.
I tend to generally think the same as you, as I work in the same field. A long time ago I thought to myself, if AI adoption increases exponentially, there is a chance that the number of security vulnerabilities introduced by it also increases at the same rate.
However, what we are maybe not considering enough is that general AI adoption could and almost certainly will affect the standards for cybersecurity as well. If everyone uses AI and everyone gets used to its quirks and mistakes and is also forgiving about someone else using it since they themselves use it too, the standards for robust and secure systems could decrease to adjust to that. Now, your services as a cybersecurity consultant are no longer in need as much, as whatever company would need them can easily point to all the other companies also caring less and not doing anything about the security issues introduced by the AI that everyone uses. The legal/regulation body would also have to adjust to this, as it is not possible to enforce certain standards if no one can adhere to them.
I don’t follow. Cybersecurity has always been about reducing the risk of costly cyber attacks. That hasn’t changed. It’s not like suddenly companies will stop caring that their software has been locked down by ransomware, or that their database leaked and now they have to pay a nine-figure fine. It’s not standards for standards’ sake (though it can feel that way). It’s loss prevention.
I seem to have missed the part where he successfully prompted for security, internationalizability, localizability, accessibility, usability, etc., etc.
This is a core problem with amateurs pretending to be software producers. There are others, but this one is fundamental to acceptable commercial software and will absolutely derail vibe coded products from widespread adoption.
And if you think these aspects of quality software are easily reduced to prompts, you've probably never done serious work in those spaces.
What makes you think that the writer omitted these? Any good developer would include these parts of the requirements. That's why we make the money we do. We know what is involved. That is orthogonal to the use of LLMs for coding.
>My four-document system? Spaghetti that happened to land in a pattern I could recognize. Tomorrow it might slide off the wall. That's fine. I'll throw more spaghetti.
Amazing that in July 2025 people still think you can scale development this way.
Why are we counting the number of documents? It doesn't matter. What matters is putting together a plan and being able to articulate what you want. Then review and adjust and prompt again.
You have to know how software gets built and works. You can't just expect to get it right without a decent understanding of software architecture and product design.
This is something that's actually very hard. I'm coming to grips with that slowly, because it's always been part of my process. I'm both a programmer and a graphic designer. It took me a long while to recognize not everyone has spent a great deal of time doing both. Fewer yet decide to learn good software design patterns, or study frameworks and open-source projects to understand the problems each of them is solving. It takes a LOT of time. It took me probably 10-15 years just to learn all of this. I've been building software for over 20 years. So it just takes time, and that's ok.
The most wonderful thing I see about AI is that it should help people focus on these things. It should free people from getting too far into the weeds and too focused on the code itself. We need more people who can apply critical thinking and design from a bird's eye perspective. We need people who can see the big picture.
Knowing is at least half the battle. Doesn't matter what kind of tools you intend to use if you don't even know where the job site is located.
I've been around the block a few times on ideas like a B2B/SaaS requirements gathering product that other B2B/SaaS vendors could use to collect detailed, structured requirements from their customers. Something like an open-world Turbo Tax style workflow experience where the user is eventually cornered into providing all of the needed information before the implementation effort begins.
> The most wonderful thing I see about AI is that it should help people focus on these things.
Unfortunately I've been around this industry long enough to know that this is not in fact what is going to happen. We will be driven by greedy people with small minds to produce faster rather than to build correct systems, and the people who will pay will be users and consumers.
The input to output ratio is interesting. We are usually optimizing for volume of output, but now it’s inverted. I actually don’t want maximum output, I want the work split up into concrete, verifiable steps and that’s difficult to achieve consistently.
I've taken to co-writing a plan with requirements with Cursor, and it works really well at first. But as it makes mistakes and we use those mistakes to refine the document, eventually we are ready to "go" and suddenly it's generating a large volume of code that directly contradicts something in the plan. Small annoyances, like its inability to add an empty line after markdown headings, have to be explicitly re-added and re-reminded.
I almost wish I had more control over how it was iterating. Especially when it comes to quality and consistency.
When I (or we) can write a test that it can grind on, that is when AI is at its best. It's a closed problem. I need the tools to help me help it turn the open problem I'm trying to solve into a set of discrete closed problems.
A big part of this that people are not understanding is that a major part of the author's success is due to the fact that he clearly does not care at all how anything is implemented, mostly because he doesn't need to.
You get way farther when you have the AI drop in Tailwind templates or Shadcn for you and then just let it use those components. There is so much software outside that web domain though.
A lot of people just stop working on their AI projects because they don't realize how much work it's going to take to get the AI to do exactly what they want in the way that they want, and that it's basically going to be either you accept some sort of randomized variant of what you're thinking of, or you get a thing that doesn't work at all.
Boy do I feel lucky now.
I state things crystal clear in real life on the internets. Seems like most of the time, nobody has any idea what I'm saying. My direct reports too.
Anyway, my point is: if human confusion and lack of clarity is the training set for these things, what do you expect?
One thing at a time. Slowly adding features and fighting against bug regressions, same as when I was writing the code myself.
I see it as a worrying extension of a pre-LLM problem: No employer wants to train, they just want to hire employees after someone else trains them.
Like, this is how we've always done it.
Finding a way to better learn first principles, compared to sitting in front of a screen for 12 hours a day, is important.
"That's OK, I found a jetpack."
What a sad future we're going to have.
Many times adding a new junior to a team makes it slower.
How does using LLMs as juniors make you more productive?
Like there's a mindset where you just want to get the job done, ok cool just let the llm do it for me (and it's not perfect atm), and ill stitch everything together fix small stuff that it gets wrong etc, saves alot of time and sure I might learn something in the process as well. And then the other way of working is the traditional way, you google, look up on stackoverflow, read documentations, you sit down try to find out what you need and understand the problem, code a solution iteratively and eventually you get it right and you get a learning experience out of it. Downside is this can take 100 years, at the very least much longer than using an llm in general. And you could argue that if you prompt the llm in a certain way, it would be equivalent to doing all of this but in a faster way, without taking away from you learning.
For seniors it might be another story, it's like they have the critical thinking, experience and creativity already, through years of training, so they don't loose as much compared to a junior. It will be closer for them to treat this as a smarter tool than google.
Personally, I look at it like you now have a smarter tool, a very different one as well, if you use it wisely you can definitely do better than traditional googling and stackoverflow. It will depend on what you are after, and you should be able to adapt to that need. If you just want the job done, then who cares, let the llm do it, if you want to learn you can prompt it in certain way to achieve that, so it shouldn't be a problem. But this sort of way of working requires a conscious effort on how you are using it and an awareness of what downsides there could be if you choose to work with the llm in a certain way to be able to change the way you interact with the llm. In reality I think most people don't go through the hoops of "limiting" the llm so that you can get a better learning experience. But also, what is a better learning experience? Perhaps you could argue that being able to see the solution, or a draft of it, can be a way of speeding up learning experience, because you have a quicker starting point to build upon a solution. I dunno. My only gripe with using LLM, is that deep thinking and creativity can take a dip, you know back in the day when you stumbled upon a really difficult problem, and you had to sit down with it for hours, days, weeks, months until you could solve that. I feel like there are some steps there that are important to internalize, that LLM nowdays makes you skip. What also would be so interesting to me is to compare a senior that got their training prior to LLM, and then compare them to a senior now that gets their training in the new era of programming with AI, and see what kinds of differences one might find I would guess that the senior prior to LLM era, would be way better at coding by hand in general, but critical thinking and creativity, given that they both are good seniors, maybe shouldn't be too different honestly but it just depends on how that other senior, who are used to working with LLMs, interacts with them.
Also I don't like how LLM sometimes can influence your approach to solving something, like perhaps you would have thought about a better way or different way of solving a problem if you didn't first ask the LLM. I think this could be true to a higher degree for juniors than seniors due to gap in experience when you are senior, you sort of have seen alot of things already, so you are aware of alot of ways to solve something, whereas for a junior that "capability" is more limited than a senior.
Where I use it for is:
1. Remembering what something is called -- in my case the bootstrap pills class -- so I could locate it in the bootstrap docs. Google search didn't help as I couldn't recall the right name to enter into it. For the AI I described what I wanted to do and it gave the answer.
2. Working with a language/framework that I'm familiar with but don't know the specifics in what I'm trying to do. For example:
- In C#/.NET 8.0 how do I parse a JSON string?
- I have a C# application where I'm using `JsonSerializer.Deserialize` to convert a JSON string to a `record` class. The issue is that the names of the variables are capitalized -- e.g. `record Lorem(int Ipsum)` -- but the fields in the JSON are lowercase -- e.g. `{"ipsum": 123}`. How do I map the JSON fields to record properties?
- In C# how do I convert a `JsonNode` to a `JsonElement`?
3. Understanding specific exceptions and how to solve them.
In each case I'm describing things in general terms, not "here's the code, please fix it" or "write the entire code for me". I'm doing the work of applying the answers to the code I'm working on.
Of course some people will lose jobs just like what happened to several industries when search became ubiquitous. (newspapers, phone books, encyclopedias, travel agents)
But IMHO this isn't the existential crisis people think it is.
It's just a tool. Smart, clever people can do lots of cool stuff with tools.
But you still have to use it,
Search has just become Chat.
You used to have to search, now you chat and it does the searching, and more!
The fact that AI can actually handle the former case is, to be clear, awesome; but not surprising. Low-code tools have been doing it for years. Retool, even back in 2018, was way more productive than any LLMs I've seen today, at the things Retool could do. But its relative skill at these things, to me, does not conclusively determine that it is on the path toward being able to autonomously handle the latter.
The english language is simply a less formal programming language. Its informality means it requires less skill to master, but also means it may require more volume to achieve desired outcomes. At some level of granularity, it is necessarily the case that programming in english begins to look like programming in javascript; just with capital letters, exclamation points, and threats to fire the AI instead of asserts and conditionals. Are we really saving time, and thus generating higher levels of productivity? Or, is its true benefit that it enables foray into languages and domains you might be unfamiliar with; unlocking software development for a wider range of people who couldn't muster it before? Its probably a bit of both.
Dario Amodei says we'll have the first billion-dollar solo company by 2026 [1]. I lean toward this not happening. I would put money on even $100M not happening, barring some level of hyperinflation that changes our established understanding of what a dollar even is. But here's what I will say: hitting levels of revenue like this, with a human headcount so low that the input of the AI has to overwhelm the input from the humans, is the only way to prove to me that these things might actually be more than freakin' awesome tools. Blog posts from people making greenfield apps named after a fursona DJ aren't moving the needle for me on this issue.
[1] https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-pr...
Why not? It's not like companies have to actually do anything beyond marketing to get insane valuations… remember Theranos?
I think chat-like LLM interfacing is not the most efficient way. There has to be a smarter way.
I'm thinking about Personal Knowledge Systems and their innovative ideas around visual representations of data (mind maps, websites of interconnected notes, things like that). That could be useful for AI search. What these systems are doing, in a sense, is building a concept web, which would naturally fit quite well into a visualization.
The chatbot paradigm is quite centered around short, easily digestible narratives, and while humans are certainly narrative-generating and narrative-absorbing creatures to a large degree, things like having a visually mapped-out counterargument can also be surprisingly useful. It's just not something that humans naturally do without effort outside of, say, a philosophy degree.
There is still the specter of the megacorp feed algo monster lurking though, in that there is a tendency to reduce the consumer facing tools to black-box algorithms that are optimized to boost engagement. Many of the more innovative approaches may involve giving users more control, like dynamic sliders for results, that sort of thing.
Famously complicated interface with a million buttons and menus.
Now there's more buttons for the AI tools.
Because at the end of the day, using a "brush" tool to paint over the area containing the thing you want it to remove or change in an image is MUCH simpler than trying to tell it that through chat. Some sort of prompt like "please remove the fifth person from the left standing on the brick path under the bus stop" vs "just explicitly select something with the GUI." The former could have a lot of value for casual amateur use; it's not going to replace the precise, high-functionality tool for professional use.
In software - would you rather chat with an LLM to see the contents of a proposed code change, or use a visual diff tool? "Let the agent run and then treat its output as a PR from a junior dev" has been said so many times recently - and that's not suggesting you just chat with it to review the PR instead of using the GUI. I would imagine this gets extended further, with the input becoming less of a free-form chat and more of a submission of a Figma mockup plus a link to a ticket with specs.
I imagine the next generation will have a similar relationship with AI. What might seem like "common sense" to the younger, more tech-savvy crowd will be difficult for older generations whose default behavior isn't to open up ChatGPT or Gemini and find the solution quickly.
It's handy when I just need the quick syntax for a command I rarely use, etc.
You find it gives you poor information?
If we cannot find a way to redirect income from AI back to the creators of the information they rehash (such as good and honest journalism), a critical load-bearing pillar of democratic society will collapse.
The news industry has been in grave danger for years, and we've seen the consequences that brings (distrust, division, misinformation, foreign manipulation). AI may drive the final stake through its heart.
It's not about some jobs being replaced; that is not even remotely the issue. The path we are currently on is a dark one, and waving it away as "just some jobs being lost" is a naive dismissal of the danger we're in.
To me it mostly comes with a feeling of uncertainty. As if someone tells you something they heard at a party: I need to Google it and find a trustworthy source for verification, otherwise it's just a hint.
So I use it if I want a quick hint, not if I really want information worth remembering. So it's certainly not a replacement for me. It actually makes things worse for me because of all the AI slop at the moment.
Man, I'm going to make so much money as a Cybersecurity Consultant!
However, what we are maybe not considering enough is that general AI adoption could, and almost certainly will, affect the standards for cybersecurity as well. If everyone uses AI, gets used to its quirks and mistakes, and is forgiving about someone else using it since they use it themselves, the standards for robust and secure systems could sink to match. At that point your services as a cybersecurity consultant are no longer in as much demand: whatever company would need them can easily point to all the other companies also caring less and doing nothing about the security issues introduced by the AI that everyone uses. Legal and regulatory bodies would also have to adjust, since it is not possible to enforce standards that no one can adhere to.
I've found LLMs add more standard protections to API endpoints, database constraints, etc. than I would on a lazy Saturday.
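To make that concrete, here's a rough sketch (the endpoint and `NoteRequest` names are made up, not from the comment above) of the kind of boilerplate protection an LLM tends to add unprompted - basic input validation on a minimal ASP.NET Core endpoint:

```csharp
// Sketch only: "/notes" and NoteRequest are hypothetical names.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPost("/notes", (NoteRequest request) =>
{
    // Reject obviously bad input before it ever reaches the database.
    if (string.IsNullOrWhiteSpace(request.Title) || request.Title.Length > 200)
        return Results.BadRequest("Title is required and must be under 200 characters.");

    return Results.Ok();
});

app.Run();

record NoteRequest(string Title, string? Body);
```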
This is a core problem with amateurs pretending to be software producers. There are others, but this one is fundamental to acceptable commercial software and will absolutely derail vibe coded products from widespread adoption.
And if you think these aspects of quality software are easily reduced to prompts, you've probably never done serious work in those spaces.
To be fair, a lot of commercial software clearly hasn't, either.
I didn't see internationalization and localization, but I don't see anything fundamental about those that would be different.
Security, on the other hand, does feel like a different beast.
>My four-document system? Spaghetti that happened to land in a pattern I could recognize. Tomorrow it might slide off the wall. That's fine. I'll throw more spaghetti.
Amazing that in July 2025 people still think you can scale development this way.
Give it two years.
You have to know how software gets built and works. You can't just expect to get it right without a decent understanding of software architecture and product design.
This is something that's actually very hard. I'm coming to grips with that slowly, because it's always been part of my process. I'm both a programmer and a graphic designer. It took me a long while to recognize that not everyone has spent a great deal of time doing both. Fewer still decide to learn good software design patterns, or to study frameworks and open-source projects to understand the problems each of them is solving. It takes a LOT of time. It took me probably 10-15 years just to learn all of this. I've been building software for over 20 years. So it just takes time, and that's ok.
The most wonderful thing I see about AI is that it should help people focus on these things. It should free people from getting too far into the weeds and too focused on the code itself. We need more people who can apply critical thinking and design from a bird's eye perspective. We need people who can see the big picture.
I've been around the block a few times on ideas like a B2B/SaaS requirements gathering product that other B2B/SaaS vendors could use to collect detailed, structured requirements from their customers. Something like an open-world Turbo Tax style workflow experience where the user is eventually cornered into providing all of the needed information before the implementation effort begins.
Unfortunately I've been around this industry long enough to know that this is not in fact what is going to happen. We will be driven by greedy people with small minds to produce faster rather than build correct systems, and the people who will pay for it will be users and consumers.
I've taken to co-writing a plan with requirements in Cursor, and it works really well at first. But as it makes mistakes and we use those mistakes to refine the document, eventually we are ready to "go" - and suddenly it's generating a large volume of code that directly contradicts something in the plan. Small annoyances, like its inability to add an empty line after Markdown headings, have to be explicitly re-added and re-reminded.
I almost wish I had more control over how it was iterating. Especially when it comes to quality and consistency.
When I/we can write a test and it can grind on that, AI is at its best. It's a closed problem. I need the tools to help me, and help it, turn the open problem I'm trying to solve into a set of discrete closed problems.
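A minimal sketch of what handing the agent a "closed problem" might look like - an ordinary xUnit test pinning down behavior for a hypothetical `SlugGenerator`, which starts as a stub and the agent iterates on until the assertions pass:

```csharp
using System;
using Xunit;

// Hypothetical class under test; starts as a stub so the suite compiles,
// and the agent grinds on Slugify until the tests below go green.
public static class SlugGenerator
{
    public static string Slugify(string input) => throw new NotImplementedException();
}

public class SlugGeneratorTests
{
    [Theory]
    [InlineData("Hello, World!", "hello-world")]
    [InlineData("  Multiple   Spaces  ", "multiple-spaces")]
    public void Slugify_ProducesLowercaseHyphenatedOutput(string input, string expected)
    {
        Assert.Equal(expected, SlugGenerator.Slugify(input));
    }
}
```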
You get way farther when you have the AI drop in Tailwind templates or Shadcn for you and then just let it use those components. There is so much software outside that web domain though.
A lot of people just stop working on their AI projects because they don't realize how much work it's going to take to get the AI to do exactly what they want, in the way that they want. Basically, you either accept some randomized variant of what you were thinking of, or you get a thing that doesn't work at all.