Zigurd · 2 days ago
I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.

I recently used a coding agent on a project where I was using an unfamiliar language, framework, API, and protocol. It was a non-trivial project, and I had to be paying attention to what the agent was doing because it definitely would go off into the weeds fairly often. But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation really made everything about the experience better.

I even explored some aspects of LLM performance: I could tell that new and fast-changing APIs easily flummox a coding agent, confirming the strong relationship between up-to-date, accurate training material and LLM performance. I've also seen this aspect of agent-assisted coding improve over time and vary across AIs.

observationist · 2 days ago
There's something exhilarating about pushing through to some "everything works like I think it should" point, and you can often get there without the conscientious, diligent, methodical "right" way of doing things; it's only getting easier. At the point where everything works, if it's not just a toy or experiment, you definitely have to go back and understand everything. There will be a ton to fix, and it might take longer that way than doing it right the first time.

I'm not a professional SWE, I just know enough to understand what the right processes look like, and vibe coding is awesome but chaotic and messy.

bcrosby95 · 2 days ago
If you're hanging your features off a well trodden framework or engine this seems fine.

If frameworks don't make sense for what you're doing, though, and you're now relying on your LLM to write the core design of your codebase... it will fall apart long before you reach "it's basically working".

The more nuanced the interactions in your code, the worse it'll do.

lukan · 2 days ago
"It was a non-trivial project, and I had to be paying attention to what the agent was doing"

There is a big difference between vibe coding and llm assisted coding and the poster above seems to be aware of it.

hansmayer · 2 days ago
> I'm not a professional SWE

It was already obvious from your first paragraph - in that context, even the sentence "everything works like I think it should" makes absolute sense, because it fits perfectly with the limited understanding of a non-engineer. From your POV it indeed all works perfectly, API secrets in the frontend and 5 levels of JSON transformation on the backend be damned, right? ;) Yay, vibe-coding for everyone - even if it takes longer than programming the conventional way, who cares, right?

damiangriggs · 2 days ago
I've noticed that as well. I don't memorize every single syntax error, but when I use agents to help code I learn why they fail and how to correct them. The same way I would imagine a teacher learns the best way to teach their students.
ModernMech · 2 days ago
AI is allowing a lot of "non SWEs" to speedrun the failed project lifecycle.

The exuberance of rapid early-stage development is mirrored by the despair of late-stage realizations that you've painted yourself into a corner, you don't understand enough about the code or the problem domain to move forward at all, and your AI coding assistant can't help either because the program is too large for it to reason about fully.

AI lets you make all the classic engineering project mistakes faster.

vidarh · 2 days ago
Same here. I've picked up projects that have languished for years because the boring tasks no longer make me put them aside.
lelanthran · 2 days ago
> I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.

It depends. No one is running their brain at full-throttle for more than a few hours on end.

If your "niggling" defects is mostly changes that don't require deep thought (refactor this variable name, function parameters/return type changes, classes, filename changes, etc), then I can see how it is energising - you're getting repeated dopamine hits for very little effort.

If, OTOH, you are doing deep review of the patterns and structures the LLM is producing, you aren't going to be doing that for more than a few hours without getting exhausted.

I find, myself, that repeatedly correcting stuff makes me tired faster than simply saying "LGTM, let's yolo it!" on a filename change, class refactor, etc.

When the code I get is not what I wanted, even though it passes the tests, it takes more mental energy to correct the LLM than if I had simply done it myself from the start.

A good example of the exhausting tasks, from today: my input has preprocessing directives embedded in it; there are only three right now (new project), so the code generated by Claude used a chain of `if-then-else-if` statements to process the input.

My expectation was that it would use a jump table of some type (possibly a dictionary holding function pointers, or a match/switch/case statement).
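
Something like this is what I expected, as a minimal sketch (the directive names and handlers here are hypothetical, not from the actual project):

    # Hypothetical directive names and handlers, purely for illustration.
    def handle_include(arg):
        print("including", arg)

    def handle_define(arg):
        print("defining", arg)

    # Jump table: directive name -> handler, instead of an if/else-if chain.
    DIRECTIVES = {
        "include": handle_include,
        "define": handle_define,
    }

    def process_directive(line):
        name, _, arg = line.lstrip("#").partition(" ")
        handler = DIRECTIVES.get(name)
        if handler is None:
            raise ValueError("unknown directive: " + name)
        handler(arg)

    process_directive("#include stdio")  # -> including stdio

Adding a new directive is then one entry in the table rather than another branch in the chain.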

I think a good analogy is self-driving cars: if the SDC requires no human intervention, then sure, it's safe. If the SDC requires the human to keep their hands on the wheel at all times because it might disengage with sub-second warning, then I'm going to be more tired after a long drive than if I had simply turned it off.

quotemstr · 2 days ago
> I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.

It's often that just getting started at all on a task is the hardest part. That's why writers often produce a "vomit draft" (https://thewritepractice.com/vomit-first-draft/) just to get into the right frame of mind to do real writing.

Using a coding agent to fix something trivial serves the same purpose.

skdhshdd · 2 days ago
> But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation

At some point you realize that if you want people to trust you, you have to do this. Otherwise you’re just gambling, which isn’t very trustworthy.

It’s also got the cumulative effect of making you a good developer if done consistently over the course of your career. But yes, it’s annoying and slow in the short term.

rightbyte · 2 days ago
> I don't want to be that contrarian guy, but I find it energizing to go faster.

Is that contrarian though? Seems like pretty normal corporate-setting bragging to me. (Note: I am not accusing you of it, since your boss or colleagues do not read this.)

A variant of the "I am bad at not working too hard" line.

Animats · 2 days ago
> I don't want to be that contrarian guy, but I find it energizing to go faster.

You, too, can be awarded the Order of Labor Glory, Third Class.[1]

[1] https://en.wikipedia.org/wiki/Order_of_Labour_Glory

Zigurd · 2 days ago
Had I been doing anything other than interesting exploratory coding, I would agree with you. I can readily imagine standups where the "scrum master" asks where our AI productivity boost numbers are. Big dystopia potential.
pyrophane · 2 days ago
> I recently used a coding agent on a project where I was using an unfamiliar language, framework, API, and protocol.

You didn’t find that to be a little too much unfamiliarity? With the couple of projects that I’ve worked on that were developed using an “agent first” approach I found that if I added too many new things at once it would put me in a difficult space where I didn’t feel confident enough to evaluate what the agent was doing, and when it seemed to go off the rails I would have to do a bunch of research to figure out how to steer it.

Now, none of that was bad, because I learned a lot, and I think it is a great way to familiarize oneself with a new stack, but if I want to move really fast, I still pick mostly familiar stuff.

Zigurd · 2 days ago
SwiftKotlinDartGo blur together by now. That's too many languages but what are you gonna do?

I was ready to find that it was a bit much. The conjunction of ATProto and Dart was almost too much for the coding agent to handle and stay useful. But in the end it was OK.

I went from "wow that flutter code looks weird" to enjoying it pretty quickly.

QuercusMax · 2 days ago
I'm assuming this is the case where they are working in an existing codebase written by other humans. I've been in this situation a lot recently, and Copilot is a pretty big help for figuring out particularly fiddly bits of syntax - but it also suggests a lot of really stupid stuff that doesn't work at all.
stonemetal12 · 2 days ago
> I didn’t feel confident enough to evaluate what the agent was doing

So don't. It is vibe coding, not math class. As long as it looks like it works, it's all good.

Rperry2174 · 2 days ago
I think both experiences are true.

AI removes boredom AND removes the natural pauses where understanding used to form.

Energy goes up, but so does the kind of "compression" of cognitive things.

I think it's less a question of "faster" or "slower" than of who controls the tempo.

visarga · 2 days ago
After 4 hours of vibe coding I feel as tired as a full day of manual coding. The speed can be too much. If I only use it for a few minutes or an hour, it feels energising.
agumonkey · 2 days ago
> the kind of "compression" of cognitive things

Compression is exactly what is missing for me when using agents: reading their approach doesn't let me compress the model in my head to evaluate it, and that was why I did programming in the first place.

Avicebron · 2 days ago
Can you share why it was non-trivial? I'm curious how folks are evaluating the quality of their solutions when the project space is non-trivial and unfamiliar.
Zigurd · 2 days ago
It's a real, complete social media client app. Not a huge project. But the default app was clearly written by multiple devs, each with their own ideas. Being cleaner and more orthogonal was one goal among others.
ares623 · 2 days ago
A little bit of Dunning-Kruger maybe?
sixothree · 2 days ago
I am currently only vibe-coding my hobby projects. So if that changes, my view could very well change.

But I 100% agree. It's liberating to focus on the design of my project and to keep my mental model centered on how I want things to work.

It feels like the switch to test-driven development, where you start from the expected result and worry about the details later.

blitz_skull · 2 days ago
I think it's less "going fast" and more "going fast forever."

To your point, you can blow through damn-near anything pretty quickly now, so I actually find myself problem-solving for nearly 8 hours every day. My brain feels fried at the end of the day way more than it used to.

wiether · 2 days ago
Same feeling here!

I used to be like: "well, this thing will take me at least half a day, and it's already 16:00, so I'd better do something quiet to cool down until the end of the day and tackle this issue tomorrow". I'd leave the office in a regular mood and take the night to get ready for tomorrow.

Now I'm like: "17:30? 30 minutes? I have time to tackle another issue today!" I'll leave the office exhausted and take the night to try and recover from the day I had.

solumunus · 2 days ago
This. I’m able to be more productive for long hours more consistently than before. The occasions where I’m engineering for 8 solid hours are much more frequent now, and I’m certainly more tired. Almost all of my time is now dedicated to problem solving and planning, the LLM executes and I sit there thinking. Not everyone’s brain or project is well suited for this, but for me as a personality combined with the way my product is structured, it’s a total game changer.
WhyOhWhyQ · 2 days ago
Everyone in this conversation talks about different activities. One version of vibe coding happens with Netflix open and without ever opening a text editor, and another happens with thoroughly reviewing every change.
stuffn · 2 days ago
I think the counter-point to that is what I experience.

I agree it can be energizing because you can offload the bullshit work to a robot. For example: build me a CRUD app with a bootstrap frontend. Highly useful stuff, especially if this isn't your professional forte.

The problems come afterwards:

1. The bigger the generated base codebase, the less likely you are to find the time or energy to refactor LLM slop into something maintainable. I've spent a lot of time tailoring prompts for this type of generation and still can't get the code to be as precise as something an engineer would write.

2. Using an unfamiliar language means you're relying entirely on the LLM to determine what is safe. Suppose you wish to generate a project in C++. An LLM will happily do it. But will it be up to a standard that is maintainable and safe? Probably not. The devil is in the mundane details you don't understand.

In the case of (2) it's likely more instructive to have the LLM make you do the legwork, and then it can suggest simple, verifiable changes. In the case of (1) I think it's just an extension of the complexity of any project, professional or not. It's often better to write it correctly the first time than to write it fast and loose and then find the time to fix it later.

OptionOfT · 2 days ago
Ergo instant tech debt.
etothet · 2 days ago
I 100% agree. It's been incredible for velocity, and the capabilities and accuracy of the models I've been using (mostly from Anthropic) have improved immensely over the last few months.
louthy · 2 days ago
> But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation

Red flag. In other words you don’t understand the implementation well enough to know if the AI has done a good job. So the work you have committed may work or it may have subtle artefacts/bugs that you’re not aware of, because doing the job properly isn’t of interest to you.

This is ‘phoning it in’, not professional software engineering.

jmalicki · 2 days ago
Learning an unfamiliar aspect and doing it by hand will have the same issues. If you're new to Terraform, you are new to Terraform, and are probably going to insert even more footguns than the AI.

At least when the AI does it you can review it.

bongodongobob · 2 days ago
It sounds like you've never worked a job where you aren't just supporting 1 product that you built yourself. Fix the bug and move on. I do not have the time or resources to understand it fully. It's a 20-year-old app full of business logic, and MS changed something in their API. I do not need to understand the full stack. I need to understand the bug and how to fix it. My boss wants it fixed yesterday. So I fix it and move on to the next task. Some of us have to wear many hats.
visarga · 2 days ago
>Red flag. In other words you don’t understand the implementation well enough to know if the AI has done a good job.

Red flag again! If your protection is to "understand the implementation", that means buggy code. What makes code worthy of trust is passing tests - well-designed tests that cover the angles. LGTM is vibe testing.

I go as far as saying it does not matter whether the code was written by a human who understands it or not; what matters is how well it is tested. Vibe testing is the problem, not vibe coding.

OptionOfT · 2 days ago
If you do this on your personal stuff, eh, I wouldn't do it, but you do you.

But we're seeing that this becomes OK in the workplace, and I don't believe it is.

If you propose these changes that would've normally taken you 2 weeks as your own in a PR, then I, as the reviewer, don't know where your knowledge ends and the AI's hallucinations begin.

Do you need to do all of these things? Or is it because the most commonly forked template of this piece of code has this in its boilerplate? I don't know. Do you?

How can you make sure the code works in all situations if you aren't even familiar with the language, let alone the framework / API and protocol?

* Do you know that in Java you have to use .equals() instead of == for string equality?
* Do you know that in Python, mutating a function's default argument persists across calls (see the sketch below)?
  * And that in JavaScript it does not?
* Do you know that C#'s && does not translate to VB.NET's And (that's AndAlso)?
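
For instance, the Python one, as a minimal sketch (the function names are made up for illustration):

    def append_item(item, bucket=[]):  # the default list is created once, at definition time
        bucket.append(item)
        return bucket

    print(append_item("a"))  # ['a']
    print(append_item("b"))  # ['a', 'b'] -- the default persisted across calls

    # The usual fix: use None as a sentinel and build a fresh list per call.
    def append_item_fixed(item, bucket=None):
        if bucket is None:
            bucket = []
        bucket.append(item)
        return bucket

    print(append_item_fixed("a"))  # ['a']
    print(append_item_fixed("b"))  # ['b']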

frio · 2 days ago
This is far and away the biggest problem I have atm. Engineers blowing through an incredible amount of work in a short time, but when an inevitable bug bubbles up (which would happen without AI!), there's no one to question. "Hey, you changed the way transactionality was handled here, and that's made a really weird race condition happen. Why did you change it? What edge case were you trying to handle?" -- "I don't know, the AI did it". This makes chasing things down exponentially harder.

This has always been a problem in software engineering, of course -- sometimes staff have left, so you have to dig through tickets, related commits, and documentation to intuit intent. But I think it's going to create very weird drags on productivity in _new_ code, drags that may not outweigh the acceleration LLMs provide, but will certainly exist.

QuercusMax · 2 days ago
A lot of that stuff can be handled by linters and static analysis tools.
JohnMakin · 2 days ago
There probably needs to be some settled discussion on what constitutes "vibe coding." I interpret this term as "I input text into $AI_MODEL, I look at the app to see my change was implemented. I iterate via text prompts alone, rarely or never looking at the code generated."

vs. what this author is doing, which seems more like agent assisted coding than "vibe" coding.

With regard to the subject matter, it of course makes sense that managing more features than you used to be able to manage without $AI_MODEL would result in some mental fatigue. I also believe this gets worse the older you get. I've seen this within my own career, just from times of being understaffed and overworked, AI or not.

happytoexplain · 2 days ago
Yes, I'm getting increasingly confused as to why some people are broadening the use of "vibe" coding to just mean any AI coding, no matter how thorough/thoughtful.
crazygringo · 2 days ago
It's because the term itself got overapplied by people critical of LLMs -- they dismissed all LLM-assisted coding as "vibe coding" because they were prejudiced against LLMs.

Then lots of people were introduced to the term "vibe coding" in these conversations, and so naturally took it as a synonym for using LLMs for coding assistance even when reading the code and writing tests and such.

Also because vibe coding just sounds cool.

loloquwowndueo · 2 days ago
Like people using “bricked” to signal recoverable situations. “Oh the latest update bricked my phone and I had to factory-reset it, but it’s ok now”. Bricked used to mean it turned into something as useful as a brick, permanently.
iLoveOncall · 2 days ago
Words don't have meaning in 2025.

A negative but courteous remark is "slamming", a tweet is an "attack", etc.

So yeah I'm not surprised that people conflate any use of AI with vibe-coding.

keeda · 2 days ago
There was a huge discussion on this a few weeks ago, seems still far from settled: https://news.ycombinator.com/item?id=45503867

Personally I think "vibe-coding" has semantically shifted to mean any AI-assisted coding and we should just run with it. For the original meaning of vibe-coding, I suggest YOLO-Coding.

unshavedyak · 2 days ago
> There probably needs to be some settled discussion on what constitutes "vibe coding." I interpret this term as "I input text into $AI_MODEL, I look at the app to see my change was implemented. I iterate via text prompts alone, rarely or never looking at the code generated."

Agreed. I've seen some folks say that it requires absolute ignorance of the code being generated to be considered "vibe coded". Though I don't agree with that.

For me it's more nuanced. I consider it "vibed" in proportion to how little of it you looked at. Considering LLMs can do some crazy things, even a few ignored LOC can leave a pretty "vibe coded" feeling, despite everything outside those ignored lines being mostly reviewed.

layer8 · 2 days ago
Maybe read the original definition: https://x.com/karpathy/status/1886192184808149383

Or here: https://en.wikipedia.org/wiki/Vibe_coding

Not looking at the code at all by default is essential to the term.

celeryd · 2 days ago
I don't see a distinction. Vibe coding is either agent assisted coding or using chatbots as interpreters for your design goals. They are the same thing.
christophilus · 2 days ago
No. One involves human quality control, and one omits it.
johnsmith1840 · 2 days ago
"Vibe" has connotations of easy and fun neither of which are true when building something difficult
wvenable · 2 days ago
> rarely or never looking at the code generated.

My interpretation is that you can look at the code but vibe coding means ultimately you're not writing the code, you're just prompting. It would make sense to prompt "I'd like variable name 'bar' to be 'foo' instead." and that would still be vibe coding.

Kiro · 2 days ago
I think the difference between the two is shrinking by the day. At this point I almost never need to address anything with the LLM's solution and could easily just go straight to testing for most things.

The key difference is still the prompts and knowing what to reference/include in the context.

lelanthran · 2 days ago
> I think the difference between the two is shrinking by the day. At this point I almost never need to address anything with the LLM's solution and could easily just go straight to testing for most things.

Do you believe that atrophy is not a real thing?

I've found that LLMs massively over-engineer things, regardless of the prompt used. How do you counter that without going back and forth at least a few times?

zephyrthenoble · 2 days ago
I've felt this too as a person with ADHD, specifically difficulty processing information. Caveat: I don't vibe code much, partially because of the mental fatigue symptoms.

I've found that if an LLM writes too much code, even if I specified what it should be doing, I still have to do a lot of validation myself that would have been done while writing the code by hand. This turns the process from "generative" (haha) to "processing", which I struggle a lot more with.

Unfortunately, the reason I have to do so much processing on vibe code or large generated chunks of code is simply because it doesn't work. There is almost always an issue that is either immediately obvious, like the code not working, or becomes obvious later, like poorly structured code that the LLM then jams into future code generation, creating a house of cards that easily falls apart.

Many people will tell me that I'm not using the right model or tools or whatever, but it's clear to me that the problem is that AI doesn't have any vision of where your code will need to organically head. It's great for one-shots and rewrites, but it always, always, always chokes eventually on larger/complicated projects, ESPECIALLY ones that are not written in common languages (like JavaScript) or with common packages/patterns, and then I have to go spelunking to find why things aren't working or why it can't generate code to do something I know is possible. It's almost always because the input for new code is my ask AND the poorly structured code, so the LLM will rarely clean up its own crap as it goes. If anything, it keeps writing shoddy wrappers around shoddy wrappers.

Anyways, still helpful for writing boilerplate and segments of code, but I like to know what is happening and have control over how my code is structured. I can't trust the LLMs right now.

Jeff_Brown · 2 days ago
Agreed. Some strategies that seem to help do exist, though. Write extensive tests before writing the code; they serve as guidance. Commit tests separately from library code, so you can tell the AI didn't change the test. Specify the task with copious examples. Explain why you do things, not just what to do.
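
A minimal sketch of the tests-first idea (pytest-style; the `slugify` module and its behaviour are hypothetical, just something you'd ask the agent to implement afterwards):

    # test_slugify.py -- committed before the implementation exists, so the tests
    # steer the agent and a later diff shows whether it quietly changed them.
    from slugify import slugify  # hypothetical module the agent is asked to write

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rock & Roll!") == "rock-roll"

    def test_collapses_whitespace():
        assert slugify("  a   b  ") == "a-b"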
habinero · 2 days ago
Yeah, this is where I start side-eying people who love vibe coding. Writing lots of tests and documentation and fixing someone else's (read: the LLM's) bad code? Those are literally the worst parts of the job.
zephyrthenoble · 2 days ago
Interesting, I haven't tried tests outside of the code base the LLM is working on.

I could see other elements of isolation being useful, but this kind of feels like a lot of extra work and complexity which is part of the issue...

danielbln · 2 days ago
Also: a detailed planning phase, cross-LLM reviews via subagents, tests, functional QA, etc. There are more (and complementary) ways to ensure the code does what it should than combing through every line.
xnorswap · 2 days ago
I feel this.

I take breaks.

But I also get drawn to overworking ( as I'm doing right now ), which I justify because "I'm just keeping an eye on the agent".

It's hard work.

It's hard to explain what's hard about it.

Watching as a machine does in an hour what would take me a week.

But also watching to stop the machine spinning around doing nothing for ages because it's got itself into a mess.

Watching for when it gets lazy, and starts writing injectable SQL.

Watching for when it gets lazy, and tries to pull in packages it had no right to.
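
(On the injectable SQL point, this is roughly the difference between what it sometimes writes and what I want; a minimal sketch with Python's sqlite3, the table and query made up:)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "alice' OR '1'='1"

    # The lazy version: user input spliced straight into the SQL string.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % user_input  # injectable
    ).fetchall()

    # What I actually want: a parameterized query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)  # safe
    ).fetchall()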

We've built a motor that can generate 1,000 horsepower.

But one man could steer a horse.

The motor right now doesn't have the appropriate steering apparatus.

I feel like I'm chasing it around trying to keep it pointed forward.

It's still astronomically productive.

To abandon it would be a waste.

But it's so tiring.

lubujackson · 2 days ago
I think it taxes your brain in two different ways. First, the mental model of the code gets updated the same way a PR from a co-worker updates it, but every minute instead of every now and then. So you need to recalibrate your understanding and think through edge cases to determine if the approach is what you want, whether it will support future changes, etc. And this happens after every prompt. The older/more experienced you are, the harder it is NOT to do this thinking even if you are intending to "vibe" something, since it is baked into your programming flow.

The other tax is the intermittent downtime when you are waiting for the LLM to finish. In the olden days you might have productive downtime waiting for code to compile or a test suite to run. While this was happening you might review your assumptions or check your changes or realize you forgot an edge case and start working on a patch immediately.

When an LLM is running, you can't do this. Your changes are being made on your behalf. You don't know how long the LLM will take, or how you might rephrase your prompt if it does the wrong thing, until you see and review the output. At best, you can context-switch to some other problem, but then 30 seconds later you come back into "review mode" and have to think architecturally about the changes made, then switch to "prompt mode" to determine how to proceed.

When you are doing basic stuff all of this is ok, but when you are trying to structure a large project or deal with multiple competing concerns you quickly overwhelm your ability to think clearly because you are thinking deeply about things while getting interrupted by completed LLM tasks or context switching.

CPLX · 2 days ago
My least favorite part is where it runs into some stupid problem and then tries to go around it.

Like when I'm asking it to run a bunch of tests against the UI using a browser tool, and something doesn't work. Then it goes and just writes code to update the database instead of using the UI element.

The other thing that makes me insane is when I tell it what to do, and it says, "But wait, let me do something else instead."

colechristensen · 2 days ago
Build tools to keep it in check.
vidarh · 2 days ago
Really, this. You still need to check its work, but it is also pretty good at checking its own work if told to look at specific things.

Make it stop. Tell it to review whether the code is cohesive. Tell it to review it for security issues. Tell it to review it for common problems you've seen in just your codebase.

Tell it to write a todo list of everything it finds, and tell it to fix it.

And only review the code once it's worked through a checklist of its own reviews.

We wouldn't waste time reviewing a first draft from another developer if they hadn't bothered looking over it and testing it properly, so why would we do that for an AI agent that is far cheaper?

waltbosz · 2 days ago
> One reason developers are developers is the dopamine loop
>
> You write code, it doesn’t work, you fix it, it works, great! Dopamine rush. Several dozens or a hundred times a day.

This statement resonates with me. Vibe coding gets the job done quickly, but without the same joy. I used to think that it was the finished product that I liked to create, but maybe it's the creative process of building. It's like LEGO kits, the fun is putting them together, not looking at the finished model.

On the flip side, coding sessions where I bang my head against the wall trying to figure out some black box were never enjoyable. Nor was writing POCOs, boilerplate, etc.

OptionOfT · 2 days ago
I see people with no coding experience now generating PRs to a couple of repos I manage.

They ask a business question to the AI and it generates a bunch of code.

But honestly, coding isn't the part that slowed me down. Mapping the business requirements to code that doesn't fail is the hard part.

And the generated PRs are just answers to those narrow business questions. Now I need to spend time walking it all back, trying to figure out what the actual business question is and what the overall impact will be. From experience, I get very few answers to those questions.

And this is where Software Engineering experience becomes important. It's asking the right questions. Not just writing code.

Next to that, I'm seeing developers drinking the Kool-Aid and submitting PRs where a whole bunch of changes are made, but they don't know why. Well, those changes DO have impact. Keeping it because the AI suggested it isn't the right answer. Keeping it because you agree with the AI's reasoning isn't the right answer either.

damiangriggs · 2 days ago
Going fast is awesome. If you have a bit of caffeine in the morning, turn on some tunes, and get into your workflow it's awesome. I get so sucked into my coding projects that when I'm done I'm disoriented. Nothing quite like being in the zone.
simonw · 2 days ago
This morning I attended and paid attention to three separate meetings and at one point had three coding agents running in parallel solving some quite complex problems for me.

It's now 11:47am and I am mentally exhausted. I feel like my dog after she spends an hour at her sniff-training class (it wipes her out for the rest of the day.)

I've felt like that on days without the meetings too. Keeping up with AI tools requires a great deal of mental effort.