Readit News
cedilla · 6 months ago
Funny how the article starts with someone using AI — to develop more AI stuff.

This reminds me of web3, where almost all projects were just web3 infrastructure or services, to the point that the purpose of most start-ups was completely inscrutable to outsiders.

I'm having lots more hope for AI though.

sussmannbaka · 6 months ago
Hey now, there’s a lot of Hello World, Mandelbrots and Ugly Tailwind Dashboards too :o)
greiskul · 6 months ago
Great point about web3. People always talk about how, during gold rushes, you should sell shovels instead of being one of the people digging for gold. And it might be a reasonable idea, but there need to be actual people getting rich from the gold for it to be an actual gold rush. If everyone is just selling shovels, it is probably a sign of a bubble.
jfil · 6 months ago
A "Shovel Rush"


simianwords · 6 months ago
Not really, no. ChatGPT is used by hundreds of millions of people directly, not for creating more AI apps the way web3 was. So it is nothing like web3.
samgranieri · 6 months ago
As an experienced dev who's gotten his feet wet a little with AI, I feel like I spend more time telling the AI what to do than it would take to actually write this all out myself. Does anyone else feel this way?
turtletontine · 6 months ago
You may have seen headlines about this paper, which found that while most devs (apparently) feel like AI makes them 20% faster, they’re actually 20% SLOWER with it: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
srcreigh · 6 months ago
> We do not provide evidence that:

> AI systems do not currently speed up many or most software developers

> AI systems in the near future will not speed up developers in our exact setting

> There are not ways of using existing AI systems more effectively to achieve positive speedup in our exact setting

askonomm · 6 months ago
Yup, entirely gave up on AI as a result. It didn't help that reviewing and fixing AI code made me incredibly bored, to the point that if this somehow becomes the new norm of development, I'll have to find a new career, because I want to actually build things, not manage AI.
jennyholzer · 6 months ago
I agree with this almost completely
lethologica · 6 months ago
Not a dev but I’ve had similar experiences testing AI to help me craft various documents.

I spend ages crafting the perfect prompt only to cross my fingers and hope that the output is what I want it to be when I could have just spent all that time actually crafting the document exactly as I want it to be in the first place. Often I then need to spend more time editing and tweaking the output from the AI anyway.

I’m honestly starting to feel a little crazy because I’m the only one at my workplace that sees it this way.

colonCapitalDee · 6 months ago
I've found AI to be a poor fit for writing when the task is getting an idea out of my head and onto the page. It works well enough when the task is taking some collection of already-written information and producing new writing based on it, though.
tfandango · 6 months ago
Absolutely. It's much harder to describe what I want the software to do in English than in a logical programming language, and then add to that the time taken to understand the generated code and make sure it does what was intended. Perhaps the worst part is that it takes the joy out of it all.

It is nice as a tool to help solve some hairy stuff sometimes.

JeremyNT · 6 months ago
I usually find this to be the case with an existing codebase, unless I'm doing something extremely generic.

It's just really hard to convert requirements to English when there are a bunch of land mines in there that you know how to avoid.

rich_sasha · 6 months ago
I find it depends. I just "vibe coded" (I hate this phrase) a simple mobile app for something I needed, without writing a line of code, or indeed any knowledge of any web/mobile technology. There's no way I would realistically spend a few days learning Flutter and then a few more writing my app.
pennomi · 6 months ago
That’s the key, really. It’s excellent for small, highly scoped code. But AI doesn’t have the context or experience a senior developer does in the full system architecture of a mid-sized company.
cs702 · 6 months ago
Here's the approach I've seen so far at a few startups:

1. Replace junior developers with AI, reducing costs today.

2. Wish and hope that senior developers never retire in the future.

3. ?

kermatt · 6 months ago
3. Hire more developers to decode / devibe code, paying double in the end.
Analemma_ · 6 months ago
Sure, but by that time you've IPO'd and dumped the bag on the public; founders and seed-round investors make bank and are long gone by then.

I'm only half-joking: personally I'll be looking very closely at the IPO prospectus of any company founded in/after 2024 or so, to know how much vibe coding risk I can expect in its long-term prospects.

Spivak · 6 months ago
I don't think this behavior has anything to do with AI although it seems like it's been used as an excuse to justify it. Everyone seems to be in a belt-tightening risk averse mode right now and that means cutting junior positions and leaning on smaller teams of senior staff. You can see this behavior in more than just tech and more than just positions that can be replaced by AI. The job boards betray it as well, job postings for junior staff have dried up.
anonymousiam · 6 months ago
1. Replace junior developers with AI, reducing costs today.

2. ?

3. Profit!

https://www.youtube.com/watch?v=tO5sxLapAts

pyuser583 · 6 months ago
It worked with COBOL.
micromacrofoot · 6 months ago
you'll be retired by 3, good luck kids
matthewfcarlson · 6 months ago
As someone who spends his non work hours convincing a half baked quasi person to not do dumb things (a two year old), I have zero interest in convincing a half baked quasi person to not do dumb things during work hours (most coding agents).

I’ve had good results with Claude, it just takes too long. I also don’t think I can context switch fast enough to do something else while it’s churning away.

EdwardDiego · 6 months ago
At least your two year old learns.
bodhi_mind · 6 months ago
I think it’s allowed me to spend more time being an architect and thinking about processes, problem solving. To put it another way, I’m still a developer, possibly to a higher degree (because I can spend more time doing it), and less of a coder.
lordnacho · 6 months ago
I have very few issues with Claude. If I just tell it what the goal is, it will make some sensible suggestions, and I can tell it to start coding towards it. It rarely messes up, and when it does I catch it in the act.

You don't necessarily want to completely tune out while you're using the AI. You want to know what it's up to, but you don't need to be at your highest level of attention to do it. This is what makes it satisfying for me, because often it eats up several minutes to hunt down trivial bugs. Normally when you have some small thing like that, you have to really concentrate to find it, and it's frustrating.

When the AI is on a multi-file edit that you understand, that's when you can tune out a bit. You know that it is implementing some edit across several instances of the same interface, so you can be confident that in a few minutes everything will build and you will get a notification.

It's as if I can suddenly make all the architecture level edits without paying the cost in time that I had previously.

hopelite · 6 months ago
I was going to point out that what you are describing is exactly what it's like to be a leader/director of people in most efforts, i.e., managing people. Then it occurred to me that maybe what we are dealing with in this conflict and mud-slinging around AI is similar to the conflict of coders not wanting to become managers, since they are often not even good at being managers. Devs work well together at shared problem solving (and even that often only sometimes), but it strikes me as the same problem as when devs are forced to become managers and really don't like it. They hate it, even, sometimes leaving their company for that reason.

When you are working with AI, you are effectively working with a group of novice people, largely with basic competence, but lacking many deeper skills that are largely developed from experience. You kind of get what you put into it with proper planning, specificity in requests/tasks, proper organization based on smart structuring of skillsets and specializations, etc.

This may ruffle some feathers, but I feel like even though AI has its issues with coding in particular, this is really a leadership question: lead and mentor your AI correctly, adequately, and appropriately, and you end up with decent, workable outcomes. GIGO (garbage in, garbage out).

whatevaa · 6 months ago
Yeah, sounds about like babysitting a toddler.
_c7zm · 6 months ago
If you add an AGENTS.md, the AI agent will work more efficiently, and there will be far fewer problems like the ones you’re facing. You can include sections such as Security, coding style guidelines, writing unit tests, etc.
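A minimal sketch of what such a file might look like (the section names and rules here are just illustrative examples, not a fixed schema):

```markdown
# AGENTS.md

## Coding style
- Follow the existing formatter config; do not reformat unrelated files.

## Security
- Never hardcode or commit secrets; read credentials from environment variables.

## Testing
- Add or update unit tests for every change, and run the full suite before finishing.
```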
siliconc0w · 6 months ago
Vibe Coding is so early 2025, I only check in context, prompt instructions, and seed values to deterministically generate the machine-code.
fzzzy · 6 months ago
LLMs aren’t deterministic even with a seed.
jsheard · 6 months ago
Doesn't that depend on the implementation? There's a trade-off between performance and determinism for sure, but if determinism is what you want then it should be possible.
geor9e · 6 months ago
what if you set top_p=1, temperature=0, and always run it on the same local hardware
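(A toy sketch, not any particular inference stack: with temperature=0, sampling collapses to an argmax over the logits, so it's deterministic given identical logits; any remaining nondeterminism comes from the logits themselves, e.g. floating-point or batching differences across hardware. With temperature > 0, you only get repeatability if you also fix the RNG seed.)

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy argmax: deterministic for identical logits,
    but the logits themselves can still vary across hardware/batching.
    temperature > 0  -> softmax sampling: repeatable only with a fixed seed.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]
```

Same seed, same logits, same code path gives the same token; change the hardware or kernel scheduling and the upstream logits may differ slightly, which is where "deterministic" quietly breaks down.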
worble · 6 months ago
Yes, that's the joke
jb1991 · 6 months ago
This. I’m still amazed how many people don’t understand how this technology actually works. Even those you would think would have a vested interest in understanding it.
sethetter · 6 months ago
I sure wish I could tell if this is a joke or not.