"By allowing non-technical people and domain experts to use English as the programming language, AI blurs the line between specification and implementation."
This is a non sequitur. You are saying that some PMs can update the prompts for an AI application, but it does not follow that AI can now specify and implement software. If you are saying specifically that "LLM applications that just pre-prompt a model can be updated by a PM instead of an engineer", then yes, I would agree with that. But you've extrapolated this wildly and closed out with marketing for your tool.
I think HN knows that anyone can prompt LLMs. I do think it's interesting, though, that this has allowed PMs/SMEs to directly influence products that are deployed to millions of people. That seems genuinely novel. Maybe I over-egged it.
I love ChatGPT [1]. I use it all the time. I use it for coding, I use it to generate stuff like form letters, I use it for parsing out information from PDFs. Point is, I'm not a luddite with this stuff; I'm perfectly happy to play with and use new tech, including but not limited to AI.
Which makes me confident when I say this: Anyone who thinks that AI in its current state is "blurring the line between PMs and Engineers" doesn't know what they are talking about. ChatGPT is definitely very useful, but it's nowhere near a replacement for an engineer.
ChatGPT is really only useful if you already kind of know what you want. Like, if I asked it "I have a table with the columns name (a string), age (an integer), location (string), can you write me an upsert statement for Postgres for the values 'tom', 34, 'new york'?". This will likely give you exactly what you want, will give you the proper "ON CONFLICT" command, and it's cool and useful.
If I ask it "I want to put a value into a table. I also want to make sure that if there's a value in there, we don't just put the value in there, but instead we get the value, update it, and then put the new value back in", it's not as guaranteed to be correct. It might give you the upsert command, but it also might fetch the value from the database, check if it exists, and if it doesn't, do an "insert", and if it does, do an "update" — which is likely incorrect because you risk race conditions.
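To make the contrast concrete, here is a minimal sketch of the two approaches. It uses SQLite (which shares Postgres's `ON CONFLICT` upsert syntax) so it runs standalone; the table and column names follow the example above, and `check_then_write` is a hypothetical name for the race-prone pattern an LLM might produce.

```python
# Sketch: atomic upsert vs. the race-prone check-then-write pattern.
# SQLite stands in for Postgres; both support INSERT ... ON CONFLICT.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE people (name TEXT PRIMARY KEY, age INTEGER, location TEXT)"
)

# The atomic upsert: a single statement, so there is no window in which
# a concurrent writer can sneak in a conflicting row.
upsert = """
INSERT INTO people (name, age, location) VALUES (?, ?, ?)
ON CONFLICT (name) DO UPDATE SET age = excluded.age, location = excluded.location
"""
conn.execute(upsert, ("tom", 34, "new york"))
conn.execute(upsert, ("tom", 35, "boston"))  # conflicts with the first row, so it updates

# The race-prone version: between the SELECT and the INSERT/UPDATE,
# another connection could insert the same key and break this logic.
def check_then_write(c, name, age, location):
    row = c.execute("SELECT 1 FROM people WHERE name = ?", (name,)).fetchone()
    if row is None:
        c.execute(
            "INSERT INTO people (name, age, location) VALUES (?, ?, ?)",
            (name, age, location),
        )
    else:
        c.execute(
            "UPDATE people SET age = ?, location = ? WHERE name = ?",
            (age, location, name),
        )

check_then_write(conn, "ann", 29, "chicago")
rows = conn.execute("SELECT name, age, location FROM people ORDER BY name").fetchall()
print(rows)  # [('ann', 29, 'chicago'), ('tom', 35, 'boston')]
```

Both paths give the same answer on a single connection; the difference only bites under concurrency, which is exactly why spotting the second version as wrong requires engineering judgment.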
My point is, the first example required knowing what an upsert is, and how to word it in a technical and precise way.
It certainly doesn't "blur the line" between PM and engineer for me. I have to pretty heavily modify and babysit its outputs, even when it is giving me useful stuff. You might be saying "well that's what a PM does!!", but not really; project managers aren't typically involved in the technical minutiae of a project in my experience, and they're not going to correct me for using the wrong kind of upsert.
These kinds of articles always seem to be operating on a theoretical "what if AIs could do this??" plane of existence.
[1] Deepseek is cool too, what I'm saying applies to that as well.
ETA:
Even if I wasn't a fan, this article definitely shouldn't have been flagged.
wrt "did you read the article?": I was quite specific about the ways I think LLMs are blurring the lines. I don't think it's true for general engineering, but I do think it's true for applications being built with LLMs.
Also, it's still very early.
But I guess I need to up my game if you can't tell the difference
> LLMs break traditional software development
> Develop your Prompts and Agents in code or UI
...
So of course the blog post is an ad, peddling what they want potential customers to think.
I don't doubt that we'll eventually come to a point where most code is written by AI, but that point is not at all where we are at right now, and for quite a while we'll need developers to drive the development process.
I totally agree that we're not at a point where AI can write most code. Though, I didn't ever say that. I just think it's blurring the boundary between engineers and PMs, with both taking on more of the other's role.
Also, it shouldn't be surprising that the product we're building is aligned with what we believe about the world :)
OP here. Thanks for the (harsh!) feedback, I'll take it in a growth mindset.
The post does genuinely reflect my experiences and I do believe what I said. How would you advise I change the post to make it better?
Which parts do you think are untrue?
Thanks!