Increasing the quantity of something that was already an issue before automation got involved will cause more issues.
That's not moving the goalposts; it's pointing out something that should be obvious to anyone with domain experience.
Every post like this reads as though it's describing a new phenomenon caused by AI, but it's just a normal professional code-quality problem that has always existed.
Consider the difference between these two:
1. AI allows programmers to write sloppy code and commit it without fully checking/testing it
2. AI greatly increases the speed at which code can be generated, but doesn't improve the speed of reviewing code nearly as much, so we're making software harder to verify
The second is a more accurate picture of what's happening, but it comes off as much less sensational in a social media post. When people post the first version, I discredit them immediately for fear-mongering and baiting engagement rather than discussing the real problems with AI programming and how to prevent or solve them.
What would have happened if someone without your domain expertise had been doing this, without reviewing every line and making the changes you mentioned?
People aren't concerned about you using agents; they're concerned about the second case I described.
Are you aware that your wording implies you're describing an issue unique to AI code, one that isn't present in human-written code?
>What would have happened if someone without your domain expertise had been doing this, without reviewing every line and making the changes you mentioned?
So we're talking about two variables, which gives four states: human-reviewed, human-not-reviewed, ai-reviewed, ai-not-reviewed.
[non ai]
*human-reviewed*: Humans write code, sometimes humans make mistakes, so we have other humans review the code for things like critical security issues
*human-not-reviewed*: Maybe this works for a project with a solo developer and automated testing, but otherwise it seems like a pretty bad idea, right? This is the classic "YOLO to production".
[with ai]
*ai-reviewed*: AI generates code, sometimes AI hallucinates or gets things very wrong or over-engineers things, so we have humans review all the code for things like critical security issues
*ai-not-reviewed*: AI generates code, YOLO to prod, no human reads it - obviously this is terrible and barely works even for hobby projects with a solo developer and no stakes involved
I'm wondering if the disconnect here is that actual professional programmers are just implicitly talking about going from [human-reviewed] to [ai-reviewed], assuming nobody in their right mind would just _skip code reviews_. The median professional software team would never build software without code reviews, imo.
But are you thinking about this as going from [human-reviewed] straight to [ai-not-reviewed]? Or are you thinking about [human-not-reviewed] code for some reason? It's not clear why you immediately latch onto the problems with [ai-not-reviewed] while seeming to refuse to acknowledge that [ai-reviewed] is even a possible state.
It's just really unclear why you're jumping straight to concerns like this without any nuance about how the industry already handled similar problems before we used AI at all.