Readit News logoReadit News
develoopest · 5 months ago
My company decided to fire all junior developers because "With AI seniors don't need them anymore", honestly I'm on the verge of quitting, deciding to cut devs on a software company while keeping agile coaches and product owners that basically hinder development has to be one of the dumbest decisions I have seen.

Sorry for the rant.

mywittyname · 5 months ago
> Sorry for the rant.

Part of what makes this community great is that people share their personal experiences. It helps the rest of us understand what's going on in our industry, because most of us can't at several different places at once.

kjkjadksj · 5 months ago
Snake eats itself too as doing so closed the pipeline to get more of their necessary seniors. Never mind that the people who made that call will be retired when the talent issue rears its head in 10-15 years.
rsynnott · 5 months ago
Yeah, that sounds like a "get out if you can" moment tbh. Like, the best case is that you take them on their word, in which case the company is merely extremely poorly run, but realistically it's more likely to be cover for "shit, we are running out of money", particularly if it's venture-funded.
grafmax · 5 months ago
Their stated rationale is obviously BS. They’re just trying to extract more from fewer workers by intensifying your working conditions. That’s why the managers are retained but not the juniors. Sure seems like they view you guys as adversaries not partners.
dontlaugh · 5 months ago
Exactly. The best time to have unionised was before this move, the second best time is right now.
poisonborz · 5 months ago
What? Which company thinks juniors are just "little helpers that seniors need"?
carlmr · 5 months ago
Companies that think that Scrum rituals are mysterious and important.
viccis · 5 months ago
Just anecdotally, I recognize and dread working with AI code nowadays. In the past when I see something stupid being done, my thought was "I wonder if there's a good reason they did it this way" but now it's increasingly "I wonder what other AI written problems are in this module".

Being forced to use AI would be a nightmare, as it means I would have to turn that same distrust onto my own code. It's one thing to use AI to basically just customize a scaffold like people do when using it to bootstrap some Next.js website or something. It's a completely different matter to have it writing code in huge and data driven existing codebases.

Dead Comment

tfandango · 5 months ago
We get to check a box on what AI we use when we close a ticket. I used to select "none" because most of the time that was the case, sometimes I would pick one if I used it to build some scaffolding or explore some performance issue.

But then we started having AI demos with the CTO where the presenters would say things like "I don't know how to code in python but now I don't need to! yay!" and the C level people would be very excited about this. That's when I realized that these poor developers who just want to brown nose and show off to big cheese are instead making an argument for their own demise.

Meanwhile I asked AI to make me a test and it mocked out everything I wanted to test, testing nothing, but passing. I wonder how much of these kinds of tests we have now...

bluefirebrand · 5 months ago
> Meanwhile I asked AI to make me a test and it mocked out everything I wanted to test, testing nothing, but passing. I wonder how much of these kinds of tests we have now

This sort of thing is what I'm most worried about with programmers who are very bullish on AI.

I don't think it's too controversial to say that most developers are much worse at reviewing code than they are at writing code

AI generated code is a code review exercise. If you accept my premise that most devs are worse at reviewing code than writing it, this should ring some alarm bells

icedchai · 5 months ago
You're spot on about developers being worse at reviewing code. With generated code, you still need to understand it if you want to maintain it.

I had another person send me some AI generated code that was close to 90% working. It was missing something simple (something like appending to an array instead of overwriting it...) The original developer could not understand or debug it. I'm afraid of the crap we're going to see over the next few years.

bakuninsbart · 5 months ago
There are multiple factors exacerbating this even further:

1) AI generated code is "soulless". Reviewing code can be engaging if you are trying to understand your colleagues thought process. Sometimes you learn something new, sometimes you can feel superior, or politely disagree. AI ain't worth the hassle.

2) You unlearn how to write code by relying heavily on LLMs. On some level, it is similar to an IDE giving you context clues and definitions. On another, it can replace the hard work of thinking things through with mediocre yet usually workable solutions. And that's the big trap.

carlmr · 5 months ago
I think I'm somewhat decent at code reviewing, what I'm seeing currently is though that the PRs become larger and larger. Because AI just rewrites everything, even if you tell it to fix things. This leads to devs pushing more and bigger code changes. I'm convinced no company has enough senior developers to review these vast amounts of sloppy code.
wjholden · 5 months ago
You could make an analogy to the engineer babysitting a self-driving car during testing: you'd need to be a better driver than most to recognize when the machine is about to make a mistake and intervene.
tfandango · 5 months ago
This is a great point. This mocked test, easy to literally click a button, look like you did something good, but potentially add massive negative value when the code fails but the tests all pass. It can be a very helpful tool, but you need to understand the code it produces, and it seems like people are missing that point.
christkv · 5 months ago
Pff we will just use another AI to do the review /s
lokar · 5 months ago
I think a lot of confusion and frustration about this is the assumption that all programming is the same thing.

I have seen areas with just tons of boilerplate, straight forward UI stuff, basic test skeletons, etc that I guess could work with AI.

But personally, in 30 years I’ve just never done much of that kind of work. Probably because I find it boring. I go look for hard or novel problems, or data structures and APIs that are causing problems (overgrown and hard to use, buggy, etc). A lot of the time is just figuring out what the authors expected the code to do, and anticipating what we will need for the next few years. I don’t see AI helping much with that.

bluefirebrand · 5 months ago
> I have seen areas with just tons of boilerplate, straight forward UI stuff, basic test skeletons, etc that I guess could work with AI

The problem is that even this basic straightforward boilerplate CRUD stuff, AI is only "kind of ok" at doing

lokar · 5 months ago
I’m probably just old, but IMO the main problem I see with Jr devs is a lack of “taste”

I don’t see AI helping

franktankbank · 5 months ago
Management should not be telling dev the tools to use. Tell me why I'm wrong.
nlawalker · 5 months ago
Not that you're wrong, but here's the argument from the other side:

- AI is supposed to be a great productivity booster, but no one has figured out a way of actually measuring developer productivity, so we'll just use the proxy of measuring whether they're using AI.

- Developers won't want to use AI because it's threatening to them, so we'll mandate it.

- Consistency and fungibility of human resources is incredibly valuable. 0.5x developers can use AI to get them up to at least 1x. And 10x developers? We're actually better off if the AI "slows them down" and makes their work output look more like everyone else's. 10x developers are actually a liability, because it can be hard to quantify and measure their contributions, and so when things become worse when they end up leaving, it feels intangible and impossible to fix.

hnthrow90348765 · 5 months ago
If you have lazy devs, sometimes you need to convince (or coerce, or force) them to get them out of their comfort zone and try something new. But typically this is done because management wants more while giving nothing in return which is why no one tolerates these suggestions.

The other reason, which has been used to prevent devs from fixing bad code, is that making money is more important and sometimes that means moving away from costlier tools, or choosing tools because of specific business arrangements.

bluefirebrand · 5 months ago
Couldn't agree more, but when the company is paying big money for a tool you can bet they're going to make sure people are using it
javcasas · 5 months ago
Is that the RTO mandates because we have so many unused 5 year office leases?

I see, it's part deux.

itishappy · 5 months ago
Standardization is valuable.

An extreme example of your position would be a shop where everyone uses a different language. That's obviously untenable.

A more reasonable example might be choice of editor. I'm of the opinion that management should provide everyone a standard editor, then not mandate it's use. This means anyone can hop onto any device and know how to use it, even if the device owner has a super-customized vim setup. Folks who become familiar with the standard editor are also better positioned to help their colleagues troubleshoot their setups.

I don't see how this applies to AI assistants much if at all. It feels like mandating devs run all their ideas by a particular intern. If said intern isn't an architect then there's no value in having a single point of contact.

charcircuit · 5 months ago
There is organizational efficiency around everyone using the same tools. Management can pick a set of tools to focus on.

Not all developers know how to use all tools. Management providing education on tools can increase their efficiency.

Developers do not stay up to date with all available tools that exist. Management providing better tools can make people more efficient.

zdragnar · 5 months ago
That justifies using common tools and standards, but not why management in particular should be doing the selection.

Everywhere mature that I've worked, management delegated the task of choosing common tools to the senior / staff engineers, as well as decisions around when to make changes or exceptions.

The places that didn't do this were engineer-founded, or were dysfunctional enough that managers could splurge on massive contracts and then force people to use the paid-for tools to justify the contract post-hoc. Every one of these were examples of what not to do.

hooverd · 5 months ago
/can/ is doing some heavy lifting here. Management is just as capable of forcing tools that look good in 15 minute demos but suck for power users on developers.
amiantos · 5 months ago
I mean, ideally, management should not be telling me anything and should let me do my job in splendid, peaceful isolation. In practice, however, management tells me how to do all sorts of things, all the time, and they're responsible for me continuing to get paid, so I have to listen.
tartoran · 5 months ago
You're also responsible if they get paid so if the product sinks it's game over. So they have to listen too.
tmpz22 · 5 months ago
[flagged]
mfitton · 5 months ago
You're answering a question that wasn't asked so you can bring your view on unionization into the conversation. The implicit question is whether management should, not whether they can.
c0redump · 5 months ago
Unionization is not possible while there are hundreds of millions of eager scab workers in other countries.
srvaroa · 5 months ago
"When AI is able to deliver coding tasks based on a prompt, there won’t be enough copies of the Mythical Man Month to dissuade business folks from trying to accelerate road maps and product strategies by provisioning fleets of AI coding agents."

Submitted yesterday, but no luck :D

https://varoa.net/2025/04/07/ai-generated-code.html

hooverd · 5 months ago
I like your blog!
srvaroa · 5 months ago
Thanks!
jasonthorsness · 5 months ago
LLMs require comparatively little training to use, and in fact training (like how to optimize prompts etc.) is probably a waste of time because the model behavior and interfaces change so frequently.

This puts them in the class of tools where the benefit is obvious once there is actually a real benefit (and not, say, the foreshadowing of some future benefit). This sort of tool doesn't require a mandate to gain adoption.

So, in the subdomains of software development where the LLMs are already useful, developers will naturally pick them up, assuming the business has secured the policy support and funding. In the areas where they aren't useful, businesses should trust developers and not waste their time with mandates.

And I say this as someone who uses LLMs all day every day for Python, Go, Bash, C/C++, pretty much everything. But as an active user I see the limitations constantly and wouldn't mandate their use upon anyone.

tfandango · 5 months ago
prediction: Prompt engineering will get so complicated. There will be a movement towards somehow telling the computer exactly what to do using a logical language of some sort.
Terr_ · 5 months ago
Yes, some sort of... language for programs, if you will. It could be carefully structured in a way that avoids ambiguity and encourages consistency.

Dead Comment