Readit News
ang_cire · a year ago
I sometimes wonder about whether the decline in IT worker quality is down to companies trying to force more and more roles onto each worker to reduce headcount.

Developers, Operations, and Security used to be dedicated roles.

Then we made DevOps and some businesses took that to mean they only needed 2/3 of the headcount, rather than integrating those teams.

Then we made DevSecOps, and some businesses took that to mean they only needed 1/3 the original roles, and that devs could just also be their operations and appsec team.

That's not a knock on shift-left and integrated operations models; those are often good ideas. It's just the logical outcome of those models when execs think they can get a bigger bonus by cutting costs by cutting headcounts.

Now you have new devs coming into insanely complex n-microservice environments, being asked to learn the existing codebase, being asked to learn their 5-tool CI/CD pipelines (and that ain't being taught in school), being asked to learn to be DBAs, and also to keep up a steady code release cycle.

Is anyone really surprised they are using ChatGPT to keep up?

This is going to keep happening until IT companies stop cutting headcounts to make line go up (instead of good business strategy).

nostrademons · a year ago
One person can do the work of 3 and regularly does in startups.

I think that what the MBAs miss is this phenomenon of overconstraint. Once you have separated the generic role of "developer" into "developer, operations, and security", you've likely specified all sorts of details about how those roles need to be done. When you combine them back into DevSecOps, all the details remain, and you have one person doing 3x the work instead of one person doing the work 3x more efficiently. To effectively go backwards, you have to relax constraints and let that one person exercise their judgment about how to do the job.

A corollary is that org size can never decrease, only increase. As more employees are hired, jobs become increasingly specialized. Getting rid of them means that that job function is simply not performed, because at that level of specialization, the other employees cannot simply adjust their job descriptions to take on the new responsibilities. You have to throw away the old org and start again with a new, small org, which is why the whole private equity / venture capital / startup ecosystem exists. This is also why Gall's Law exists:

https://en.wikipedia.org/wiki/John_Gall_(author)#Gall's_law

IgorPartola · a year ago
I think there is another bit to this which is cargo cult tendencies. Basically DevOps is a great idea under certain circumstances and works well for specific people in that role. Realistically if you take a top talent engineer they can likely step into any of the three roles or even some others and be successful. Having the synergy of one person being responsible for the boundary between two roles then makes your ROI on that person and that role skyrocket.

And then you evangelize this approach and every other company wants to follow suit but they don’t really have top talent in management or engineering or both (every company claims to hire top talent which obviously cannot be true). So they make a poor copy of what the best organizations were doing and obviously it doesn’t go well. And the thing is that they’ve done it before. With Agile and with waterfall before that, etc. There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.

makeitdouble · a year ago
> One person can do the work of 3 and regularly does in startups.

Startup architecture and a 500+ engineer org's architecture are fundamentally different. The job titles are the same, but they won't reflect the actual work.

Of course that's always been the case, and applies to the other jobs as well. What a "head of marketing" does at a 5 person startup has nothing to do with what the same titled person does at Amazon or Coca Cola.

I've also seen many orgs basically retitle their infra team members as "devops" and call it a day. Communication between devs from different parts of the stack has become easier, and there will be more "bridge" roles in each team, with an experienced dev also making sure it all works well, but I don't see any company that cared about their stack and performance firing half of their engineers because of a new fancy trend in the operations community.

nox101 · a year ago
I think it has less to do with judgement and more to do with the fact that one person can do the work of 3 in startups because there are several orders of magnitude less coordination and communication that needs to happen.

If you have 5 people in a startup, you have 10 connections between them; 20 people = 190 connections; 100 people = 4,950 connections; 1,000 people = 499,500 connections.

Sure, you split them into groups with managers and managers' managers etc. to break the connections down to less than the max, but it's still going to be orders of magnitude more communication and coordination than in a startup.
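(For what it's worth, the numbers above are just the handshake formula, n*(n-1)/2 pairwise connections among n people; a quick sketch to check them:)

```python
def connections(n: int) -> int:
    """Number of possible pairwise communication paths among n people."""
    return n * (n - 1) // 2

# The headcounts from the comment above:
for n in (5, 20, 100, 1000):
    print(f"{n} people = {connections(n)} connections")
# 5 -> 10, 20 -> 190, 100 -> 4950, 1000 -> 499500
```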

namaria · a year ago
How do you explain the cyclical layoffs, if organizations aren't able to decrease in size?
renegade-otter · a year ago
In a startup you get significantly more focus time than in a large company. Especially if there is no production yet - there are no clients, no production bugs, no on-call.

In a larger company, literally 80% of your job is meetings, Slack, and distractions.

romanows · a year ago
This is a great observation, thanks.
crystal_revenge · a year ago
> whether the decline in IT worker quality

I think it entirely has to do with a generation of software people getting into the field (understandably) because it makes them a lot of money, rather than because they're passionate about software. These, by-and-large, are mediocre technical people and they tend to hire other mediocre technical people.

When I look around at many of the newer startups that are popping up, they're increasingly filled with really talented people. I chalk this up largely to the fact that people who really know what they're doing are increasingly necessary to get a company started in a more cash-constrained environment, and those people are making sure they hire really talented people.

Tech right now reminds me so much more of tech around 2004-2008, when almost everyone who was interested in startups was in it because they loved hacking on technical problems.

My experience with Cursor is that it is excellent at doing things mediocre engineers can do, and awful at anything more advanced. It also requires the ability to very quickly understand someone else's code.

I could be wrong, but my suspicion is this will allow a lot of very technical engineers who don't focus on things like front-end or web app development to forgo hiring as many junior webdev people. Similar to how webmasters disappeared once we had frameworks and tools for quickly building the basic HTML/CSS required for a web page.

bluepizza · a year ago
While you have a good point, I think the experts also branched off into some unreasonable requirements. I remember reading a Yegge blog post, years ago, saying that an engineer needed to know bitwise operators, otherwise they were not good enough.

I don't know. Curiosity, passion, focus, and creative problem solving seem to me much more important criteria for an engineer to have than bitwise operations. An engineer who has these will learn everything needed to get the job done.

So it seems like we all got off the main road, and started looking for shibboleths.

htrp · a year ago
> I think it entirely has to do with a generation of software people getting into the field (understandably) because it makes them a lot of money, rather than because they're passionate about software. These, by-and-large, are mediocre technical people and they tend to hire other mediocre technical people.

It also goes back to: if you're passionate about the technology, you'll be willing to spend a weekend learning something new vs. just doing the bare minimum to check off this week's sprint goals.

TeMPOraL · a year ago
This is happening across the board, not just to IT workers, and I suspect it's the major factor in why the expected productivity improvements from technology didn't materialize.

Think of your "DevSecOps" people doing 3x the work they should. Do you know what else they're doing? Booking their own business travel. Reconciling and reporting their own expenses. Reporting their own hours, broken down by business categories. Managing their own vacation days. Managing their own meetings. Creating presentations on their own, with graphics they made on their own. Possibly even doing 80% of the work of lining up purchases from third parties. And a bunch of other stuff like that.

None of these are part of their job descriptions - in fact, all of these are actively distracting and disproportionately compromise the workers' ability to do their actual jobs. All of these also used to have dedicated specialists who could do it 10x as efficiently, for a fraction of the price.

My hypothesis is this: those specialists - secretaries, internal graphics departments, financial staff, etc. - were all visible on the balance sheet. Eliminating those roles does not eliminate the need for their work to be done - it just distributes it to everyone in pieces (in big part thanks to self-serve office software "improving" productivity). That slows everyone down across the board disproportionately, but the beancounters only see the money saved on the salaries of the eliminated roles - the slowdown only manifests as a fuzzy, generic sense of lost productivity, a mysterious cost disease that everyone seems to suffer from.

I say it's not mysterious; I say that there is no productivity gain, but rather a productivity loss - but because it turns the costs from legible and overt into diffuse and hard to count, it's easy to be fooled into thinking that money is being saved.

weweweoo · a year ago
I agree with the hypothesis. My country has a significant shortage of doctors, and guess what the few doctors spend a large amount of their day on? Paperwork that used to be done by secretaries, whose salary would be maybe 1/3 of the doctor's. It's a massive waste of both money and doctors' potential, but somehow that's what the free market prefers.
Viliam1234 · a year ago
> Think of your "DevSecOps" people doing 3x the work they should. Do you know what else they're doing? Booking their own business travel. Reconciling and reporting their own expenses. Reporting their own hours, broken down by business categories. Managing their own vacation days. Managing their own meetings. Creating presentations on their own, with graphics they made on their own. Possibly even doing 80% of the work of lining up purchases from third parties. And a bunch of other stuff like that.

It feels like you are working at the same company as me.

Companies complain all the time about how difficult it is to find competent developers, which is their excuse for keeping most of the teams understaffed. Okay, then how about increasing the developers' productivity by letting them focus on, you know, development?

Why does the paperwork I need to do after visiting a dentist during the lunch break take more time than the visit itself? It's not enough just to bring the receipt to HR; I need to scan the paper, print the scan, get it signed by a manager, start a new process in a web application, ask the manager to also sign it in the web application, etc. I need to check my notes every time I am doing it, because the web application asks me a lot of information that in theory it should already know, and the attached scan needs to have a properly formatted file name, and I need to figure out the name of the person I should forward this process to, rather than the application figuring it out itself. Why??? The business travels were even worse, luckily my current company doesn't do those frequently.

My work is defined by Jira tickets that mostly contain a short description like "implement XY for Z", and it's my job to figure out wtf is "Z", who is the person in our company responsible for "Z", what exactly they meant by "XY", where is any specification, when is the deadline, who am I supposed to coordinate with, and who will test my work. I miss the good old days when we had some kind of task description and the definition of done, but those were the days when we had multiple developers in a team, and now it's mostly just me.

I get invitations to meetings that do not concern me or any of my projects, but it's my job to figure that out, not the job of the person who sent the invitations. Don't get me started on e-mails, because my inbox only became manageable after I wrote a dozen rules that put various junk in the spam folder. No, I don't need a notification every time someone in the company made an edit on any Confluence page. No, I don't need notifications about people committing code to projects I am not working on. The remaining notifications often come in triplicate, because first I get a message in Teams, then an e-mail saying that I got a message in Teams, and finally a Windows notification saying that I got a new e-mail. When I return from a vacation, I spend my first day or two just sorting out the flood in my e-mails.

On some days, it is lunchtime before I have had an opportunity to write my first line of code. So it's the combination of being Agile-Full-Stack-Dev-Sec-Ops-Cloud-Whatever and the fact that everything around me seems designed to make my work harder that is killing me. This is a system that slows down 10x developers to 1x developers, and us lesser mortals to mere 0.1x developers.

sgt101 · a year ago
And yet we are told that competition will ensure that capital and talent flow towards the most efficient organisations over time. Thus, surely organisations that eschewed this practice would emerge and dominate?

So, either capitalism doesn't work, or your thesis isn't quite right...

I have two other counters to offer. First, we have seen GDP per capita gradually increasing in major economies for the last 50 years (while the IT revolution has played out). There have been other technical innovations over this time, but I believe that GDP per capita has more than quadrupled in G8 economies. The USA and Canada have, at the same time, enjoyed a clear extra boost from fracking and shale extraction, and the USA has arguably enjoyed an extra extra boost from world dominance - but arguably.

The second one is simple anecdote. Hour for hour, I can now do far more in terms of development than I did when I was a hard-core techie in the 90s and 2000s. In addition, I can manage and administer systems that are far more complex than those it took teams of people to run at that time (try running a 10 GB database under load on Oracle 7 sitting on top of spinning rust and a 64 MB RAM store, for fun). I can also manage the expenses, timesheets, travel requests and so on for a team of 30, which again would have taken a dedicated person back then. I can just do these things and my job as well, and I do it mostly in about 50 hrs a week. If I wasn't involved in my people's lives and happy to argue with customers to get things better, I could do it in 40 hrs regularly, for sure. But I put some discretion in.

My point is: we are just more productive. It is hard to measure, and anecdote / "lived experience" is a bad type of evidence, but I think it's clearly there. This is why the accountants have been able to reorganise modern business organisations to use fewer people to do more. Have they destroyed value while doing this? Totally. But they have managed to get away with it because 7/10 times they have been right.

Personally I've suffered from the 3/10 errors. I know many of us on here have, but we shouldn't shut our eyes because of that.

trashtester · a year ago
They're realizing that 10x (+) developers exist, but think they can hire them at 1x developer salaries.

Btw, the key skill you're leaving out is understanding the business your company is in.

If you can couple even moderate developer ability with a good understanding of business objectives, you may stay relevant even while some of the pure developers are replaced by AI.

swader999 · a year ago
This 100%. It's rare to find anyone that wants to learn a complex biz domain.
intelVISA · a year ago
> If you can couple even moderate developer ability with a good understanding of business objectives, you may stay relevant even while some of the pure developers are replaced by AI.

By 'stay relevant' you mean run your own? Ability to build + align to business objectives = $$$, no reason to be an Agile cog at that point.

neom · a year ago
Biz side of the house here: for sure, it's always been the way that what you're really "weeding in" is the ICs who are skilled and aware. (And if I had to do layoffs, my stack rank would be smack dab in here also, btw.)
jajko · a year ago
So it's not going to stop. The typical C-suite, who holds the real power, has absolutely zero clue about IT complexity; we are overpriced janitors to them. Their fault, their blame, but they are probably long gone when these mistakes manifest fully.

In my banking corp, over the past 13 years I've seen a massive rise in complexity, coupled with an absolutely madly done increase in bureaucracy. I could still do all the stuff that is required, but - I don't have access. I can't have access. A simple task became 10 steps of negotiating with an obscure Pune team that I need to chase 10x and escalate until they actually recognize there is some work for them. Processes became beyond ridiculous; you start something and it could take 2 days or 3 months, who knows. Every single app will break pretty quickly if not constantly maintained - be it some new network stuff, an unchecked unix update, or any other of a trillion things that can and will go wrong.

This means the paper pushers and folks who are at best average at their primary job (still IT or related) got very entrenched in processes and won, and the business gets served subpar IT, with projects over time and thus over budget, perpetuating the image of IT as a shitty tolerated evil.

I stopped caring, work to live is more than enough for me, that 'live' part is where my focus is and life achievements are.

godelski · a year ago
- my laundry app (that I must use) takes minutes to load, doesn't cache my last used laundry room, and the list of rooms isn't even fucking sorted (literally: room 3, room 7, room 1, room 2, ...)

- my AC's app takes 45 seconds to load even if I just used it, because it needs to connect. Worse, I'll bring the temp down in my house and raise it in the evening, but it'll come on even when 5F below my target value, staying on for 15+ minutes and leaving us freezing (5F according to __its own thermometer__!)

- my TV controls are slow. Enough that I buffer inputs and wait 2-3 seconds for the commands to play out. When pressing the exit button in the same menu (I turn down brightness at night because auto settings don't work, so it's the exact same behavior), idk if I'm exiting to my input, exiting the menu, or just exiting the sub-menu. It's inconsistent!

There's so much more I could go on and on about, and I'm sure you can too. I think one of the worst parts about being a programmer is that I'm pretty sure I know how to solve many of these issues, and in fact sometimes I'll spend days tearing apart the system to actually fix it. Only to be undone by forced updates (the app or whatever won't connect, because why is everything done server side ( ┛ ◉ Д ◉ ) ┛ 彡 ┻ ━ ┻ ). Even worse, I'll make PRs on open projects (or open issues another way and submit solutions) that have been working for months, and they just go stale while I see other users reporting the same problems and devs working on other things in the same project (I'll even see them deny the behavior, or just respond "works for me" and close the issue before the opener can respond).

I don't know how to stop caring because these things directly affect me and are slowing me down. I mean how fucking hard is it to use sort? It's not even one line!

What the fuck is wrong with us?
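(For the record, with a hypothetical list of room labels like the ones above, a numeric sort really is one line in Python; the label format is assumed for illustration:)

```python
# Labels as a naive system might store them - unsorted, and string
# comparison alone would put "room 10" before "room 2" anyway.
rooms = ["room 3", "room 7", "room 1", "room 2", "room 10"]

# One line: sort by the trailing number instead of the raw string.
rooms_sorted = sorted(rooms, key=lambda r: int(r.split()[-1]))
print(rooms_sorted)  # ['room 1', 'room 2', 'room 3', 'room 7', 'room 10']
```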

GeoAtreides · a year ago
> 10 steps negotiating with obscure Pune team that I need to chase 10x

why are you doing the chasing? unless you're the project manager, comment "blocked by obscure pune team" on the ticket and move on

Deleted Comment

Deleted Comment

yifanl · a year ago
They're cutting headcount because they have no conception of how to make a good product, so reducing costs is the only way of making the bottom line go up.

That's across the board, from the startups whose business plan is to be acquired at all costs, to the giant tech companies, whose business plan is to get monopoly power first, then figure out how to extract money later.

solidninja · a year ago
Probably a lot of that is to do with the short-term profit mindset. There is tons of software that is far from optimal, breaks frequently and has a massive impact on human lives (think: medical record systems, mainframes at banks, etc.). None of it is sexy, none of it is something you can knock up a PoC for in a month, and none of it is getting the funding to fix it (instead funding is going to teams of outsourced consultants who overpromise and just increase their budgets year on year). Gen AI won't make this better I think.
WalterBright · a year ago
> They're cutting headcount because they have no conception of how to make a good product, so reducing costs is the only way of making the bottom line go up.

The field is wide open for a startup to do it right. Why not start one?

goatlover · a year ago
Failures of capitalism that probably need regulation, if the middle class and workers are to be protected.
ozim · a year ago
For me, DevOps/DevSecOps is a reaction movement against toxic turf wars and silos of functions built by ambitious people - not some business-people scheme to reduce headcount and push more responsibilities.

I have received e-mails like "hey, that's the DB, don't touch that stuff, you are not a DBA" or "hey, developers shouldn't do QA" - while the premise might be right, lots of things could be done much quicker.

I have seen stuff thrown over the wall - "hey, I am a developer, I don't care about some server stuff", "hey, my DB is working fine, it must be your application" - and months spent fixing an issue because no one would take responsibility across the chain.

ang_cire · a year ago
Like I said, I have no issue with integrated operations models, but if you try to have one person be a jack of all trades, you will end up with them being a master of none.

DevSecOps works very well when you have your coding specialists, operations specialists (including DBAs), and Security specialists all on the same team together, rather than being different silos with different standups and team meetings, etc. But it doesn't work at all well if you just ask the devs to also be Ops and Security, and lay off the rest.

elric · a year ago
> For me DevOps/DevSecOps is reaction movement against toxic turf wars and silos of functions from ambitious people

Well, the DevOps grandfathers (sorry, Patrick & Kris, but you're both grey now) certainly wanted to tear down the walls that had been put up between Devs & Ops. Merging Dev & Ops practices has been a fundamentally good change. Many tasks that used to be dedicated Ops/Infra work are now so automated that a Dev can do them as part of their daily work (e.g. spinning up a test environment or deploying to production). This has been, in a sense, about empowerment.

The current "platform engineering"-buzz builds on top of that.

> - not some business people scheme to reduce headcount and push more responsibilities

I imagine that many business people don't understand tech work well enough to deliberately start such schemes. Reducing toil could probably result in lower headcount (no one likes being part of a silo that does the same manual things over and over again just to deploy to production), but by the same token the automations don't come free. They have to be set up and maintained. Once one piece of drudgery has been automated, something else will rear its ugly head. Automating the heck out of boring shit is not only more rewarding work, it's also a force multiplier for a team. I hope business people see those benefits and aim for them, instead of the aforementioned scheming.

lispisok · a year ago
Like most things the decline in quality is probably multi-faceted. There is also the component where tech became a hot attractive field so people flooded in who only cared about the paycheck and not the craft.
trashtester · a year ago
That definitely happened in the dotcom bubble. Plenty of "developers" were crowding the field, many of whom had neither real technical ability nor interest.

The nerds who were into programming based on personal interest were really not affected.

Those who have tech as a passion will generally outperform those who have it as a job, by a large margin.

But salary structures tend to ignore this.

ipaddr · a year ago
Developers of the past worked towards the title of webmaster. A webmaster can manage a server, write the code and also be a graphic artist. Developers want to do it all.

What has changed is the micromanaging of daily standups, which reshapes work into neat packages for conversation but kills non-linear flow and limits exploration, making things exactly as requested instead of what could be better.

chrismarlow9 · a year ago
What has also changed in my opinion is the vast landscape of tooling and frameworks at every level of the stack.

We now have containers that run in VMs that run on physical servers. And languages built on top of JavaScript and backed by the shadow DOM and blah blah. Now sure, I could easily skip all that and stick a static page on a CDN that points to a lambda and call it a day. But the layers are still there.

I'm afraid only more of this is coming with AI in full swing, and I fully expect a larger-scale internet outage to happen at some point, the result of a subtle bug in all this complexity that no single person can fix because AI wrote it.

There's too much stuff in the stack. Can we stop adding and remove some of it?

SoftTalker · a year ago
I never thought of a webmaster as meaning that. When I think of webmaster I think of a person who updates the content on a website using a CMS of some sort. Maybe they know a bit of HTML and CSS, how to copy assets to the web server, that sort of thing. But they are not sysadmins or programmers in the conventional sense. Maybe it just varies by employer.
neom · a year ago
This comment had me laughing pretty hard, and thanks for making me feel old. Anyway, I guess I'm a "webmaster" in theory, even tho I've not worked on the web since the early 2000s (still handy with a LAMP stack, tho). Made me laugh because: I was at a Supabase meetup recently and some kid told me he was full stack, but he can't ssh into a server? I'm supa confused what fullstack means these days.
ang_cire · a year ago
> Developers of the past

I'm only 36, but you're making me feel extremely old.

To me, "developers of the past" were the people working on COBOL and JCL and FORTRAN and DB2, on z/OS or System 390/370/360, to whom "RPG" was only a 4GL[1], not a type of game, and there was no webmaster or graphic designer involved... not some dotcom era dev in the 90s when "webmasters" became a widespread thing.

Here's an interesting article on webmasters and their disappearance[2].

1: https://en.wikipedia.org/wiki/IBM_RPG

2: https://thehistoryoftheweb.com/postscript/what-happened-to-t...

moi2388 · a year ago
From my experience it’s agile/scrum being poorly implemented.

So many companies no longer think about quality or design. “Just build this now and ship it, we can modify it later”, not thinking about the ramifications.

No one thinks about design at all anymore; then there's tech debt, but no sprints allocated to mitigate it.

Viliam1234 · a year ago
Sometimes it feels like there is no planning... and as a consequence also no documentation. Or maybe there are tons of outdated documents, because the project is so agile that it was redesigned a few dozen times, but instead of updating the original documents and diagrams, the architects always only produced a diff ("change this to this"), and those diffs are randomly placed in Confluence, most of them as attached Word or PowerPoint documents.

"But developers hate to write documentation" they say. Okay genius, so why don't you hire someone who is not a developer, someone who doesn't spend their entire time sprinting from one Jira task to another, someone who could focus on understanding how the systems work and keeping the documents up to date. It would be enough to have one such person for the entire company; just don't also give them dozen extra roles that would distract them from their main purpose. "Once we hired a person like this, but they were not competent for their job and everyone complained about docs, so we gave up." Yeah, I suspect you hired the cheapest person available, and you probably kept giving them extra tasks that were always higher priority than this. But nice excuse. Okay, back to having no idea how anything works, which is not a problem because our agile developers can still somehow handle it, until they burn out and quit.

Deleted Comment

yieldcrv · a year ago
Partially agree

It is so much easier to deploy now (and for the last 5-10 years) without managing an actual server and OS

It just gets easier, with new complexities added on top

In 2014 I was enamored that I didn’t need to be a DBA because platforms as a service were handling all of it in a NoSQL kind of way. And exposed intuitive API endpoints for me.

This hasn’t changed, at worst it was a gateway drug to being more hands on

I do fullstack development because it's just one language; I do devops because it's not a fulltime job and CloudFormation scripts and further abstraction are easyish; I can manage the database, and I haven't gotten vendor-locked.

You don't have to wait 72 hours for DNS and domain assignments to propagate anymore; it's like 5 minutes. SSL is free and takes 30 minutes tops to be added to your domain, and CDNs are included. Over 10 years ago this was all so cumbersome.

rgblambda · a year ago
At my company, our sprint board is copy/pasted across teams, so there's columns like "QA/Testing" that just get ignored because our team has no testers.

There are also no platform engineers, but IaC has gotten so good that arguably they've become redundant. Architecture decisions get made on the fly by team members rather than by the Software Architect, who only shows up now and again to point out something trivial. There's no Product Owner, so again the team works out the requirements and writes the tickets (ChatGPT can't help there).

spacecadet · a year ago
This is happening across most industries. My friends in creative fields are experiencing the same "optimizations". I call this the great lie of productivity. Workers (particularly tech workers) have dreamed of reducing their time spent on "shit jobs" while amplifying their productivity, so that we can spend more time doing what is meaningful to us... In reality, businesses sell out to disconnected private equity and chase "growth" just to boost the wealth of the top. Ultimately this has played out as "optimizing" the workforce by reducing headcount and spreading workers over more roles.
ang_cire · a year ago
Yep, it's 100% MBA-bros who don't understand the actual technical foundations of their companies "optimizing" away their specialized knowledge workers. They get stuck in fire-and-hire cycles of boom-and-bust, and tend to slowly degrade into obscurity.
ThinkBeat · a year ago
I think this is a valid and important point.

You can get people who are average at several things and categorize them as DevOps. But you will not get someone with a deep background and understanding in all the fields at the same time.

I come from a back end background. and I appalled at how little the devOps I have worked with know about even SQL.

Having teams with people who are experts at different things will give a lot better output. It will be more expensive.

Most devOps I have met, with a couple of exceptions, are front end devs who knows a couple of Javascript, knows Javascript and Typescript. When it comes to the back-end it is getting everything possible form npm and stringing it together.

badpun · a year ago
In every team I've worked with, DevOps didn't do any of the development, despite being called DevOps. The only thing they were developing was automation for builds, tests, and deployment. Other than that, they're still a separate role from development. The main difference between pre-DevOps days and now is that operations people used to work in dedicated operations teams (still the case in some highly regulated places, e.g. banks), whereas now they work alongside developers.
stuckkeys · a year ago
Sadly, it is an ongoing trend. I have become so discouraged by this subject that I am re-evaluating my career choices. Farming is the safest bet for now.
malfist · a year ago
Have you actually done farming? I grew up on a farm. It's nowhere close to a safe bet.

It's capital-intensive, high-risk, hard work with low margins. Not at all like Stardew.

smellybigbelly · a year ago
I wonder how you view farming as the safest bet. Farming is quite challenging, and the competition will drive any noob into the ground. It's not just the knowledge but also the capital.
huuhee3 · a year ago
Nursing is pretty safe too, simply due to demographics and limitations of robotics.
stuckinhell · a year ago
i'm worried this is the case as well
stogot · a year ago
Anyone who says they are the "devsecops" person is doing it wrong. Companies should still hire for all three roles; they just collaborate together. Someone sold that company a mountain of lies.
KronisLV · a year ago
> Now you have new devs coming into insanely complex n-microservice environments, being asked to learn the existing codebase, being asked to learn their 5-tool CI/CD pipelines (and that ain't being taught in school), being asked to learn to be DBAs, and also to keep up a steady code release cycle.

If fewer people need to undertake more roles, I think the simplest things you can get away with should be chosen, yet for whatever reason that's not what's happening.

Need a front end app? Go for the modern equivalent of jQuery/Bootstrap, e.g. something like Vue, Pinia and PrimeVue (you get components out of the box, you can use them, you don't have to build a whole design system, if needed can still do theming). Also simpler than similar setups with Vuex or Redux in React world.

Need a back end app? A simple API only project in your stack of choice, whether that's Java with Dropwizard (even simpler than Spring Boot), C# with ASP.NET (reasonably simple out of the box), PHP with Laravel, Ruby with Rails, Python with Flask/Django, Node with Express etc. And not necessarily microservices but monoliths that can still horizontally scale. A boring RESTful API that shuffles JSON over the wire, most of the time you won't need GraphQL or gRPC.
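To be concrete about how small "a boring RESTful API that shuffles JSON" can start out, here's a stdlib-only Python sketch (the `/items` endpoint and data are invented for illustration; any of the frameworks above would shrink this further):

```python
import json

# Toy in-memory store standing in for a real database.
ITEMS = {1: {"name": "widget"}}

def app(environ, start_response):
    """A deliberately boring JSON-over-HTTP endpoint: GET /items/<id>."""
    path = environ.get("PATH_INFO", "")
    parts = path.strip("/").split("/")
    if len(parts) == 2 and parts[0] == "items" and parts[1].isdigit():
        item = ITEMS.get(int(parts[1]))
        if item is not None:
            start_response("200 OK", [("Content-Type", "application/json")])
            return [json.dumps(item).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# To actually serve it:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

A framework mostly adds routing, validation, and error handling on top of exactly this shape, which is why a monolith of such endpoints stays easy to reason about.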

Need a database? PostgreSQL is pretty foolproof, MariaDB or even SQLite can also work in select situations. Maybe something like Redis/Valkey or MinIO/SeaweedFS, or RabbitMQ for specific use cases. The kinds of systems that can both scale, as well as start out as a single container running on a VPS somewhere.

Need a web server? Nginx exists, Caddy exists, as does Apache2.

Need to orchestrate containers? Docker Compose (or even Swarm) still exist, Nomad is pretty good for multi node deployments too, maybe some relatively lightweight Kubernetes clusters like K3s with Portainer/Rancher as long as you don't go wild.

CI/CD? Feed a Dockerfile to your pipeline, put the container in Nexus/Artifactory/Harbor/Hub, call a webhook to redeploy, let your monitoring (e.g. Uptime Kuma) make sure things remain available.

Architectures that can fit in one person's head. Environments where you can take every part of the system and run it locally in Docker/Podman containers on a single dev workstation. This won't work for huge projects, but very few actually have projects that reach the scale where this no longer works.

Yet this is clearly not what's happening, and that puzzles me. If we don't have 20 different job titles involved in a project, then the complexity covered by a "GlassFish app server configuration manager" position shouldn't be there in the first place. (I once had a project like that: there was supposed to be a person whose job was configuring the app server for deployments, until people just went with embedded Tomcat inside deployable containers and that complexity suddenly dissipated.)

agumonkey · a year ago
DevOps is a great idea, but if the design/engineering part is not actually easier, it ends up as additional mental effort.
skinney6 · a year ago
Remember 10x? "Are you a 10x developer?!" lol

dimal · a year ago
It matters what you measure. The studies only looked at Copilot usage.

I’m an experienced engineer. Copilot is worse than useless for me. I spend most of my time understanding the problem space, understanding the constraints and affordances of the environment I’m in, and thinking about the code I’m going to write. When I start typing code, I know what I’m going to write, and so a “helpful” Copilot autocomplete is just a distraction for me. It makes my workflow much, much worse.

On the other hand, AI is incredibly useful for all of those steps I do before actually coding. And sometimes getting the first draft of something is as simple as a well-crafted prompt (informed by all the thinking I’ve done prior to starting). After that, pairing with an LLM to get quick answers to all the little unexpected things that come up is extremely helpful.

So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.

brandall10 · a year ago
Copilot isn't particularly useful. At best it comes up with small snippets that may or may not be correct, and rarely can I get larger chunks of code that work out of the gate.

But Claude Sonnet 3.5 with Cursor or Continue.dev is a dramatic improvement. When you have direct control over the context (i.e. being able to select the 6-7 files to inject), combined with Claude's superior ability, it is an absolute game changer.

Easy 2-5x speedup depending on what you're doing. In an hour you can craft a production-ready 100 LOC solution, with a full complement of tests, for something that might otherwise take half a day.

I say this as someone with 26 yoe, having worked in principal/staff/lead roles since 2012. I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.

throwup238 · a year ago
> I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.

Agreed. I feel like coding with AI is distilling the process back to the CS fundamentals of data structures and algorithms. Even though most of those DS&As are very simple it takes experience to know how to express the solution using the language of CS.

I've been using Cursor Composer to implement code after writing some function signatures and types, which has been a dream. If you give it some guardrails in the context, it performs a lot better.
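The "write signatures and types first" approach might look like this sketch: the typed stub and docstring act as the guardrails, and the body is the part the assistant fills in (the `Order`/`top_customers` names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer_id: int
    total_cents: int

def top_customers(orders: list[Order], n: int) -> list[int]:
    """Return the ids of the n customers with the highest combined order totals.

    This body is the kind of thing an assistant can fill in reliably once
    the signature, types, and docstring pin down the intent.
    """
    totals: dict[int, int] = {}
    for order in orders:
        totals[order.customer_id] = totals.get(order.customer_id, 0) + order.total_cents
    # Rank customer ids by their accumulated totals, highest first.
    ranked = sorted(totals, key=lambda cid: totals[cid], reverse=True)
    return ranked[:n]
```

The types constrain the search space enough that a wrong completion tends to fail visibly rather than subtly.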

CMCDragonkai · a year ago
Which do you prefer Cursor or Continue.dev?
snissn · a year ago
Human + AI writing tests >> human writing tests
maxrecursion · a year ago
For me, AI is like a documentation/Googlefu accelerant. There are so many little things that I know exactly what I want to do, but can't remember the syntax or usage.

For example, writing IaC especially for AWS, I have to look up tons of stuff. Asking AI gets me answers and examples extremely fast. If I'm learning the IaC for a new service I'll look over the AWS docs, but if I just need a quick answer/refresher, AI is much faster than going and looking it up.

cyrialize · a year ago
This is exactly how I think of it as well.

Search is awful when you can't remember the exact term with your language/framework/technology - but highlighting code and asking AI helps out a ton.

Before, I'd search over and over fine-tuning my search until I get what I want. Tools like copilot make that fine-tuning process much shorter.

foobarian · a year ago
I find that for AWS IaC specifically, with a high pace of releases and a ton of versions dating back more than a decade, the AI answers are a great springboard but require a bit of care to avoid mixing APIs.
jiiam · a year ago
My experience with IaC output is that it's so broken as to be not only unhelpful but actively harmful.
dfee · a year ago
Contrarian take: I feel that copilot rewards me for writing patterns that it can then use to write an entire function given a method signature.

The more you lean into functional patterns: design some monads, don’t do I/O except at the boundaries, use fluent programming, then it’s highly effective.

This is all in Java, for what it’s worth. Though, I’ll admit, I’m 3.5y into Java, and rely heavily on Java 8+ features. Also, heavy generic usage in my library code gives a lot of leash to the LLM to consistently make the right choice.

I don’t see these gains as much when using quicker/sloppier designs.
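That separation can be sketched quickly. A minimal illustration of "pure core, I/O only at the boundaries" (shown in Python rather than Java for brevity; the payment records and function names are invented):

```python
import json

def summarize(payments: list[dict]) -> dict:
    """Pure core: no I/O, deterministic, easy to test, and easy for an
    assistant to complete correctly from the signature alone."""
    totals: dict[str, int] = {}
    for p in payments:
        totals[p["currency"]] = totals.get(p["currency"], 0) + p["amount_cents"]
    return {"currencies": sorted(totals), "totals": totals}

def summarize_file(path: str) -> dict:
    """Thin imperative shell: the only function that touches the filesystem."""
    with open(path) as f:
        return summarize(json.load(f))
```

The payoff is that both review and generation concentrate on the pure function, while the I/O wrapper stays trivially small.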

Would love to hear more from true FP users (Haskell, OCaml, F#, Scala).

scruple · a year ago
I used the Copilot trial. I found myself waiting to see what it would come up with, analyzing it, and most of the time throwing it away for my own implementation. I quickly realized how much of a waste of time it was. I did find use for it in writing unit tests, especially table-driven testing boilerplate, but that's not enough to maintain a paid subscription.
4b11b4 · a year ago
copilot isn't a worthwhile example
dclowd9901 · a year ago
I think my experience mirrors your own. We have access at my job but I’ve turned it off recently as it was becoming too noisy for my focus.

I found the tool to be extremely valuable when working in unfamiliar languages, or when doing rote tasks (where it was easy for me to identify if the generated code was up to snuff or not).

Where I think it falters for me is when I have a very clear idea of what I want to do, and it's _similar_ to a bog-standard implementation, but I'm doing something a bit more novel. This tends to happen in "reduce"s or other more nebulous procedures.

As I’m a platform engineer though, I’m in a lot of different spaces: Bash, Python, browser, vanilla JS, TS, Node, GitHub actions, Jenkins Java workflows, Docker, and probably a few more. It gives my brain a break while I’m context switching and lets me warm up a bit when I move from area to area.

memorylane · a year ago
> (where it was easy for me to identify if the generated code was up to snuff or not).

I think you have nailed it with this comment. I find copilot very useful for boilerplate - stuff that I can quickly validate.

For stuff that is even slightly complicated, like a simple if-then-else, I have wasted hours tracking down a subtle bug introduced by Copilot (and by me not checking it properly).

For hard stuff it is faster and more reliable for me to write the code than to validate Copilot's code.

mlinhares · a year ago
The fact that Copilot hallucinates methods/variables/classes that do not exist, in compiled languages where it could know they do not exist, is just unbelievable to me.

It really feels like the people building the product do not care about the UX.

huijzer · a year ago
> So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.

A psychology professor I know says this holds in general. For any new tool, who will be able to get the most benefit out of it: someone with a lot of skill already, or someone with less? With less skill, there is even a chance that the tool has a negative effect.

Sakos · a year ago
I only use Copilot and Claude for boilerplate and, honestly, the mechanical part of writing code. I don't use them to come up with solutions. I'll do my thing understanding the problem, figuring out a solution, etc., and once I've done everything to ensure I know what needs to be written, I use AI to do most of the typing. It saves a hell of a lot of time.
kbaker · a year ago
Yeah, Copilot is meh. Aider-chat for things with GPT-4 earlier this year was a huge step up.

But recently using Claude Sonnet + Haiku through OpenRouter also with aider, and it is like a new dimension of programming.

Working on new projects in Rust and a separate SPA frontend, it just ... implements whatever you ask like magic. Gets it about 90-95% right at the first prompt. Since I am pretty new to Rust, there are a lot of idiomatic things to learn, and lots of std convenience functions I don't yet know about, but the AI does. Figuring out the best prompt and context for it to be effective is now the biggest task.

It will be crazy to see where things go over the next few years... do all junior programmers just disappear? Do all programmers become prompt engineers first?

risyachka · a year ago
That's the point: the less experienced you are, the more gains you see, and vice versa.

The issue in the first case is that you have no idea if it's telling you good stuff or garbage.

Also, it shines in simple projects; when the project is more complex, it becomes mostly useless.

ugh123 · a year ago
One thing I've been doing more of lately with Copilot is using prompts directly in a // comment. I distinguish this from writing a detailed comment doc about a function and then letting Copilot write the whole function: there's "inline prompting" and "function prompting".
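To illustrate the distinction (Python here; the comment lines are the prompts, and the bodies are the kind of completion a tool tends to produce, not verbatim Copilot output):

```python
# "Function prompting": a detailed comment above a stub, from which the
# whole body is generated.
# Given log lines that start with their level (e.g. "ERROR disk full"),
# return only those at WARNING level or above.
def filter_important(lines: list[str]) -> list[str]:
    levels = ("WARNING", "ERROR", "CRITICAL")
    return [line for line in lines if line.split(" ", 1)[0] in levels]

# "Inline prompting": a short comment mid-function nudging the next line or two.
def normalize(name: str) -> str:
    # strip whitespace, collapse internal runs of spaces, lowercase
    return " ".join(name.split()).lower()
```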
scotty79 · a year ago
I noticed that AI is a bit like having a junior dev in a time capsule. It won't solve your problem, but it can Google, find stuff, and write simple things, all of which you'd otherwise be forced to do yourself. And it does it in minutes rather than weeks or months.
danielmarkbruce · a year ago
Just for clarity: are you saying that going back and forth with ChatGPT is more useful than Copilot? I ask because I have both, and 95% of the benefit is from ChatGPT.
heed · a year ago
I like to use copilot when writing tests. It's not always perfect but makes things less tedious for me.
eric_h · a year ago
I recently switched to Cursor and am in the process of wrangling an inherited codebase that had effectively no tests. Cursor has saved me _hours_. It's generally terrible at actual code refactoring, but it has saved me a great deal of typing while adding the missing test coverage.
lagrange77 · a year ago
Excuse my ignorance, I have avoided Copilot until now.

Does it have (some of) the other files of the project in its context when you use it in a test file?

KaoruAoiShiho · a year ago
Canceled copilot, using cursor now
ein0p · a year ago
I wish I worked at a place where it’d be enough for me to “understand the problem space” as I pull down seven figures. But those bastards also want me to code, and Copilot at least helps with the boilerplate
hereme888 · a year ago
But despite the theory/wish that "if experienced developers use AI well" they'd benefit more, at present inexperienced developers are benefiting more, which is what the study found.
alternatex · a year ago
I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs contributed their AI-driven efforts, because my personal experience has involved a lot of that at one of the companies listed in the study.

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

nerdjon · a year ago
I am curious about this also. I have now seen multiple PRs to review where a method was clearly completely modified by AI for no good reason, and when we asked why something was changed we just got silence. Not exaggerating, literal silence, and then an attempt to ignore the question and explain the thing we first asked them to do. They clearly had no idea what was actually in the PR.

This was done because we asked for a minor change (maybe 5 lines of code) to be made and tested. So now not only are we dealing with new debt, we are dealing with code that no one can explain why it was completely changed (some of the changes were change for the sake of change), and those of us who manage this code are now looking at completely foreign code.

I keep seeing this with people using these tools who are not higher-level engineers. We finally got to the point of denying these PRs and telling them to go back and do it again, losing any of the time that was theoretically gained by doing it that way in the first place.

Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.

okwhateverdude · a year ago
> Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.

It is worse than that. We're all maintaining in our heads the mental sand castle that is the system the code base represents. The abuse of the autocoder erodes that sand castle because the intentions of the changes, which are crucial for mentally updating the sand castle, are not communicated (because they are unknowable). This is same thing with poor commit messages, or poor documentation around requirements/business processes. With enough erosion, plus expected turn over in staff, the sand castle is actually gone.

jonnycomputer · a year ago
When using code assist, I've occasionally found some perplexing changes to my code I didn't remember making (and wouldn't have made). Can be pretty frustrating.
sgarland · a year ago
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

Thank you, this says what I have been struggling to describe.

The day I lost part of my soul was when I asked a dev if I could give them feedback on a DB schema, they said yes, and then cut me off a few minutes in with, “yeah, I don’t really care [about X].” You don’t care? I’m telling you as the SME for this exactly what can be improved, how to do it, and why you should do so, but you don’t care. Cool.

Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out. I’m not even talking about doing micro-benchmarks (though you should…), I’m talking about dead-simple stuff like “maybe use this data structure instead of that one.”
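For a flavor of how cheap that "dead-simple stuff" is, the classic example is membership testing against a list versus a set (illustrative only; the numbers vary by machine, and this is not a benchmark of any real system):

```python
import timeit

ids_list = list(range(100_000))
ids_set = set(ids_list)

# Same question, two data structures: "is this id known?"
list_time = timeit.timeit(lambda: 99_999 in ids_list, number=100)
set_time = timeit.timeit(lambda: 99_999 in ids_set, number=100)

# A list scans linearly (O(n)); a set hashes to its answer (O(1) average).
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

That one-line change is exactly the kind of schema/data-structure feedback that gets waved off as not mattering "because we can scale out".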

jcgrillo · a year ago
In a similar vein, some days I feel like a human link generator into e.g. postgres or kafka documentation. When docs are that clear, refined, and just damn good but it seems like nobody is willing to actually read them closely enough to "get it" it's just a very depressing and demotivating experience. If I could never again have to explain what a transaction isolation level is or why calling kafka a "queue" makes no sense at all I'd probably live an extra decade.

At the root of it, there's a profound arrogance in putting someone else in a position where they are compelled to tell you you're wrong[1]. Curious, careful people don't do this very often because they are aware of the limits of their knowledge and when they don't know something they go find it out. Unfortunately this is surprisingly rare.

[1] to be clear, I'm speaking here as someone who has been guilty of this before, now regrets it, and hopes to never do it again.

noisy_boy · a year ago
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

They are/will be management's darlings, because management too is all about delivering without any interest in technology.

Well-designed technology isn't seen as a foundation anymore; it is merely a tool to keep the machine running. If parts of the machine are damaged by the lack of judgement in the process, that shouldn't get in the way of this year's bonus; it'll be something to worry about in the next financial year. Nobody knows what's going to happen in the long term anyway; make hay while the sun shines.

The age of short-term is upon us.

dartos · a year ago
I’ve noticed this more and more since 2020.

A lot of people who entered the field in the past 6 or so years are here for the money, obviously.

Nothing wrong with that at all, but as someone with a long time programming and technology passion, it’s sad to see that change.

carlmr · a year ago
>Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out.

Similarly, Docker is an amazing technology, yet it enabled the dependency towers of Babel that we have today. It enabled developers who don't care about cleaning up their dependencies.

Kubernetes is amazing technology, yet it enabled developers who don't care to ship applications that constantly crash, because who cares, Kubernetes will automatically restart everything.

Cloud and now AI are similar enabler technologies. They could be used for good, but there are too many people that just don't care.

fer · a year ago
Disclaimer: I work at a company who sells coding AI (among many other things).

We use it internally and the technical debt is an enormous threat that IMO hasn't been properly gauged.

It's very very useful to carpet bomb code with APIs and patterns you're not familiar with, but it also leads to insane amounts of code duplication and unwieldy boilerplate if you're not careful, because:

1. One of the two big biases of the models is that the training data is StackOverflow-type data: examples that don't take context and constraints into account.

2. The other is the existing codebase, and there it tends to copy/repeat things instead of suggesting that you refactor.

The first is mitigated by, well, doing your job and reviewing/editing what the LLM spat out.

The second can only be mitigated once diffs/commit history become part of the training data, and that's a much harder dataset to handle and tag: some changes are good (refactorings) but others might not be (bugs that get corrected in subsequent commits), and there's no clear distinction, since commit messages are effectively lies (nobody ever writes "bug introduced").

Not only that, merges/rebases/squashes alter/remove/add spurious meanings to the history, making everything blurrier.

dboreham · a year ago
Consider myself very fortunate to have lived long enough that I'm reading a thread where the subject is the quality of the code generated by software. Decades of keeping that lollypop ready to be given, and now look where we are!
trashtester · a year ago
If you're in a company that is valued at $10M+ per developer, the technical debt is not a big concern.

Either you will go bust, OR you will be able to hire enough people to pay those debts, once you get traction in the market.

disconcision · a year ago
> (nobody ever writes: bug introduced)

it's usually written "feature: ..."

acedTrex · a year ago
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

Bingo, this, so much this. Every dev I know who loves AI stuff is a dev I had very little technical respect for pre-AI. They got some stuff done, but there was no craft or quality to it.

Delk · a year ago
For what it's worth, a (former) team mate who was one of the more enthusiastic adopters of gen AI at the time was in fact a pretty good developer who knew his stuff and wrote good code. He was also big on delivering and productivity.

In terms of directly generating technical content, I think he mostly used gen AI for more mechanical stuff such as drafting data schemas or class structures, or for converting this or that to JSON, and perhaps not so much for generating actual program code. Maybe there's a difference to someone who likes to have lots of program logic generated.

simonw · a year ago
I’m a developer who loves using AI, and I’m confident that I was a capable developer producing high quality code long before I started using AI tools.
chasd00 · a year ago
I'm certainly skeptical of genAI, but your argument sounds a lot like assembler devs' feelings towards C++ devs back in the day.
trashtester · a year ago
> was a dev that I had very little technical respect for pre AI

There are two interpretations of this:

1) Those people are imposters

2) It's about you, not them

I've been personally interested in AI since the early 80's, neural nets since the 90's, and vigilant about "AI" since Alexnet.

I've also been in a tech lead role for the past ~25 years. If someone is talking about newer "AI" models in a nonsensical way, I cringe.

wahnfrieden · a year ago
You're playing status games
Workaccount2 · a year ago
SWE has a huge draw because, frankly, it's not that hard to learn programming, and the bar to clear in order to land a $100-120k work-from-home salary is pretty low. I know more than a few people who career-hopped into software engineering after a lackluster non-tech career (that they paid through the nose to get a degree for, but were still making $70k after 5 years). By and large these people seem to just not be "into it" and, like you said, are more about delivering than actually making good products/services.

However, it does look like LLMs are racing to make these junior devs unnecessary.

anonzzzies · a year ago
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

I found this too. But I also found the opposite, including here on HN: people who are interested in technology have almost an aversion to using AI. I personally love tech and I would, and do, write software for fun, but even that is objectively more fun for me with AI. It makes me far more productive (much more than what the article states) and, more importantly, it removes the procrastination: whenever I am stuck or procrastinating about starting, I start talking with Aider, and before I know it another task is done that I probably wouldn't have finished that day otherwise.

That way I now launch open and closed source projects every two weeks, where before that would take months to years. And the cost of having this team of fast, experienced devs sitting with me is at most a few dollars per day.

skydhash · a year ago
> people who are interested in technology have almost an aversion against using AI

Personally, I don't use LLMs. But I don't mind people using them as interactive search engines or for code/text manipulation, as long as they're aware of the hallucination risks and take care with what they're copying into the project. My reason is mostly that I'm a journey guy, not a destination guy, and I love reading books and manuals, as they give me an extensive knowledge map. Using LLMs feels like taking guidance from someone who has never ventured 1 km outside their village but has heard descriptions from passersby. Too much vigilance is required for the occasional good stuff.

And the truth is, there are a lot of great books and manuals out there. While they teach you how to do stuff, they often also teach you why you should not do it. I strongly doubt Copilot imparts architectural and technical reminders alongside the code.

trashtester · a year ago
> people who are interested in technology have almost an aversion against using AI

I wonder if this is an age thing, for many people. I'm old enough to have started reading these discussions on Slashdot in the late 90s.

But between 2000 and 2010, Slashdot changed and became much less open to new ideas.

The same may be happening to HN right now.

It feels like a lot of people are stuck in the tech of 10 years ago.

andybak · a year ago
Yeah. There is a psychological benefit to using AI that I find very valuable: a lot of tasks that I would have avoided or procrastinated on suddenly become tractable. I think Simon Willison said something similar.
godelski · a year ago

> I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts.

It does not.

You may also find this post from the other day more illuminating [0], as I believe the actual result strongly hints at what you're guessing. The study is high schoolers doing math: while GPT only has an 8% error rate on the final answer, it gets the steps wrong half the time. And with coding (like math), the steps are the important bits.

But I think people evaluate very poorly when the metrics are ill-defined but some metric exists; they overinflate its value since it's concrete. Completing a ticket doesn't mean you made progress. Introducing technical debt would mean taking a step back: a step forward in a very specific direction, but away from the actual end goal. You're just outsourcing work to a future person, and I think we like to pretend this doesn't exist because it's hard to measure.

[0] https://news.ycombinator.com/item?id=41453300

kraftman · a year ago
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

Is this a bad thing? Maybe I'm misunderstanding it, but even when I'm working on my own projects, I'm usually trying to solve a problem, and the technology is a means to the end of solving that problem (delivering). I care that it works and is maintainable; I don't care that much about the technology.

never_inline · a year ago
Read it as "closing a ticket" and it makes sense. They don't care if the code explodes after the sprint.
layer8 · a year ago
Caring that it works and is maintainable is caring about the technology.
mrighele · a year ago
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

For them programming is a means to an end, and I think that is fine, in a way. But you cannot just ask an AI to write you a TikTok clone and expect to get the finished product. Writing software is an iterative process, and the LLMs currently used are not good enough for that, because they need not only to answer questions but, at the very minimum, to start asking them: "why do you want to do that?", "do you prefer this or that?", and so on, so that they can actually extract all the specification details the user happily didn't even know he needed before producing an appropriate output. (It's not too different from how some independent developers have to handle their clients, is it?) Probably we will get there, but not too soon.

I also doubt that current tools can keep a project architecturally sound long-term, but that is just a hunch.

I admit, though, that I may be biased because I don't much like tools like Copilot: when I write software, I have in my mind a model of the software that I am writing or want to write, the AI has another model "in mind", and I need to spend mental energy understanding what it is "thinking". Even if 99 times out of 100 it is what I wanted, the remaining 1% is enough to hold me back from trusting it. Maybe I am using it the wrong way, who knows.

The AI tool that would work for me is a "voice-controlled, AI-powered pair programmer": I write my code, and from time to time I ask it how to do something, getting either a contextual answer based on the code I'm working on or, if I wish, the actual code. Are there already plugins that work that way for VS Code/IDEA/etc.?

liminalsunset · a year ago
I've been playing with Cursor (albeit with a very small toy codebase) and it does seem like it could do some of what you said - it has a number of features, not all of which necessarily generate code. You can ask questions about the code, about documentation, and other things, and it can optionally suggest code that you can either accept, accept parts of, or decline. It's more of a fork of vscode than a plugin right now though.

It is very nice in that it gives you a handy diffing tool before you accept, and it very much feels like it puts me in control.

anonzzzies · a year ago
> had to tackle after the less experienced devs have contributed their AI-driven efforts.

So, like before AI then? I haven't seen AI deliver illogical nonsense that I couldn't even decipher like I have seen some outsourcing companies deliver.

nottorp · a year ago
> I haven't seen AI deliver illogical nonsense

I have. If you're doing more niche stuff it doesn't have enough data and hallucinates. The worst is when it spits out two screens of code instead of 'this cannot be done at the level you want'.

> that I couldn't even decipher

That's unrelated to code quality. Especially with C++, which has become as write-only as Perl.

trashtester · a year ago
> that I couldn't even decipher

This is one of the challenges of being a tech lead. Sometimes code is hard to comprehend.

In my experience, AI delivered code is no worse than entry level developer code.

htrp · a year ago
AI seems to give more consistent results than outsourcing (not necessarily better, but at least more predictable failure modes)
prophesi · a year ago
From my experience, most of it is quickly caught in code review. And after a while it occurs less and less, granted that the junior developer puts in the effort to learn why their PRs aren't getting approved.

So, pretty similar to how it was before. Except that motivated junior developers will improve incredibly fast. But that's also kind of always been the case in software development these past two decades?

YeGoblynQueenne · a year ago
It's interesting to see your comment currently right next to that of nerdjon here:

https://news.ycombinator.com/item?id=41465827

(the two comments might have moved apart by the time you read this).

Edit: yep, they just did.

jonnycomputer · a year ago
Code quality is the hardest thing to measure. Seems like they were measuring commits, pull-requests, builds, and build success rate. This sort of gets at that, but is probably inadequate.

The few attempts I've made at using genAI to make large-scale changes to code have been failures, and left me in the dark about the changes that were made in ways that were not helpful. I needed suggestions to be in much smaller chunks. paragraph sized. Right now I limit myself to using the genAI line completion suggestions in Pycharm. It very often guesses my intentions and so actually is helpful, particularly when laboriously typing out lots of long literals, e.g. keys in a dictionary.

collyw · a year ago
WTFs per minute is the standard way of measuring code quality. Lower being better.
elric · a year ago
I don't remember who said it, but "AI generated code turns every developer into a legacy code maintainer". It's pithy and a bit of an exaggeration, but there's a grain of truth in there that resonates with me.
jacobsenscott · a year ago
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

You get what you measure. Nobody measures software quality.

wahnfrieden · a year ago
Maybe not at your workplaces, but at mine, we measured bugs, change failure rate, uptime, "critical functionality" uptime, regressions, performance, CSAT, etc. in addition to qualitative research on quality in-team and with customers
add-sub-mul-div · a year ago
Maintaining bad AI code is the new maintaining bad offshore code.
Kiro · a year ago
I don't think it's that clear cut. I personally think the AI often delivers a better solution than the one I had in mind. It always contains a lot more safe guards against edge cases and other "boring" stuff that the AI has no problem adding but others find tedious.
trashtester · a year ago
If you're building a code base where the AI is delivering the details, it's generally a bad thing if the AI-provided code adds safeguards WITHIN your code base.

Those kinds of safeguards should instead be part of the framework you're using. If you need to prevent SQL injection, you need to make sure that all access to the SQL database passes through a layer that prevents it. If you are worried about the security of your point of access (like an API facing the public), you need to apply safeguards as close to the point of entry as possible, and so on.

I'm a big believer in AI generated code (over a long horizon), but I'm not sure the edge case robustness is the main selling point.

noobermin · a year ago
It doesn't; it looks at PRs, commits, and builds only. They themselves remark on this lack of context.
creativeSlumber · a year ago
This. Even without AI, we have inexperienced developers rolling out something that "just works" without thinking about many of the scaling/availability issues. Then you have to spend 10x the time fixing those issues.
Sakos · a year ago
If inexperienced developers can commit code like that to master, then there's a culture issue, a process issue or both. This isn't a problem with AI.

Deleted Comment

EvkoGS · a year ago
>Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

You are not a fucking priest in the temple of engineering. Go to the fucking CS department at the local uni and preach it there. You are a worker at a company with customers, and it pays your salary out of the customers' money.

theflyinghorse · a year ago
> I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

If I don't deliver my startup burns in a year. In my previous role if I didn't deliver the people who were my reports did not get their bonuses. The incentives are very clear, and have always been clear - deliver.

rurp · a year ago
Successful companies aren't built in a sprint. I doubt there has ever been a successful startup that didn't have at least some competent people thinking a number of steps ahead. Piling up tech debt to hit some short-term arbitrary goals is not a good plan for real success.

Deleted Comment

infecto · a year ago
I hear this but I don't think this is a new issue that AI brought, it simply magnified it. That's a company culture issue.

It reminds me of a talk Raymond Hettinger put on a while ago about rearranging the flowers in the garden. There is a tendency from new developers to rearrange for no good reason, AI makes it even easier now. This comes down to a culture problem to me, AI is simply the tool but the driver is the human (at least for now).

Delk · a year ago
It's probably worth going a bit deeper into the paper before picking up conclusions. And I think the study could really do a bit of a better job of summarizing its results.

The abstract and the conclusion only give a single percentage figure (26.08% increase in productivity, which probably has too many decimals) as the result. If you go a bit further, they give figures of 27 to 39 percent for juniors and 8 to 13 percent for seniors.

But if you go deeper, it looks like there's a lot of variation not only by seniority, but also by the company. Beside pull requests, results on the other outcome measures (commits, builds, build success rate) don't seem to be statistically significant at Microsoft, from what I can tell. And the PR increases only seem to be statistically significant for Microsoft, not for Accenture. And even then possibly only for juniors, but I'm not sure I can quite figure out if I've understood that correctly.

Of course the abstract and the conclusion have to summarize. But it really looks like the outcomes vary so much depending on the variables that I'm not sure it makes sense to give a single overall number even as a summary. Especially since statistical significance seems a bit hit-and-miss.

edit: better readability

svnt · a year ago
To get a better picture of how this comes about: Microsoft has a study for their own internal product use, and wants to show its efficacy. The results aren’t as broadly successful as one would hope.

Accenture is the kind of company that cooperates and co-markets with large orgs like Microsoft. With ~300 devs in the pool they hardly move the population at all, and they cannot be assumed to be objective since they are building a marketing/consulting division around AI workflows.

The third anonymous company didn’t actually have a randomized controlled trial, so it is difficult to say how one should combine their results with the RCTs. Additionally, I am sure that more than one large technology company went through similar trials and were interested in knowing the efficacy of them. That is to say, we can assume other data exist than just those included in the results.

Why did they select these companies, from a larger sample set? Probably because Microsoft and Accenture are incentivized by adoption, and this third company was picked through p-hacking.

In particular, this statement in the abstract is a very bad sign:

> Though each separate experiment is noisy, combined across all three experiments

It is essentially an admission that individually, the companies don’t have statistically significant results, but when we combine these three (and probably only these three) populations we get significant results. This is not science.

Delk · a year ago
Yeah, it does seem fishy.

The third company seems a bit weird to include in other ways as well. In raw numbers in table 1, there seem to be exactly zero effects from the use of CoPilot. Through the use of their regression model -- which introduces other predictors such as developer-fixed and week-fixed effects -- they somehow get an estimated effect of +54%(!) from CoPilot in the number of PRs. But the standard deviations are so far through the roof that the +54% is statistically insignificant within the population of 3000 devs.

Also, they explain the introduction of the week fixed effect as a means of controlling for holidays etc., but to me it sounds like it could also introduce a lot of unwarranted flexibility into the model. But this is a part where I don't understand their methodology well enough to tell whether that's a problem or not.

I generally err towards the benefit of the doubt when I don't fully understand or know something, which is why I focused more on the presentation of the results than on criticizing the study and its methodology in general. I'd have been okay with the summary saying "we got an increase of 27.3% for Microsoft and no statistically significant results for other participants".

But perhaps I should have been more critical.

baxtr · a year ago
Re 26.08%: I immediately question any study (outside of physics etc) that provides two decimal places.
kkyr · a year ago
Why?
throwthrowuknow · a year ago
Personally, my feeling is that some of the difference is accounted for by senior devs who are applying their experience in code review and testing to the generated code and are therefore spending more time pushing back asking for changes, rejecting bad generations and taking time to implement tests to ensure the new code or refactor works as expected. The junior devs are seeing more throughput because they are working on tasks that are easier for the LLM to do right or making the mistake of accepting the first draft because it LGTM.

There is skill involved in using generative code models and it’s the same skill you need for delegating work to others and integrating solutions from multiple authors into a cohesive system.

arexxbifs · a year ago
My hunch - it's just a hunch - is that LLM-assisted coding is detrimental to one's growth as a developer. I'm fairly certain it can only boost productivity to a certain level - one which may be tedium for more senior developers, but formative for juniors.

My experience is that the LLM isn't just used for "boilerplate" code, but rather called into action when a junior developer is faced with a fairly common task they've still not (fully) understood. The process of experimenting, learning and understanding is then largely replaced by the LLM, and the real skill becomes applying prompt tweaks until it looks like stuff works.

JackMorgan · a year ago
For myself it's an incredible tool for learning. I learn both broader and deeper using chat tools. If anything it gives me a great "sounding board" for exploring a subject and finding additional resources.

E.g. last night I set up my first Linux RAID. A task that isn't too hard, but following a tutorial or just "reading the docs" isn't particularly helpful given that it takes a few different tools (mount, umount, fstab, blkid, mdadm, fdisk, lsblk, mkfs) and along the way things might not follow the exact steps from a guide. I asked dozens of questions about each tool and step, where previously I would have just "copy pasted and prayed".
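For reference, the kind of sequence those tools combine into looks roughly like this. The device names, RAID level, and paths below are assumptions (not from the comment), and the commands are destructive, so treat it as a sketch only:

```shell
# Hypothetical RAID-1 array over two spare disks (names assumed).
lsblk                                    # identify the member disks first
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
cat /proc/mdstat                         # watch the initial resync
sudo mkfs.ext4 /dev/md0                  # filesystem on top of the array
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid
# Persist across reboots (mdadm.conf path varies by distro):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0 /mnt/raid ext4 defaults 0 2' | sudo tee -a /etc/fstab
```

The point of the comment stands: no single man page covers this whole flow, which is exactly where a conversational guide helps.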

Two nights ago I was similarly able to fully recover all my data from a failed SSD, also using ChatGPT to guide my learning along the way. It was really cool to tackle a completely new skill with a "guide", even if it's wrong 20% of the time; that's way better than the average on the open Internet.

For someone who loves learning, it feels like thousand league boots compared to just endlessly sifting through internet crap. Of course everything it says is suspect, just like everything else on the Internet, but boy it cuts out a lot of the hassle.

manmal · a year ago
You've given two examples for "broad", but none for "deep". I've also used LLMs for setting up my homelab, and they were really helpful since I was basically at beginner level in most Linux admin topics (still am). But trying to e.g. set up automatic snapshot replication for my zfs pool had me go back to reading blog posts, as ChatGPT just couldn't provide a solution that worked for me.
Delk · a year ago
My approach is typically to follow some kind of a guide or tutorial and to look into the man pages or other documentation for each tool as I go, to understand what the guide is suggesting.

That's how I handled things e.g. when I needed to resize a partition and a filesystem in a LVM setup. Similarly to your RAID example, doing that required using a bunch of tools on multiple levels of storage abstraction: GPT partitions, LUKS tools, LVM physical and logical volumes, file system tools. I was familiar with some of those but didn't remember the incantations by heart, and for others I needed to learn new tools or concepts.
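The resize described there has a strict layer ordering. As a rough sketch of the growing direction (all device and volume names here are hypothetical, and shrinking must be done in the reverse order and is riskier):

```shell
# Growing a filesystem sitting on GPT -> LUKS -> LVM -> ext4.
# Each layer is enlarged outermost-first (names are made up).
sudo growpart /dev/sda 2                  # 1. grow the GPT partition
sudo cryptsetup resize cryptroot          # 2. grow the LUKS mapping
sudo pvresize /dev/mapper/cryptroot       # 3. grow the LVM physical volume
sudo lvextend -l +100%FREE /dev/vg0/root  # 4. grow the logical volume
sudo resize2fs /dev/vg0/root              # 5. grow ext4 itself
```

Each command here operates on the abstraction directly beneath it, which is why reading the man page for each layer as you go, as described above, maps so well onto the actual work.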

I think I use a similar approach in programming when I'm getting into something I'm not quite familiar with. Stack Overflow answers and tutorials help give the outline of a possible solution. But if I don't understand some of the details, such as what a particular function does, I google them, preferring to get the details either from official documentation or from otherwise credible-sounding accounts.

iLoveOncall · a year ago
Okay but this is not programming, more about an enhanced tutorial. This isn't at all what the original commenter is talking about.
DanHulton · a year ago
I was really hoping this study would be exploring that. Really, it's examining short-term productivity gains, ignoring long-term tech debt that can occur, and _completely_ ignoring effects on the growth of the software developers themselves.

I share your hunch, though I would go so far as to call it an informed, strong opinion. I think we're going to pay the price in this industry in a few years, where the pipeline of "clueful junior software developers" is gonna dry way up, replaced by a firehose of "AI-reliant junior software developers", and the distance between those two categories is a GULF. (And of course, it has a knock-on effect on the number of clueful intermediate software developers, and clueful senior software developers, etc...)

tensor · a year ago
I think it really depends on the users. The same people who would just paste stackoverflow code until it seems to work and call it a day will abuse LLMs. However, those of us who like to know everything about the code we write will likely research anything an LLM spits out that we don't know about.

Well, at least that's how I use them. And to throw a counter to your hypothesis, I find that sometimes the LLM will use functions or library components that I didn't know of, which actually saves me a lot of time when learning a new language or toolkit. So for me, it actually accelerates learning rather than retarding it.

diob · a year ago
I think it can be abused like anything else (copy paste from stack overflow until it works).

But for folks who are going to be successful with or without it, it's a godsend in terms of being able to essentially ask stack overflow questions and get immediate non judgemental answers.

Maybe not correct all the time, but that was true with stack overflow as well. So as always, it comes back to the individual.

Workaccount2 · a year ago
With the progress LLMs have been making in the last two years, is it actually a bad bet not to want to really get into them?

How many contemporary developers have no idea how to write machine code, when 50 years ago it was basically mandatory if you wanted to write anything?

Are LLMs just going to become another abstraction crutch turned abstraction solid pillar?

arexxbifs · a year ago
Abstraction is beneficial and profitable up to a certain point, after which upkeep gets too hard or expensive, and knowledge dwindles into a competency crisis - for various reasons. I'm not saying we are at that point yet, but it feels like we're closing in on it (and not just in software development). 50 years ago isn't even 50 years ago anymore, if you catch my drift: In 1974, the real king of the hill was COBOL - a very straight-forward abstraction.

I'm seeing a lot of confusion and frustration from beginner programmers when it comes to abstraction, because a lot of abstractions in use today just incur other kinds of complexity. At a glance, React for example can seem deceptively easy, but in truth it requires understanding of a lot of advanced concepts. And sure, a little knowledge can go a long way in E.G. web development, but to really write robust, performant code you have to know a lot about the browser it runs in, not unlike how great programmers of yesteryear had entire 8-bit machines mapped out in their heads.

Considering this, I'm not convinced the LLM crutch will ever solidify into a pillar of understanding and maintainable competence.

jprete · a year ago
LLMs aren't an abstraction, even a very leaky one, so the analogies with compilers and the like really fall flat for me.
pphysch · a year ago
It all depends on whether one is using it scientifically, i.e. having a hypothesis for the generated code before it exists, so that you can evaluate it.

When used scientifically, coding copilots boost productivity AND skills.

arealaccount · a year ago
This is why I recommend something like Copilot over ChatGPT to juniors wanting to use AI.

With Copilot you're at least still more or less driving, whereas with ChatGPT you're more of a passenger and not growing intuition.

throwaway314155 · a year ago
My experience has been that indeed, it is detrimental to juniors. But unlike your take, it is largely a boon to experienced developers. That you suggest "tedium" is involved for more senior developers suggests to me that you haven't given the tooling a fair chance or work with a relatively obscure technology/language.
layer8 · a year ago
I think you’ve misunderstood the GP. They are saying AI is useful to seniors for tasks that would otherwise be tedious, but doing those tedious tasks by hand would be formative for juniors, and it is detrimental to their growth when they do them using AI.
saintradon · a year ago
Like any other tool, it depends on how you use it.
lolinder · a year ago
The most interesting thing about this study for me is that when they break it down by experience levels, developers who are above the median tenure show no statistically significant increase in 'productivity' (for some bad proxies of productivity), with the 95% confidence intervals actually dipping deep into the negatives on all metrics (though leaning slightly positive).

This tracks with my own experience: Copilot is nice for resolving some tedium and freeing up my brain to focus more on deeper questions, but it's not as world-altering as junior devs describe it as. It's also frequently subtly wrong in ways that a newer dev wouldn't catch, which requires me to stop and tweak most things it generates in a way that a less experienced dev probably wouldn't know to. A few years into it I now have a pretty good sense for when to use Copilot and when not to—so I think it's probably a net positive for me now—but it certainly wasn't always that way.

I also wonder if the possibly-decreased 'productivity' for more senior devs stems in part from the increase in 'productivity' from the juniors in the company. If the junior devs are producing more PRs that have more mistakes and take longer to review, this would potentially slow down seniors, reducing their own productivity gains proportionally.

danielvaughn · a year ago
A 26% productivity increase sounds inline with in my experience. I think one dimension they should explore is whether you're working with a new technology or one that you're already familiar with. AI helps me much more with languages/frameworks that I'm trying to learn.
hobofan · a year ago
I'd also expand it to "languages/frameworks that I'll never properly learn".

I'm not great at remembering the specific quirks/pitfalls of secondary languages, e.g. what the specific quoting incantations are for writing conditionals in Bash, so I rarely wrote bash scripts for automation in the past; basically only when the task was common enough to be worth the effort. Same for processing JSON with jq, or parsing with AWK.
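As a concrete example of the quoting quirks in question (a runnable sketch, not from the comment; the filenames are made up):

```shell
# Quoting pitfalls around shell conditionals.
f="file with spaces.txt"
touch "$f"

# Unquoted, $f would word-split into four arguments and break
# the test; the quotes keep it a single operand.
if [ -e "$f" ]; then
  found=yes
fi

# In pattern matches the *pattern* side stays unquoted so the
# glob applies, while the subject is quoted:
name="report_2024.csv"
case "$name" in
  report_*) matched=yes ;;
esac

rm -f "$f"
echo "found=$found matched=$matched"   # prints: found=yes matched=yes
```

Exactly the kind of detail that's easy to forget between rare bash sessions, and that an LLM will usually get right on the first try.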

Now with LLMs, I'm creating a lot more bash scripts, and it has gotten so easy that I'll do it for process documentation more often. E.g. what previously was a more static step-by-step README with instructions is now accompanied by an interactive bash script that takes user input.

sgarland · a year ago
While I’ll grant you that LLMs are in fact shockingly good with shell scripts, I also highly recommend shellcheck [0]. You can get it as a treesitter plugin for nvim, and I assume others, so it lints as you write.

[0]: https://www.shellcheck.net/

danielvaughn · a year ago
Oh god yes, LLM for bash scripting is fantastic. Bash is one of those languages that I simply don't want to learn. Reading it makes my eyes hurt.
moolcool · a year ago
I've found Copilot pretty good at removing some tedium, like it'll write docstrings pretty well most of the time, but it does almost nothing to alleviate the actual mental labour of software engineering
jacobsenscott · a year ago
Removing the tedium frees your brain up to focus on the interesting parts of the problem. The "actual mental labor" is the fun part. So I like that.
jstummbillig · a year ago
That might be. It might also be a consequence of how flexible people at different stages in their career are.

I have seen mostly senior programmers argue why ai tools don't work. Juniors just use them without prejudice.

infecto · a year ago
I wish I could upvote this comment more than once. There does appear to be a prejudice with more senior programmers arguing why it cannot work, how they just cause more trouble, and other various complaints. The tools today are not perfect but they still amaze me at what is being accomplished, even a 10% gain is incredible for something that costs $10/month. I believe progress will be made in the space and the tooling in 5 years will be even better.
SketchySeaBeast · a year ago
As a senior, I find that trying to use copilot really only gives me gains maybe half the time, the other half the time it leads me in the wrong direction. Googling tends to give me a better result because I can actually move through the data quicker. My belief is this is because when I need help I'm doing something uncommon or hard, as opposed to juniors who need help doing regular stuff which will have plenty of examples in the training data. I don't need help with that.

It certainly has its uses - it's awesome at mocking and filling in the boilerplate unit tests.

marginalia_nu · a year ago
I find their value depends a lot on what I'm doing. Anything easy I'll get insane leverage, no exaggeration I'll slap together that shit 25x faster. It's seen likely billions of lines of simple CRUD endpoints, so yeah it'll write those flawlessly for you.

Anything difficult or complex, and it's really a coinflip whether it's even an advantage; most of the time it's just distracting, giving irrelevant suggestions or bad textbook-style implementations intended to demonstrate a principle but with god-awful performance. Likely because there's simply not enough training data for these kinds of tasks.

With this in mind, I don't think it's strange that junior devs would be gushing over this and senior devs would be raising a skeptical eyebrow. Both may be correct, depending on what you work on.

GoToRO · a year ago
As a senior, you know the problem is actually finishing a project. That's the moment when all those bad decisions made by juniors need to be fixed. This also means that an 80% done project is more like 20% done, because in the state it's in, it cannot be finished: you fix one thing and break two more.
jncfhnb · a year ago
My experience is juniors using them without prejudice and then not understanding why their code is wrong
tuyiown · a year ago
Flexible, or just that only juniors have real benefits
SoftTalker · a year ago
Juniors don't know enough to know what problems the AI code might be introducing. It might work, and the tests might pass, but it might be very fragile, full of duplicated code, unnecessary side-effects, etc. that will make future maintenance and debugging difficult. But I guess we'll be using AI for that too, so the hopefully the AI can clean up the messes that it made.
nuancebydefault · a year ago
Now some junior dev can quickly make something new and fully functional in days, without knowing in detail what they are doing, as opposed to the weeks it would originally have taken a senior.

Personally I think that senior devs might fear a conflict within their identity. Hence they play the 'You and the AI have no clue' card.

insane_dreamer · a year ago
I haven't found it that useful in my main product development (which, while Python-based, uses our own framework, so there's not much code for Copilot to go on; it usually suggests methods and arguments that don't exist, which just makes extra work).

Where I do find it useful are

1) questions about frameworks/languages that I don't work in much and for which there is a lot of example content (e.g., Qt, CSS);

2) very specific questions I would have done a Google search (usually Stack Overflow) for ("what's the most efficient way to get CPU and RAM usage on Windows using Python"); the result is pointing me to a library or some example rather than directly generating code that I can copy/paste

3) boilerplate code that I already know how to write but saves me a little time and avoids typing errors. I have the CoPilot plugin for PyCharm so I'll write it as a comment in the file and then it'll complete the next few lines. Again best results is something that is very short and specific. With anything longer I almost always have to iterate so much with CoPilot that it's not worth it anymore.

4) a quick way to search documentation

Some people have said it's good at writing unit tests but I have not found that to be the case (at least not the right kind of unit tests).

If I had to quantify it, I'd probably give it a 5-10% increase in productivity. Much less than I get from using a full featured IDE like PyCharm over coding in Notepad, or a really good git client over typing the git commands in the CLI. In other words, it's a productivity tool like many other tools, but I would not say it's "revolutionary".

nuancebydefault · a year ago
It is revolutionary, no doubt. How many pre-AI tools could cover all four of the big use cases you mentioned at once?
pqdbr · a year ago
Same impression here.

I've been using Cursor for around 10 days on a massive Ruby on Rails project (a stack I've been coding in for +13 years).

I didn't enjoy any productivity boost on top of what GitHub Copilot already gave me (which I'd estimate around the 25% mark).

However, for crafting a new project from scratch (empty folder) in, say, Node.js, it's uncanny; I can get an API serving requests from an OpenAPI schema (and serving the schema via Swagger) in ~5 minutes just by prompting.

Starting a project from scratch, for me at least, is rare, which probably means going back to Copilot and vanilla VSCode.

SketchySeaBeast · a year ago
I feel like this whole "starting a new project" might be the divide between the jaded and excited, which often (but not always) falls between senior and junior lines. I just don't do that anymore. Coding is no longer my passion, it's my profession. I'm working in an established code base that I need to thoughtfully expand and improve. The easy and boilerplate problems are solved. I can't remember the last time I started up a new project, so I never see that side of copilot or cursor. Copilot might at its best when tinkering.
yunwal · a year ago
If I struggle on a particularly hard implementation detail in a large project, often I'll use an LLM to set up a boilerplate version of the project from scratch with fewer complications so that I can figure out the problem. It gets confused if it's interacting with too many things, but the solutions it finds in a simple example can often be instructive.
CuriouslyC · a year ago
To fully extract maximum value out of LLMs, you need to change how you architect and organize software to make it easy for them to work with. LLMs are great with function libraries and domain specific languages, so the more of your code you can factor into those sorts of things the greater a speed boost they'll give you.
gamerDude · a year ago
I start new projects a lot and just have a template that has everything I would need already set up for me. Do you think there is a unique value prop that AI gives you when setting up that a template would not have?

Deleted Comment

dartos · a year ago
How do you use AI in your workflow outside of copilot?

I haven’t been able to get any mileage out of chat AI beyond treating it like a search engine, then verifying what it said…. Which isn’t a speedy workflow

jakub_g · a year ago
I only use (free) ChatGPT sporadically, and it works best for me in areas where I'm familiar enough to call bullshit, but not familiar enough to write things myself quickly / confidently / without checking a lot of docs:

- writing robust bash and using unix/macos tools

- how to do X in github actions

- which API endpoint do I use to do Y

- synthesizing knowledge on some topic that would require dozens of browser tabs

- enumerating things to consider when investigating things. Like "I'm seeing X, what could be the cause, and how I do check if it's that". For example I told it last week "git rebase is very slow, what can it be?" and it told me to use GIT_TRACE=1 which made me find a slow post-commit hook, and suggested how to skip this hook while rebasing.
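That last debugging flow looks roughly like this as a runnable sketch (the repo is throwaway, and the hook-skipping trick in the comments is one common option, not necessarily what ChatGPT suggested):

```shell
# GIT_TRACE=1 makes git log every subprocess it spawns (hooks
# included) to stderr, so a slow hook stands out by its timing.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo hello > a.txt
git add a.txt
GIT_TRACE=1 git commit -q -m demo 2> trace.log

# trace.log now lists each command git ran. A post-commit hook
# that takes seconds is easy to spot among the trace lines.
# One way to bypass hooks for a one-off rebase (assumption) is
# to point core.hooksPath at an empty directory:
#   git -c core.hooksPath=/dev/null rebase main
wc -l < trace.log
```

This is the niche the comment describes well: the fix is two environment-variable keystrokes once you know it exists, and the LLM's job is just surfacing it.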

thiht · a year ago
Same for me. I also use it for some SQL queries involving syntax I’m unfamiliar with, like JSONB operators in Postgres. ChatGPT gives me better results, faster than Google.
empath75 · a year ago
Sounds about right to me, which is why the hysteria about AI wiping out developer jobs was always absurd. _Every_ time a technology has improved developer productivity, developer jobs and pay have _increased_. There is not a fixed amount of automation to be done in the world; the cheaper and easier automation gets, the more things become economically viable to automate that weren't before. Did IDEs eliminate developer jobs? Compilers? It's just a tool.
randomdata · a year ago
The automated elevator is just a tool, but it "wiped out" the elevator operator. Which is really to say not that the elevator operator was wiped out, but that everyone became the elevator operator. Thus, by the transitive properties of supply and demand, the value of operating an elevator declined to nothing.

Said hysteria was built on the same idea. After all, LLMs themselves are just compilers for a programming language that is incredibly similar to spoken language. Since nearly everyone already knows that language, the idea was that everyone would become the metaphorical elevator operator, "wiping out" programming as a job just as elevator operators were "wiped out" when operating an elevator became accessible to all.

The key difference, and where the hysteria is likely to fall flat, is that when riding in an elevator there isn't much else to do but be the elevator operator. You may as well do it. Your situation would not be meaningfully improved if another person were there to press the button for you. When it comes to programming, though, more effort is involved. Even when a new programming language makes programming accessible, there remains a significant time commitment to carry out the work. The business people are still best off leaving that work to the peons so they can continue to focus on the important things.

rm_-rf_slash · a year ago
AI speeds my coding up 2x-4x, depending on the breadth of requirements and complexity of new tech to implement.

But coding is just a fraction of my weekly workload, and AI has been less impactful for other aspects of project management.

So overall it’s 25%-50% increase in productivity.

itchyjunk · a year ago
Maybe that's what the new vs experienced difference is indirectly capturing.
MBCook · a year ago
It lets people make more PRs. Woohoo. Who cares?

Does it increase the number of things that pass QA?

Do the things done with AI assistance have fewer bugs caught after QA?

Are they easier to extend or modify later? Or do they have rigid and inflexible designs?

A tool that can help turn developers into code monkeys of unknown quality is not something I’m looking for. I’m looking for a tool that helps developers find bugs or design flaws in what they’re doing. Or maybe write well-designed tests.

Just counting PRs doesn’t tell me anything useful. But it triggers my gut feeling that more code per unit time = lower average quality.

anthomtb · a year ago
Dev - “Hey copilot, please split this commit into 5 separate commits”

Copilot - “okay I can do that for you! Here are your new commits!”

Senior Dev - “why? Your change is atomic. I’ll tell management to f-off, in a kind way, if they bring up those silly change-per-month metrics again”