Maybe. It's certainly possible, but I'm hesitant to believe anything Zuckerberg says during a hype train, especially when saying it has a non-zero chance of boosting the stock.
But even supposing it's true, I don't think it necessarily implies that fewer engineers are actually hired. It could be that this "AI mid-level" engineer frees up resources for the remaining engineers to work on something more interesting.
If that's the case, I don't think we have anything to worry about anyway. If productivity actually drops then I suspect that this program will be sunsetted.
Why do you doubt Zuck's foresight? I'm not a fanboy, but he timed the pivot to mobile really well, acquired Instagram, and anticipated how crucial messaging would be. All pretty good calls. The VR stuff is still playing out. In fact, I think he does a better job of looking ahead than most.
Mostly because of the huge bet on "Web 3.0" and the metaverse. I could be wrong; maybe in ten years we'll be looking at how great Facebook was at predicting things, but it seems like it mostly hasn't panned out, at least from my (admittedly very limited) perspective.
I guess the reason I'm skeptical is that there's really no reason for him not to say this kind of stuff. If he says "Meta's AI model is so good that it's on par with a mid-level engineer," there's a chance the stock price shoots up, because it suggests Meta has some amazing new model and AI is the current hotness, and there's basically no penalty for being wrong.
It's not hard to find cases where CEOs just completely lie to everyone's faces to try to boost stock prices, so it's not skepticism of Zuckerberg specifically so much as of all CEOs.
I don't know how they think that's going to work. The best AI has been able to do for me is smarter autocomplete; I doubt it can just slot into larger codebases without the kind of searching a human can do.
I know the job market has been rough, which has emboldened these CEOs, but we're starting to see hiring pick up again, and not just Staff+: interns and SDE 1s and 2s as well, which bodes well for 2025.
I think when people say "AI will replace X people" (engineers in this case), it's not that it'll replace 100% of what, say, 80 people might do (even if they're mid-level), but rather that it'll make 8 people 10 times more efficient. Didn't Tesla's head of AI say publicly, years ago, that Copilot wrote 80% of his code? It's not quite apples to apples, but you can see how it's progressing. I know that's not what Zuck said, but I wonder if that might be what he meant?
How many developers are doing little more than writing boilerplate? Those are the kinds of jobs best automated. A few years ago, behavior-driven development tried to take this as far as possible, turning stories into runnable tests. LLMs can already do a pretty good job of taking some inputs and context and producing running code (for example, give ChatGPT a class file and tell it to write a unit test; it does a pretty good job of it).
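As a concrete illustration of the kind of boilerplate being discussed: here is a hypothetical class and the sort of unit test an LLM will reliably produce when handed the class file. All names here are invented for the sketch, not from any real codebase.

```python
import unittest

# A hypothetical class you might paste into an LLM with
# "write a unit test for this":
class PriceCalculator:
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def total(self, subtotal):
        if subtotal < 0:
            raise ValueError("subtotal must be non-negative")
        return round(subtotal * (1 + self.tax_rate), 2)

# ...and the kind of boilerplate test an LLM typically emits,
# covering the happy path and the error path:
class TestPriceCalculator(unittest.TestCase):
    def test_total_applies_tax(self):
        calc = PriceCalculator(tax_rate=0.1)
        self.assertEqual(calc.total(100.0), 110.0)

    def test_negative_subtotal_rejected(self):
        calc = PriceCalculator(tax_rate=0.1)
        with self.assertRaises(ValueError):
            calc.total(-1.0)
```

Run with `python -m unittest` against a file containing both classes; mechanical tests like these are exactly the output LLMs are already good at.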
> How many developers are doing little more than writing boilerplate?
Before you go down the road of what can be automated, let's try to answer this. I'm sure there's actual research, but I cannot find it.
Anecdotal, but with 20+ years of professional development under my belt, at startups, NGOs, enterprises, government, etc.: "insignificant amounts".
Yes, there's the odd embed-images.sh or invoice-extractor/src/main.rs, side projects related to the job. Every week some boilerplate like this.
But the vast bulk of my work is:
- finding out what customers/a spec/stakeholders really mean, and how important it really is (and encoding that in executable specs, preferably)
- keeping the old stuff somewhat up to date: fiddling with old Ruby runtimes, Pipenv/pyenv or whatever today's flavor is, Node.js and npm juggling, Docker, Cargo, deprecations, security patches, CI, infra crashing on updates, etc.
- muddling through codebases with 10+ years of accumulated horrors, touched by 100+ random freelancers and employees, most lost to time: crippling technical debt, undocumented decisions everywhere, crucial business requirements encoded only in three LoC amongst thousands of lines inside some update_user_projects() function
- trying to get an old project running again on today's OS/runtimes/etc.
- stitching together poorly documented, weirdly (or hardly) architected, abandoned, or outdated libraries
The way I see it, LLM code generators are currently little more than unpredictable but advanced compilers: compiling instructions into instructions that computers can execute. BDD could be a good language to write those instructions in. And that works for new stuff, much less so for existing stuff. "Maintaining existing stuff" is the vast majority of most developers' work, I'd think, though I can't back that up.
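To make the "BDD as source language, LLM as compiler" idea concrete, here is a sketch: a Gherkin-style scenario and the executable test an LLM might "compile" it into. The Cart API and every name below are invented for illustration; they are not from any real library.

```python
# A hypothetical BDD-style scenario (Gherkin-like, illustrative only):
#
#   Given a cart containing 2 items at 10.00 each
#   When a 10% discount code is applied
#   Then the total is 18.00
#
# A stand-in implementation so the "compiled" test below can run:
class Cart:
    def __init__(self):
        self.items = []
        self.discount = 0.0

    def add_item(self, price, quantity=1):
        self.items.extend([price] * quantity)

    def apply_discount(self, fraction):
        self.discount = fraction

    def total(self):
        return round(sum(self.items) * (1 - self.discount), 2)

# The executable test an LLM-as-compiler might emit for the scenario:
def test_discount_scenario():
    cart = Cart()                   # Given a cart with 2 items at 10.00
    cart.add_item(10.00, quantity=2)
    cart.apply_discount(0.10)       # When a 10% discount is applied
    assert cart.total() == 18.00    # Then the total is 18.00

test_discount_scenario()
```

The scenario-to-test step is mechanical, which is exactly why it suits an LLM; the hard part remains deciding what the scenario should say.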
AI can search your code today. In Cursor this is called "codebase indexing". We have some millions of lines of code, orders of magnitude smaller than Facebook but definitely larger than the average startup. We search with AI tools, through Q&A, and for AI-driven code mods. Cursor also exposes LSP information to its AI system, so it can use that tooling too.
Sourcegraph Cody is another example of a code AI that can search.
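As a toy sketch of what "codebase indexing" means at its simplest: real tools like Cursor and Cody use embeddings and LSP data, but the core retrieval idea can be shown with plain keyword overlap. The file paths and contents below are invented examples.

```python
# Toy "codebase indexing": rank files by keyword overlap with a query.
# Real tools use embeddings; this only illustrates the retrieval idea.
import re
from collections import Counter

def tokenize(text):
    # Split into identifier-like tokens, lowercased.
    return [t.lower() for t in re.findall(r"[A-Za-z_]+", text)]

def build_index(files):
    # files: {path: source text} -> {path: token counts}
    return {path: Counter(tokenize(src)) for path, src in files.items()}

def search(index, query, top_k=3):
    q = Counter(tokenize(query))
    scores = {
        path: sum(min(q[t], counts[t]) for t in q)
        for path, counts in index.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Invented example files:
files = {
    "auth/service.py": "def validate_token(token): raise UnauthorizedError",
    "billing/invoice.py": "def render_invoice(order): return pdf",
}
index = build_index(files)
print(search(index, "why do I get unauthorized token errors")[0])
# -> auth/service.py
```

A natural-language question lands on the right file by term overlap alone; embedding-based indexes do the same thing but match on meaning rather than exact tokens.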
There is a different picture. You're now seeing the same jobs, but this time with reduced salary + equity and fewer openings. It's not the same as before 2020.
For every open tech role, it will be ultra-competitive, to the point where companies may start hiring olympiad winners to match the so-called "exceptional talent".
The truth is, it will get worse before it gets any better.
From hunter-gatherer society to today, every time a new technology comes along that claims to make life 'easier,' it pushes us further into being slave-adjacent. We have smartphones, ChatGPT, and Google, but for what? We end up working more and more. The greater the technological progress, the more our lives are exploited to generate profits for our corporate overlords.
ChatGPT will save you time, but your boss will now demand that you greatly up your productivity to match this fabulous new technology. Now you're working even more for the same or less pay!
AGI, in its final form, will turn us into slaves, and yet we're cheering it on. This is madness.
I don't agree with the how, but the conclusion is true.
I'll elaborate a bit more though...
In a capitalist world, capital will always seek to maximize profits. Human suffering, animal suffering, morality, and so on are basically irrelevant.
Someone willing to exploit human happiness for an extra 2% gain will wipe out competitors who are unwilling to do the same. So everything is guaranteed to spiral into a race to the bottom.
So with this new tool, as every time before, it will be a race to the bottom to see who can most unethically use AI to extract value.
Whether we end up as slaves or starving on the street remains to be seen. But the odds of humanity being better off without a sharp course correction and government regulation are very low.
> Human suffering, animal suffering, morality, and so on are basically irrelevant.
Not "irrelevant", but most often deliberately part of a strategy:
In economics, an externality or external cost is an indirect cost or benefit to an uninvolved third party that arises as an effect of another party's (or parties') activity. https://en.wikipedia.org/wiki/Externality
In the SFBA, Lockheed hires welders who can get a security clearance.
Someone I know was in their 70s working there, because there isn't a large pipeline of welders to pull from. In his words, "there's a bunch of great guys with steady hands delivering doordash, because it pays better than being an apprentice in a body shop".
Most automotive welders can't get a clearance, and the true factory welding jobs either no longer pay well or have been automated away.
In response, the old Martinez Chevrolet factory welders are continuing to work at Lockheed, with larger-than-normal incentives to stick around past Medicare and Social Security age.
Most of them would rather be training new guys (or girls; apparently half of the good new welders are women), but most of the time nobody new comes in.
This has arguably been happening with all fields since the beginning of time.
Mere literacy used to qualify you for a management role. I am not sure you could get a job of any kind without being able to read and write now. It gets replaced with more time spent in specialized education.
That's not a real problem. There are established models for training people to work in fields where you need plenty of experience to qualify for an entry-level position. Medicine is the most prominent example.
And it's going to be easier than in medicine, because software is so forgiving. You can do real work during training with minimal supervision, as mistakes rarely cause serious harm.
I'm quite surprised by the level of skepticism in this thread. I've been developing my website + web service + DB using only Cursor's "composer" mode, not typing a single line of code, and I already have something fully functional. I am voluntarily refraining from fixing the bugs myself by digging into the code, just to see how far someone with zero coding skill can go. And the answer is: as far as a junior/mid-level software developer can.
Obviously, not on its own. I still needed to give it guidelines and feedback on what to do next. But I could improve my site while watching TV. This was science fiction just 2 years ago.
Could someone with "0 coding skill" really compose the prompts in the first place? You need to understand the problem space, the right terminology and visualise a general architecture/structure for the solution. These things might be obvious to you and me but take someone from a different field and it would be the same as handing me the controls to a medical robot and telling me to start pushing buttons.
This seems like a very, very minor thing compared to correctly understanding how a bug in a screen showing "unauthorized access" in the React frontend is in fact triggered by a wrong configuration in an "authorization service" in a file located somewhere else entirely, due to a serialization issue, etc.
"Visualizing a general architecture" is a piece of cake compared to the mental model one needs to understand a full-stack React/Express/SQLite/TypeScript setup in sufficient detail to debug an obscure error code.
Oh, you won't need it: the code is probably horrible, and the website design is vaguely OK for an admin backend but clearly not suitable for customer-facing use, etc.
But the point is: it works. It is functional and handles complex, realistic scenarios.
And only two years ago, this was pure fantasy. Now give the tech two more years, and see where this is going.
That's because most of the engineers or coders in this thread fear their job won't exist in a few years and they'll have to go learn a completely different skill to make a wage. It's hard to admit you may not have a job in X years' time because a computer took it.
But let's be real here: 90% of software and coding is taking x and performing an action on it so the user can consume y. It's basic stuff. Much like how we don't write machine code these days and use a compiler instead, you're bonkers if you think you're going to be writing Python or C in the future rather than just saying "give me x and y from z."
You will still get folks writing code in some obscure areas, but most of us will switch to describing system and app flows and having the code written by AI, and to using AIs to get tasks done. The theory of software construction at a high level will become more important than actual language mastery at the low level.
The folks saying it won't are much like the horse riders of yesteryear who said cars would never replace horses.
Some people are using AI to generate bug reports and have been getting banned because they're hallucinated spam. The bugs don't actually exist. The curl dev wrote a blog post about this issue.
That is the same person who renamed his company to Meta and went all in on the metaverse and how we'd all be working in the metaverse.
The guy got lucky once, that's all; he is not a modern oracle.
I’ve definitely seen a pattern in business where some guy effectively wins the lottery, and the business world puts him on a throne, gives him endless capital, and listens breathlessly to his musings about how to successfully win the lottery.
That's my takeaway as well, and this is his new pivot, from VR to AR. Facebook's problem is that it has all of the money and no idea what to do with it, but instead of sitting on a hoard like Apple they've taken the unusual approach of powering their servers with piles of burning $100 bills.
Personally I suspect that if they really try to replace mid-levels with "AI", they're going to have to re-hire the mid-levels as members of a new field: "AI mistake-checker/fixer". They'll have to pay for the AI (which is resource intensive) and the human worker to fix the AI mess. Everyone wins?
True, but there is something to be said for making it to his level.
He isn't stupid. Stupid people don't make it this far. They don't. Don't let media designed for the commons tell you otherwise. There are too many filters: rational intelligence, social intelligence, willpower, book smarts, delegation/management/leadership, etc. If you fail at one of these, you don't ~10,000x your company's value.
John Mearsheimer said, "They made the wrong decision, not an irrational one," and I think that goes for the metaverse. VR was, and is, incredible; I personally would find it difficult to bet real money on that question.
He's saying something that is reasonable to say. Will mid-level engineers be replaced by AI? If there is a 2x efficiency improvement, what will half those programmers be doing? Will 100% of their mid-level engineers be replaced? No, but I don't think a smart person would make that claim. Do you personally think zero people will lose their jobs because they aren't needed anymore?
Identifying and buying potential competitors is neither lucky, nor is it visionary, it's just SOP for megacorps with deep pockets and potential competition.
Or forces them to waste all their time babysitting the "AI" workers that can't do basic tasks.
The common theme from these CEOs is that today is the worst AI will ever be. (Who knows if that's true; not me.)
Simonw has plenty of examples on his blog, including full transcripts, if you honestly haven't seen them before: https://simonwillison.net/tags/ai-assisted-programming/
A fun one that he didn't bother to post to his blog, though: https://news.ycombinator.com/item?id=42505772
Generative AI can't think, it can only mimic.