The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feeding entire projects to AI and letting it write the code while they do code review and adjustments.
I didn't want to believe it, but I think it's here. And even objections like "we can't feed it proprietary code" will eventually be solved by companies hosting their own isolated LLMs as the models get better and hardware becomes more available.
My prediction is that junior to mid-level software engineering will mostly disappear, while senior engineers will transition to being more of a guiding hand for LLM output, until eventually LLMs become so good that senior people won't be needed anymore.
So, fellow software engineers, how do you future-proof your career in light of the inevitable LLM takeover?
--- EDIT ---
I want to clarify something, because there seems to be a slight misunderstanding.
A lot of people have been saying that SWE is not only about code, and I agree with that. But that idea is also easier to sell to a young person who is just starting out in this career. And while I want this Ask HN to be helpful to young/fresh engineers as well, I'm more interested in getting help for myself, and for the many others who are in a similar position.
I have almost two decades of SWE experience. But despite that, I seem to have missed the party where they told us that "coding is just a means to an end", and only realized it in the past few years. I bet there are people out there who are in a similar situation. How can we future-proof our careers?
I have a job at a place I love, and I get more people in my direct and extended network contacting me about work than ever before in my 20-year career.
And finally, I keep myself sharp by always making sure I challenge myself creatively. I'm not afraid to delve into areas that might look "solved" to others in order to understand them. For example, I have a CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines, and I recently did 3D in it from scratch as well.
All the while re-evaluating all my assumptions and those of others.
If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.
Honestly, this fear that there will soon be no need for human programmers stems either from people who themselves don't understand how LLMs work, or from people who do but have a business interest in convincing others that it's more than it is as a technology. I say that with confidence.
U.S. (and German) automakers were absolutely sure that the Japanese would never be able to touch them. Then Koreans. Now Chinese. Now there are tariffs and more coming to save jobs.
Betting against AI (or increasing automation, really) is a bet not against robots, but against human ingenuity. Humans are the ones making progress, and we can work with toothpicks as levers. LLMs are our current building blocks, and people are doing crazy things with them.
I've got almost 30 years' experience but I'm a bit rusty in e.g. web. But I've used LLMs to build maybe 10 apps that I had no business building, from one-off kids' games to learn math, to a soccer team generator that uses Google's OR-Tools to optimise across various axes, to spinning up four different test apps with Replit's agent to try multiple approaches to a task I'm working on. All the while skilling up in React and friends.
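A minimal sketch of what "optimise across various axes" can look like with OR-Tools CP-SAT (player names and skill ratings are made up; this is illustrative, not the commenter's actual app):

    # Hypothetical sketch: split players into two balanced teams with CP-SAT.
    from ortools.sat.python import cp_model

    players = {"Alice": 7, "Bob": 5, "Cara": 8, "Dan": 4, "Eve": 6, "Finn": 6}  # made-up ratings

    model = cp_model.CpModel()
    on_team_a = {name: model.NewBoolVar(name) for name in players}

    # Axis 1: equal team sizes.
    model.Add(sum(on_team_a.values()) == len(players) // 2)

    # Axis 2: minimise the skill gap between the two teams.
    total = sum(players.values())
    skill_a = sum(players[n] * on_team_a[n] for n in players)
    gap = model.NewIntVar(0, total, "gap")
    model.Add(gap >= 2 * skill_a - total)   # gap >= skill_a - skill_b
    model.Add(gap >= total - 2 * skill_a)   # gap >= skill_b - skill_a
    model.Minimize(gap)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        team_a = sorted(n for n in players if solver.Value(on_team_a[n]))
        print("Team A:", team_a)

A real version would just add more constraints (positions, who can't play together, fairness over past weeks) as extra terms.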
I don't really have time for those side-quests but LLMs make them possible. Easy, even. The amount of time and energy I'd have needed pre-LLMs to do this makes it a million miles away from "a waste of time".
And even if LLMs get no better, we're good at finding the parts that work well and using that as leverage. I'm using it to build and check datasets, because it's really good at extraction. I can throw a human in the loop, but in a startup setting this is 80/20 and that's enough. When I need enterprise-level code, I brainstorm 10 approaches with it and then take the reins. How is this not valuable?
LLMs are good for playing with stuff, yes, and I think your parent commenter implied as much. But when you have to scale the work, the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd-party dependencies, be easy to configure so it can be deployed in every cloud provider (e.g. by using env vars and config files to modify its behavior)... among other important traits.
LLMs don't write code like that. Many people have tried with many prompts. They're mostly good for generating stuff once, and maybe making small modifications, while convincingly arguing that the code has no bugs (when it does).
You seem to confuse one-off projects that have little to no need for maintenance with actual commercial programming, perhaps?
Your analogy with the automakers seems puzzlingly irrelevant to the discussion at hand, and very far from transferable to it. Also, I am 100% convinced nobody is going to protect the programmers; business people trying to replace one guy like me with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learn their lesson. /off-topic
Like your parent commenter, if LLMs get to my level, I'd be _happy_ to retire. I don't have a super vested interest in commercial programming; in fact I became as good at it in the last several years because I started hating it and wanted to get all my tasks done with minimal time expended. So I am quite confident in my assessment that LLMs are barely at the level of a diligent but not very good junior dev at the moment, and have been for at least the last 6 months.
Your take is rather romantic.
Truly hit the nail on the head there. We HAD no business with these side-quests, but now? They're all ripe for the taking, really.
LLM pair programming is unbelievably fun, satisfying, and productive. Why type out the code when you can instead watch it being typed while thinking of, and typing out or speaking, the next thing you want?
For those who enjoy typing, you could try to get a job dictating letters for lawyers, but something tells me that’s on the way out too.
Why are half the replies like this?
Yeah, I built a bunch of apps when the RoR blog demo came out, like two decades ago. So what?
I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?
> LLM’s never provide code that pass my sniff test
This is ego speaking.
I definitely expect them to improve. But I also think the point at which they can actually replace a senior programmer is pretty much the exact point at which they can replace any knowledge worker, at which point western society (possibly all society) is in way deeper shit than just me being out of a job.
> This is ego speaking.
It definitely isn't. LLMs are useful for coding now, but they can't really do the whole job without help - at least not for anything non-trivial.
No, it really isn't. Repeatedly, the case is that people try to pass off GPT's work as good without actually verifying the output. I keep seeing "look at this wonderful script GPT made for me to do X", and it does not pass code review and is generally extremely low quality.
In one example, a bash script was generated to count the number of SLoC changed per author; it was extremely convoluted, and after I simplified it, I noticed that the output of the simplified version differed, because the original was omitting changes that touched only a single line.
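For reference, the underlying task is small. A minimal sketch of a per-author changed-line counter over git history (not the generated script under review) that deliberately counts single-line changes too:

    # Hypothetical sketch: sum added+deleted lines per author from `git log --numstat`.
    import subprocess
    from collections import Counter

    def sloc_changed_by_author(repo="."):
        out = subprocess.run(
            ["git", "-C", repo, "log", "--numstat", "--pretty=format:@%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        counts, author = Counter(), None
        for line in out.splitlines():
            if line.startswith("@"):          # commit header line: "@Author Name"
                author = line[1:]
            elif line.strip():                # numstat line: "added<TAB>deleted<TAB>path"
                added, deleted, _path = line.split("\t", 2)
                if added != "-":              # binary files report "-"
                    counts[author] += int(added) + int(deleted)
        return counts

    if __name__ == "__main__":
        for author, n in sloc_changed_by_author().most_common():
            print(f"{n:8d}  {author}")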
In another example, a review took several rounds of asking "where are you getting this code?" and "why do you think this code works, when nothing in the docs supports that?" before it was admitted that GPT wrote it. The dev who wrote it would have been far better served by RTFM than by a review that dragged on for several cycles and ended with most of GPT's hallucinations being stripped from the PR.
Those who think LLM output is good have not reviewed that output strenuously enough.
> Is there some expectation that these things won't improve?
Because randomized token generation inherently lacks actual reasoning about the behavior of the code. My code generator does not.
A human junior developer can learn from this tutoring and rarely regresses over time. But LLMs, by design, cannot and do not rewire their understanding of the problem space over time, nor do they remember examples and lessons from previous iterations to build upon. I have to handhold them forever, and they never learn.
Even when they use significant parts of the existing codebase as their context window they’re still blind to the whole reality and history of the code.
Now, just to be clear, I do use LLMs at my job. Just not to code. I use them to parse documents and assist users with otherwise repetitive manual tasks. I use their strength as language models on the visual tokens parsed by an OCR: they grasp the sentence structure and convert it into text segments that users can work with more readily. At that they are incredible, even something smaller like Llama 7B.
Is there any expectation that things will? Is there more untapped, high-quality data that LLMs can ingest? Will a larger model perform meaningfully better? Will it solve the pervasive issue of generating plausible-sounding bullshit?
I used LLMs for a while, I found them largely useless for my job. They were helpful for things I don't really need help with, and they were mostly damaging for things I actually needed.
> This is ego speaking.
Or maybe it was an accurate assessment for his use case, and your wishful thinking makes you think it was his ego speaking.
> This is ego speaking.
That's been my experience of LLM-generated code that people have submitted to open source projects I work on. It's all been crap. Some of it didn't even compile. Some of it changed comments that were previously correct to say something untrue. I've yet to see a single PR that implemented something useful.
Why do we expect that LLMs are going to buck this trend? It's not for accuracy--the previous attempts, when demonstrating their proof-of-concepts, actually reliably worked, whereas with "modern LLMs", virtually every demonstration manages to include "well, okay, the output has a bug here."
As soon as you have something less common, it will give you wildly incorrect garbage that does not make any sense. Even worse, it APPEARS correct, but it won't work or will do something else completely.
And I use 4o and o1 every day. Mostly for boilerplate and boring stuff.
I have colleagues that submit ChatGPT generated code and I’ll immediately recognize it because it is just very bad. The colleague would have tested it, so the code does work, but it is always bad, weird or otherwise unusual. Functional, but not nice.
ChatGPT can give very complicated solutions to things that can be solved with a one-liner.
Sure. But the expectation is quantitative improvement - qualitative improvement has not happened, and is unlikely to happen without major research breakthroughs.
LLMs are useful. They still need a lot of supervision and hand-holding, and they'll continue to for a long while.
And no, it's not "ego speaking". It's long experience. There is fundamentally no reason to believe LLMs will take a leap to "works reliably in subtle circumstances, and will elicit requirements as necessary". (Sure, if you think SWE work is typing keys to make some code, any code, appear, then LLMs are a threat.)
Bitcoin was released in what year? I still cannot use it for payments.
No-code solutions have existed since when? And still programmers work...
I don't think all hyped tech is a fad. For instance: we use SaaS now instead of installing software locally. This transition took the world by storm.
But tech that needs lots of ads, lots of zealots, and incredible promises: that usually is a fad.
Yes. The current technology is at a dead end. The costs for training and for scaling the network are not sustainable. This has been obvious since 2022 and is related to the way in which OpenAI created their product. There is no path described for moving from the current dead end technology to anything that could remotely be described as "AGI."
> This is ego speaking.
This is ignorance manifest.
LLMs WILL change the job market dynamics in the coming years. Engineers have been vastly overpaid over the last 10 years. There is no reason not to expect a reversion to the mean here. Getting a 500k offer from a FAANG because you studied leetcode for a couple of weeks is not going to fly anymore.
In addition to this, most companies aren't willing to give away all of their proprietary IP and knowledge through 3rd-party servers.
It will be a while before engineering jobs are at risk.
Most of the noise I hear about LLMs is about expectations that things will improve. It's always like that: people extrapolate and get money from VCs for that.
The thing is: nobody can know. So when someone extrapolates and says it's likely to happen as they predict, they shouldn't be surprised to get answers that say "I don't have the same beliefs".
It absolutely isn't. I have yet to find an area where LLM-generated code solves the kinds of problems I work on more reliably, effectively, or efficiently than I do.
I'm also not interested in spending my mental energy on code reviews for an uncomprehending token-prediction golem, let alone finding or fixing bugs in the code it blindly generates. That's a waste of my time and a special kind of personal hell.
I suspect you haven't seen code review at a 500+ seat company.
I'm in my 40s with a pretty interesting background and I feel like maybe I'll make it to retirement. There are still mainframe programmers after all. Maintaining legacy stuff will still have a place.
But I think we'll be the last generation where programmers will be this prevalent, and the job market will severely contract. No/low-code solutions backed by LLMs are going to eat away at a lot of what we do, and for traditional programming the tooling we use is going to improve rapidly and greatly reduce the number of developers needed.
Very much so. These things are moving so quickly and agentic systems are already writing complete codebases. Give it a few years. No matter how 1337 you think you are, they are very likely to surpass you in 5-10 years.
I mean, in a way, yeah.
Last 10 years were basically one hype-cycle after another filled with lofty predictions that never quite panned out. Besides the fact that many of these predictions kind of fell short, there's also the perception that progress on these various things kind of ground to a halt once the interest faded.
3D printers are interesting. Sure, they have gotten incrementally better after the hype cycle died out, but otherwise their place in society hasn't changed, nor will it likely ever change. It has its utility for prototyping and as a fun hobbyist machine for making plastic toys, but otherwise I remember people saying that we'd be able to just 3D print whatever we needed rather than relying on factories.
Same story with VR. We've made a lot of progress since the first Oculus came out, but otherwise their role in society hasn't changed much since then. The latest VR headsets are still as useless and still as bad for gaming. The metaverse will probably never happen.
With AI, I don't want to be overly dismissive, but at the same time there's a growing consensus that pre-training scaling laws are plateauing, and AI "reasoning" approaches always seemed kind of goofy to me. I wouldn't be surprised if generative AI reaches a kind of equilibrium where it incrementally improves but improves in a way where it gets continuously better at being a junior developer but never quite matures beyond that. The world's smartest beginner if you will.
Which is still pretty significant mind you, it's just that I'm not sure how much this significance will be felt. It's not like one's skillset needs to adjust that much in order to use Cursor or Claude, especially as they get better over time. Even if it made developers 50% more productive, I feel like the impact of this will be balanced-out to a degree by declining interest in programming as a career (feel like coding bootcamp hype has been dead for a while now), a lack of enough young people to replace those that are aging out, the fact that a significant number of developers are, frankly, bad at their job and gave up trying to learn new things a long time ago, etc etc.
I think it really only matters in the end if we actually manage to achieve AGI, once that happens though it'll probably be the end of work and the economy as we know it, so who cares?
I think the other thing to keep in mind is that the history of programming is filled with attempts to basically replace programmers. Prior to generative AI, I remember a lot of noise over low-code / no-code tools, but they were just the latest chapter in the evolution of low-code / no-code. Kind of surprised that even now in Anno Domini 2024 one can make a living developing small-business websites due to the limitations of the latest batch of website builders.
> This is ego speaking.
Consider this, 100% of AI training data is human-generated content.
Generally speaking, we apply the 90/10 rule to human generated content: 90% of (books, movies, tv shows, software applications, products available on Amazon) is not very good. 10% shines.
In software development, I would say it's more like 99 to 1 after working in the industry professionally for over 25 years.
How do I divorce this from my personal ego? It's easy to apply objective criteria:
- Is the intent of code easy to understand?
- Are the "moving pieces" isolated, such that you can change the implementation of one with minimal risk of altering the others by mistake?
- Is the solution in code a simple one relative to alternatives?
The majority of human produced code does not pass the above sniff test. Most of my job, as a Principal on a platform team, is cleaning up other peoples' messes and training them how to make less of a mess in the future.
If the majority of human-generated content fails to follow basic engineering practices that are employed in other engineering disciplines (e.g. it never ceases to amaze me how much of an uphill battle it is just to get some SWEs to break down their work into small, single-responsibility, easily testable and reusable "modules"), then we can't logically expect any better from LLMs, because this is what they're being trained on.
And we are VERY far off from LLMs that can weigh the merits of different approaches within the context of the overall business requirements and choose which one makes the most sense for the problem at hand, as opposed to just "what's the most common answer to this question?"
LLMs today are a type of magic trick. You give it a whole bunch of 1s and 0s so that you can input some new 1s and 0s, and it can use some fancy probability maths to predict "based on the previous 1s and 0s, what are the statistically most likely next 1s and 0s to follow from the input?"
That is useful, and the result can be shockingly impressive depending on what you're trying to do. But the limitations are so severe that the prospect of replacing an entire high-skilled profession with that magic trick is kind of a joke.
Every SaaS, marketplace is at risk of extinction, superseded by AI agents communicating ad-hoc. Management and business software replaced by custom, one-off programs built by AI. The era of large teams painstakingly building specialized software for niche use cases will end. Consequently we’ll have millions of unemployed developers, except for the ones maintaining the top level orchestration for all of this.
What systems do you think are going to start disappearing? I'm unclear how LLMs are contributing to systems becoming redundant.
As an aside, at the top of the front page right now is a sprawling essay titled "Why is it so hard to buy things that work well?"...
A more tangible pitfall I see people falling into is testing LLM code generation using something like ChatGPT and not considering more involved usage of LLMs via interfaces more suited for software development. The best results I've managed to realize on our codebase have not been with ChatGPT or IDEs like Cursor, but with a series of processes that iterate over our full codebase multiple times to extract various levels of reusable insights: general development patterns, error handling patterns, RBAC-related patterns, example tasks for common types of work based on git commit histories (i.e. adding a new API endpoint related to XYZ), and common bugs or failure patterns (again by looking through git commit histories), which together create a sort of library of higher-level context and reusable concepts. Feeding this into o1, with a pre-defined "call graph" of prompts to validate the output, fix identified issues, consider past errors in similar types of commits and past executions, etc., has produced some very good results for us so far.
I've also found much more success with ad-hoc questions after writing a small static analyzer to trace imports, variable references->declarations, etc., to isolate the portions of the codebase to use as context, rather than the RAG-based searching that a lot of LLM-centric development tools seem to use.
It's also worth mentioning that performance quality seems to be very much influenced by language; I thankfully work primarily with Python codebases, though I've had success using it against (smaller) Rust codebases as well.
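A rough sketch of the import-tracing idea, assuming a single flat Python package (illustrative only, not the commenter's actual analyzer):

    # Hypothetical sketch: map each module to the project-local modules it imports,
    # so only the relevant slice of the codebase is fed to the model as context.
    import ast
    from pathlib import Path

    def local_import_graph(root: str) -> dict[str, set[str]]:
        root_path = Path(root)
        local_modules = {p.stem for p in root_path.rglob("*.py")}
        graph: dict[str, set[str]] = {}
        for path in root_path.rglob("*.py"):
            tree = ast.parse(path.read_text(encoding="utf-8"))
            deps: set[str] = set()
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    deps.update(alias.name.split(".")[0] for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    deps.add(node.module.split(".")[0])
            graph[path.stem] = deps & local_modules   # keep only project-local imports
        return graph

    # Usage: start from the file a question touches, walk the graph a level or two,
    # and include only those modules in the prompt instead of RAG-style retrieval.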
A person with the experience to push LLMs into outputting the perfect little function or utility to solve a problem, and to collect enough of those to get somewhere, is the interesting piece.
I'd love it if more companies were actually considering real engineering needs to provide products in this space. Until then, I have yet to see any compelling evidence that the current chatbot models can consistently produce anything useful for my particular work other than the occasional SQL query.
When you consider LLMs to be building blocks in bigger, more complex systems, their potential increases dramatically. That's where mid/senior engineers would chip in and add value to a company, in my point of view. There's also different infrastructure paradigms involved that have to be considered carefully, so DevOps is potentially necessary for years to come.
I see a lot of ego in this comment, and I think this is actually a good example of how to NOT safeguard yourself against LLMs. This kind of skepticism is the most toxic to yourself. Dismiss them as novelty for the masses, as bullshit tech, keep doing your same old things and discard any applications. Textbook footgun.
Do you have any examples of where/how that would work? It has seemed for me like lot of the hype is "they'll be good" with no further explanation.
Regardless of how sharp you keep yourself, you're still subject to the macro environment.
I'm far more worried about mental degradation due to any number of circumstances -- unlucky genetics, infections, what have you. But "future proofing" myself against some of that has the same answer: Remain curious, remain mentally ambidextrous, and don't let other people (or objects) think for me.
My brain is my greatest asset both for my job and my private life. So I do what I can to keep it in good shape, which incidentally also means replacing me with a parrot is unlikely to be a good decision.
I know a few people who have been primarily programming for 10 years but are not seniors. 5 of them (probably 10 or more, but let's not overdo it), with AI, cannot replace one senior developer unless you make that senior do super basic tasks.
Unfortunately it won't be your sniff test that matters. It's going to be an early founder that realizes they don't need to make that extra seed round hire, or the resource limited director that decides they can forgo that one head count and still deliver the product on time, or the in house team that realizes they no longer need a dedicated front end dev because, for their purposes, AI is good enough.
Personally, the team I lead is able to ship much faster with AI assistants than without, which means in practice we can out compete much larger teams in the same space.
Sure, there are things that AI will always struggle with, but those things aren't merely "senior" in nature; they're much closer to the niche expert type of problems. Engineers working on generally cutting-edge work will likely be in demand and hard to replace, but many others will very likely be impacted by AI from multiple directions.
So, you're giving away your company's IP to AI firms, does your CEO understand this?
I feel like of all the things in this thread, this one is on them. It absolutely is something that LLMs are good at. They have the sample size, examples and docs, all the things. LLMs are particularly good at speaking "their" language, the most surprising thing is that they can do so much more beyond that next token reckoning.
So, yeah, I'm a bit surprised that a shop like Shopify is so sloppy, but absolutely I think they should be able to provide you an LLM to answer your questions. Particularly given some of the Shopify alumni I've interviewed.
That said, some marketing person might have just oversold the capabilities of an LLM that answers most of their core customer questions, rather than one that knows much at all about their API integrations.
That's why the question is about future-proofing. Models get better with time, not worse.
The job of a software engineer is first to understand the real needs of the customer/user, which requires a level of knowledge and understanding of the real world that LLMs simply don't have and never will, because that is simply not what they do.
The second part of a software engineer's job is to translate those needs into a functional program. Here the issue is that natural languages are simply not precise enough for the kind of logic that is involved. This is why we invented programming languages rather than use plain English. And since the interface of LLMs is by definition human (natural) language, it will fundamentally always have the same flaws.
Any interface precise enough for this second part of the job will by definition just be some higher-level programming language, which not only involves an expert in the tool anyway, but is also unrealistic given the amount of effort we already collectively invest in getting more productive languages and frameworks to replace ourselves.
There's plenty of low-code, no-code solutions around, and yet still lots of software. The slice of the pie will change, but it's very hard to see it being eliminated entirely.
Ultimately it's going to come down to "do I feel like I can trust this?" and with little to no way to be certain you can completely trust it, that's going to be a harder and harder sell as risk increases with the size, complexity, and value of the business processes being managed.
A decade (almost?) ago, people were saying "look at how self-driving cars improved in the last 2 years: in 3-5 years we won't need to learn how to drive anymore". And yet...
Thanks to this thread I have signed up again for Copilot and I am blown away. I think this easily makes me 2x productive when doing implementation work. It does not make silly mistakes anymore and it's just faster to have it write a block of code than doing it myself.
And the experience is more of an augmentation than replacement. I don't have to let it run wild. I'm using it locally and refactor its output if needed to match my code quality standards.
I am as much concerned (job market) as I am excited (what I will be able to build myself).
I'm not sure whether you mean human generations or LLM generations, but I think it's the latter. In that case, I agree with you, but also that doesn't seem to put you particularly far off from OP, who didn't provide specific timelines but also seems to be indicating that the elimination of most engineers is still a little ways away. Since we're seeing a new generation of LLMs every 1-2 years, would you agree that in ~10 years at the outside, AI will be able to do the things that would cause you to gladly retire?
I don't think that's impossible, but I think we're quite a few human generations away from that. And scaling LLMs is not the solution to that problem; an LLM is just a small but important part of it.
I’m curious what’s so special about that blitting?
BTW, pixel shaders in D3D11 can receive screen-space pixel coordinates in SV_Position semantic. The pixel shader can cast .xy slice of that value from float2 to int2 (truncating towards 0), offset the int2 vector to be relative to the top-left of the sprite, then pass the integers into Texture2D.Load method.
Unlike the more commonly used Texture2D.Sample, Texture2D.Load method delivers a single texel as stored in the texture i.e. no filtering, sampling or interpolations. The texel is identified by integer coordinates, as opposed to UV floats for the Sample method.
I do hear some of my junior colleagues use them now and again, and gain some value there. And if llm's can help get people up to speed faster that'd be a good thing. Assuming we continue to make the effort to understand the output.
But yeah, agree, I raise my eyebrow from time to time, but I don't see anything jaw-dropping yet. Right now they just feel like surrogate Googlers.
There will be human programmers in the future as well, they just won’t be ones who can’t use LLMs.
I think it's an interesting psychological phenomenon similar to virtue signalling. Here you are signalling to the programmer in-group how good of a programmer you are. The more dismissive you are the better you look. Anyone worried about it reveals themself as a bad coder.
It's a luxury belief, and the better LLMs get the better you look by dismissing them.
It's essentially like saying "What I do in particular, is much too difficult for an AI to ever replicate." It is always in part, humble bragging.
I think some developers like to pretend that they are exclusively solving problems that have never been solved before. Which sure, the LLM architecture in particular might never be better than a person for the novel class of problem.
But the reality is, an extremely high percentage of all problems (and by reduction, the lines of code that build that solution) are not novel. I would guesstimate that less than 1 out of 10,000 developers are solving truly novel problems with any regularity. And those folks tend to work at places like Google Brain.
That's relevant because LLMs can likely scale forever in terms of solving the already solved.
Defending AI with passion is nonsensical, or at least ironic.
There is some tech that is getting progressively better.
I am high on the linear scale
Therefore I don't worry about it ever catching up to me
And this is the top voted argument.
Is it still getting better? My understanding is that we're already training them on all of the publicly available code in existence, and we're running in to scaling walls with bigger models.
People really believe it will be generations before an AI will approach human level coding abilities? I don't know how a person could seriously consider that likely given the pace of progress in the field. This seems like burying your head in the sand. Even the whole package of translating high level ideas into robust deployed systems seems possible to solve within a decade.
I believe there will still be jobs for technical people even when AI is good at coding. And I think they will be enjoyable and extremely productive. But they will be different.
hmmmmm
The off-topic mention of graphics programming was because I tend to type as I think, then make corrections, and as I re-read it now the paragraph isn't great. The intent was to give an example of how I keep myself sharp by challenging what many consider "settled" knowledge; graphics programming happens to be my most recent example.
For what it's worth, English isn't my native language, and you'll have to take my word for it that no chat bots were used in generating any of the responses I've made in this thread. The fact that people are already uncertain about who's a human and who's not is worrisome.
source: been coding for 15+ years and using LLMs for the past year.
I spent a lot of time with traders in the early '00s and then the '10s, when the automation was going full tilt.
Common feedback I heard from these highly paid, highly technical, highly professional traders in a niche industry running the world in its way was:
- How complex the job was
- How high a quality bar there was to do it
- How current algos never could do it and neither could future ones
- How there'd always be edge for humans
Today, the exchange floors are closed, SWEs run trading firms, traders if they are around steer algos, work in specific markets such as bonds, and now bonds are getting automated. LLMs can pass CFA III, the great non-MBA job moat. The trader job isn't gone, but it has capital-C Changed and it happened quickly.
And lastly - LLMs don't have to be "great," they just have to be "good enough."
See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do.
Edit - Advice: the job will change, the job might change in that you steer LLMs, so become the best at LLM steering. Trading still goes on, and the huge, crushing firms in the space all automated early and at various points in the settlement chain.
Everyone cites these kinds of examples, of an LLM beating some test or other, as some kind of validation. It isn't.
To me that just tells that the tests are poor, not the LLMs are good. Designing and curating a good test is hard and expensive.
Certifying and examination bodies often use knowledge as a proxy for understanding, reasoning, or any critical thinking skills. They just need to filter enough people out; there is no competitive pressure to improve quality at all. Knowledge tests do that just as well and are cheaper.
Standardization is also hard to do correctly, common core is a classic example of how that changes incentives for both teachers and students . Goodhart's law also applies.
To me it is more often than not a function of poor test measurement practices rather than any great skill shown by the LLM.
Passing the CFA or the bar exam, while daunting for humans by design, does not teach you anything about practicing law or accounting. Managing the books of a real company is nothing like what the textbooks and exams teach you.
---
The best accountants or lawyers etc. are not making partner because of their knowledge of the law and tax. They make money the same way as everyone else: networking and building customer relationships. As long as the certification bodies don't flood the market they will do well, which is exactly what the test ensures.
I mean the same is true of leetcode but I know plenty of mediocre engineers still making ~$500k because they learned how to grind leetcode.
You can argue that the world is unjust till you're blue in the face, but it won't make it a just world.
That being said, there's also a massive low hanging fruit in dev work that we'll automate away, and I feel like that's coming sooner rather than later, yes even though we've been saying that for decades. However, I bet that the incumbents (Senior SWE) have a bit longer of a runway and potentially their economic rent increases as they're able to be more efficient, and companies need not hire as many humans as they needed before. Will be an interesting go these next few decades.
And this has been solved for years already with existing tooling. Debuggers, Intellisense, linters, snippets and other code generation tools, build systems, framework-specific tooling... There are a lot of tools for writing and maintaining code. The only thing left was always the understanding of the system that solves the problem and knowledge of the tools to build it. And I don't believe we can automate this away. Using LLMs is like riding a drugged donkey instead of a motorbike. It can only work for very short distances or the thrill.
In any long lived project, most modifications are only a few lines of codes. The most valuable thing is the knowledge of where and how to edit. Not the ability to write 400 lines of code in 5 seconds.
Trading is about doing very specific math in a very specific scenario with known expectations.
Software engineering is anything but like that.
I'd be impressed if the profession survived unscathed. It will mutate into some form or another. And it will probably shrink. Wages will go down toward the level at which OpenAI sells its monthly Pro plan. And maybe 3rd-world devs will stay competitive enough. But that depends a lot on whether AI companies get abundant and cheap energy. IIUC that's their choke point atm.
If it happens it will be quite ironic and karmic, TBH. Ironic because it is our profession's very own R&D that will kill it. Karmic because it's exactly what it did to numerous other fields and industries (remember Polaroid killed by Instagram, etc).
OTOH there's no riskier thing than predicting the future. Who knows. Maybe AI will fall on its own face due to insane energy economics, internet rot (attributed to itself) and what not.
It's possible everyone just stops hiring new folks and lets the incumbents automate it. Or it's possible they all washed cars the rest of their careers.
- Post-9/11 stress and '08 were a big jolt, and pushed a lot of folks out.
- Managed their money fine (or not) for when the job slowed down and also when ‘08 hit
- “traders” became “salespeople” or otherwise managing relationships
- Saw the trend, leaned into it hard, you now have Citadel, Virtu, JS, and so on.
- Saw the trend, specialized or were already in assets hard to automate.
- Senior enough to either steer the algo farms + jr traders, or become an algo steerer themselves
- Not senior enough, not rich enough, not flexible enough or not interested anymore and now drive Uber, mobile dog washers, joined law enforcement (3x example I know of).
An interesting question is "How is programming like trading securities?"
I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.
What trading software has a hard time doing is coming up with new securities. What LLMs absolutely cannot do (yet?) is come up with novel mechanisms. To illustrate that, consider the idea that an LLM has been trained on every kind of car there is. If you ask it to design a plane it will fail. Train it on all cars and planes and ask it to design a boat, same problem. Train it on cars, planes, and boats and ask it to design a rocket, same problem.
The sad truth is that a lot of programming is 'done' , which is to say we have created lots of compilers, lots of editors, lots of tools, lots of word processors, lots of operating systems. Training an LLM on those things can put all of the mechanisms used in all of them into the model, and spitting out a variant is entirely within the capabilities of the LLM.
Thus the role of humans will continue to be to do the things that have not been done yet. No LLM can design a quantum computer, nor can it design a compiler that runs on a quantum computer. Those things haven't been "done" and they are not in the model. The other role of humans will continue to be 'taste.'
Taste, as defined as an aesthetic, something that you know when you see it. It is why for many, AI "art" stands out as having been created by AI, it has a synthetic aesthetic. And as one gets older it often becomes apparent that the tools are not what determines the quality of the output, it is the operator.
I watched Dan Silva do some amazing doodles with Deluxe Paint on the Amiga and I thought, "That's what I want to do!" and ran out and bought a copy and started doodling. My doodles looked like crap :-). The understanding that I would have to use the tool, find its strengths and weaknesses, and then express through it was clearly a lot more time consuming than "get the tool and go."
LLMs let people generate marginal code quickly. For so many jobs that is good enough. Generating really good code that takes in constraints the LLM can't model is something that will remain the domain of humans until GAI is achieved[2]. So careers in things like real-time and embedded systems will probably still have a lot of humans involved, and systems where extracting every single compute cycle out of the engine is a priority will likely be dominated by humans too.
[1] Very early on there were papers on 'genetic' programming. It's a good thing to read them because they arrive at a singularly important point: "How do you define 'which is better'?" For a solid, quantitative and testable metric for 'goodness', genetic algorithms outperform nearly everything. When the ability to specify 'goodness' is not there, genetic algorithms cannot outperform humans. What is more, they cannot escape 'quality moats' where the solutions on the far side of the moat are better than the solutions being explored, but they cannot algorithmically get far enough into the 'bad' solutions to start climbing up the hill on the other side to the 'better' solutions.
[2] GAI being "Generalized Artificial Intelligence" which will have to have some way of modelling and integrating conceptual systems. Lots of things get better then (like self driving finally works), maybe even novel things. Until we get that though, LLMs won't play here.
What's weird to me is why people think this is some kind of great benefit. Not once have I ever worked on a project where the big problem was that everyone was already maxed out coding 8 hours a day.
Figuring out what to actually code and how to do it the right way seemed to always be the real time sink.
Right, but when Python came into popularity it's not like we reduced the number of engineers 10-fold, even though it used to take a team 10x as long to write similar functionality in C++.
If LLMs become so good that everyone can just let an LLM go into the world and make them money, the way we do with our investments, won't that be good?
And, certainly, prob a good thing for some, bad thing for the money conveyor belt of the last 20 yrs of tech careers.
Sounds like it was written by someone trying to keep any grasp on the fading reality of AI.
NO NO NO NO NO NO NO!!!! It may be that some random script you run on your PC can be "good enough", but the software that my business sells can't be produced by "good enough" LLMs. I'm tired of my junior dev turning in garbage code that the latest and greatest "good enough" LLM created. I'm about to tell him he can't use AI tools anymore. I'm so thankful I actually learned how to code in pre-LLM days, because I know more than just how to copy and paste.
I think of LLMs like clay, or paint, or any other medium where you need people who know what they're doing to drive them.
Also, I might humbly suggest you invest some time in the junior dev and ask yourself why they keep on producing "garbage code". They're junior, they aren't likely to know the difference between good and bad. Teach them. (Maybe you already are, I'm just taking a wild guess)
1) I believe we need true AGI to replace developers.
2) I don't believe LLMs are currently AGI or that if we just feed them more compute during training that they'll magically become AGI.
3) Even if we did invent AGI soon and replace developers, I wouldn't even really care, because the invention of AGI would be such an insanely impactful, world changing, event that who knows what the world would even look like afterwards. It would be massively changed. Having a development job is the absolute least of my worries in that scenario, it pales in comparison to the transformation the entire world would go through.
Therefore, unless you for some reason believe you will be in the shrinking portion that cannot be replaced, I think the question deserves more attention than "nothing".
Short of genuine AGI I’ve yet to see a compelling argument why productivity eliminates jobs, when the opposite has been true in every modern economy.
Me and the other senior dev spent weeks reviewing these PRs. Here’s what we found:
- The feature wasn’t built to spec, so even though it worked in general the details were all wrong
- The code was sloppy and didn't adhere to the repo's guidelines
- He couldn’t explain why he did things a certain way versus another, so reviews took a long time
- The code worked for the happy path, and errored for everything else
Eventually this guy got moved to a different team and we closed his PRs and rewrote the feature in less than a week.
This was an awful experience. If you told me that this is the future of software I’d laugh you out of the room, because engineers make enough money and have enough leverage to just quit. If you force engineers to work this way, all the good ones will quit and retire. So you’re gonna be stuck with the guys who can’t write code reviewing code they don’t understand.
1. A lot more projects get the green light when the price is 5x less, and many more organizations can afford custom applications.
2. LLMs unlock large amounts of new applications. A lot more of the economy is now automatable with LLMs.
I think jr devs will see the biggest hit. If you're going to teach someone how to code, you might as well teach a domain expert. LLMs already code much better than almost all jr devs.
Comparing only the amount of forward progress in a codebase with AI's ability to participate in or cover it might be better.
This is what will happen
At that point its total societal upheaval and losing my job will probably be the least of my worries.
2) I do see this, given the money poured into this cycle, as a real possibility. It may not just be LLMs. As another comment put it, you are betting against the whole capitalist system, human ingenuity, and billions (trillions?) of dollars targeted at making SWEs redundant.
3) I think it can disrupt only knowledge jobs, and it will be some time before it disrupts physical jobs. For SWEs this is the worst outcome - it means you are on your own w.r.t. adjusting for the changes coming. It's only "world changing", as you put it, to economic systems if it disrupts everyone at once. I don't think it will happen that way.
More to the point, software engineers will automate themselves out before other jobs for only one reason - they understand AI better than people in other jobs do (even if their work is objectively harder to automate), and they tend not to protect the knowledge required to do so. They have the domain knowledge to know what to automate/make redundant.
The people that have the power to resist/slow down disruption (i.e. hide knowledge) will gain more pricing power, and therefore be able to earn more capital by taking advantage of the efficiency gains from jobs made redundant by AI. The last to be disrupted have the most opportunity to gain ownership of assets and capital from their preserved economic profits. The inefficient will win out of this - capital rewards scarcity, i.e. people that can remain in demand despite being relatively inefficient. Competition is for losers - that is, IMV, the biggest flaw of the system. As a result, people will see what has happened to SWEs and make sure their industry "has time" to adapt, particularly since many knowledge professions are really "industry unions/licensed clubs" which have the advantage of keeping their domain knowledge harder to access.
To explain it further: even if software is more complicated, there is just so much more capital trying to disrupt it than other industries. Given that, IMV, software demand is relatively inelastic to price due to scaling profits, making it cheaper to produce won't really benefit society all that much w.r.t. more output (i.e. what was economically good to build would have been built anyway in an inelastic-demand/scaling commodity). Generally, more supply / lower cost of a good has more absolute societal benefit when there is unmet and/or elastic demand. Instead, the cost of SWEs will go down and the benefit will be distributed to the jobs/people remaining (managers, CEOs, etc) - the people that devs think "are inefficient", in my experience. When it is about inelastic demand it's more redistributive; the customer benefits and the supplier (in this case SWEs) loses.
I don't like saying this; but we gave AI all the advantage. No licensing requirements, open source software for training, etc.
What happens when a huge chunk of knowledge workers lose their jobs? Who is going to buy houses, roofs, cars, cabinets, furniture, Amazon packages, etc. from all the blue-collar workers?
What happens when all those former knowledge workers start flooding the job markets for cashiers and factory workers, or applying en masse to the limited spots in nursing schools or trade programs?
If GPTs take away knowledge work at any rate above "glacially slow" we will quickly see a collapse that affects every corner of the global economy.
At that point we just have to hope for a real revolution in terms of what it means to own the means of production.
The problem was it never worked. When generating the code, the best the tools could do was create all the classes for you and maybe define the methods for each class. The tools could not provide an implementation unless they provided the means to manage the implementation within the tool itself - which was awful.
Why have you likely not heard of any of this? Because the fad died out in the early 2000s. The juice simply wasn't worth the squeeze.
Fast-forward 20 years and I'm working in a new organization where we're using ArchiMate extensively and are starting to use more and more UML. Just this past weekend I started wondering given the state of business modeling, system architecture modeling, and software modeling, could an LLM (or some other AI tool) take those models and produce code like we could never dream of back in the 80s, 90s, and early 00s? Could we use AI to help create the models from which we'd generate the code?
At the end of the day, I see software architects and software engineers still being engaged, but in a different way than they are today. I suppose to answer your question, if I wanted to future-proof my career I'd learn modeling languages and start "moving to the left" as they say. I see being a code slinger as being less and less valuable over the coming years.
Bottom line, you don't see too many assembly language developers anymore. We largely abandoned that back in the 80s and let the computer produce the actual code that runs. I see us doing the same thing again but at a higher and more abstract level.
This just seems like just the next level of abstraction. I don't forsee a "traders all disappeared" situation like the top comment, because at the end of the day someone needs to know WHAT they want to build.
So yes, fewer junior developers, and development looking more like management/architecting. A lot more reliance on deeply knowledgeable folks to debug the spaghetti hell. But also a lot more designers that are suddenly Very Successful Developers. A lot more product people that super-charge things. A lot more very fast startups run by some schmoes with unfundable but ultimately visionary ideas.
At least, that's my best case scenario. Otherwise: SkyNet.
I think it's important to note that there were a couple distinct markets for CASE:
1. Military/aerospace/datacomm/medical type technical development. Where you were building very complex things, that integrated into larger systems, that had to work, with teams, and you used higher-level formalisms when appropriate.
2. "MIS" (Management Information Systems) in-house/intranet business applications. Modeling business processes and flows, and a whole lot of data entry forms, queries, and printed reports. (Much of the coding parts already had decades of work on automating them, such as with WYSIWYG form painters and report languages.)
Today, most Web CRUD and mobile apps are the descendant of #2, albeit with branches for in-house vs. polished graphic design consumer appeal.
My teams had some successes with #1 technical software, but UML under IBM seemed to head towards #2 enterprise development. I don't have much visibility into where it went from there.
I did find a few years ago (as a bit of a methodology expert familiar with the influences that went into UML, as well as familiar with those metamodels as a CASE developer) that the UML specs were scary and huge, and mostly full of stuff I didn't want. So I did the business process modeling for a customer logistics integration using a very small subset, with very high value. (Maybe it's a little like knowing hypertext, and then being teleported 20 years into the future, where the hypertext technology has been taken over by evil advertising brochures and surveillance capitalism, so you have to work to dig out the 1% hypertext bits that you can see are there.)
Post-ZIRP, if more people start caring about complex systems that really have to work (and fewer people care about lots of hiring and churning code to make it look like they have "growth"), people will rediscover some of the better modeling methods, and be, like, whoa, this ancient DeMarco-Yourdon thing is most of what we need to get this process straight in a way everyone can understand, or this Harel thing makes our crazy event loop with concurrent activities tractable to implement correctly without a QA nightmare, or this Rumbaugh/Booch/etc. thing really helps us understand this nontrivial schema, and keep it documented as a visual for bringing people onboard and evolving it sanely, and this Jacobson thing helps us integrate that with some of the better parts of our evolving Agile process.
But I was definitely in camp #2 - the in-house business applications. I'd love to hear the experiences from those in camp #1. To your point, once IBM got involved it all went south. There was a place I was working for in the early 90s that really turned me off against anything "enterprise" from IBM. I had yet to learn that would apply to pretty much every vendor! :)
I'm no SWE and probably never will be. SWEs probably don't consider what I do "building an app", but I don't really care.
And where you do, no LLM is going to replace them because they are working in the dark mines where no compiler has seen and the optimizations they are doing involve arcane lore about the mysteries of some Intel engineer's mind while one or both of them are on a drug fueled alcoholic deep dive.
I can see people still learning assembly in a pedagogical setting, but not in a production setting. I'd be interested to hear otherwise.
I see LLMs as the next higher level of abstraction.
Does this mean it will replace me? At the moment the output is so flawed for anything but the most trivial professional tasks, I simply see, as before, it has a long long way to go.
Will it put me out of a job? I highly doubt it in my career. I still love it and write stuff for home and work every day of the week. I'm planning on working until I drop dead, as it seems I have never lost interest so far.
Will it replace developers as we know it? Maybe in the far future. But we'll be the ones using it anyway.
On the other side, I'm switching to game dev and it has become a very useful companion, outputting well-known algorithms. It's more like a universal API than a junior assistant.
Instead of taking the time to understand an algo in detail and then implementing it, I use GPT-4o to expand the Unreal API with missing parts. It truly expands the scope I'm able to handle, and it feels good to save hours that compound into days and weeks of work.
E.g. 1. OBB and SAT (see the sketch below) https://stackoverflow.com/questions/47866571/simple-oriented...
2. Making a grid system using lat/long coordinates for a voxel planet.
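For the first example, a minimal 2D separating-axis (SAT) overlap test for oriented boxes, in illustrative Python rather than the commenter's Unreal-side code:

    # Hypothetical sketch: separating-axis test for two 2D oriented bounding boxes.
    import math
    from dataclasses import dataclass

    @dataclass
    class OBB2D:
        cx: float      # center x
        cy: float      # center y
        hw: float      # half width
        hh: float      # half height
        angle: float   # rotation in radians

        def axes(self):
            c, s = math.cos(self.angle), math.sin(self.angle)
            return [(c, s), (-s, c)]                 # local x and y axes

        def corners(self):
            (ux, uy), (vx, vy) = self.axes()
            return [(self.cx + sx * self.hw * ux + sy * self.hh * vx,
                     self.cy + sx * self.hw * uy + sy * self.hh * vy)
                    for sx in (-1, 1) for sy in (-1, 1)]

    def project(points, axis):
        dots = [px * axis[0] + py * axis[1] for px, py in points]
        return min(dots), max(dots)

    def obb_overlap(a: OBB2D, b: OBB2D) -> bool:
        pa, pb = a.corners(), b.corners()
        for axis in a.axes() + b.axes():             # the 4 face normals are the candidate axes
            amin, amax = project(pa, axis)
            bmin, bmax = project(pb, axis)
            if amax < bmin or bmax < amin:
                return False                         # found a separating axis: no overlap
        return True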
As someone who knows web front-end development only to the extent I need it for internal tools, it's been turning day-long fights into single-hour, dare I say it, pleasurable experiences. I tell it to make a row of widgets and it outputs all the div+css soup (or e.g. material components) that only needs some tuning, instead of having to google everything.
It still takes experience to know when it doesn’t use a component when it should etc., but it’s a force multiplier, not a replacement. For now.
I think you have to be a developer to learn how to write those requirements well. And I don't even mean the concepts of data flows and logic flows. I mean, just learning to organise thoughts in a way that they don't fatally contradict themselves or lead to dead ends or otherwise tie themselves in unresolvable knots. I mean like non-lawyers trying to write laws without any understanding of the entire suite of mental furniture.
I didn't want to expand on it, for fear of sounding like an elitist, and you said it better anyway. The same things that make a programmer excellent will put them in a much better position to use an even better LLM.
Concise thinking and expression. At the moment LLMs will just kinda 'shotgun' scattered ideas based on your input. I expect the better ones will be massively better when fed better input.
It is happening, just not yet at a scale that really scares people; that will happen though. It is just stupidly cheaper: for the price of one junior you can make so many API requests to Claude it's not even funny. Large companies are still thinking about privacy of data etc., but all of that will simply not matter in the long run.
Good logical thinkers and problem solvers won't be replaced any time soon, but mediocre or bad programmers are already gone; an LLM is faster, cheaper, and doesn't get ill or tired. And there are so many of them; just try someone on Upwork or Fiverr and you will know.
Privacy is a fundamental right.
And companies should care about not leaking trade secrets, including code, but the rest as well.
US companies are known to use the cloud to spy on competitors.
Companies should have their own private LLM, not rely on cloud instances with a contract that guarantees "privacy".
That attitude is why our industry has such a bad rep and why things are going down the drain to dystopia. Devs without ethics. This world is doomed.
Idk if the cheap price is really the price, or a promotional price after which the enshittification and price increases happen, which is the trend for most tech companies.
And agreed, LLMs will weed out bad programmers further, though in the future bad alternatives (like analysts or bad LLM users) may emerge.
What really happened: this is used by programmers to improve their workflow
1. There will be way more software
2. Most people / companies will be able to opt out of predatory VC funded software and just spin up their own custom versions that do exactly what they want without having to worry about being spied on or rug pulled. I already do this with chrome extensions, with the help of claude I've been able to throw together things like time based website blocker in a few minutes.
3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking
4. Companies will hire way less, and probably mostly engineers to automate routine tasks that would previously have been done by humans (ex: bookkeeping, recruiting, sales outreach, HR, copywriting / design). I've heard this is already happening with a lot of new startups.
EDIT: for people who are not convinced that these models will be better than them soon, look over these sets of slides from NeurIPS:
- https://michal.io/notes/ml/conferences/2024-NeurIPS#neurips-...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#fine-tun...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#math-ai-...
This presumes that they know exactly what they want.
My brother works for a company and they just ran into this issue. They target customer retention as a metric. The result is that all of their customers are the WORST, don't make them any money, but they stay around a long time.
The company is about to run out of money and crash into the ground.
If people knew exactly what they wanted 99% of all problems in the world wouldn't exist. This is one of the jobs of a developer, to explore what people actually want with them and then implement it.
The first bit is WAY harder than the second bit, and LLMs only do the second bit.
From working in a non-software place, I see the opposite occurring. Non-software management doesn't buy closed source software because they think it's 'better', they buy closed source software because there's a clear path of liability.
Who pays if the software messes up? Who takes the blame? LLMs make this even worse. Anthropic is not going to pay your business damages because the LLM produced bad code.
1. I try to stay somewhat up to date with ML and how the latest things work. I can throw together some python, let it rip through a dataset from kaggle, let models run locally etc. Have my linalg and stats down and practiced. Basically if I had to make the switch to be an ML/AI engineer it would be easier than if I had to start from zero.
2. I otherwise am trying to pivot more to cyber security. I believe current LLMs produce what I would call "untrusted and unverified input" which is massively exploitable. I personally believe that if AI gets exponentially better and is integrated everywhere, we will also have exponentially more security vulnerabilities (that's just an assumption/opinion). I also feel we are close to cyber security being taken more seriously or even regulated e.g. in the EU.
At the end of the day I think you don't have to worry if you have the "curiosity" that it takes to be a good software engineer. That is because, in a world where knowledge, experience and willingness to probe out of curiosity will be even more scarce than they are now you'll stand out. You may leverage AI to assist you but if you don't fully and blindly rely on it you'll always be the more qualified worker than someone who does.
I don't see this trend. It just sounds like a weird thing to say, it fundamentally misunderstands what the job is
From my experience, software engineering is a lot more human than how it gets portrayed in the media. You learn the business you're working with, who the stakeholders are, who needs what, how to communicate your changes and to whom. You're solving problems for other people. In order to do that, you have to understand what their needs are
Maybe this reflects my own experience at a big company where there's more back and forth to deal with. It's not glamorous or technically impressive, but no company is perfect
If what companies really want is just some cheap way to shovel code, LLMs are more expensive and less effective than the other well-known way of cheaping out.