Readit News
Posted by u/throwaway_43793 8 months ago
Ask HN: SWEs how do you future-proof your career in light of LLMs?
LLMs are becoming a part of the software engineering profession.

The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feeding entire projects to AI and letting it write the code while they do code review and adjustments.

I didn't want to believe it, but I think it's here. And even objections like "you'd be feeding it proprietary code" will eventually be solved by companies hosting their own isolated LLMs, as the models get better and hardware becomes more available.

My prediction is that junior-to-mid-level software engineering will mostly disappear, while senior engineers will transition to being more of a guiding hand for LLM output, until eventually LLMs become so good that senior people won't be needed any more.

So, fellow software engineers, how do you future-proof your career in light of the (seemingly inevitable) LLM takeover?

--- EDIT ---

I want to clarify something, because there seems to be a slight misunderstanding.

A lot of people have been talking about SWE being not only about code, and I agree with that. But it's also easier to sell this idea to a young person who is just starting in this career. And while I want this Ask HN to be helpful to young/fresh engineers as well, I'm more interested in getting help for myself, and many others who are in a similar position.

I have almost two decades of SWE experience. But despite that, I seem to have missed the party where they told us that coding is a means to an end, not the end itself, and only realized it in the past few years. I bet there are people out there who are in a similar situation. How can we future-proof our career?

simianparrot · 8 months ago
Nothing, because I'm a senior and LLMs never provide code that passes my sniff test, and it remains a waste of time.

I have a job at a place I love, and I get more people in my direct and extended network contacting me about work than ever before in my 20-year career.

And finally, I keep myself sharp by always making sure I challenge myself creatively. I'm not afraid to delve into areas that might look "solved" to others in order to understand them. For example, I have a CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines, and I recently did 3D in it from scratch as well.

All the while re-evaluating all my assumptions and that of others.

If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.

Honestly, this fear that there will soon be no need for human programmers stems either from people who don't themselves understand how LLMs work, or from people who do understand but have a business interest in convincing others that it's more than it is as a technology. I say that with confidence.

richardw · 8 months ago
For those less confident:

U.S. (and German) automakers were absolutely sure that the Japanese would never be able to touch them. Then Koreans. Now Chinese. Now there are tariffs and more coming to save jobs.

Betting against AI (or increasing automation, really) is a bet not against robots, but against human ingenuity. Humans are the ones making progress, and we can work with toothpicks as levers. LLMs are our current building blocks, and people are doing crazy things with them.

I've got almost 30 years' experience but I'm a bit rusty in, e.g., web. But I've used LLMs to build maybe 10 apps that I had no business building: from one-off kids' games for learning math, to a soccer team generator that uses Google's OR-Tools to optimise across various axes, to spinning up four different test apps with Replit's agent to try multiple approaches to a task I'm working on. All the while skilling up in React and friends.
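(For illustration only: a balanced-team assignment of the kind described above can be sketched with OR-Tools' CP-SAT solver roughly as follows. The player names, skill scores, and the single "skill" axis are made up for the example and are not from the app mentioned.)

    # Hypothetical sketch: split players into two equally sized teams while
    # minimising the skill difference, using OR-Tools CP-SAT.
    from ortools.sat.python import cp_model

    players = {"Ana": 7, "Bo": 5, "Cem": 8, "Dia": 4, "Eli": 6, "Fay": 6}

    model = cp_model.CpModel()
    on_team_a = {p: model.NewBoolVar(p) for p in players}  # 1 = team A, 0 = team B

    # Equal team sizes.
    model.Add(sum(on_team_a.values()) == len(players) // 2)

    # Minimise |skill(A) - skill(B)|, i.e. |2 * skill(A) - total|.
    total = sum(players.values())
    team_a_skill = sum(s * on_team_a[p] for p, s in players.items())
    diff = model.NewIntVar(0, total, "diff")
    model.Add(diff >= 2 * team_a_skill - total)
    model.Add(diff >= total - 2 * team_a_skill)
    model.Minimize(diff)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        team_a = [p for p in players if solver.Value(on_team_a[p])]
        print("Team A:", team_a, "- skill diff:", solver.Value(diff))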

I don't really have time for those side-quests, but LLMs make them possible. Easy, even. The amount of time and energy I'd have needed pre-LLMs to do this puts it a million miles away from "a waste of time".

And even if LLMs get no better, we're good at finding the parts that work well and using them as leverage. I'm using them to build and check datasets, because they're really good at extraction. I can throw a human in the loop, but in a startup setting this is 80/20 and that's enough. When I need enterprise-level code, I brainstorm 10 approaches with the LLM and then take the reins. How is this not valuable?

pdimitar · 8 months ago
In other words, you have built exactly zero commercial-grade applications of the kind that we working programmers build every day.

LLMs are good for playing with stuff, yes, and I think your parent commenter implied as much. But when you have to scale the work, the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd-party dependencies, be easy to configure so it can be deployed on every cloud provider (e.g. by using env vars and config files to modify its behavior)... and other, even more important traits.

LLMs don't write code like that. Many people have tried, with many prompts. They're mostly good for generating stuff once and maybe making small modifications, while convincingly arguing the code has no bugs (when it does).

You seem to be confusing one-off projects that have little to no need for maintenance with actual commercial programming, perhaps?

Your analogy with the automakers seems puzzlingly irrelevant to the discussion at hand, and very far from transferable to it. Also I am 100% convinced nobody is going to protect the programmers; business people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson. /off-topic

Like your parent commenter, if LLMs get on my level, I'd be _happy_ to retire. I don't have a super vested interest in commercial programming, in fact I became as good at it in the last several years because I started hating it and wanted to get all my tasks done with minimal time expended; so I am quite confident in my assessment that LLMs are barely at the level of a diligent but not very good junior dev at the moment, and have been for the last at least 6 months.

Your take is rather romantic.

nidnogg · 8 months ago
As a senior-ish programmer who struggled a lot with algorithmic thinking in college, I find it really awe-inspiring.

Truly hit the nail on the head there. We HAD no business with these side-quests, but now? They're all ripe for the taking, really.

cadamsau · 8 months ago
One hundred per cent this.

LLM pair programming is unbelievably fun, satisfying, and productive. Why type out the code when you can instead watch it being typed while thinking of and typing out/speaking the next thing you want?

For those who enjoy typing, you could try to get a job dictating letters for lawyers, but something tells me that’s on the way out too.

jayd16 · 8 months ago
"Other people were wrong about something else so that invalidates your argument"

Why are half the replies like this?

Timber-6539 · 8 months ago
This is no different from creating a to-do app with an LLM and proclaiming all developers are up for replacement. Demos are not what makes LLMs good, let alone useful.
dustingetz · 8 months ago
quantum computers still can’t factor any number larger than 21
apwell23 · 8 months ago
> I've got almost 30 years' experience but I'm a bit rusty in, e.g., web. But I've used LLMs to build maybe 10 apps that I had no business building: from one-off kids' games for learning math,

Yeah, I built a bunch of apps when the RoR blog demo came out like two decades ago. So what?

rybosworld · 8 months ago
> Nothing, because I'm a senior and LLMs never provide code that passes my sniff test, and it remains a waste of time.

I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?

> LLMs never provide code that passes my sniff test

This is ego speaking.

IshKebab · 8 months ago
> Is there some expectation that these things won't improve?

I definitely expect them to improve. But I also think the point at which they can actually replace a senior programmer is pretty much the exact point at which they can replace any knowledge worker, at which point western society (possibly all society) is in way deeper shit than just me being out of a job.

> This is ego speaking.

It definitely isn't. LLMs are useful for coding now, but they can't really do the whole job without help - at least not for anything non-trivial.

deathanatos · 8 months ago
> This is ego speaking.

No, it really isn't. Repeatedly, the case is that people are trying to pass off GPT's work as good without actually verifying the output. I keep seeing "look at this wonderful script GPT made for me to do X", and it does not pass code review, and is generally extremely low quality.

In one example, a bash script was generated to count the number of SLoC changed by author; it was extremely convoluted, and after I simplified it, I noticed that the output of the simplified version differed, because the original had been omitting changes that were only a single line.
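(For concreteness, a simplified per-author line count can be sketched roughly as below; this is a hypothetical Python reconstruction for illustration, not either of the scripts from that review.)

    # Hypothetical sketch: total lines changed per author from `git log --numstat`,
    # including single-line changes.
    import subprocess
    from collections import defaultdict

    out = subprocess.run(
        ["git", "log", "--numstat", "--format=%aN"],
        capture_output=True, text=True, check=True,
    ).stdout

    totals = defaultdict(int)
    author = None
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            totals[author] += int(parts[0]) + int(parts[1])  # added + deleted lines
        elif line.strip():
            author = line.strip()                            # author line from --format=%aN

    for name, n in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{n:8d}  {name}")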

In another example, it took several back-and-forths during a review of "where are you getting this code? / why do you think this code works, when nothing in the docs supports that?" before it was admitted that GPT wrote it. The dev who submitted it would have been far better served by RTFM than by a several-cycle-long review that ended up with most of GPT's hallucinations being stripped from the PR.

Those who think LLM output is good have not reviewed that output strenuously enough.

> Is there some expectation that these things won't improve?

Because randomized token generation inherently lacks actual reasoning about the behavior of the code. My own "code generator" does not.

simianparrot · 8 months ago
At my job I review a lot of code, and I write code as well. The only type of developer an LLM’s output comes close to is a fresh junior usually straight out of university in their first real development job, with little practical experience in a professional code-shipping landscape. And the majority of those juniors improve drastically within a few weeks or months, with handholding only at the very start and then less and less guidance. This is because I teach them to reason about their choices and approaches, to question assumptions, and thus they learn quickly that programming rarely has one solution to a problem, and that the context matters so much in determining the way forward.

A human junior developer can learn from this tutoring and rarely regresses over time. But LLMs, by design, cannot and do not rewire their understanding of the problem space over time, nor do they remember examples and lessons from previous iterations to build upon. I have to handhold them forever, and they never learn.

Even when they use significant parts of the existing codebase as their context window they’re still blind to the whole reality and history of the code.

Now just to be clear, I do use LLMs at my job. Just not to code. I use them to parse documents and assist users with otherwise repetitive manual tasks. I use their strength as language models to take visual tokens parsed by an OCR engine, grasp the sentence structure, and convert that into text segments which can be used more readily by users. At that they are incredible, even something smaller like Llama 7B.

surgical_fire · 8 months ago
> I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?

Is there any expectation that they will? Is there more untapped, great-quality data that LLMs can ingest? Will a larger model perform meaningfully better? Will it solve the pervasive issue of generating plausible-sounding bullshit?

I used LLMs for a while, I found them largely useless for my job. They were helpful for things I don't really need help with, and they were mostly damaging for things I actually needed.

> This is ego speaking.

Or maybe it was an accurate assessment for his use case, and your wishful thinking makes you think it was his ego speaking.

nicoburns · 8 months ago
> > LLMs never provide code that passes my sniff test

> This is ego speaking.

That's been my experience of LLM-generated code that people have submitted to open source projects I work on. It's all been crap. Some of it didn't even compile. Some of it changed comments that were previously correct to say something untrue. I've yet to see a single PR that implemented something useful.

jcranmer · 8 months ago
This is at least the third time in my life that we've seen a loudly heralded, purported end-of-programming technology. The previous two times both ended up being damp squibs that barely merit footnotes in the history of computing.

Why do we expect that LLMs are going to buck this trend? It's not their accuracy--the previous attempts, when demonstrating their proofs of concept, actually worked reliably, whereas with "modern LLMs", virtually every demonstration manages to include "well, okay, the output has a bug here."

tobyhinloopen · 8 months ago
LLMs are great for simple, common, boring things.

As soon as you have something less common, it will give you wildly incorrect garbage that does not make any sense. Even worse, it APPEARS correct, but it won't work or will do something else completely.

And I use 4o and o1 every day. Mostly for boilerplate and boring stuff.

I have colleagues that submit ChatGPT generated code and I’ll immediately recognize it because it is just very bad. The colleague would have tested it, so the code does work, but it is always bad, weird or otherwise unusual. Functional, but not nice.

ChatGPT can give very complicated solutions to things that can be solved with a one-liner.

groby_b · 8 months ago
> Is there some expectation that these things won't improve?

Sure. But the expectation is quantitative improvement - qualitative improvement has not happened, and is unlikely to happen without major research breakthroughs.

LLMs are useful. They still need a lot of supervision & hand-holding, and they'll continue to for a long while.

And no, it's not "ego speaking". It's long experience. There is fundamentally no reason to believe LLMs will take a leap to "works reliably in subtle circumstances, and will elicit requirements as necessary". (Sure, if you think SWE work is typing keys to make some code, any code, appear, then LLMs are a threat.)

deegles · 8 months ago
LLMs as they currently exist will never yield a true, actually-sentient AI. Maybe they will get better in some ways, but it's like asking if a bird will ever fly to the moon. Something else is needed.
cies · 8 months ago
> ChatGPT was only just released in 2022.

Bitcoin was released in what year? I still cannot use it for payments.

No-code solutions have existed since when? And still programmers work...

I don't think all hyped tech turns out to be a fad. For instance: we use SaaS now instead of installing software locally. That transition took the world by storm.

But tech that needs lots of ads, lots of zealots, and incredible promises: that usually is a fad.

sitzkrieg · 8 months ago
Ego? LLMs goof on basic math and can't even generate code for many non-public things. They're not useful to me whatsoever.
akira2501 · 8 months ago
> Is there some expectation that these things won't improve?

Yes. The current technology is at a dead end. The costs for training and for scaling the network are not sustainable. This has been obvious since 2022 and is related to the way in which OpenAI created their product. There is no path described for moving from the current dead end technology to anything that could remotely be described as "AGI."

> This is ego speaking.

This is ignorance manifest.

code_for_monkey · 8 months ago
I agree with you tbh, and it also just misses something huge that doesn't get brought up: it's not about your sniff test, it's about your boss's sniff test. Are you making 300k a year? That's 300 thousand reasons to replace you for a short-term boost in profit; companies love doing that.
gosub100 · 8 months ago
The reason it's so good at "rewrite this C program in Python" is that it was trained on a huge corpus of code from GitHub. There is no such corpus of examples for more abstract commands, so there is a limited amount by which it can improve.
goalonetwo · 8 months ago
100%, and even if they might not "replace" a senior or staff SWE, they can make their job significantly easier, which means that instead of requiring five seniors or staff engineers you will only need two.

LLMs WILL change the job market dynamics in the coming years. Engineers have been vastly overpaid over the last 10 years. There is no reason not to expect a reversion to the mean here. Getting a 500k offer from a FAANG because you studied leetcode for a couple of weeks is not going to fly anymore.

jazz9k · 8 months ago
Have you ever used LLMs to generate code? It's not good enough yet.

In addition to this, most companies aren't willing to give away all of their proprietary IP and knowledge through 3rd-party servers.

It will be a while before engineering jobs are at risk.

quantadev · 8 months ago
Ever since ChatGPT 3.5, AI coding has been astoundingly good. Like you, I'm totally baffled by developers who say it's not. And I'm not just unskilled and unable to judge; I have 35 years of experience! I've been shocked and impressed from day one at how good AI is. I've even seen GitHub Copilot do things that almost resemble mind-reading insofar as predicting what I'm about to type next. It predicts some things it should NOT be able to predict unless it's seeing into the future or into parallel universes or something! And I'm only half joking when I speculate that!

palata · 8 months ago
> Is there some expectation that these things won't improve?

Most of the noise I hear about LLMs is about expectations that things will improve. It's always like that: people extrapolate and get money from VCs for that.

The thing is: nobody can know. So when someone extrapolates and says it's likely to happen as they predict, they shouldn't be surprised to get answers that say "I don't have the same beliefs".

globnomulous · 8 months ago
> This is ego speaking.

It absolutely isn't. I have yet to find an area where LLM-generated code solves the kinds of problems I work on more reliably, effectively, or efficiently than I do.

I'm also not interested in spending my mental energy on code reviews for an uncomprehending token-prediction golem, let alone finding or fixing bugs in the code it blindly generates. That's a waste of my time and a special kind of personal hell.

talldayo · 8 months ago
> This is ego speaking.

I suspect you haven't seen code review at a 500+ seat company.

tomcar288 · 8 months ago
When people's entire livelihoods are threatened, you're going to see some defensive reactions.
JeremyNT · 8 months ago
Yeah I feel like this is just the beginning.

I'm in my 40s with a pretty interesting background and I feel like maybe I'll make it to retirement. There are still mainframe programmers after all. Maintaining legacy stuff will still have a place.

But I think we'll be the last generation where programmers are this prevalent, and the job market will severely contract. No/low-code solutions backed by LLMs are going to eat away at a lot of what we do, and for traditional programming the tooling we use is going to improve rapidly and greatly reduce the number of developers needed.

shakezooola · 8 months ago
>This is ego speaking.

Very much so. These things are moving so quickly and agentic systems are already writing complete codebases. Give it a few years. No matter how 1337 you think you are, they are very likely to surpass you in 5-10 years.

eschaton · 8 months ago
They shouldn't be expected to improve in accuracy because of what they are and how they work. Contrary to what the average HackerNews commenter seems to believe, LLMs don't "think"; they just predict. And there's nothing in them that will constrain their token prediction in a way that improves accuracy.
Bjorkbat · 8 months ago
> I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?

I mean, in a way, yeah.

Last 10 years were basically one hype-cycle after another filled with lofty predictions that never quite panned out. Besides the fact that many of these predictions kind of fell short, there's also the perception that progress on these various things kind of ground to a halt once the interest faded.

3D printers are interesting. Sure, they have gotten incrementally better after the hype cycle died out, but otherwise their place in society hasn't changed, nor will it likely ever change. It has its utility for prototyping and as a fun hobbyist machine for making plastic toys, but otherwise I remember people saying that we'd be able to just 3D print whatever we needed rather than relying on factories.

Same story with VR. We've made a lot of progress since the first Oculus came out, but otherwise their role in society hasn't changed much since then. The latest VR headsets are still as useless and still as bad for gaming. The metaverse will probably never happen.

With AI, I don't want to be overly dismissive, but at the same time there's a growing consensus that pre-training scaling laws are plateauing, and AI "reasoning" approaches always seemed kind of goofy to me. I wouldn't be surprised if generative AI reaches a kind of equilibrium where it incrementally improves but improves in a way where it gets continuously better at being a junior developer but never quite matures beyond that. The world's smartest beginner if you will.

Which is still pretty significant mind you, it's just that I'm not sure how much this significance will be felt. It's not like one's skillset needs to adjust that much in order to use Cursor or Claude, especially as they get better over time. Even if it made developers 50% more productive, I feel like the impact of this will be balanced-out to a degree by declining interest in programming as a career (feel like coding bootcamp hype has been dead for a while now), a lack of enough young people to replace those that are aging out, the fact that a significant number of developers are, frankly, bad at their job and gave up trying to learn new things a long time ago, etc etc.

I think it really only matters in the end if we actually manage to achieve AGI, once that happens though it'll probably be the end of work and the economy as we know it, so who cares?

I think the other thing to keep in mind is that the history of programming is filled with attempts to basically replace programmers. Prior to generative AI, I remember a lot of noise over low-code / no-code tools, but they were just the latest chapter in the evolution of low-code / no-code. Kind of surprised that even now in Anno Domini 2024 one can make a living developing small-business websites due to the limitations of the latest batch of website builders.

gspencley · 8 months ago
> > > LLMs never provide code that passes my sniff test

> This is ego speaking.

Consider this, 100% of AI training data is human-generated content.

Generally speaking, we apply the 90/10 rule to human generated content: 90% of (books, movies, tv shows, software applications, products available on Amazon) is not very good. 10% shines.

In software development, I would say it's more like 99 to 1 after working in the industry professionally for over 25 years.

How do I divorce this from my personal ego? It's easy to apply objective criteria:

- Is the intent of code easy to understand?

- Are the "moving pieces" isolated, such that you can change the implementation of one with minimal risk of altering the others by mistake?

- Is the solution in code a simple one relative to alternatives?

The majority of human produced code does not pass the above sniff test. Most of my job, as a Principal on a platform team, is cleaning up other peoples' messes and training them how to make less of a mess in the future.

If the majority of human-generated content fails to follow basic engineering practices that are employed in other engineering disciplines (e.g. it never ceases to amaze me how much of an uphill battle it is just to get some SWEs to break down their work into small, single-responsibility, easily testable and reusable "modules"), then we can't logically expect any better from LLMs, because this is what they're being trained on.

And we are VERY far off from LLMs that can weigh the merits of different approaches within the context of the overall business requirements and choose which one makes the most sense for the problem at hand, as opposed to just "what's the most common answer to this question?"

LLMs today are a type of magic trick. You give it a whole bunch of 1s and 0s so that you can input some new 1s and 0s and it can use some fancy probability maths to predict "based on the previous 1s and 0s, what are the statistically most likely next 1s and 0s to follow from the input?"

That is useful, and the result can be shockingly impressive depending on what you're trying to do. But the limitations are significant enough that the prospect of replacing an entire high-skilled profession with that magic trick is kind of a joke.

ricardobeat · 8 months ago
That’s short term thinking in my opinion. LLMs will not replace developers by writing better code: it’s the systems we work on that will start disappearing.

Every SaaS and marketplace is at risk of extinction, superseded by AI agents communicating ad hoc. Management and business software will be replaced by custom, one-off programs built by AI. The era of large teams painstakingly building specialized software for niche use cases will end. Consequently we'll have millions of unemployed developers, except for the ones maintaining the top-level orchestration for all of this.

dimgl · 8 months ago
> most of the actual systems we work on will simply start disappearing.

What systems do you think are going to start disappearing? I'm unclear how LLMs are contributing to systems becoming redundant.

asdev · 8 months ago
You do realize that these so-called "one-off" AI programs would need to be maintained? Most people paying for SaaS are paying for the support/maintenance rather than the features, which AI can't handle. No one will want to replace a SaaS they depend on with a poorly generated variant that they then have to maintain themselves.
luddite2309 · 8 months ago
This is a fascinating comment, because it shows such a misreading of the history and point of technology (on a tech forum). Technological progress always leads to loss of skilled labor like your own, usually resulting in lower quality (but higher profits and often lower prices). Of COURSE an LLM won't be able to do work as well as you, just as industrial textile manufacturing could not, and still does not, produce the quality of work of 19th-century cottage-industry weavers; that was in fact one of their main complaints.

As an aside, at the top of the front page right now is a sprawling essay titled "Why is it so hard to buy things that work well?"...

packetlost · 8 months ago
This is a take that shows a complete lack of understanding of what software engineering is actually about.
brink · 8 months ago
Comparing an LLM to an industrial textile machine is laughable, because one is consistent and reliable while the other is not.
sigmarule · 8 months ago
My perspective is that if you are unable to find ways to improve your own workflows, productivity, output quality, or any other meaningful metric using the current SOTA LLM models, you should consider the possibility that it is a personal failure at least as much as you consider the possibility that it is a failure of the models.

A more tangible pitfall I see people falling into is testing LLM code generation using something like ChatGPT and not considering more involved usage of LLMs via interfaces better suited for software development. The best results I've managed to realize on our codebase have not been with ChatGPT or IDEs like Cursor, but with a series of processes that iterate over our full codebase multiple times to extract various levels of reusable insights: general development patterns, error-handling patterns, RBAC-related patterns, example tasks for common types of work based on git commit histories (e.g. adding a new API endpoint related to XYZ), and common bugs or failure patterns (again by looking through git commit histories), which together create a sort of library of higher-level context and reusable concepts. Feeding this into o1, with a pre-defined "call graph" of prompts to validate the output, fix identified issues, consider past errors in similar types of commits and past executions, etc., has produced some very good results for us so far.

I've also found much more success with ad-hoc questions after writing a small static analyzer to trace imports, variable references->declarations, etc., to isolate the portions of the codebase to use for context, rather than the RAG-based searching that a lot of LLM-centric development tools seem to use. It's also worth mentioning that performance quality seems to be very much influenced by language; I thankfully primarily work with Python codebases, though I've had success using it against (smaller) Rust codebases as well.
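(A minimal sketch of the import-tracing idea, assuming a flat Python package; all names below are invented for illustration and are not the tooling described above.)

    # Hypothetical sketch: walk a module's AST, resolve imports to files inside the
    # repo, and recurse so only the files an entry point depends on become context.
    import ast
    from pathlib import Path

    def local_imports(path, package_root):
        found = set()
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                candidate = package_root / (name.replace(".", "/") + ".py")
                if candidate.exists():          # keep only imports that live in this repo
                    found.add(candidate)
        return found

    def context_files(entry, package_root):
        seen, stack = set(), [entry]
        while stack:
            current = stack.pop()
            if current not in seen:
                seen.add(current)
                stack.extend(local_imports(current, package_root) - seen)
        return seen                             # files to concatenate into the prompt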

j45 · 8 months ago
Sometimes, if it's as much work to set up and keep the tech running as it is to write the thing yourself, it can be worth thinking about the tradeoffs.

The interesting piece is a person with experience who knows how to push LLMs to output the perfect little function or utility to solve a problem, and who can collect enough of those to get somewhere.

mplanchard · 8 months ago
This sounds nice, but it also sounds like a ton of work to set up that we don't have time for. Local models that don't require us to send our codebase to Microsoft or OpenAI would be something I'm sure we'd be willing to try out.

I'd love it if more companies were actually considering real engineering needs to provide products in this space. Until then, I have yet to see any compelling evidence that the current chatbot models can consistently produce anything useful for my particular work other than the occasional SQL query.

ksdnjweusdnkl21 · 8 months ago
Hard to believe anyone is getting contacted more now than in 2020. But I agree with the general sentiment. I'll do nothing and if I get replaced then I get replaced and switch to woodworking or something. But if LLMs do not pan out then I'll be ahead of all the people who wasted their time with that.
th0ma5 · 8 months ago
The sanest comment here.
nidnogg · 8 months ago
LLMs are not necessarily a waste of time like you mention, as their application isn't limited to generating algorithms like you're used to.

When you consider LLMs to be building blocks in bigger, more complex systems, their potential increases dramatically. That's where mid/senior engineers would chip in and add value to a company, in my point of view. There's also different infrastructure paradigms involved that have to be considered carefully, so DevOps is potentially necessary for years to come.

I see a lot of ego in this comment, and I think this is actually a good example of how to NOT safeguard yourself against LLMs. This kind of skepticism is the most toxic to yourself. Dismiss them as novelty for the masses, as bullshit tech, keep doing your same old things and discard any applications. Textbook footgun.

NorthTheRock · 8 months ago
> When you consider LLMs to be building blocks in bigger, more complex systems, their potential increases dramatically.

Do you have any examples of where/how that would work? It has seemed to me like a lot of the hype is "they'll be good" with no further explanation.

jensensbutton · 8 months ago
The question isn't about what you'll do when you're replaced by an LLM; it's what you're doing to future-proof your job. There is a difference. The risk to hedge against is the productivity boost brought by LLMs resulting in a drop in the need for new software engineers. This will put pressure on jobs ("we simply don't need as many as we used to, so we're cutting 15%") AND wages (more engineers looking for fewer jobs, with a larger part of their utility being commoditized).

Regardless of how sharp you keep yourself, you're still subject to the macro environment.

simianparrot · 8 months ago
I'm future proofing my job by ensuring I remain someone whose brain is tuned to solving complex problems, and to do that most effectively I find ways to keep being engaged in both the fundamentals of programming (as already mentioned) and the higher-level aspects: Teaching others (which in turn teaches me new things) and being in leadership roles where I can make real architectural choices in terms of what hardware to run our software on.

I'm far more worried about mental degradation due to any number of circumstances -- unlucky genetics, infections, what have you. But "future proofing" myself against some of that has the same answer: Remain curious, remain mentally ambidextrous, and don't let other people (or objects) think for me.

My brain is my greatest asset both for my job and my private life. So I do what I can to keep it in good shape, which incidentally also means replacing me with a parrot is unlikely to be a good decision.

luckylion · 8 months ago
Are you though? Until the AI-augmented developer provides better code at lower cost, I'm not seeing it. Senior developers aren't paid well because they can write code very fast, it's because they can make good decision and deliver projects that not only work, but can be maintained and built upon for years to come.

I know a few people who have been primarily programming for 10 years but are not seniors. 5 of them (probably 10 or more, but let's not overdo it), with AI, cannot replace one senior developer unless you make that senior do super basic tasks.

crystal_revenge · 8 months ago
> never provide code that passes my sniff test

Unfortunately it won't be your sniff test that matters. It's going to be an early founder that realizes they don't need to make that extra seed round hire, or the resource limited director that decides they can forgo that one head count and still deliver the product on time, or the in house team that realizes they no longer need a dedicated front end dev because, for their purposes, AI is good enough.

Personally, the team I lead is able to ship much faster with AI assistants than without, which means in practice we can out compete much larger teams in the same space.

Sure, there are things that AI will always struggle with, but those things aren't merely "senior" in nature; they're much closer to niche-expert types of problems. Engineers working on generally cutting-edge work will likely be in demand and hard to replace, but many others will very likely be impacted by AI from multiple directions.

irunmyownemail · 8 months ago
"Personally, the team I lead is able to ship much faster with AI assistants than without, which means in practice we can out compete much larger teams in the same space."

So you're giving away your company's IP to AI firms; does your CEO understand this?

shafyy · 8 months ago
Completely agree. The other day I was trying to find something in the Shopify API docs (an API I'm not familiar with), and I decided to try their chatbot assistant thingy. I thought, well, if an LLM is good at something, it's probably finding information in a very narrow field like this. However, it kept telling me things that were plain wrong, and I could compare it to the docs next to it easily. It would have been faster just reading the docs.
Quarrel · 8 months ago
> I thought, well, if an LLM is good at something, it's probably at finding information in a very narrow field like this.

I feel like of all the things in this thread, this one is on them. It absolutely is something that LLMs are good at. They have the sample size, the examples and docs, all the things. LLMs are particularly good at speaking "their" language; the most surprising thing is that they can do so much more beyond that next-token reckoning.

So, yeah, I'm a bit surprised that a shop like Shopify is so sloppy, but absolutely I think they should be able to provide you an LLM to answer your questions. Particularly given some of the Shopify alumni I've interviewed.

That said, some marketing person might have just oversold the capabilities of an LLM that answers most of their core customer questions, rather than one that knows much at all about their API integrations.

TZubiri · 8 months ago
"Nothing because I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time"

That's why the question is about future-proofing. Models get better with time, not worse.

Seb-C · 8 months ago
LLMs are not the right tool for the job, so even if some marginal bits can be improved, they fundamentally cannot "get better".

The job of a software engineer is to first understand the real needs of the customer/user, which requires a level of knowledge and understanding of the real world that LLMs simply don't have and never will, because that is simply not what they do.

The second part of a software engineer's job is to translate those needs into a functional program. Here the issue is that natural languages are simply not precise enough for the kind of logic that is involved. This is why we invented programming languages rather than use plain English. And since the interface of LLMs is by definition human (natural) languages, it fundamentally will always have the same flaws.

Any sufficiently precise interface for this second part of the job will by definition just be some higher-level programming language, which not only requires an expert in the tool anyway, but is also unrealistic given the amount of effort we already collectively invest in building more productive languages and frameworks to replace ourselves.

latentsea · 8 months ago
If the last-mile problems of things like autonomous vehicles are anything to go by, it seems the last-mile problems of entrusting your entire business operations to complete black-box software, or to software written by novices talking to a complete black box, will be infinitely worse.

There's plenty of low-code, no-code solutions around, and yet still lots of software. The slice of the pie will change, but it's very hard to see it being eliminated entirely.

Ultimately it's going to come down to "do I feel like I can trust this?" and with little to no way to be certain you can completely trust it, that's going to be a harder and harder sell as risk increases with the size, complexity, and value of the business processes being managed.

layer8 · 8 months ago
Models don’t get better just by time passing. The specific reasons for why they’ve been getting better don’t necessarily look like they’ll extend indefinitely into the future.
palata · 8 months ago
> Models get better with time, not worse

A decade (almost?) ago, people were saying "look at how self-driving cars improved in the last 2 years: in 3-5 years we won't need to learn how to drive anymore". And yet...

throwaway_36924 · 8 months ago
I have been an AI denier for the past 2 years. I genuinely wanted to try it out, but the experience I got from Copilot in early 2023 was just terrible. I have been using ChatGPT for some programming questions throughout that time, but it was making mistakes, and the lack of editor integration meant it did not seem like it would make me more productive.

Thanks to this thread I have signed up again for Copilot and I am blown away. I think this easily makes me 2x as productive when doing implementation work. It does not make silly mistakes anymore, and it's just faster to have it write a block of code than to do it myself.

And the experience is more of an augmentation than replacement. I don't have to let it run wild. I'm using it locally and refactor its output if needed to match my code quality standards.

I am as much concerned (job market) as I am excited (what I will be able to build myself).

idopmstuff · 8 months ago
> But I think that’s generations away at best.

I'm not sure whether you mean human generations or LLM generations, but I think it's the latter. In that case, I agree with you, but also that doesn't seem to put you particularly far off from OP, who didn't provide specific timelines but also seems to be indicating that the elimination of most engineers is still a little ways away. Since we're seeing a new generation of LLMs every 1-2 years, would you agree that in ~10 years at the outside, AI will be able to do the things that would cause you to gladly retire?

simianparrot · 8 months ago
I mean human generations because to do system architecture, design and development well you need something that can at least match an average human brain in reasoning, logic and learning plasticity.

I don't think that's impossible, but I think we're quite a few human generations away from that. And scaling LLMs is not the solution to that problem; an LLM is just a small but important part of it.

curl-up · 8 months ago
Could you give an example of an AI capability that would change your mind on this, even slightly? "Sniff test" is rather subjective, and job replacement rarely happens with machines doing something better on the exact same axis of performance as humans. E.g., cars wouldn't have passed the coachman's "sniff test". Or to use a current topic - fast food self-order touchscreens don't pass the customer service people sniff test.
Const-me · 8 months ago
> CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines

I’m curious what’s so special about that blitting?

BTW, pixel shaders in D3D11 can receive screen-space pixel coordinates in the SV_Position semantic. The pixel shader can cast the .xy slice of that value from float2 to int2 (truncating towards 0), offset the int2 vector to be relative to the top-left of the sprite, then pass the integers into the Texture2D.Load method.

Unlike the more commonly used Texture2D.Sample, the Texture2D.Load method delivers a single texel as stored in the texture, i.e. no filtering, sampling or interpolation. The texel is identified by integer coordinates, as opposed to the UV floats used by the Sample method.

rdrsss · 8 months ago
+1 to this sentiment for now. I give them a try every 6 months or so to see how they've advanced, and for pure code generation, for my workflow, I don't find them very useful yet. For parsing large sets of documentation though, not bad. They haven't crept their way into my usual research loop just yet, but I could see that becoming a thing.

I do hear some of my junior colleagues use them now and again, and gain some value there. And if LLMs can help get people up to speed faster, that'd be a good thing. Assuming we continue to make the effort to understand the output.

But yeah, agree, I raise my eyebrow from time to time, but I don't see anything jaw-dropping yet. Right now they just feel like surrogate Googlers.

jf22 · 8 months ago
I'd challenge the assertion that LLMs never pass whatever a "sniff test" is to you. The code they produce is looking more and more like working production code.
rakkhi · 8 months ago
Agree with this. SWEs don't need to do anything, because LLMs will just make expert engineers more productive. More detail on my arguments here: https://rakkhi.substack.com/p/economics-of-large-language-mo...
og2023 · 8 months ago
Seems like nobody noticed the "delve" part, which is a very dark irony here.
dyauspitr · 8 months ago
This is denial. LLMs already write code at a senior level. Couple that with solid tests and a human code reviewer and we’re already at 5x the story points per sprint at the startup I work at.
whtsthmttrmn · 8 months ago
I'm sure that startup is a great indication of things to come. Bound for long term growth.

valval · 8 months ago
Another post where I get to tell a skeptic that they’re failing at using these products.

There will be human programmers in the future as well, they just won’t be ones who can’t use LLMs.

swishman · 8 months ago
The arrogance of comments like this is amazing.

I think it's an interesting psychological phenomenon similar to virtue signalling. Here you are signalling to the programmer in-group how good of a programmer you are. The more dismissive you are the better you look. Anyone worried about it reveals themself as a bad coder.

It's a luxury belief, and the better LLMs get the better you look by dismissing them.

rybosworld · 8 months ago
This is spot on.

It's essentially like saying "What I do, in particular, is much too difficult for an AI to ever replicate." It is always, in part, humble bragging.

I think some developers like to pretend that they are exclusively solving problems that have never been solved before. Which sure, the LLM architecture in particular might never be better than a person for the novel class of problem.

But the reality is, an extremely high percentage of all problems (and by reduction, the lines of code that build that solution) are not novel. I would guesstimate that less than 1 out of 10,000 developers are solving truly novel problems with any regularity. And those folks tend to work at places like Google Brain.

That's relevant because LLMs can likely scale forever in terms of solving the already solved.

irunmyownemail · 8 months ago
"The arrogance of comments like this is amazing."

Defending AI with passion is nonsensical, or at least ironic.

arisAlexis · 8 months ago
So your argument is:

There is some tech that is getting progressively better.

I am high on the linear scale

Therefore I don't worry about it catching up to me, ever.

And this is the top voted argument.

NorthTheRock · 8 months ago
> is getting progressively better

Is it still getting better? My understanding is that we're already training them on all of the publicly available code in existence, and we're running into scaling walls with bigger models.

whtsthmttrmn · 8 months ago
Considering the entire purpose of the original post is asking people what they're doing, why do you have such a problem with the top voted argument? If the top voted argument was in favour of this tech taking jobs, would you feel better?

goddamnyouryan · 8 months ago
Tell me more about this 2D pixel blitter engine
markerdmann · 8 months ago
isn't "delve" a classic tell of gpt-generated output? i'm pretty sure simianparrot is just trolling us. :-)
tigershark · 8 months ago
Yeah... generations. I really hope that it doesn't end like the New York Times article saying that human flight was at best hundreds of years away, a few weeks before the Wright brothers' flight...

modeless · 8 months ago
> If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.

People really believe it will be generations before an AI will approach human level coding abilities? I don't know how a person could seriously consider that likely given the pace of progress in the field. This seems like burying your head in the sand. Even the whole package of translating high level ideas into robust deployed systems seems possible to solve within a decade.

I believe there will still be jobs for technical people even when AI is good at coding. And I think they will be enjoyable and extremely productive. But they will be different.

handzhiev · 8 months ago
I've heard similar statements about human translation - and look where the translators are now
atomsatomsatoms · 8 months ago
"delve"

hmmmmm

planb · 8 months ago
This is satire, right? The completely off-topic mentioning of your graphics programming skills, the overconfidence, the phrase "delve into"... Let me guess: The prompt was "Write a typical HN response to this post"
simianparrot · 8 months ago
This made me chuckle. But at the same time a bit uneasy because this could mean my own way of expressing myself online might've been impacted by reading too much AI drivel whilst trying to find signal in the noisy internet landscape...

The off-topic mentioning of graphics programming was because I tend to type as I think, then make corrections, and as I re-read it now the paragraph isn't great. The intent was to give an example of how I keep myself sharp, and challenging what many consider "settled" knowledge, where graphics programming happens to be my most recent example.

For what it's worth, English isn't my native language, and you'll have to take my word for it that no chat bots were used in generating any of the responses I've made in this thread. The fact that people are already uncertain about who's a human and who's not is worrisome.

jejeyyy77 · 8 months ago
head meet sand lol

source: been coding for 15+ years and using LLMs for past year.

dogman144 · 8 months ago
The last fairly technical career to get surprisingly and fully automated, in the way this post displays concern about: trading.

I spent a lot of time with traders in early '00's and then '10's when the automation was going full tilt.

Common feedback I heard from these highly paid, highly technical, highly professional traders in a niche industry running the world in its way was:

- How complex the job was
- How high a quality bar there was to do it
- How current algos never could do it, and neither could future ones
- How there'd always be edge for humans

Today, the exchange floors are closed, SWEs run trading firms, and traders, if they are around, steer algos or work in specific markets such as bonds - and now bonds are getting automated. LLMs can pass CFA III, the great non-MBA job moat. The trader job isn't gone, but it has capital-C Changed, and it happened quickly.

And lastly - LLMs don't have to be "great," they just have to be "good enough."

See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do.

Edit - Advice: the job will change, the job might change in that you steer LLMs, so become the best at LLM steering. Trading still goes on, and the huge, crushing firms in the space all automated early and at various points in the settlement chain.

manquer · 8 months ago
> LLMs can pass CFA III.

Everyone cites these kinds of examples, of an LLM beating some test or other, as some kind of validation. It isn't.

To me that just says that the tests are poor, not that the LLMs are good. Designing and curating a good test is hard and expensive.

Certifying and examination bodies often use knowledge as a proxy for understanding, reasoning, or any critical thinking skills. They just need to filter enough people out; there is no competitive pressure to improve quality at all. Knowledge tests do that just as well and are cheaper.

Standardization is also hard to do correctly; Common Core is a classic example of how that changes incentives for both teachers and students. Goodhart's law also applies.

To me it is more often than not a function of poor test measurement practices rather than any great skill shown by the LLM.

Passing the CFA or the bar exam, while daunting for humans by design, does not teach you anything about practicing law or accounting. Managing the books of a real company is nothing like what the textbook and exams teach you.

—-

The best accountants or lawyers etc. are not making partner because of their knowledge of the law and tax. They make money the same as everyone else - networking and building customer relationships. As long as the certification bodies don't flood the market they will do well, and limiting supply is exactly what the test does.

crystal_revenge · 8 months ago
> To me that just tells that the tests are poor, not the LLMs are good.

I mean the same is true of leetcode but I know plenty of mediocre engineers still making ~$500k because they learned how to grind leetcode.

You can argue that the world is unjust till you're blue in the face, but it won't make it a just world.

oangemangut · 8 months ago
Having also worked on desks in the '00s and early '10s, I think a big difference here is that what trading meant really changed; much of what traders did went away with innovations in speed. Speed and algos became the way to trade, neither of which humans can do. While SWE became significantly more important on trading desks, you still have researchers, quants, portfolio analysts, etc. that spend their working days developing new algos, new market opportunities, ways to minimize TCOS, etc.

That being said, there's also a massive amount of low-hanging fruit in dev work that we'll automate away, and I feel like that's coming sooner rather than later, yes, even though we've been saying that for decades. However, I bet that the incumbents (senior SWEs) have a bit longer of a runway, and potentially their economic rent increases as they're able to be more efficient and companies need not hire as many humans as they needed before. Will be an interesting go these next few decades.

skydhash · 8 months ago
> That being said, there's also a massive amount of low-hanging fruit in dev work that we'll automate away

And this has been solved for years already with existing tooling: debuggers, IntelliSense, linters, snippets and other code-generation tools, build systems, framework-specific tooling... There are a lot of tools for writing and maintaining code. The only thing left was always the understanding of the system that solves the problem and knowledge of the tools to build it. And I don't believe we can automate this away. Using LLMs is like riding a drugged donkey instead of a motorbike. It can only work for very short distances, or for the thrill.

In any long-lived project, most modifications are only a few lines of code. The most valuable thing is the knowledge of where and how to edit, not the ability to write 400 lines of code in 5 seconds.

jvanderbot · 8 months ago
This actually surfaces a much more likely scenario: That it's not our jobs that are automated, but a designed-from-scratch automated sw/eng job that just replaces our jobs because it's faster and better. It's quite possible all our thinking is just required because we can only write one prototype at a time. If you could generate 10 attempts / day, until you have a stakeholder say "good enough", you wouldn't need much in the way of requirements, testing, thinking, design, etc.
datavirtue · 8 months ago
We need that reduction in demand for workers, though. Backfilling is not going to be a thing for a population in decline.
epolanski · 8 months ago
I don't see it.

Trading is about doing very specific math in a very specific scenario with known expectations.

Software engineering is anything but like that.

grugagag · 8 months ago
Yes, software engineering is different in many areas, but today a lot of it is CRUD and plumbing. While SW engineering will not die, it will certainly transform a lot; quite possibly there will be fewer generalists than today and more specialized branches will pop up - or maybe being a generalist will require one to be familiar with many new areas. Likely the code we write today will go the same way writing assembly code went, and sure, it will not completely disappear, but...
throw09890989 · 8 months ago
Actually, the profession is already taking a big hit. Expecting at least partial replacement by LLMs is _already_ one of the reasons for the reduction in new jobs. Actually _replacing_ developers is one of the killer apps investors see in AI.

I'd be impressed if the profession survived unscathed. It will mutate into some form or another. And it will probably shrink. Wages will go down toward the level at which OpenAI sells its monthly Pro subscription. And maybe 3rd-world devs will stay competitive enough. But that depends a lot on whether AI companies get abundant and cheap energy. IIUC that's their choke point atm.

If it happens it will be quite ironic and karmic, TBH. Ironic because it is our profession's very own R&D that will kill it. Karmic because it's exactly what our profession did to numerous other fields and industries (remember Polaroid killed by Instagram, etc.).

OTOH there's nothing riskier than predicting the future. Who knows. Maybe AI will fall on its own face due to insane energy economics, internet rot (attributable to itself), and whatnot.

dogman144 · 8 months ago
Good comment! I see all these arguments in this post, and then think of the most talented journeyman engineer I know, who just walked away from Google because he knew both that AI was coming for his and his coworkers' jobs and that he wasn't good enough to be past the line where that wouldn't matter. Everyone might be the next 10x engineer and be ok… but a lot aren't.
jvanderbot · 8 months ago
The most insightful thing here would have been to learn how those traders survived, adapted, or moved on.

It's possible everyone just stops hiring new folks and lets the incumbents automate it. Or it's possible they all washed cars the rest of their careers.

dweinus · 8 months ago
I knew a bunch of them. Most of them moved on: retired or started over in new careers. It hit them hard. Maybe not so hard because trading was a lucrative career, but most of us don't have that kind of dough to fall back on.
dogman144 · 8 months ago
To answer what I saw, some blend of this:

- post-9/11 stress and ‘08 was a big jolt, and pushed a lot of folks out.

- Managed their money fine (or not) for when the job slowed down and also when ‘08 hit

- “traders” became “salespeople” or otherwise managing relationships

- Saw the trend, leaned into it hard, you now have Citadel, Virtu, JS, and so on.

- Saw the trend, specialized or were already in assets hard to automate.

- Senior enough to either steer the algo farms + jr traders, or become an algo steerer themselves

- Not senior enough, not rich enough, not flexible enough, or not interested anymore, and now drive Uber, run mobile dog-washing businesses, or joined law enforcement (three examples I know of).

ChuckMcM · 8 months ago
I like this comment, it is exceptionally insightful.

An interesting question is "How is programming like trading securities?"

I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

What trading software has a hard time doing is coming up with new securities. What LLMs absolutely cannot do (yet?) is come up with novel mechanisms. To illustrate that, consider the idea that an LLM has been trained on every kind of car there is. If you ask it to design a plane it will fail. Train it on all cars and planes and ask it to design a boat, same problem. Train it on cars, planes, and boats and ask it to design a rocket, same problem.

The sad truth is that a lot of programming is 'done', which is to say we have created lots of compilers, lots of editors, lots of tools, lots of word processors, lots of operating systems. Training an LLM on those things can put all of the mechanisms used in all of them into the model, and spitting out a variant is entirely within the capabilities of the LLM.

Thus the role of humans will continue to be to do the things that have not been done yet. No LLM can design a quantum computer, nor can it design a compiler that runs on a quantum computer. Those things haven't been "done" and they are not in the model. The other role of humans will continue to be 'taste.'

Taste, defined as an aesthetic: something that you know when you see it. It is why, for many, AI "art" stands out as having been created by AI; it has a synthetic aesthetic. And as one gets older it often becomes apparent that the tools are not what determines the quality of the output; it is the operator.

I watched Dan Silva do some amazing doodles with Deluxe Paint on the Amiga and I thought, "That's what I want to do!" and ran out and bought a copy and started doodling. My doodles looked like crap :-). The understanding that I would have to use the tool, find its strengths and weaknesses, and then express through it was clearly a lot more time consuming than "get the tool and go."

LLMs let people generate marginal code quickly. For so many jobs that is good enough. Generating really good code, taking in constraints that the LLM can't model, will remain the domain of humans until GAI is achieved[2]. So careers in things like real-time and embedded systems will probably still have a lot of humans involved, and systems where extracting every single compute cycle out of the engine is a priority will likely be dominated by humans too.

[1] Very early on there were papers on 'genetic' programming. It's a good thing to read them because they arrive at a singularly important point: "How do you define 'which is better'?" Given a solid, quantitative, and testable metric for 'goodness', genetic algorithms outperform nearly everything. When the ability to specify 'goodness' is not there, genetic algorithms cannot outperform humans. What's more, they cannot escape 'quality moats', where the solutions on the far side of the moat are better than the solutions being explored, but the algorithm cannot get far enough into the 'bad' solutions to start climbing up the hill on the other side to the 'better' solutions. (A toy sketch of this dependence on a computable fitness metric follows after note [2].)

[2] GAI being "Generalized Artificial Intelligence" which will have to have some way of modelling and integrating conceptual systems. Lots of things get better then (like self driving finally works), maybe even novel things. Until we get that though, LLMs won't play here.
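
To make the point in [1] concrete, here's a toy sketch (my own illustration, not from those papers): a genetic algorithm only gets traction because 'goodness' is a number it can compute and compare.

    import random

    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    TARGET = "hello world"

    def fitness(candidate):
        # 'Which is better' reduced to a number; without this, the GA is stuck.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Keep the best half, refill with mutated copies of the survivors.
        survivors = population[:50]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]
    print(generation, population[0])

The moment fitness() can't be written down ("is this design tasteful?"), the loop has nothing to sort on, which is also why it can't deliberately cross a quality moat.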

jarsin · 8 months ago
> LLMs let people generate marginal code quickly.

What's weird to me is why people think this is some kind of great benefit. Not once have I ever worked on a project where the big problem was that everyone was already maxed out coding 8 hours a day.

Figuring out what to actually code and how to do it the right way seemed to always be the real time sink.

VirusNewbie · 8 months ago
>I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

right, but when python came into popularity it's not like we reduced the number of engineers 10 fold, even though it used to take a team 10x as long to write similar functionality in C++.

aprilthird2021 · 8 months ago
Okay, but one thing you kinda miss is that trading (e.g. investing) is still one of the largest ways for people to make money. Even passively investing in ETFs is extremely lucrative.

If LLMs become so good that everyone can just let an LLM go into the world and make them money, the way we do with our investments, won't that be good?

parentheses · 8 months ago
The financial markets are a 0 sum game mostly. This approach would not work.
dogman144 · 8 months ago
You're missing that the trading I'm talking about != "e.g. investing."

And, certainly, prob a good thing for some, bad thing for the money conveyor belt of the last 20 yrs of tech careers.

spaceman_2020 · 8 months ago
The key difference between trading and coding is that code often underpins non-critical operations - think of all the CRUD apps in small businesses - and there is no money involved, at least not directly.
irunmyownemail · 8 months ago
"See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do."

Sounds like it was written by someone trying to keep any grasp on the fading reality of AI.

9cb14c1ec0 · 8 months ago
> LLMs don't have to be "great," they just have to be "good enough."

NO NO NO NO NO NO NO!!!! It may be that some random script you run on your PC can be "good enough", but the software that my business sells can't be produced by "good enough" LLMs. I'm tired of my junior dev turning in garbage code that the latest and greatest "good enough" LLM created. I'm about to tell him he can't use AI tools anymore. I'm so thankful I actually learned how to code in pre-LLM days, because I know more than just how to copy and paste.

wewtyflakes · 8 months ago
You're fighting the tide with a broom.
hi_hi · 8 months ago
I don't think the problem is the LLM.

I think of LLMs like clay, or paint, or any other medium where you need people who know what they're doing to drive them.

Also, I might humbly suggest you invest some time in the junior dev and ask yourself why they keep on producing "garbage code". They're junior, they aren't likely to know the difference between good and bad. Teach them. (Maybe you already are, I'm just taking a wild guess)

dogman144 · 8 months ago
You might care about that but do you think your sales team does?
thegrim33 · 8 months ago
I don't worry about it, because:

1) I believe we need true AGI to replace developers.

2) I don't believe LLMs are currently AGI or that if we just feed them more compute during training that they'll magically become AGI.

3) Even if we did invent AGI soon and replace developers, I wouldn't even really care, because the invention of AGI would be such an insanely impactful, world changing, event that who knows what the world would even look like afterwards. It would be massively changed. Having a development job is the absolute least of my worries in that scenario, it pales in comparison to the transformation the entire world would go through.

janalsncm · 8 months ago
To replace all developers, we need AGI yes. To replace many developers? No. If one developer can do the same work as 5 could previously, unless the amount of work expands then 4 developers are going to be looking for a job.

Therefore, unless you for some reason believe you will be in the shrinking portion that cannot be replaced I think the question deserves more attention than “nothing”.

simplyluke · 8 months ago
Frameworks, compilers, and countless other developments in computing massively expanded the efficiency of programmers and that only expanded the field.

Short of genuine AGI I’ve yet to see a compelling argument why productivity eliminates jobs, when the opposite has been true in every modern economy.

amrocha · 8 months ago
I have a coworker who I suspect has been using LLMs to write most of his code for him. He wrote multiple PRs with thousands of lines of code over a month.

Me and the other senior dev spent weeks reviewing these PRs. Here’s what we found:

- The feature wasn’t built to spec, so even though it worked in general the details were all wrong

- The code was sloppy and didn't adhere to the repo's guidelines

- He couldn’t explain why he did things a certain way versus another, so reviews took a long time

- The code worked for the happy path, and errored for everything else

Eventually this guy got moved to a different team and we closed his PRs and rewrote the feature in less than a week.

This was an awful experience. If you told me that this is the future of software I’d laugh you out of the room, because engineers make enough money and have enough leverage to just quit. If you force engineers to work this way, all the good ones will quit and retire. So you’re gonna be stuck with the guys who can’t write code reviewing code they don’t understand.

JamesBarney · 8 months ago
I think not only is this possible, it's likely for two reasons.

1. A lot more projects get the green light when the price is 5x less, and many more organizations can afford custom applications.

2. LLMs unlock large amounts of new applications. A lot more of the economy is now automatable with LLMs.

I think jr devs will see the biggest hit. If you're going to teach someone how to code, might as well teach a domain expert. LLMs already code much better than almost all jr devs.

natemwilson · 8 months ago
It’s my belief that humanity has an effectively infinite capacity for software and code. We can always recursively explore deeper complexity.
j45 · 8 months ago
I think counting the number of devs might not be the best way to go, considering not every person on a team is equally capable or skilled, and in enterprises some people are inevitably hiding in a project or team.

Comparing only the amount of forward progress in a codebase, and AI's ability to participate in or cover it, might be better.

tqi · 8 months ago
I'm not sure it is as simple as that - Induced Demand might be enough to keep the pool of human-hours steady. What that does to wages though, who can say...
throwawayffffas · 8 months ago
That's a weird way to look at it. If one developer can do what 5 could do before, that doesn't mean I will be looking for a job, it means I will be doing 5 times more work.
fire_lake · 8 months ago
> unless the amount of work expands

This is what will happen

akira2501 · 8 months ago
Even if AGI suddenly appears we will most likely have an energy feed and efficiency problem with it. These scaling problems are just not on the common roadmap at all and people forget how much effort typically has to be spent here before a new technology can take over.
plantwallshoe · 8 months ago
This is where I’m at too. By the time we fully automate software engineering, we definitely will have already automated marketing, sales, customer success, accountants, office administrators, program managers, lawyers, etc.

At that point it's total societal upheaval, and losing my job will probably be the least of my worries.

throw234234234 · 8 months ago
1) Maybe.

2) I do see this, given the money poured into this cycle, as a real possibility. It may not just be LLMs. As another comment put it, you are betting against the whole capitalist system, human ingenuity, and billions (trillions?) of dollars targeted at making SWEs redundant.

3) I think it can disrupt only knowledge jobs, and it will be a long time before it disrupts physical jobs. For SWEs this is the worst outcome - it means you are on your own w.r.t. adjusting to the changes coming. It's only "world changing", as you put it, for economic systems if it disrupts everyone at once. I don't think it will happen that way.

More to the point, software engineers will automate themselves out before other jobs for one reason: they understand AI better than people in other jobs do (even if their work is objectively harder to automate), and they tend not to protect the knowledge required to do so. They have the domain knowledge to know what to automate/make redundant.

The people that have the power to resist/slow down disruption (i.e. hide knowledge) will gain more pricing power, and therefore be able to earn more capital by taking advantage of the efficiency gains from jobs made redundant by AI. The last to be disrupted have the most opportunity to gain ownership of assets and capital from their preserved economic profits. The inefficient will win out of this - capital rewards scarcity, i.e. people that can remain in demand despite being relatively inefficient. Competition is for losers - it's, IMV, the biggest flaw of the system. As a result people will see what has happened to SWEs and make sure their own industry "has time" to adapt, particularly since many knowledge professions are really "industry unions/licensed clubs" which have the advantage of keeping their domain knowledge harder to access.

To explain further: even if software is more complicated, there just seems to be so much more capital trying to disrupt it than other industries. Given that, IMV, software demand is relatively inelastic to price due to scaling profits, making it cheaper to produce won't really benefit society all that much w.r.t. more output (i.e. what was economically good to build would have been built anyway for an inelastic-demand/scaling commodity). Generally, more supply/lower cost of a good has more absolute societal benefit when there is unmet and/or elastic demand. Instead, costs of SWEs will go down and the benefit will be distributed to the jobs/people remaining (managers, CEOs, etc.) - the people that devs think "are inefficient", in my experience. When demand is inelastic it's more re-distributive: the customer benefits and the supplier (in this case SWEs) loses.

I don't like saying this, but we gave AI every advantage: no licensing requirements, open-source software for training, etc.

flustercan · 8 months ago
> I think it can disrupt only knowledge jobs

What happens when a huge chunk of knowledge workers lose their jobs? Who is going to buy houses, roofs, cars, cabinets, furniture, Amazon packages, etc. from all the blue-collar workers?

What happens when all those former knowledge workers start flooding the job markets for cashiers and factory workers, or applying en masse to the limited spots in nursing schools or trade programs?

If GPTs take away knowledge work at any rate above "glacially slow" we will quickly see a collapse that affects every corner of the global economy.

At that point we just have to hope for a real revolution in terms of what it means to own the means of production.

taylodl · 8 months ago
Back in the late 80s and early 90s there was a craze called CASE - Computer-Aided Software Engineering. The idea was humans really suck at writing code, but we're really good at modeling and creating specifications. Tools like Rational Rose arose during this era, as did Booch notation which eventually became part of UML.

The problem was it never worked. When generating code, the best the tools could do was create all the classes for you and maybe define the methods for each class. The tools could not provide an implementation unless they provided the means to manage the implementation within the tool itself - which was awful.
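
Roughly, the generated artifact was structure with no behavior - something like this sketch (class and method names invented for illustration, not from any particular tool):

    # CASE-style output: the skeleton comes from the class diagram,
    # but every body is left for a human to fill in.
    class Order:
        def __init__(self, customer, line_items):
            self.customer = customer
            self.line_items = line_items

        def total(self):
            raise NotImplementedError  # implementation was never generated

        def submit(self):
            raise NotImplementedError  # implementation was never generated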

Why have you likely not heard of any of this? Because the fad died out in the early 2000s. The juice simply wasn't worth the squeeze.

Fast-forward 20 years and I'm working in a new organization where we're using ArchiMate extensively and are starting to use more and more UML. Just this past weekend I started wondering given the state of business modeling, system architecture modeling, and software modeling, could an LLM (or some other AI tool) take those models and produce code like we could never dream of back in the 80s, 90s, and early 00s? Could we use AI to help create the models from which we'd generate the code?

At the end of the day, I see software architects and software engineers still being engaged, but in a different way than they are today. I suppose to answer your question, if I wanted to future-proof my career I'd learn modeling languages and start "moving to the left" as they say. I see being a code slinger as being less and less valuable over the coming years.

Bottom line, you don't see too many assembly language developers anymore. We largely abandoned that back in the 80s and let the computer produce the actual code that runs. I see us doing the same thing again but at a higher and more abstract level.

lubujackson · 8 months ago
This is more or less my take. I came in on Web 1.0 when "real" programmers were coding in C++ and I was mucking around with Perl and PHP.

This just seems like the next level of abstraction. I don't foresee a "traders all disappeared" situation like the top comment describes, because at the end of the day someone needs to know WHAT they want to build.

So yes, fewer junior developers, and development looking more like management/architecting. A lot more reliance on deeply knowledgeable folks to debug the spaghetti hell. But also a lot more designers that are suddenly Very Successful Developers. A lot more product people that super-charge things. A lot more very fast startups run by some schmoes with unfundable but ultimately visionary ideas.

At least, that's my best case scenario. Otherwise: SkyNet.

Zababa · 8 months ago
Here's to Yudkowsky's 84th law.
neilv · 8 months ago
I worked on CASE, and generally agree with this.

I think it's important to note that there were a couple distinct markets for CASE:

1. Military/aerospace/datacomm/medical type technical development. Where you were building very complex things, that integrated into larger systems, that had to work, with teams, and you used higher-level formalisms when appropriate.

2. "MIS" (Management Information Systems) in-house/intranet business applications. Modeling business processes and flows, and a whole lot of data entry forms, queries, and printed reports. (Much of the coding parts already had decades of work on automating them, such as with WYSIWYG form painters and report languages.)

Today, most Web CRUD and mobile apps are the descendant of #2, albeit with branches for in-house vs. polished graphic design consumer appeal.

My teams had some successes with #1 technical software, but UML under IBM seemed to head towards #2 enterprise development. I don't have much visibility into where it went from there.

I did find a few years ago (as a bit of a methodology expert familiar with the influences that went into UML, as well as familiar with those metamodels as a CASE developer) that the UML specs were scary and huge, and mostly full of stuff I didn't want. So I did the business process modeling for a customer logistics integration using a very small subset, with very high value. (Maybe it's a little like knowing hypertext, and then being teleported 20 years into the future, where the hypertext technology has been taken over by evil advertising brochures and surveillance capitalism, so you have to work to dig out the 1% hypertext bits that you can see are there.)

Post-ZIRP, if more people start caring about complex systems that really have to work (and fewer people care about lots of hiring and churning code to make it look like they have "growth"), people will rediscover some of the better modeling methods, and be, like, whoa, this ancient DeMarco-Yourdon thing is most of what we need to get this process straight in a way everyone can understand, or this Harel thing makes our crazy event loop with concurrent activities tractable to implement correctly without a QA nightmare, or this Rumbaugh/Booch/etc. thing really helps us understand this nontrivial schema, and keep it documented as a visual for bringing people onboard and evolving it sanely, and this Jacobson thing helps us integrate that with some of the better parts of our evolving Agile process.

taylodl · 8 months ago
As I recall, the biggest problem from the last go-around was the models and implementation were two different sets of artifacts and therefore were guaranteed to diverge. If we move to a modern incarnation where the AI is generating the implementation from the models and humans are no longer doing that task, then it may work as the models will now be the only existing set of artifacts.

But I was definitely in camp #2 - the in-house business applications. I'd love to hear the experiences from those in camp #1. To your point, once IBM got involved it all went south. There was a place I was working for in the early 90s that really turned me off against anything "enterprise" from IBM. I had yet to learn that would apply to pretty much every vendor! :)

627467 · 8 months ago
It's interesting you say this because in my current process of learning to build apps for myself, I first try to build mermaid diagrams aided by an LLM. And when I'm happy, I then ask it to generate the code for me based on these diagrams.

I'm no SWE and probably never will be. SWEs probably don't consider what I do "building an app", but I don't really care.

aprilthird2021 · 8 months ago
Diagramming out what needs to be built is often what some of the highest paid programmers do all day
indrora · 8 months ago
> Bottom line, you don't see too many assembly language developers anymore.

And where you do, no LLM is going to replace them because they are working in the dark mines where no compiler has seen and the optimizations they are doing involve arcane lore about the mysteries of some Intel engineer's mind while one or both of them are on a drug fueled alcoholic deep dive.

taylodl · 8 months ago
Out of curiosity, who does assembly language programming these days? Back in the 90s the compilers had learned all our best tricks. Now with multiple instruction pipelines and out-of-order instruction processing and registers to be managed, can humans still write better optimized assembly than a compiler? Is the juice worth that squeeze?

I can see people still learning assembly in a pedagogical setting, but not in a production setting. I'd be interested to hear otherwise.

mianos · 8 months ago
I am 61, and I have been a full-time developer since I was about 19. I have lost count of the number of 'next things to replace developers' I have seen. Many of them showed promise. Many of them continue to be developed. Frameworks with higher and higher levels of abstraction.

I see LLMs as the next higher level of abstraction.

Does this mean it will replace me? At the moment the output is so flawed for anything but the most trivial professional tasks, I simply see, as before, it has a long long way to go.

Will it put me out of a job? I highly doubt it within my career. I still love it and write stuff for home and work every day of the week. I'm planning on working until I drop dead, as it seems I have never lost interest so far.

Will it replace developers as we know it? Maybe in the far future. But we'll be the ones using it anyway.

jb_briant · 8 months ago
I have a decade of experience writing web code professionally, and in my experience, LLMs are a true waste of time for web work.

On the other hand, I'm switching to game dev, and there it has become a very useful companion, outputting well-known algorithms. It's more like a universal API than a junior assistant.

Instead of taking the time to understand the algo in detail and then implementing it, I use GPT-4o to expand the Unreal API with the missing parts. It truly expands the scope I'm able to handle, and it feels good to save hours that compound into days and weeks of work.

E.g. 1. OBB and SAT: https://stackoverflow.com/questions/47866571/simple-oriented...

2. Making a grid system using lat/long coordinates for a voxel planet.
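
To give a feel for how small the core of example 1 is, here's a plain-Python sketch of the separating-axis test (an illustration only, not the Unreal C++ the model actually produced for me):

    # 2D separating-axis test for convex polygons (e.g. oriented boxes),
    # each given as a list of (x, y) corners in order.
    def _axes(corners):
        # One edge normal per edge; for a rectangle two of them are redundant.
        n = len(corners)
        return [(-(corners[(i + 1) % n][1] - corners[i][1]),
                 corners[(i + 1) % n][0] - corners[i][0]) for i in range(n)]

    def _project(corners, axis):
        dots = [x * axis[0] + y * axis[1] for x, y in corners]
        return min(dots), max(dots)

    def obb_overlap(a, b):
        for axis in _axes(a) + _axes(b):
            amin, amax = _project(a, axis)
            bmin, bmax = _project(b, axis)
            if amax < bmin or bmax < amin:
                return False  # found a separating axis -> no collision
        return True  # no separating axis -> the boxes overlap

For instance, obb_overlap([(0, 0), (2, 0), (2, 1), (0, 1)], [(1, 0.5), (3, 0.5), (3, 1.5), (1, 1.5)]) returns True.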

baq · 8 months ago
> I have a decade of experience writing web code professionally, in my experience, LLM is a true waste of time regarding web.

As someone who knows web front-end development only to the extent I need it for internal tools, it's been turning day-long fights into single-hour, dare I say pleasurable, experiences. I tell it to make a row of widgets and it outputs all the div+CSS soup (or e.g. Material components), which only needs some tuning, instead of me having to google everything.

It still takes experience to know when it doesn’t use a component when it should etc., but it’s a force multiplier, not a replacement. For now.

Copyrighted · 8 months ago
Same here. Really liking asking it Unreal C++ trivia.
euroderf · 8 months ago
Let's say we get an AI that can take well-written requirements and cough up an app.

I think you have to be a developer to learn how to write those requirements well. And I don't even mean the concepts of data flows and logic flows. I mean, just learning to organise thoughts in a way that they don't fatally contradict themselves or lead to dead ends or otherwise tie themselves in unresolvable knots. I mean like non-lawyers trying to write laws without any understanding of the entire suite of mental furniture.

mianos · 8 months ago
That is exactly what I mean when I said "we'll be the ones using it".

I didn't want to expand on it, for fear of sounding like an elitist, and you said it better anyway. The same things that make a programmer excellent will put them in a much better position to use an even better LLM.

Concise thinking and expression. At the moment LLMs will just kinda 'shotgun' scattered ideas based on your input. I expect the better ones will be massively better when fed better input.

anonzzzies · 8 months ago
I have seen this happening since the 70s. My father made a tool to replace programmers. It did not, and it disillusioned him greatly. It sold very well though. But this time is different; I had often said that I thought ChatGPT-like AI was at least 50 years out, but it's here, and it does replace programmers every day. I see the inside of many companies (I have a troubleshooting company; we get called to fix very urgent stuff, and since that's all we do, we hop around a lot), and many are starting to replace outsourced teams with, for instance, our product (which was a side project for me to see if it could be done; it can). But they are also just shrinking teams and giving them Claude or GPT to replace the bad people.

It is happening, just not yet at a scale that really scares people; that will happen though. It is just stupidly cheaper; for the price of one junior you can make so many API requests to Claude it's not even funny. Large companies are still thinking about privacy of data etc., but all of that will simply not matter in the long run.

Good logical thinkers and problem solvers won't be replaced any time soon, but mediocre or bad programmers are already gone; an LLM is faster, cheaper, and doesn't get ill or tired. And there are so many of them; just try someone on Upwork or Fiverr and you will know.

zoobab · 8 months ago
"Large companies are still thinking about privacy of data etc but all of that will simply not matter in the long run."

Privacy is a fundamental right.

And companies should care about not leaking trade secrets, including code, but the rest as well.

US companies are known to use the cloud to spy on competitors.

Companies should have their own private LLM, not rely on cloud instances with a contract that guarantees "privacy".

tugu77 · 8 months ago
> Large companies are still thinking about privacy of data etc but all of that will simply not matter in the long run.

That attitude is why our industry has such a bad rep and why things are going down the drain to dystopia. Devs without ethics. This world is doomed.

fendy3002 · 8 months ago
> It is just stupidly cheaper; for the price of one junior you can do so many api requests to claude it's not even funny

Idk if the cheap price is the real price or a promotional price, after which the enshittification and price increases happen, which is the trend for most tech companies.

And agreed, LLMs will weed out bad programmers further, though in the future bad alternatives (like analysts or bad LLM users) may emerge.

smusamashah · 8 months ago
Which tool did he make?
ipnon · 8 months ago
Yes, and if you look back on the history of the tech industry, each time programmers were supposedly on the verge of replacement was an excellent time to start programming.
fendy3002 · 8 months ago
What people expect: this will replace programmers

What really happened: this is used by programmers to improve their workflow

Aeolun · 8 months ago
The best time was 20 years ago. The second best time is now :)
pandemic_region · 8 months ago
Hey man, thanks for posting this. There's about ten years between us. I too hope that they will need to pry a keyboard from my cold dead hands.
ido · 8 months ago
I'm only 41 but that's long enough to have also seen this happen a few times (got my first job as a professional developer at age 18). I've also dabbled in using copilot and chatgpt and I find at most they're a boost to an experienced developer- they're not enough to make a novice a replacement for a senior.
bee_rider · 8 months ago
I think the concern is that they might become good enough for a senior to not need a novice. At that point, where do the seniors come from?
m_ke · 8 months ago
I've been thinking about this a bunch and here's what I think will happen as cost of writing software approaches 0:

1. There will be way more software

2. Most people/companies will be able to opt out of predatory VC-funded software and just spin up their own custom versions that do exactly what they want, without having to worry about being spied on or rug-pulled. I already do this with Chrome extensions: with the help of Claude I've been able to throw together things like a time-based website blocker in a few minutes.

3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking

4. Companies will hire way less, and probably mostly engineers to automate routine tasks that would previously have been done by humans (e.g. bookkeeping, recruiting, sales outreach, HR, copywriting/design). I've heard this is already happening with a lot of new startups.

EDIT: for people who are not convinced that these models will be better than them soon, look over these sets of slides from NeurIPS:

- https://michal.io/notes/ml/conferences/2024-NeurIPS#neurips-...

- https://michal.io/notes/ml/conferences/2024-NeurIPS#fine-tun...

- https://michal.io/notes/ml/conferences/2024-NeurIPS#math-ai-...

from-nibly · 8 months ago
> that do exactly what they want

This presumes that they know exactly what they want.

My brother works for a company and they just ran into this issue. They target customer retention as a metric. The result is that all of their customers are the WORST, don't make them any money, but they stay around a long time.

The company is about to run out of money and crash into the ground.

If people knew exactly what they wanted 99% of all problems in the world wouldn't exist. This is one of the jobs of a developer, to explore what people actually want with them and then implement it.

The first bit is WAY harder than the second bit, and LLMs only do the second bit.

cweld510 · 8 months ago
Sure, but without an LLM, measuring customer retention might require sending a request over to your data scientist because they know how to make dashboards, then they have to balance it with their other work, so who knows when it gets done. You can do this sort of thing faster with an LLM, and the communication cost will be less. So even if you choose the wrong statistic, you can get it built sooner, and find out sooner that it's wrong, and hopefully course-correct sooner as well.
a_bonobo · 8 months ago
>3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking

From working in a non-software place, I see the opposite occurring. Non-software management doesn't buy closed source software because they think it's 'better', they buy closed source software because there's a clear path of liability.

Who pays if the software messes up? Who takes the blame? LLMs make this even worse. Anthropic is not going to pay your business damages because the LLM produced bad code.

brodouevencode · 8 months ago
Good points - my company has already committed to #2
ThrowawayR2 · 8 months ago
What's the equivalent of @justsayinmice for NeurIPS papers? A lot of things in papers don't pan out in the real world.
m_ke · 8 months ago
There's a lot of work showing that we can reliably get to or above human level performance on tasks where it's easy to sample at scale and the solution is cheap to verify.
sureglymop · 8 months ago
As a junior dev, I do two conscious things to make sure I'll still be relevant for the workforce in the future.

1. I try to stay somewhat up to date with ML and how the latest things work. I can throw together some Python, let it rip through a dataset from Kaggle, run models locally, etc. (a rough sketch of what I mean follows after this list). I have my linalg and stats down and practiced. Basically, if I had to make the switch to be an ML/AI engineer, it would be easier than if I had to start from zero.

2. I otherwise am trying to pivot more to cyber security. I believe current LLMs produce what I would call "untrusted and unverified input" which is massively exploitable. I personally believe that if AI gets exponentially better and is integrated everywhere, we will also have exponentially more security vulnerabilities (that's just an assumption/opinion). I also feel we are close to cyber security being taken more seriously or even regulated e.g. in the EU.
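
For concreteness, the kind of quick pass I mean in point 1 looks roughly like this (file and column names are made up; assumes a downloaded Kaggle CSV with numeric features and pandas/scikit-learn installed):

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical local download of a Kaggle dataset with a 'target' column.
    df = pd.read_csv("kaggle_dataset.csv")
    X = df.drop(columns=["target"])
    y = df["target"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(accuracy_score(y_test, model.predict(X_test)))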

At the end of the day I think you don't have to worry if you have the "curiosity" it takes to be a good software engineer. That is because, in a world where knowledge, experience, and willingness to probe out of curiosity will be even more scarce than they are now, you'll stand out. You may leverage AI to assist you, but if you don't fully and blindly rely on it you'll always be a more qualified worker than someone who does.

HeikoKemp · 8 months ago
I'm shocked that I had to dig this deep in the comments to see someone mention cybersecurity. Same as you, seeing this trend, I'm doubling down on security. As more businesses "hack away" their projects. It's going to be a big party. I'm sure black hats are thrilled right now. Will LLMs be able to secure their code? I'm not so sure. Even human-written code is exploitable. That's their source for training.
atemerev · 8 months ago
You are the smart one; I hope everything will work for you!
matrix87 · 8 months ago
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.

I don't see this trend. It just sounds like a weird thing to say, it fundamentally misunderstands what the job is

From my experience, software engineering is a lot more human than how it gets portrayed in the media. You learn the business you're working with, who the stakeholders are, who needs what, how to communicate your changes and to whom. You're solving problems for other people. In order to do that, you have to understand what their needs are

Maybe this reflects my own experience at a big company where there's more back and forth to deal with. It's not glamorous or technically impressive, but no company is perfect

If what companies really want is just some cheap way to shovel code, LLMs are more expensive and less effective than the other well known way of cheaping out