Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.
While GPT4 is incredible, it fails OFTEN. And it fails in ways that aren’t very clear. And it fails harder when there’s clearly not enough training resources on the subject matter.
But even hypothetically if it was 20x better, wouldn’t that be a good thing? There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.
Idk where I’m going with this but if coding is something you genuinely enjoy, AI isn’t stopping anyone from doing their hobby. I don’t really see it going away any time soon, and even if it is going away it just never really seemed like the point of software engineering
Also, I think we are quite a ways out from a tool being able to devise a solution to a complex high-level problem without online precedent, which is where I find the most satisfaction anyway.
LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow, and, as such, a very good complement to a programmer's skills – seems to me like a net positive at least in the near to medium term.
Spreadsheets didn’t replace accountants; they made them more efficient. I don’t personally believe AI will replace software engineers anytime soon, but it’s already making us more efficient. Just as Excel experience is required to crunch numbers, I suspect AI experience will be required to write code.
I use ChatGPT every day for programming, and there are times when it’s spot on and more times when it’s blatantly wrong. I like to use it as a rubber duck to help me think and work through problems. But I’ve learned that whatever it outputs requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that for now they will need real engineers to come in and clean up the mess.
It's also where I find most of the work. There are plenty of off the shelf tools to solve all the needs of the company I work at. However, we still end up making a lot of our own stuff, because we want something that the off the shelf option doesn't do, or it can't scale to the level we need. Other times we buy two tools that can't talk to each other and need to write something to make them talk. I often hear people online say they simply copy/paste stuff together from Stack Overflow, but that has never been something I could do at my job.
My concern isn't about an LLM replacing me. My concern is that our CIO will think it can, firing first and thinking later.
We’ll see - but given the gap between chatgpt 3 and 4, I think AIs will be competitive with mid level programmers by the end of the decade. I’d be surprised if they aren’t.
The training systems we use for LLMs are still so crude. ChatGPT has never interacted with a compiler. Imagine learning to write code by only reading (quite small!) snippets on GitHub. That’s the state llms are in now. It’s only a matter of time before someone figures out how to put a compiler in a reinforcement learning loop while training an LLM. I think the outcome of that will be something that can program orders of magnitude better. I’ll do it eventually if nobody else does it first. We also need to solve the “context” problem - but that seems tractable to me too.
For all the computational resources they use to do training and inference, our LLMs are still incredibly simple. The fact they can already code so well is a very strong hint for what is to come.
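To make the compiler-in-the-loop idea concrete, here is a minimal sketch of the shape I have in mind (the model.generate and model.reinforce calls are hypothetical placeholders for whatever training API you'd actually use, and a real setup would score against test suites, not just a syntax check):

    # Toy sketch: use compilation success as a reward signal during training.
    # model.generate / model.reinforce are hypothetical stand-ins.
    def compiles(source: str) -> bool:
        """Does the generated Python source at least compile to bytecode?"""
        try:
            compile(source, "<generated>", "exec")
            return True
        except SyntaxError:
            return False

    def training_step(model, prompt: str) -> None:
        source = model.generate(prompt)              # sample a candidate program
        reward = 1.0 if compiles(source) else -1.0   # compiler verdict as reward
        model.reinforce(prompt, source, reward)      # e.g. a policy-gradient update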
> LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow
I think that sentence nails it. For the people who consider "searching Stack Overflow and copy/pasting" to be programming, sure, LLMs will replace your job. But software development is so much more: critical thinking, analysing, gathering requirements, testing ideas and figuring out which to reject, and more.
Two years ago we were quite a ways out from having LLMs that could competently respond to commands without getting into garbage loops and repeating random nonsense over and over. Now nobody even talks about the Turing test anymore because it's so clearly been blown past.
I wouldn't be so sure it will be very long before solving big, hard, and complex problems is within reach...
I’ve never found GPT-4 capable of producing a useful solution in my niche of engineering.
When I’m stumped, it’s usually on a complex and very multi-faceted problem where the full scope doesn’t fit into the human brain very well. And for these problems, GPT will produce some borderline unworkable solutions. It’s like a jack of all trades and master of none in code. Its knowledge seems a mile wide and an inch deep.
Granted, it could be different for junior to mid programmers.
Same here. I'm not a developer. I do engineering and architecture in IAM. I've tested out GPT-4 and it's good for general advice or problem solving. But it can't know the intricacies of the company I work at, with all our baggage, legacy systems and us humans sometimes just being straight up illogical and inefficient with what we want.
So my usage has mostly been for it to play a more advanced rubber duck to bounce ideas and concepts off of and to do some of the more tedious scripting work (that I still have to double check thoroughly).
At some point GPT and other LLMs might be able to replace what I do in large parts. But that's still a while off.
How long ago would you have considered this discussion ridiculous? How long till GPT-N will be churning out solutions faster than you can read them? It's useless for me now as well, but I'm pretty sure I'll be doomed professionally in the future.
I think much of using it well is understanding what it can and can’t do (though of course this is a moving target).
It’s great when the limiting factor is knowledge of APIs, best practices, or common algorithms. When the limiting factor is architectural complexity or understanding how many different components of a system fit together, it’s less useful.
Still, I find I can often save time on more difficult tasks by figuring out the structure and then having GPT-4 fill in the blanks. It’s a much better programmer once you get it started down the right path.
Well no, you shouldn't use it for your top-end problems, but your bottom-end problems. Aren't there things that you have to do in your job that really could be done by a junior programmer? Don't you ever have one-off (or once-a-year) things you have to do that each time you have to invest a lot of time refreshing in your brain, and then basically forgetting for lack of use?
Here's an example I used the other day: Our project had lost access to our YT channel, which had 350+ videos on it (due to someone's untimely passing and a lack of redundancy). I had used yt-dlp to download all the old videos, including descriptions. Our community manager had uploaded all the videos, but wasn't looking forward to copy-and-pasting every description into the new video.
So I offered to use GPT-4 to write a python script to use the API to do that for her. I didn't know anything about the YT API, nor am I an expert in python. I wouldn't have invested the time learning the YT API (and trying to work through my rudimentary python knowledge) for a one-off thing like this, but I knew that GPT-4 would be able to help me focus on what to do rather than how to do it. The transcript is here:
By contrast, I don't think there's any possible way the current generation could have identified, or helped fix, this problem that I fixed a few years ago:
(Although it would be interesting to try to ask it about that to see how well it does.)
The point of using GPT-4 should be to take over the "low value" work from you, so that you have more time and mental space to focus on the "high value" work.
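For a sense of scale, the core of the script it helped me write boils down to something like this (a paraphrased sketch, not the actual transcript; it assumes an already-authorized YouTube Data API v3 client and a video_id-to-description mapping, and the function name is mine):

    # Sketch: restore descriptions recovered by yt-dlp onto re-uploaded videos.
    # `youtube` is assumed to be an OAuth-authorized googleapiclient resource.
    def restore_descriptions(youtube, descriptions):
        """descriptions: dict mapping video_id -> description text."""
        for video_id, description in descriptions.items():
            # Fetch the current snippet so required fields (title, categoryId) survive.
            response = youtube.videos().list(part="snippet", id=video_id).execute()
            snippet = response["items"][0]["snippet"]
            snippet["description"] = description
            youtube.videos().update(
                part="snippet",
                body={"id": video_id, "snippet": snippet},
            ).execute()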
Same. Even for technologies that it supposedly should know a lot about (e.g. Kafka), if I prompt it for something slightly non-standard, it just makes up things that aren't supported or is otherwise unhelpful.
The one time I've found ChatGPT to be genuinely useful is when I asked it to explain a bash script to me, seeing as bash is notoriously inscrutable. Still, it did get a detail wrong somehow.
I kind of agree, but it also kind of sucks spending hours debugging code in which GPT-4 has carefully concealed numerous bugs.
I mean, raise your hand if debugging code that looks obviously correct is the part of programming you enjoy most?
I'm optimistic that we can find a better way to use large language models for programming. Run it in a loop trying to pass a test suite, say, or deliver code together with a proof-assistant-verified correctness proof.
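Something like this is the loop I mean, as a rough sketch where ask_llm_for_patch is a hypothetical stand-in for a real model call and pytest acts as the oracle:

    # Rough sketch: keep asking the model for code until the test suite passes.
    import subprocess

    def run_tests():
        """Run the project's test suite; return (passed, combined output)."""
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def generate_until_green(task, ask_llm_for_patch, max_attempts=5):
        feedback = ""
        for _ in range(max_attempts):
            candidate = ask_llm_for_patch(task, feedback)  # model proposes code
            with open("solution.py", "w") as f:            # drop it into the project
                f.write(candidate)
            passed, output = run_tests()
            if passed:
                return True
            feedback = output                              # feed failures back in
        return False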
Yeah, I agree. I was thinking about it today — that most of my life I have coded projects that I have enjoyed. (Well, I often found ways to enjoy them even when they were unwelcome projects dropped on my desk.)
In a larger sense though I think I have looked for projects that allowed a certain artistic license rather than the more academic code that you measure its worth in cycles, latency or some other quantifiable metric.
I have thought though for some time that the kind of coding that I enjoyed early in my career has been waning long before ChatGPT. I confess I began my career in a (privileged it seems now) era when the engineers were the ones minding the store, not marketing.
I've been saying the same thing. Coding is the worst part of the process. I've been doing it for 20 years professionally and another 10 or more on top of that as a hobby. Don't care about code, just want to make things. Code sucks.
While I don't want to go as far as saying that it sucks, I do largely agree with the sentiment. Personally, I do like coding a little bit but mostly as a puzzle but for the most part it is a means to an end.
Lately, I have been using ChatGPT and the OpenAI API to do exactly that for a few projects. I used it to help me round out the design, brainstorm about approaches, tune database requirements, etc. I basically got to the point where I had a proof of concept for all the separate components in a very short amount of time. Then for the implementation it was a similar story. I already had a much more solid idea (technical and functional design, if you will) of how I wanted to implement things than I normally do. And, for most of the things where I would get slowed down normally, I could just turn to the chat. Then by just telling it what part I had trouble with, it would get me back on track in no time.
Having said all that, I couldn't have used it in such a way without any knowledge of programming. Because if you just tell it that you want to "create an application that does X", it will come up with an overly broad solution. All the questions and problems I presented to it came from a position where I already knew the language, the platform and a general sense of the requirements.
I think LLMs are the wrong solution for this problem.
Why make something that produces low level code based off of existing low level code instead of building up meaningful abstractions to make development easier and ensure that low level code was written right?
Basically, React and similar abstractions for other languages did more to take "coding" out of creating applications than GPT ever will, IMO.
Many designers despise AI generated images, because they love the process itself. I knew one who missed the slow loading of massive design documents, because he would use that time to get inspired by stuff.
There were probably a lot of loom weavers that felt the same about their tools. But the times, they are a-changing.
If you don't want to code, how do you "make things"? (Presumably by "things" you mean programs/apps.) "Making" and "coding" are synonymous for programmers.
>Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.
You're not the minority. You're the majority. The majority can't look reality in the face and see the end. They lie to themselves.
>While GPT4 is incredible, it fails OFTEN. And it fails in ways that aren’t very clear. And it fails harder when there’s clearly not enough training resources on the subject matter.
Everyone, and I mean everyone, knows that it fails often. Use some common sense here. Why was the article written despite the fact that everyone knows what you know? Because of the trendline. What AI was yesterday versus what it is today heralds what it will be tomorrow, and every tomorrow AI will be failing less and less and less until it doesn't fail at all.
>But even hypothetically if it was 20x better, wouldn’t that be a good thing? There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.
Ever the optimist. The reality is we don't know if it's good or bad. It can be both or it can weigh heavily in one direction. Most likely it will be both given the fact that our entire careers can nearly be replaced.
>Idk where I’m going with this but if coding is something you genuinely enjoy, AI isn’t stopping anyone from doing their hobby. I don’t really see it going away any time soon, and even if it is going away it just never really seemed like the point of software engineering
Sure. AI isn't going to end hobbies. It's going to end careers and ways of life. Hobbies will most likely survive.
I appreciate your position but I want to push back against this type of rhetorical defense of stuff that has no basis in evidence or reasonable expectation.
This sentiment parrots Sam Altman's and Musk's insistence that "AI" is super-powerful and dangerous, which is baseless rhetoric.
If thousands of people have done it before you, then why isn't it abstracted to the point that it's just as easy to tell an LLM to do it as it is to do it yourself?
It'll be amazing if anyone can request any basic program they want. Totally amazing if they can request any complex program.
I cannot really envision a more empowering thing for the common person. It should really upset the balance of power.
I think we'll see, soon, that we've only just started building with code. As a lifelong coder, I cannot wait to see the day when anyone can program anything.
From my experience, most people have only the vaguest idea of what they want, and no clue about the contradictions or other problems inherent in their idea. That is the real value that a good software engineer provides - finding and interpreting the requirements of a person who doesn't understand software, so that someone who does can build the right thing.
How would this anyone be able to evaluate whether the program they requested is correct or not?
Automatic program generation from human language really feels like the same problem with machine translation between human languages. I have an elementary understanding of French and so when I see a passage machine translated into French (regardless of software, Google Translate or DeepL) I cannot find any mistakes; I may even learn a few new words. But to the professional translator, the passage is full of mistakes, non-idiomatic expressions and other weirdness. You aren't going to see publishers publishing entirely machine translated books.
I suspect the same thing happens for LLM-written programs. The average person finds them useful; the expert finds them riddled with bugs. When the stakes are low, like tourists not speaking the native language, machine translation is fine. So will many run-once programs destined for a specific purpose. When the stakes are high, human craft is still needed.
Requesting a basic or complex program still requires breaking down the problem into components a computer can understand. At least for now, I haven’t seen evidence most people are capable of this. I’ve been coding for ~15 years and still fail to describe problems correctly to LLMs.
To me the best part of AI is that I can ask it a question about how some code or API construct works, and then ask a follow-up question. That was not possible before with Google.
I can ask exactly what I want in English, not by entering a search-term. A search-term is not a question, but a COMMAND: "Find me web-pages containing this search-term".
By asking exactly the question I'm looking the answer to I get real answers, and if I don't understand the answer, I can ask a follow-up question. Life is great and there's still an infinite amount of code to be written.
This is the main benefit I get from the free ChatGPT. I ask a question more related to syntax e.g. how to make a LINQ statement since I haven't been in C# for a few weeks and I forget. If it gets things a little wrong I can drill down until it works. It's also good for generic stuff done a million times like a basic API call with WebClient or similar.
We tested Copilot for a bit but for whatever reason, it sometimes produced nice boilerplate but mostly just made one-line suggestions that were slower than just typing if I knew what I was doing. It was also strangely opinionated about what comments should say. In the end it felt like it added to my mental load, since I had to parse each suggestion and decide whether to take or ignore it, so I turned it off. Typing is (and has been for a while) not the hard part of my job anyway.
Some people, I feel, fear losing their siloed prestige built on arcane software knowledge. A lot of negativity from more senior tech people towards GPT-4+ and AI in general seems like fear of irrelevance: it will be too good and render them redundant despite their having spent decades building their skills.
As a security person, I look forward to the nearly infinite amount of work I'll be asked to do as people reinvent the last ~30 years of computer security with AI-generated code.
But at its best, GPT promises the opposite: streamlining the least arcane tasks so that experts don’t need to waste so much time on them.
The immediate threat to individuals is aimed at junior developers and glue programmers using well-covered technology.
The long-term threat to the industry is what happens a generation later, when there have been no junior developers grinding their skills on basic tasks.
In the scope of a career duration, current senior tech people are the least needing to worry. Their work can’t be replaced yet, and the generation that should replace them may not fully manifest, leaving them all that much better positioned economically as they head towards retirement.
I've fired a lot of negativity at people for treating the entropy monster as a trustworthy information source. It's a waste of my time to prove it wrong to their satisfaction. It's great at creativity and recall but shitty at accuracy, and sometimes accuracy is what counts most.
If your prestige is based solely on "arcane software knowledge", then sure, LLMs might be a threat. Especially as they get better.
But that is just one part of being a good software engineer. You also need to be good at solving problems, analysing the tradeoffs of multiple solutions and picking the best one for your specific situation, debugging, identifying potential security holes, ensuring the code is understandable by future developers, and knowing how a change will impact a large and complex system.
Maybe some future AI will be able to do all of that well. I can't see the future. But I'm very doubtful it will just be a better LLM.
I think the threat from LLMs isn't that they can replace developers. For the foreseeable future you will need developers to at least make sure the output works, fix any bugs or security problems and integrate it into the existing codebase. The risk is that it could be a tool that makes developers more productive, and therefore fewer of them are needed.
Can you blame them? Cushy tech jobs are the jackpot in this life. Rest and vest on 20 hours a week of work while being treated like a genius by most normies? Sign me up!
At this moment, it is still not possible to do away with people in tech that have "senior" level knowledge and judgements.
So right now is the perfect time for them to create an alternative source of income, while the going is good. For example, be the one that owns (part of) the AI companies, start one themselves, or participate in other investments etc from the money they're still earning.
If a successor to GPT4 produced 5% of the errors it currently does, it would change programming, but there would still be programmers, the focus of what they worked on would be different.
I'm sure there was a phase where some old-school coders who were used to writing applications from scratch complained about all the damn libraries ruining coding -- why, all programmers are doing now is gluing together code that someone else wrote! True or not, there are still programmers.
I agree, but mind you, libraries have always been consciously desired and heavily implemented. Lady Ada did it. Historically but more recently, the first operating systems began life as mere libraries.
But the worst problem I ever had was a vice president (acquired when our company was acquired) who insisted that all programming was, should, and must by-edict be only about gluing together existing libraries.
Talk about incompetent -- and about misguided beliefs in his own "superior intelligence".
I had to protect my team of 20+ from him and his stupid edicts and complaints, while still having us meet tight deadlines of various sorts (via programming, not so much by gluing).
Part of our team did graphical design for the web. Doing that by only gluing together existing images makes as little sense as it does for programming.
I disagree. For every 100 problems that would be convenient to solve in software, maybe 1 is important enough to the whims of the market that there are actually programmers working on it. If software becomes 100x easier to make, then you don't end up with fewer programmers, you end up with more problems being solved.
And once 100% of the problems that can be solved with software are already solved with software... that's pretty much post-scarcity, isn't it?
When we get to that point -- beyond a machine regurgitating reasonable facsimiles of code based on human examples, but actually designing and implementing novel systems from the ground up -- we'll need far, far fewer workers in general.
I hate to post the typical "as an ADHDer" comment, but ugh: as someone with ADHD, ChatGPT and Copilot are insane boosts to productivity. I sometimes have to google the most stupid things about the language I've coded in daily for half a decade now, and Copilot or ChatGPT is amazing at reducing friction there.
I don't, however, think that we're anywhere near being replaced by the AI overlords.
Frankly, I enjoy software development more because I can bounce obscure ideas off GPT4 and get sufficient quality questions and ideas back on subjects whenever it suits my schedule, as well as code snippets that lets me solve the interesting bits faster.
Maybe it'll take the coding part of my job and hobbies away from me one day, but even then, I feel that is more of an opportunity than a threat - there are many hobby projects I'd like to work on that are too big to do from scratch where using LLMs are already helping make them more tractable as solo projects and I get to pick and choose which bits to write myself.
And my "grab bag" repo of utility code that doesn't fit elsewhere has had its first fully GPT4 written function. Nothing I couldn't have easily done myself, but something I was happy I didn't have to.
For people who are content doing low level, low skilled coding, though, it will be a threat unless they learn how to use it to take a step up.
What do you mean by "low level" here? In the commonly accepted terminology I would take this to mean (nowadays) something that concerns itself more with the smaller details of things, which is exactly where I feel that current AI fails the most. I wouldn't trust it to generate even halfway decent lower-level code overall, whereas it can spit out reams of acceptable (in that world) high-level JavaScript.
We can already run local models on a laptop that are competitive with ChatGPT 3.5.
Open source may trail OpenAI if they come out with a 20x improvement, but I'm not sure the dystopian future playing out is as likely as I would have thought 1-2 years ago.
GPT4 code output is currently at the level of a middling CS student. This shouldn't encourage self-assurance or complacency, because it is absolutely certain to change as LLMs with some deep learning are built to self-test code and adopt narrow "critical thinking skills" to discriminate between low- and high-quality code.
Ultimately, the most valuable coders who will remain will be a smaller number of senior devs that will dwindle over time.
Unfortunately, AI is likely to reduce and suppress tech industry wages in the long term. If the workers had a clue, rather than watching their incomes gradually evaporate and sitting on their hands, they would organize and collectively bargain even more so than Hollywood actors.
> Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.
I've come to state something like this as "programming is writing poetry for many of your interesting friends somewhere on the autistic spectrum". Some of those friends are machines, but most of those friends are your fellow developers.
The best code is poetry: our programming languages give a meter and rhyme and other schemes to follow, but what we do within those is creative expression. Machines only care about the most literal interpretations of these poems, but the more fantastic and creative interpretations are the bread and butter of software design. This is where our abstractions grow, from abstract interpretations. This is the soil in which a program builds meaning and comprehension for a team, becomes less the raw "if-this-then-that" but grows into an embodiment of a business' rules and shares the knowledge culture of the whys and hows of what the program is meant to do.
From what I've seen, just as the literal interpretations are the ones most of interest to machines, these machines we are building are best at providing literally interpretable code. There's obviously a use for that. It can be a useful tool. But we aren't writing our code just for the solely literal-minded among us, and there's so much creative space in software development that describes/needs/expands into abstraction and creative interpretation that, for now (and maybe for the foreseeable future), it still makes so many differences between just software and good software (from the perspectives of long-term team maintainability, if nothing deeper).
I tested out GPT-4 the other day and asked it to generate a simple layout of two boxes in a row using Tailwind, and hilariously, the resulting code actually crashed my browser tab. I reviewed the code and it was really basic, so this shouldn't have happened at all. But it consistently crashed every time. I'm still not entirely sure what happened, maybe an invisible character or something. I think it's more funny than anything else.
There's also a split between fresh ("green-field") projects versus modifying existing code ("brown-field"), where whatever generated snippet of code you get can be subtly incompatible or require shaping to fit in the existing framework.
The massive shared model could do better if it was fed on your company's private source-code... but that's something that probably isn't/shouldn't-be happening.
Although you are absolutely right, I think the point the author is trying to make is more melancholic. He's grieving about a loss of significance of the craft he has devoted so much of his life to. He's imagining software engineers becoming nothing more than a relic, like elevator operators or blacksmiths.
One of those is not like the others. Elevator operators disappeared entirely while the blacksmith profession morphed into the various type of metalworker that we still have today.
There are SO MANY problems left to solve even if software development is fully automated. Not just product management problems, but product strategy problems. Products that should be built that nobody has thought of yet.
If I could automate my own work, I would gladly switch to just being the PM for my LLM.
To be fair, there is an abstract worry that being smart will no longer be valuable in society if AI replaces all brain work. But I think we are far from that. And a world where that happens is so DIFFERENT from ours, I think I'd be willing to pay the price.
AI taking over one of the only professions able to afford someone a proper middle class existence is pretty shitty. It will be great for capitalists though.
This is the real point. If the profits from AI (or robots) replacing Job X were distributed among the people who used to do Job X, I don't think anyone would mind. In fact it would be great for society! But that's not what's going to happen. The AI (and robots) will be owned by the Shrinking Few, all the profits and benefits will go to the owners, and the people who used to do Job X will have to re-skill to gamble on some other career.
It’s also one of the few fields with good compensation that can be broken into with minimal expense — all one needs is an old laptop, an internet connection, and some grit. Just about anything else that nets a similar or better paycheck requires expensive training and equipment.
The "people" at the top in charge want nothing less than the population to be poor and dependant. There's a reason they've done everything they can to suppress wages and eliminate good jobs.
Despite that here on HN you have people cheering them on, excited for it. Tech is one of the last good paying fields and these people don't realize it's not a matter of changing career, because there won't be anything better to retrain in.
I'll ask simple questions for SQL queries and it just hallucinates fields that don't exist in system/information_schema tables. It's mind boggling how bad it is sometimes
Code generating LLMs are simply a form of higher-level language. The commercial practice of software development (C++, Java, etc) is very far from the frontier of higher-level languages (Haskell, Lisp, etc).
Perhaps "prompt engineering" will be the higher-level language that sticks, or perhaps it will fail to find purchase in industry for the same reasons.
There's a huge difference between LLMs and "higher level languages": Determinism
The same C++ or Java or Haskell code run with the same inputs twice will produce the same result[0]. This repeatability is the magic that enables us to build the towering abstractions that are modern software.
And to a certain mind (eg, mine), that's one of the deepest joys of programming. The fact that you can construct an unimaginably complex system by building up layer by layer these deterministic blocks. Being able to truly understand a system up to abstraction boundaries far sharper than anything in the world of atoms.
LLMs based "programming" threatens to remove this determinism and, sadly for people like me, devalue the skill of being able to understand and construct such systems.
[0]Yes, there are exceptions (issues around concurrency, latency, memory usage), but as a profession we struggle mightily to tame these exceptions back to being deterministic because there's so much value in it.
Am I the only one becoming less impressed by LLMs as time passes?
I will admit, when Copilot first became a thing in 2021, I had my own “I’m about to become obsolete” moment.
However, it’s become clear to me, both through my own experience and through research that has been conducted, that modern LLMs are fundamentally flawed and are not on the path to general intelligence.
We are stuck with ancient (in AI terms) technology. GPT 4 is better than 3.5, but not in a fundamental way. I expect much the same from 5. This technology is incredibly flawed, and in hindsight, once we have actual powerful AI, I think we’ll laugh at how much attention we gave it.
> Am I the only one becoming less impressed by LLMs as time passes?
Not at all.
I was very impressed at first but it's gotten to the point where I can no longer trust anything it says other than very high level overviews. For example, I asked it to help me implement my own sound synthesizer from scratch. I wanted to generate audio samples and save them to wave files. The high level overview was helpful and enabled me to understand the concepts involved.
The code on the other hand was subtly wrong in ways I simply couldn't be sure of. Details like calculating the lengths of structures and whether something did or did not count towards the length were notoriously difficult for it to get right. Worse, as a beginner just encountering the subject matter I could not be sure if it was correct or not, I just thought it didn't look right. I'd ask for confirmation and it would just apologize and change the response to what I expected to hear. I couldn't trust it.
It's pretty great at reducing the loneliness of solo programming though. Just bouncing ideas and seeing what it says helps a lot. It's not like other people would want to listen.
> It's pretty great at reducing the loneliness of solo programming though. Just bouncing ideas and seeing what it says helps a lot. It's not like other people would want to listen.
It's really great for this.
I've found it useful for taking some pattern I've been cranking on with an extensive API and finishing the grunt work for me... it generally does a very good job if you teach it properly. I recently had to do a full integration of the AWS Amplify Auth library and instead of grinding for half a day to perfect every method, it just spat out the entire set of actions and reducers for me with well-considered state objects. Again, it needs guidance from someone with a clue, so I don't fear it taking my job anytime soon.
>Am I the only one becoming less impressed by LLMs as time passes
Jaron Lanier has some ideas about the space in between the Turing test and Blade Runner.
The first filmgoers, watching simple black-and-white movies, thought that they were uncanny. A train coming towards the screen would make audiences jump and duck. When people first heard gramophones, they reported that they were indistinguishable from a live orchestra.
As we learn a technology, we learn to recognize it and get a feel for its limitations and strengths. The ability to detect that technology is a skill. It becomes less impressive over time.
It's hard not to be impressed when a thing does a thing that you did not think it could do.
We then move on to being unimpressed when the thing cannot do the thing we thought it would be able to do.
I am not sure that GPT-4 is not better in a fundamental way than GPT-3.5. To me they seem like night and day. If GPT-5 is a similar jump, it will be impossible to compete without using it (or a related/similar model). Yes, they are both GPT models trained as simple autoregressive LMs, but there is a dramatic change you can experience at a personal level once GPT-4 can synthesize information correctly to address your specific requests in so many different contexts where GPT-3.5 was simply parroting like a toddler.

All of LLM work is just probabilistic inference on large bodies of text, but I do buy the idea that with enough compute and data a sufficiently large model will build the architecture it optimally needs to understand the data in the best possible way during training. And once the data becomes multimodal, the benefit to these probabilistic models can theoretically be multiplicative rather than just additive, as each new modality will clarify and eliminate previously wrong representations of the world.

Yes, we will all laugh at how good GPT-10 trained with text, image, video, audio, and taste sensors will be, and yet GPT-4 was a major step forward, much bigger than any step taken by humanity so far.
I am seeing people seriously using the "Please write an expression for me which adds 2 and 2" prompt in order to get the "2+2" expression they need, boasting that they got it with magical efficiency. In all honesty, I don't like writing all that much, and writing code for me is always shorter and faster than trying to describe it in natural language; that is why we need code in the first place.
It sounds like your initial impression was an overestimate and your current impression is a correction back down from that. You could say that it's "fundamentally flawed" coming from a very high initial expectation, but you could just as easily say "this is an amazing tool" coming from the point of view that it's "worthless" as many people seem to think
If I can be so bold as to chime in, perhaps "fundamentally flawed" because its design means it will never be more than a very clever BS engine. By design it is a stochastic token generator, and its output will only ever be fundamentally some shade of random unless a fundamental redesign occurs.
I was also fooled and gave it too much credit; if you engage in a philosophical discussion with it, it seems purpose-built for passing the Turing test.
If LLMs are good at one thing, it's tricking people. I can't think of a more dangerous or valueless creation.
Yes. Much of the "wow factor" of generative AI is simple sleight of hand. Humans are trained to see patterns where there are none, and ignore anything that doesn't fit our preconceived notions of order. Often AI is just a complicated Clever Hans effect.
For a real example: once you start analyzing an AI image with a critical mind, you see that most of the image violates basic rules of composition, perspective and anatomy. The art is frankly quite trash, and once you see it it is hard to unsee.
AI is the next bubble. VCs are really pushing it but I don't see this solving day to day software development problems anytime soon. Solving difficult CS problems is one thing and I do find it impressive, unfortunately the greater majority of everyday work is not about generating Snake games or 0/1 knapsack solutions.
Also the idea that we'll need fewer engineers is bogus. Technology doesn't reduce the amount of work we do, it just increases productivity and puts more strain on individuals to perform. With AI spitting out unmaintainable code nobody understands, I can only see more work for more engineers as the amount of code grows.
Idk. Tech bubbles, hype cycles.. they're weird, sometimes unhinged.. they're not entirely irrational.
In aggregate, they are just the phenomena of an extremely high-risk, high-reward investment environment.
Most tech companies do not need cash to scale. There are few factories to be built. What they need is risk capital. The big successes, Alphabet, Facebook, Amazon... these wins are so big that they really do "justify" the bubbles.
Amazon alone, arguably justifies the '90s dotcom bubble. The tens of billions invested into venture, IPOs... A balanced portfolio accrued over the period, was probably profitable in the long term... Especially if the investor kept buying through and after the crash.
IDK that anyone actually invests in risky startups that way, but just as a thought device..
> We are stuck with ancient (in AI terms) technology.
What are you talking about? ChatGPT came out only a year ago, GPT-4 less than a year ago. That's the opposite of ancient technology, it's extremely recent.
I have a simple front-end test that I give to junior devs. Every few months I see if ChatGPT can pass it. It hasn’t. It can’t. It isn’t even close.
It answers questions confidently but with subtle inaccuracies. The code that it produces is the same kind of non-sense that you get from recent bootcamp devs who’ve “mastered” the 50 technologies on their eight page résumé.
If it’s gotten better, I haven’t noticed.
Self-driving trucks were going to upend the trucking industry in ten years, ten years ago. The press around LLMs is identical. It’s neat but how long are these things going to do the equivalent of revving to 100 mph before slamming into a wall every time you ask them to turn left?
I’d rather use AI to connect constellations of dots that no human possibly could, have an expert verify the results, and go from there. I have no idea when we’re going to be able to “gpt install <prompt>” to get a new CLI tool or app, but it’s not going to be soon.
I was on a team developing a critical public safety system on a tight deadline a few years ago, and I had to translate some wireframes for the admin back-end into CSS. I did a passable job but it wasn’t a perfect match. I was asked to redo it by the team lead. It had zero business value, but such was the state of our team…being pixel perfect was a source of pride.
It was one of the incidents that made me to stop front-end development.
As an exercise, I recently asked ChatGPT to produce similar CSS and it did so flawlessly.
I’m certainly a middling programmer when it comes to CSS. But with ChatGPT I can produce stuff close to the quality of what the CSS masters do. The article points this out: middling generalists can now compete with specialists.
> I recently asked ChatGPT to produce similar CSS and it did so flawlessly.
I use ChatGPT every day for many tasks in my work and find it very helpful, but I simply do not believe this.
> The article points this out: middling generalists can now compete with specialists.
I'd say it might allow novices to compete with middling generalists, but even that is a stretch. On the contrary, ChatGPT is actually best suited to use by a specialist who has enough contextual knowledge to construct targeted prompts & can then verify & edit the responses into something optimal.
I can’t get ChatGPT to outperform a novice. And now I’m having candidates argue that they don’t need to learn the fundamentals because LLMs can do it for them. Good luck, HTML/CSS expert who couldn’t produce a valid HTML5 skeleton. Reminds me of the pre-LLM guy who said he was having trouble because he usually uses React. So I told him he could use React. I don’t mean to rag on novices but these guys really seemed to think the question was beneath them.
If you want to get back into front-end read “CSS: The Definitive Guide”. Great book, gives you a complete understanding of CSS by the end.
ChatGPT goes from zero to maybe 65th percentile? There or thereabouts. It's excellent if you know nothing. It's mediocre and super buggy if you're an expert.
A big difference is that the expert asks different questions, off in the tails of the distribution, and that's where these LLMs are no good. If you want a canonical example of something, the median pattern, it's great. As the ask heads out of the input data distribution the generalization ability is weak. Generative AI is good at interpolation and translation, it is not good with novelty.
(Expert and know-nothing context dependent here.)
One example: I use ChatGPT frequently to create Ruby scripts for this and that in personal projects. Frequently they need to call out other tools. ChatGPT 4 consistently fails to properly (and safely!) quote arguments. It loves the single-argument version of system which uses the shell. When you ask it to consider quoting arguments, it starts inserting escaped quotes, which is still unsafe (what if the interpolated variable contains a quote in its name). If you keep pushing, it might pull out Shell.escape or whatever it is.
I assume it reproduces the basic bugs that the median example code on the internet does. And 99% of everything being crap, that stuff is pretty low quality, only to be used as an inspiration or a clue as to how to approach something.
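The same class of bug shows up outside Ruby too; in Python terms (my analogy, not something ChatGPT produced), the difference looks roughly like this:

    import shlex
    import subprocess

    filename = "track 01 'live'.mp3"  # spaces and quotes break naive interpolation

    # The pattern LLMs reach for: build a shell string by hand. Escaped quotes
    # still break on awkward names, and hostile input can inject commands.
    subprocess.run(f"ffprobe '{filename}'", shell=True)   # fragile / unsafe

    # The boring, safe version: pass an argument vector, no shell involved.
    subprocess.run(["ffprobe", filename])                 # safe

    # If a shell string is truly unavoidable, quote programmatically.
    subprocess.run("ffprobe " + shlex.quote(filename), shell=True)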
> middling generalists can now compete with specialists.
They can maybe compete in areas where there has been a lot of public discussion about a topic, but even that is debatable as there are other tasks than simply producing code (e.g. debugging existing stuff). In areas where there's close to no public discourse, ChatGPT and other coding assistance tools fail miserably.
>> The article points this out: middling generalists can now compete with specialists.
They can't, and aren't even trying to. It's OpenAI that's competing with the specialists. If the specialists go out of business, the middling generalists obviously aren't going to survive either so in the long term it is not in the interest of the "middling generalists" to use ChatGPT for code generation. What is in their interest is to become expert specialists and write better code both than ChatGPT currently can, and than "middling generalists". That's how you compete with specialists, by becoming a specialist yourself.
Speaking as a specialist occupying a very, very er special niche, at that.
> middling generalists can now compete with specialists.
I want to say that this has been the state of a lot of software development for a while now, but then, the problems that need to be solved don't require specialism; they require people to add a field to a database or to write a new SQL query to hook up to a REST API. It's not specialist work anymore, but it requires attention and meticulousness.
But if you are a middling programmer when it comes to CSS, how do you know the output was “flawless” and close to the quality that CSS “masters” produce?
You may think it did a good job because of your limited CSS ability. I'd be amazed if ChatGPT can create pixel-perfect animations and transitions along with reusable clean CSS code which supports all of the browser requirements at your org.
I've seen similar claims made on Twitter by people with zero programming ability claiming they've used ChatGPT to build an app. Although 99% of the time what they've actually created is some basic boilerplate React app.
> middling generalists can now compete with specialists.
Middling generalists can now compete with individuals with a basic understanding assuming they don't need to verify anything that they've produced.
> It had zero business value, but such was the state of our team…being pixel perfect was a source of pride
UX and UI are not some secondary concerns that engineers should dismiss as an annoying "state of our team" nuance. If you can't produce a high quality outcome you either don't have the skills or don't have the right mindset for the job.
I'm a developer but also have an art degree and an art background. I'm very mediocre at art and design. But lately I've been using AI to help plug that gap a bit. I really think it will be possible for me to make an entire game where I do the code, and AI plus my mediocre art skills get the art side across the line.
I think at least in the short term, this is where AI's power will lie. Augmentation, not replacement.
It probably depends on the area. CSS is very popular on one hand and limited to a very small set of problems on the other.
I did try asking ChatGPT about system-related stuff several times and had given up since then. The answers are worthless if not wrong, unless the questions are trivial.
ChatGPT works if it needs to answer a question that was already answered before. If you are facing a genuinely new problem, then it's just a waste of time.
I suspect that the "depth" of most CSS code is significantly shallower than what gets written in general purpose programming languages. In CSS you often align this box, then align that box, and so forth. A lot of the complexity in extant CSS comes from human beings attempting to avoid excessive repetition and typing. And this is particularly true when we consider the simple and generic CSS tasks that many people in this thread have touted GPT for performing. There are exceptions where someone builds something really unique in CSS, but that isn't what most people are asking from GPT.
But the good news is that "simple generic CSS" is the kind of thing that most good programmers consider to be essentially busywork, and they won't miss doing it.
> middling generalists can now compete with specialists
Great point. That's been my experience as well. I'm a generalist and ChatGPT can bring me up to speed on the idiomatic way to use almost any framework - provided it's been talked about online.
I use it to spit out simple scripts and code all day, but at this point it's not creating entire back-end services without weird mistakes or lots of hand holding.
That said, the state of the art is absolutely amazing when you consider that a year ago the best AIs on the market were Google or Siri telling me "I'm sorry I don't have any information about that" on 50% of my voice queries.
AI is a tool. Like all tools, it can be useful, when applied the right way, to the right circumstances. I use it to write powershell scripts, then just clean them up, and voila.
That being said, humans watch too much tv/movies. ;)
>The article points this out: middling generalists can now compete with specialists.
This is why you're going to get a ton of gatekeepers asking you to leetcode a bunch of obscure stuff with zero value to business, all to prove you're a "real coder". Like the OP.
I would really like to see the prompts for some of these. Mostly because I'm an old-school desktop developer who is very unfamiliar with modern frontend.
So, don't leave us in suspense; what do you ask of it? Because I'm quite sure it can already pass it.
Your experience is very different from mine anyway. I am a grumpy old backend dev who uses formal verification in anger when I consider it needed, and who gets annoyed when things don't act logically. We are working with computers, so everything should be logical, but no; I mean things like a lot of frontend stuff. I ask our frontend guy 'how do I center a text', and he says 'text align'. Obviously I tried that, because that would be logical, but it doesn't work, because frontend is, for me, absolutely illogical. Even frontend people actually have to try and fail; they cannot answer simple questions without trying, like I can in backend systems.
Now, in this new world, I don't have to bother with it anymore. If Copilot doesn't just squirt out the answer, then ChatGPT-4 (and now my personal custom GPT, 'front-end hacker', which knows our codebase) will fix it for me. And it works, every day, all day.
finalAlice's Children have no parent. When you point this out, it correctly advises regarding the immutable nature of these types in F#, then proceeds to produce a new solution that again has a subtle flaw: Alice -> Bob has the correct parent... but Alice -> Bob -> Alice -> Bob is missing a parent again.
Easy to miss this if you don't know what you're doing, and it's the kind of bug that will hit you one day and cause you to tear your hair out when half your program has a Bob-with-parent and the other half has an Orphan-Bob.
Phrase the question slightly differently, swapping "Age: int" with "Name: string":
Now it produces invalid code. Share the compiler error, and it produces code that doesn't compile in a different way -- it has marked Parent mutable but then tried to mutate Children. Share the new error, and it concludes you can't have mutable properties in F#, when you actually can; it just tried marking the wrong field mutable. If you fix the error, you have correct code, but ChatGPT-4 has misinformed you AND started you down a wrong path...
Don't get me wrong - I'm a huge fan of ChatGPT, but it's nowhere near where it needs to be yet.
You can ask it almost anything. Ask it to write a YAML parser in something a bit more complex like Rust and it falls apart.
Rust mostly because it's relatively new, and there isn't a native YAML parser in Rust (there is a translation of libfyaml). Also, you can't bullshit your way out of Rust by making a bunch of void* pointers.
That's been my experience both with Tesla AP/FSD implementation & with LLMs.
Super neat trick the first time you encounter it, feels like alien tech from the future.
Then you find all the holes. Use it for months/years and you notice the holes aren't really closing. The pace of improvement is middling compared to the gap between where it is and where the marketing/rhetoric says it should be. Eventually using them feels more like a chore than not using them.
It's possible some of these purely data driven ML approaches don't work for problems you need to be more than 80% correct on.
Trading algos that just need to be right 55% of the time to make money, recommendation engines that present a page of movies/songs for you to scroll, Google search results that come back with a list you can peruse, Spam filters that remove some noise from your inbox.. sure.
But authoritative "this is the right answer" or "drive the car without murdering anyone".. these problems are far harder.
With the AI "revolution," I began to appreciate the simplicity of models we create when doing programming (and physics, biology, and so on as well).
I used to think about these things differently: I felt that because our models of reality are just models, they aren't really something humanity should be proud of that much. Nature is more messy than the models, but we develop them due to our limitations.
AI is a model, too, but of far greater complexity, able to describe reality/nature more closely than what we were able to achieve previously. But now I've begun to value these simple models not because they describe nature that well but because they impose themselves on nature. For example, law, being such a model, is imposed on reality by the state institutions. It doesn't describe the complexity of reality very well, but it makes people take roles in its model and act in a certain way. People now consider whether something is legal or not (instead of moral vs immoral), which can be more productive. In software, if I implement the exchange of information based on an algorithm like Paxos/Raft, I get provable guarantees compared to if I allowed LLMs to exchange information over the network directly.
I tried for 2 hours to get ChatGPT to write a working smooth interpolation function in python. Most of the functions it returned didn't even go through the points between which it should be interpolating. When I pointed that out it returned a function that went through the points but it was no longer smooth. I really tried and restarted over multiple times. I believe we have to choose between a world with machine learning and robot delivery drones. Because if that thing writes code that controls machines it will be total pandemonium.
It did a decent job at trivial things like creating function parameters out of a variable tho.
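For reference, the kind of answer I was hoping for is only a few lines with the right library; here's a sketch on toy data (using SciPy's cubic splines, which are smooth and pass exactly through the given points):

    # A smooth interpolant that actually passes through the input points.
    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.array([0.0, 1.0, 2.5, 4.0])   # toy sample points
    y = np.array([1.0, 3.0, 2.0, 5.0])

    spline = CubicSpline(x, y)           # C2-continuous, exact at the knots

    xs = np.linspace(x[0], x[-1], 200)   # dense grid for a smooth curve
    ys = spline(xs)

    assert np.allclose(spline(x), y)     # the curve hits every input point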
That's weird to read. Interpolations of various sorts are known and solved and should have been digested by ChatGPT in bulk during training. I'm not doubting your effort by any means; I'm just saying this sounds like one of those things it should do well.
There's a recent "real coding" benchmark that all the top LLMs perform abysmally on: https://www.swebench.com/
However, it seems only a matter of time before even this challenge is overcome, and when that happens the question will remain whether it's a real capability or just a data leak.
I have a very similar train of thought roll through my head nearly every day now as I browse through github and tech news. To me it seems wild how much serious effort is put into the misapplication of AI tools on problems that are obviously better solved with other techniques, and in some cases where the problem already has a purpose built, well tested, and optimized solution.
It's like the analysis and research phase of problem solving is just being skipped over in favor of not having to understand the mechanics of the problem you're trying to solve. Just reeks of massive technical debt, untraceable bugs, and very low reliability rates.
When studying fine art, a tutor of mine talked about "things that look like art", by which she meant the work that artists produce when they're just engaging with surface appearances rather than fully engaging with the process. I've been using GitHub Copilot for a while and find that it produces output that looks like working code but, aside from the occasional glaring mistake, it often has subtle mistakes sprinkled throughout it too. The plausibility is a serious issue, and means that I spend about as much time checking through the code for mistakes as I'd take to actually write it, but without the satisfaction that comes from writing my own code.
I dunno, maybe LLMs will get good enough eventually, but at the moment it feels plausible to me that there's some kind of an upper limit caused by its very nature of working from a collection of previous code. I guess we'll see...
Try breaking down the problem. You don't have to do it yourself, you can tell ChatGPT to break down the problem for you then try to implement individual parts.
When you have something that kind of works, tell ChatGPT what the problems are and ask for refinement.
IMHO the current weak point of LLMs is that they can't really tell what's adequate for human consumption. You have to act as a guide who knows what's good, what can be improved, and how it can be improved. ChatGPT will be able to handle the implementation.
In programming you don't have to worry too much about hallucinations because it won't work at all if it hallucinates.
It hallucinates and it doesn't compile, fine.
It hallucinates and flips a 1 with a -1; oops that's a lot of lost revenue. But it compiled, right?
It hallucinates, and in 4% of cases rejects a home loan when it shouldn't because of a convoluted set of nested conditions, only there is no one on staff that can explain the logic of why something is laid out the way it is and I mean, it works 96% of the time so don't rock the boat.
Oops, we just oppressed a minority group or everyone named Dave because you were lazy.
I'd be curious to see how a non expert could perform a non-trivial programming task using ChatGPT. It's good at writing code snippets which is occasionally useful. But give it a large program that has a bug which isn't a trivial syntax error, and it won't help you.
> In programming you don't have to worry too much about hallucinations because it won't work at all if it hallucinates.
You still have to worry for your job if you're unable to write a working program.
Similar experience. I recently needed to turn a list of files into a certain tree structure. It's a non-trivial problem with a bit of an algorithmic flavor, and I wondered whether GPT could save me some time there. No. It never gave me the correct code. I tried different prompts and even different models (including the latest GPT-4 Turbo); none of the answers were correct, even after follow-ups. By then I had already wasted 20 minutes.
> Self-driving trucks were going to upend the trucking industry in ten years, ten years ago.
And around the same time, 3D printing was going to upend manufacturing; bankrupting producers as people would just print what they needed (including the 3D printers themselves).
A few weeks ago, I was stumped on a problem, so I asked ChatGPT (4) for an answer.
It confidently gave me a correct answer.
Except that it was "correct," if you used an extended property that wasn't in the standard API, and it did not specify how that property worked.
I assume that's because most folks that do this, create that property as an extension (which is what I did, once I figured it out), so ChatGPT thought it was a standard API call.
Since it could have easily determined whether or not it was standard, simply by scanning the official Apple docs, I'm not so sure that we should rely on it too much.
ChatGPT seems to invent plausible API calls when there's nothing that would do the job. This is potentially useful, if you have control of the API. Undesirable if you don't. It doesn't know.
There's a Swiss town which had autonomous shuttles running for 5 years (2015-2021) [1].
There are at least two companies (Waymo and Cruise) running autonomous taxi services in US cities that you can ride today.
There have been lots of incorrect promises in the world of self-driving trucks/cars/buses but companies have gotten there (under specific constraints) and will generalize over time.
It should be noted that the Waymo and Cruise experiments in their cities are laughably unprepared for actual chaotic traffic, often fail in completely unpredictable ways and are universally hated by locals. Autonomous buses and trams are much more successful because the problem is much easier too.
Those "autonomous" vehicles have as much to do with real autonomy as today's "AI" has in common with real self-conscious intelligence. You can only fake it so long, and it is an entirely different ballgame.
I remember we had spam filters 20 years ago, and nobody called them "AI", just ML. Today's "AI" is ML, but on a larger scale. In a sense, a million monkeys typing on typewriters will eventually produce all the works of Shakespeare. Does that make them poets?
If it's not a life or death situation (like a self-driving truck slamming into a van full of children or whatever), I don't think people will care much. Non-tech people (i.e. managers, PMs) don't necessarily understand/care if the code is not perfect and the barrier for "good enough" is much lower. I think we will see a faster adoption of this tech...
No. If the code generated by ChatGPT cannot even pass the unit test it generates in the same response (or is just completely wrong) and requires a significant amount of human work to fix, it is not usable AI.
That's what I am running into on an everyday basis.
I have to ask, though: if ChatGPT has by most accounts gotten better at coding by leaps and bounds in the last couple years, might that not also indicate that your test isn't useful?
I agree this is the first time there is something like irrefutable, objective evidence that the tests no longer measure anything genuinely useful for programming. There has nonetheless been an industry-wide shift against leetcode for a long time.
Push comes to shove, it always tends to come down to short term cost.
If it gets the job done, and it's wildly cheaper than the status quo (net present value savings), they'll opt for it.
The only reason the trucks aren't out there gathering the best data there is, real-world data, is regulation.
Businesses will hire consultants at a later stage to do risk assessment and fix their code base.
> Every few months I see if ChatGPT can pass it. It hasn’t. It can’t. It isn’t even close.
As someone currently looking for work, I'm glad to hear that.
About 6 months ago, someone was invited to our office and the topic came up. Their interview tests were all easily solved by ChatGPT, so I've been a bit worried.
My take on LLMs is as follows: even if their effectiveness scaled exponentially with time (it doesn't), the complexity of programs also grows (statistically speaking) with each line of code.
Assuming an LLM gets 99% of lines correct, after 70 lines the chance of having at least one of them wrong is already around 50%. An LLM effective enough to replace a competent human might be so expensive to train and gather data for that it never achieves a return on investment.
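The arithmetic behind that ~50% figure, assuming each line independently has a 99% chance of being right (a simplification, obviously):

    p_line_ok = 0.99
    for n in (10, 70, 300):
        p_any_wrong = 1 - p_line_ok ** n
        print(f"{n:>3} lines: P(at least one wrong) = {p_any_wrong:.1%}")
    # ~9.6% at 10 lines, ~50.5% at 70 lines, ~95.1% at 300 lines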
Last time I used ChatGPT effectively was to find a library that served a specific purpose. All of the four options it gave me were wrong, but I found what I wanted among the search results when I looked for them.
The more automated tools will separately write tests and code, and if the code doesn't compile or pass the tests, they will feed themselves the error messages and update the code.
Code Interpreter already does this a bit in ChatGPT Plus, with some success.
I don't think it needs much more than a GPT-4-level LLM, plus a change in IDEs and code structure, to get this working well enough. Where it gets stuck, it'll flag a human for help.
We'll see though! Lots of startups and big tech companies are working on this.
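A rough sketch of what such a loop could look like; everything here (the model name, the prompts, the file layout, the pytest invocation) is an assumption for illustration, not any particular product's implementation:

    import subprocess
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def generate_with_feedback(task: str, max_rounds: int = 3) -> str:
        code = ask(f"Write a Python module that solves this task:\n{task}")
        for _ in range(max_rounds):
            with open("candidate.py", "w") as f:
                f.write(code)
            # Run the existing test suite against the candidate code.
            result = subprocess.run(["pytest", "tests/", "-q"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # tests pass; hand off to a human for review
            # Feed the failure output back and ask for a fix.
            code = ask(
                f"This code failed its tests.\n\nCode:\n{code}\n\n"
                f"Test output:\n{result.stdout}\n{result.stderr}\n\n"
                "Return a corrected version of the full module."
            )
        return code  # still failing: flag it for a human

The interesting design question is what happens when the loop doesn't converge; flagging a human, as suggested above, seems like the sane default.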
I understand your concern, but isn't this apples vs. oranges?
Yes, ChatGPT can't pass a particular test of X to Y. But does that matter when ChatGPT is both the designer and the developer? How can it be wrong, when its answer meets the requirements of the prompt? Maybe it can't get from X to Y, but if its Z is as good as Y (to the prompter) then X to Y isn't relevant.
Sure there will be times when X to Y is required but there are plenty of other times where - for the price - ChatGPT's output of Z will be considered good enough.
"We've done the prototype (or MVP) with ChatGPT...here you finish it."
It is likely that LLMs have an upper bound on capability. The same likely holds for denoising AI like Stable Diffusion.
You can put even more data into them and refine the models, but the growth in capability has diminishing returns. Perhaps this is as far as the strategy can bring us, although I believe they can still be vastly improved, and what they already offer is nevertheless impressive.
I have no illusions about the craft of coding becoming obsolete, however. On the contrary, I think the tooling for the "citizen developer" is becoming worse, as is the capacity for abstraction among ordinary users, since they are fenced into candyland.
You must be interviewing good junior front-end devs. I have seen the opposite: GPT-4 can put together a simple, straightforward front-end, while juniors will go straight to create-react-app or Next.js.
Are the junior devs expected to code it without running it and without seeing it rendered, or are they allowed to iterate on the code getting feedback from how it looks on screen and from the dev tools? If it is the second one, you need to give the agent the same feedback including screen shots of any rendering issues to GPT4-V and all relevant information in dev tools for it to be a fair comparison. Eventually there will be much better tooling for this to happen automatically.
We have devs that use AI assist, but it’s to automate the construction of the most mindless boilerplate or as a more advanced form of auto complete.
There is no AI that comes close to being able to design a new system or build a UI to satisfy a set of customer requirements.
These things just aren’t that smart, which is not surprising. They are really cool and do have legitimate uses but they are not going to replace programmers without at least one order of magnitude improvement, maybe more.
Cool...so what's the test? We can't verify if you're talking shit without knowing the parameters of your test.
AI isn't capable of generating the same recipe for cookies as my grandma; she took the recipe to her grave. I loved her cookies, they were awesome... but lots of people thought they were shit, and I insist that they are mistaken.
Unfortunately, I can't prove I'm right because I don't have the recipe.
If you can get it to stop parroting clauses about how, "as an AI model," it can't give advice, or spewing a list of steps to achieve something, I have found it to be a pretty good search engine for obscure things about a technology or language, and for finding things that would otherwise require a specific query that Google is unhelpful at.
I've told people: in every experiment I do with it, it seems to do better than asking Stack Overflow, or it helps me prime some code that'll save me a couple of hours but still requires manual fix-ups and a deep understanding of what it generates so I can correct it.
Basically the gruntest of grunt work it can do. If I explain things perfectly.
I'm probably bad at writing prompts, but in my limited experience, I spend more time reviewing and correcting the generated code than it would have taken to write it myself. And that is just for simple tasks. I can't imagine thinking an LLM could generate millions of lines of bug-free code.
Asking GPT to do a task for me currently feels like asking a talented junior to do so. I have to be very specific about exactly what it is I'm looking for, and maybe nudge it in the right direction a couple of times, but it will generally come up with a decent answer without me having to sink a bunch of time into the problem.
If I'm honest though I'm most likely to use it for boring rote work I can't really be bothered with myself - the other day I fed it the body of a Python method, and an example of another unit test from the application's test suite, then asked it to write me unit tests for the method. GPT got that right on the first attempt.
Personally, this flow works fine for me: AI does the first version -> I heavily edit it, debug it, and write tests for it -> the code does what I want -> I tell the AI to refactor it -> the tests pass and the ticket is done.
> It answers questions confidently but with subtle inaccuracies.
This is a valid challenge we are facing as well. However, remember that ChatGPT, which many coders use, is likely training on interactions, so you have some human reinforcement learning correcting its errors in real time.
How is it trained on interactions? Do people give it feedback? In my experience, I stop asking when it provides something useful or something so bad I give up (usually the latter, I'm afraid). How would it tell a successful answer from a failing one?
Have you tried the new Assistants API for your front-end test? In my experience it is _significantly_ better than just plain ol’ ChatGPT for code generation.
Making that claim but not sharing the "simple test" feels a bit pointless tbh.
Edit: I see, they don't want it to be scraped (cf. https://news.ycombinator.com/item?id=38260496), though as another poster pointed out, submitting it might be enough for it to end up in the training data.
As long as this is true, ChatGPT is going to be a programmer's tool, not a programmer's replacement. I know that my job as I know it will vanish before I enter retirement age, but I don't worry it will happen in the next few years because of this.
I have the same experience with the test I give my back-end devs. ChatGPT can't even begin to decode an encoded string if you don't tell it which encoding was used.
ChatGPT is great at some well defined, already solved problems. But once you get to the messy real world, the wheels come off.
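For contrast, here is the guess-the-encoding loop a human writes almost reflexively when the encoding isn't given. This is only an illustration of that kind of messiness, not the actual interview task, and the candidate encodings are just examples:

    import base64
    import binascii
    from urllib.parse import unquote

    def try_decodings(s):
        """Return every plausible decoding of s among a few common encodings."""
        attempts = {}
        try:
            attempts["base64"] = base64.b64decode(s, validate=True).decode("utf-8")
        except (binascii.Error, UnicodeDecodeError):
            pass
        try:
            attempts["hex"] = bytes.fromhex(s).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            pass
        if "%" in s:
            attempts["url"] = unquote(s)
        return attempts

    print(try_decodings("aGVsbG8gd29ybGQ="))  # {'base64': 'hello world'}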
I hope that you are testing this on GPT-4/ChatGPT Plus. The free ChatGPT is completely not representative of the capabilities or the accuracy of the paid model.
I tried doing this and it actually took longer due to all of the blind alleys it led me down.
There is stuff it appears magically competent at, but it's almost always cribbed from the internet, tweaked with the trust cues removed, and often with infuriating, subtle errors.
I interviewed somebody who used it (who considered that "cheating") and the same thing happened to him.
You got everyone talking about how GPT isn’t that bad at coding etc but everyone is missing the point.
The no code industry is massive. Most people don’t need a dev to make their website already. They use templates and then tweak them through a ui. And now you have Zapier, Glide, Bubble etc.
LLMs won’t replace devs by coding entire full stack web apps. They’ll replace them because tools will appear on the market that handle the 99% cases so well that there is just less work to do now.
I collaborate with front-end teams that use a low-code front-end platform. When they run into things that aren’t built-in, they try to push their presentation logic up the stack for the “real” programming languages to deal with.
Do people seriously consider this the waning days of the craft? I don’t understand that.
My view is that I am about to enter the quantum productivity period of coding.
I am incredibly excited about AI assistance on my coding tasks, because it improves not only what I’m writing, but also helps me to learn as I go. I have never had a better time writing software than I have in the last year.
I’ve been writing software for a few decades. But now I’m able to get past the places where I get stuck, with almost a coach available to help me understand the choices I’m making and to make suggestions constantly. It’s not just wandering over to a fellow coder’s desk to ask them about a problem I’m facing; it actually gives me productive solutions that are genuinely inspiring for the outcome.
It’s amazing.
So why do people think that coding is coming to some kind of end? I don’t see any evidence that artificial intelligence coding assistants are about to replace coders, unless you… suck badly at building things, so what are people getting on about?
I feel like somebody came along and said, “foundations are now free, but you still get to build a house. But the foundations are free.”
I still have to build a house, and I get to build an entire house and architect it and design it and create it and socialize it and support it and advocate for it and explain it to people who don’t understand it but… I don’t have to build a foundation anymore so it’s easier.
I agree it's amazing. But your comment doesn't touch on the key economic question that will decide for how many people it will be this amazing new dev experience.
If AI makes developers twice as productive (maybe a few years down the road with GPT-6), will this additional supply of developer capacity get absorbed by existing and new demand? Or will there be half as many developers? Or will the same number of developers get paid far less than today?
These questions arise even if not a single existing dev job can be completely taken over by an AI.
A secondary question is about the type of work that lends itself to AI automation. Some things considered "coding" require knowing a disproportionate number of tiny technical details within a narrowly defined context in order to effect relatively small changes in output. Things like CSS come to mind.
If this is the sort of coding you're doing then I think it's time to expand your skillset to include a wider set of responsibilities.
Considering how much the craft has expanded - when in high school, I wrote an application for pocket for a small business in Borland Delphi 7. The domain knowledge I needed for that was knowing the programming environment, and a bit about Windows.
Nowadays, like the rest of the full-stack 'web' developers, I work on complex webapps that use Typescript, HTML, CSS, Kubernetes, Docker, Terraform, Postgres, bash, GitHub Actions, .NET, Node, Python, AWS, Git. And that isn't even the full list.
And it's not even a flex, all of the above is used by a relatively straightforward LoB app with some hairy dependencies, a CI/CD pipeline + a bit of real world messiness.
I need to have at least a passing familiarity with all those technologies to put together a working application and I'm sure I'm not alone with this uphill struggle. It's a staggering amount to remember for a single person, and LLMs have been a godsend.
> “If AI makes developers twice as productive (maybe a few years down the road with GPT-6), will this additional supply of developer capacity get absorbed by existing and new demand? Or will there be half as many developers? Or will the same number of developers get paid far less than today?”
Something to remember is that every new innovation in software development only raises the expectations of the people paying the software developers.
If developers are 3x as productive, then the goals and features will be 3x as big.
The reason for this is that companies are in competition, if they lag behind, then others will eat up the market.
The company that fires 50% of their staff because of “AI Assistance” is not going to be able to compete with the company that doesn’t fire their staff and still uses “AI Assistance”…
Increased developer productivity will lead to a lot more software development, at all levels, rather than less.
1. "In economics, the Jevons paradox occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced."
I’m very bullish. My dad and I were talking about this the other day, but there are still so many quality-of-life improvements that are bottlenecked by not having enough developers or enough time.
For example, deploying hardware to all subways to track their status and location with higher accuracy. I want a truly smart meal planner that can look in my fridge and my eating habits and tell me what I need to eat. I want a little plugin for Obsidian that will let me sync tasks with Linear.
There are tons of tiny little pieces of technology that would be useful but their economic value is low or unclear; if developers become more efficient at not only writing code but testing, deploying, getting feedback, these become possible. Or, the LLMs can’t become great at all of those things, and developers keep their jobs (and high pay). You can’t have it both ways.
It is way "easier" to be productive as a developer today vs 20 years ago. I can easily build web apps fast that would have taken a big team to build if even possible. Yet, more devs today than 20 years ago. If every dev is suddenly 20 times more productive (however that would be measured) means that some projects that in the past would not be "profitable"/"worth it" are now no brainers and companies will invest in doing those projects.
Regarding economics, this has been brought up about automation for many decades now. Humans adapt, adjust and productivity output increases to even higher levels.
Yes, it may harm certain professions, careers and people, but the only option is to accept and adapt. We can talk all day and night about how sad and unfair this is, but unfortunately there is nothing stopping such technological advancements.
Though a little blunt at times, the libertarian economist and journalist Henry Hazlitt talks about this in great detail in his book, Economics in One Lesson, which I highly recommend to anyone no matter what belief system. He also wrote many essays on it, including one I'll share here:
https://fee.org/articles/the-curse-of-machinery/
Reasons why (supposition, not necessarily in agreement, just arguments I am familiar with) -
It's because foundations are now free, but nobody understands how they work anymore - or soon won't, hence "waning" as opposed to "disappeared". There are whole levels that a coder needed to understand in the past that recent entrants to the field do not understand, and that can still subtly affect how things work; if no one understands the things the craft depends on, then the craft is waning.
For anyone who started programming more than 13 years ago in the most widespread programming discipline for the past few decades (Web technologies), which in this career makes you super-old, the craft is waning because every year it becomes more and more difficult to understand the whole stack that the web depends on. Not even considering the whole stack of technologies that my first paragraph alluded to.
For Frontend coders it is waning because there are ever increasing difficulties to find out how to do something someone else did by looking at their code - modern build technologies means looking at the code of a site is not worthwhile. And people were already complaining about that 13+ years ago.
If you have kids or responsibilities outside work, then in combination with this, the need to produce things, and the ever-increasing eating of everything by software (opening new problems and business areas one might need to be cognizant of), it becomes less possible for those not in school to hone their craft through purposeful work. For this reason it may be that the craft is waning.
Finally improving productivity is not necessarily something that correlates with improving the craft - industrialization may have improved productivity and made many products available to many people that did not have them before, but it is pretty well known that it was not beneficial to the crafts. Perhaps the feeling is the same here.
Isn't this a slight over-generalization from web dev? If you learned programming in the pre-web era then you weren't able to learn how programs work by studying the shipped artifacts, but the craft wasn't waning, far from it.
I learned HTML in the mid nineties and even then I don't honestly recall learning very much from View Source. HTML has never been particularly easy to read even when written by the rare breed who are fanatical about "semantic" markup (in quotes because even so-called semantic markup doesn't communicate much in the way of useful semantics). HTML lacks extremely basic abstraction features needed to make code readable, like (up until very recently) any kind of templating or components system, so even if you were trying to learn from Yahoo! in 1996 you'd be faced with endless pages of Perl-generated HTML boilerplate. So I think most of us learned the old fashioned way, from books, tutorials and copious amounts of fiddling.
I think what will happen is what happened to hardware. You used to come across people who could do things like solder components to a board and hook up various ICs as well. Heck, I did this at uni.
Now that layer of craftsmen is gone. You are either an uber-expert in how computer hardware works, or you just buy the hardware and treat it like a kind of magic.
Traditionally to become an expert you went through an intermediate stage, showed your interest, and got trained. Your hobby could turn into a profession, since the step up was gentle and interested people were needed to do some of the supporting work.
Nowadays if you're going to work in a chip fab, it's not because you were a soldering iron kid. You go through a quite long academic process that doesn't really recruit from the ranks of the hobbyists, though of course you'd expect there to be some natural interest. But the jump isn't gentle, you end up learning some pretty advanced things that make soldering seem like the stone age.
Software still has this layer of craftsmen, though it is rapidly dying, and not just from LLMs. There's an enormous number of people who can more or less hook up a website and make it work, without knowing everything about how exactly it does that. There are also plenty of Excel people and Python scripting people who use code in their day-to-day but can't tell you advanced concepts. There are a lot of modern services that make this sort of thing easier (WordPress etc.), and this level of skill is very learnable by people in the developing world. It's not that you can't become a real expert in those parts of the world, but economically it makes sense that there are a lot of intermediately skilled people there.
What will happen with GPT and the like is the experts will not need as much help from the juniors. If you're a software architect in charge of a whole system, you won't just sketch out the skeleton of the system and then farm out the little tasks to junior coders. Maybe there are pieces that you'd farm out, but you will definitely save on the number of juniors because an LLM will give you what you need.
The result being that we'll get fewer people trained to the highest levels, but those who are will be much more productive. So those guys will indeed be entering the quantum age, but it strands a lot of people.
Yes, the same thing happened before. There was a generation that tinkered with cars, there was a generation that tinkered with radios. There was a generation that tinkered with computer hardware. And then there was a generation that tinkered with software, and now it's going away.
The real question for me is whether AI will put humans out of most intellectual work, not just programming. Then, even if AI is shared equitably and satisfies our every need, most of us become some kind of sponges that don't need to think for a living. Or human intelligence will remain but only as a peacock's tail, like physical strength is now.
Maybe a true "friendly AI" would recognize the human need to be needed, for human capabilities to stay relevant, and choose to put limits on itself and other AIs for that reason.
My sense is that the doomers in software these days are either inexperienced and lack perspective from industry shifts over the years or weren't very good to begin with and could not build beyond basic crud and data shipping applications of varying complexity.
> My sense is that the doomers in software these days are either inexperienced and lack perspective from industry shifts over the years or weren't very good to begin with and could not build beyond basic crud and data shipping applications of varying complexity.
My experience rather is that such people often (though not always) are quite good programmers, but came to a different conclusion on how it makes sense to develop software than the direction the industry shifted to (and often have good, though sometimes non-mainstream, reasons for their opinions). Thus, despite being good programmers, they "fell out of fashion" with how you are (by today's fashion/hype) supposed to build software. So they became harder and harder to employ and thus more and more frustrated (just to be clear: in my experience they are often quite right in their opinions).
I mean most people in most jobs are average, or below. Programmers aren't immune: most of them are bad at their jobs on aggregate. Of course they can be replaced by LLM output: their output is already half broken too. I'm only slightly exaggerating for effect, but it's been 17 years for me in this industry and I've seen that over and over again.
He is. Later in the article he explains a task he did during the working day to his wife. It seemed like his employer uses some sort of custom tableview control, or he was working on a business report. Honestly it sounded like the bigger threat to that particular task was someone finding an open source library that did it already, but it illustrated the point and the story was nicely written.
That’s interesting. I think there’s a real shift going on in terms of what coding actually means. I’ve never produced so much code so quickly, and it’s disconcerting to the part of me that wants to feel like I earned the outcome on a pretty deep level, because I can just generate shit that works so fast and then edit it until I get exactly what I’m looking for, and it feels like cheating in a weird way.
I still don’t feel that way about front end frameworks for the web. Oh my God what what are people doing?
The author asserts that he is a professional programmer as one of his day jobs, as well as handling the “serious programming” in his hobby projects with a mostly-hardware guy who is also an ex-professional-programmer, but whose coding skills are out-of-date.
His LinkedIn has nothing more current than Jan 2017 at Genius, and that role (one of only two jobs with what looks like an IC-like title, “Developer”, that lasted more than a couple months), has a description that indicates it wasn't mostly an IC position but drifting among recruiting, acting product lead, PM, and something like an assistant to the CEO roles.
He really has a resume that screams of decent communication skills, a professional network (and maybe management skills), and of wanting to be seen as a programmer, while consistently needing to find a way out of an environment, or into a different role in the same org, before the checks his communication skills wrote, and his tech skills couldn't cash, caught up with him.
Waning days of the career associated with it. Similar to how old craftsmanship has been superseded by machine/industrialisation.
The need for mastery of the craft will be lesser, and so mastery of it will wane, as people depend on AI instead. Then if you ever need someone to do it without AI, you might no longer be able to find anyone with the craftsmanship know-how to do it.
Also, it'll probably bring down how good coding is as a career. As productivity enters this quantum state, you'll need fewer engineers, and they'll need fewer qualifications, which might translate to fewer jobs, worse pay, and an increased expectation of productivity that makes you dependent on AI, since you can no longer meet these expectations and compete with others unless you also fully leverage AI.
Open a ChatGPT account. Set up a GitHub account if you don’t have one. Think of a common problem and see if a library exists in your favourite language to do what you want. If it doesn’t exist, explain your requirements to ChatGPT. You can even tell it the same thing you put in your comment here: “I want to learn how to use ChatGPT to augment my programming skills”. Make sure it’s done properly with automated tests and you have a licence (ChatGPT will help you with both).
Bonus points for doing things like using GitHub actions for CI and publishing it to a package repository with guidance from ChatGPT.
The author of the article seems to be a mediocre coder; perhaps they have not had enough experience in the domain to arrive at a big-picture perspective.
This is funny because the conclusion GP arrives at is more or less the exact same one that TFA ends on. Literally, the last two sentences: "I shouldn’t worry that the era of coding is winding down. Hacking is forever."
> My view is that I am about to enter the quantum productivity period of coding.
It comes down to whether society's demand for programmers is fixed, or if it scales with how productive programmers are, or more likely some mix of these scenarios. If the demand is fixed, then you are just taking someone's job by being more productive.
Certain programming jobs will be replaced with AI. My prediction is that by the end of 2024 we will have a commercial AI that does an okay job of converting UI designs to frontend code. By the end of 2025 a commercial AI will be able to create UI designs and produce frontend code that's at least 80% complete.
I honestly think we’re entering an age where a skilled entrepreneur will be able to run a $10M/year business all by themselves with AI assisted coding, design and marketing.
> I honestly think we’re entering an age where a skilled entrepreneur will be able to run a $10M/year business all by themselves with AI assisted coding, design and marketing.
You forgot the accounting, taxation, legal and compliance stuff, which at least in Germany takes a huge amount of the entrepreneur's time. :-(
Yeah, I found TFA to be very annoying with its mix of nostalgia (despite the author not even being 40 years old?!) and silly examples.
1.) Apple’s programming environment being forbidding: no shit, Apple is well known for the hoops developers have to jump through to be allowed into their monopoly; why any self-respecting hacker would ever want to code for iOS (and now even macOS, Android, Windows) is beyond me. I guess that's partially because they are US-based, where it's still somewhat respectable / not illegal?
2.) Visual C++ : similar deal, just a few years later the bar for graphical interfaces would be significantly lowered with the likes of Python+Qt, Java, HTML+CGI... and even back then for non-graphical interfaces we had BASIC.
(The design and programming of graphical interfaces is its own distinct profession for a reason, jumbling it in with «coding» as a big indistinct mass just because the same person used to have to do both regardless of their level of proficiency is missing the point.)
3.) > When I got into programming, it was because computers felt like a form of magic. The machine gave you powers but required you to study its arcane secrets—to learn a spell language. This took a particular cast of mind. I felt selected. I devoted myself to tedium, to careful thinking, and to the accumulation of obscure knowledge. Then, one day, it became possible to achieve many of the same ends without the thinking and without the knowledge. Looked at in a certain light, this can make quite a lot of one’s working life seem like a waste of time.
It's not the LLMs that got rid of this; Web search engines already did, years ago! (It also takes skill to «prompt engineer» a search engine to find you a good answer. Not to mention that one of the still ongoing - but for how long? - issues is the enshittification of search through Google's monopoly. It should get better once they are out of the picture.)
4.) Proposition 5 of Euclid’s Elements I
This seems to be another good example of how a lot of these issues stem from the lack of good teaching / documentation. Consider how much clearer it would be if you merely split it into two propositions!
It also seems to be a good example of an arbitrary goal of hardly any practical significance? (FizzBuzz, which a good fraction of CS college students are unable to grok, would be a much better example?) But perhaps I am wrong, and a mathematician / mathematics teacher can explain?
I have way better examples: I am still pissed that we keep teaching students pi instead of tau, and pseudovectors (papering over the difference with vectors!) instead of geometric algebra. Imagine if we still had to do math with words, like before the modern era (sqrt() also comes to mind), instead of mathematical notation!
5.) There's also a whole big chunk about coders being (/ having been) a highly valued profession, and it's the likely loss of that that the article seems to be mostly deploring, but we shouldn't confuse it with the other things.
I don't see it - and by that I don't mean I don't think AI can write good code and get better over time. I just don't see how it would work as a workflow to replace (most) devs by AI.
If I take a junior programmer's task, say creating CRUD endpoints: describing the requirement in a way that matches exactly what I want will probably take more time than doing the coding assisted by something like Copilot. Can we really imagine a non-technical user having an AI do development from A to Z? What if the generated code has a bug; can we really imagine that at no point someone will need to be in the loop? Even if a tech person intervenes when there's a bug, how much time would be lost investigating what the AI wrote and trying to understand in retrospect what happened? The time or cost saved writing the code would be lost quickly. Writing code is a small part of the job, after all. LLMs are good at generating code, but they are fundamentally not problem solvers.
The technology is amazing, but I think LLMs will just be another tool in the arsenal for devs. It's also an amazing tutor. It can avoid having to call a developer for some self-contained problems (writing a script to scrape content from a web page, for example).
I agree in the sense that, in its current state, it will not replace devs entirely. But it can make the workflow much easier for many devs, to the point where you'd possibly need fewer of them or can get more done with the same amount.
But basically I have found that it is a really powerful general-purpose assistant/brainstorming pal for a lot of the things that would normally eat up a lot of time.
To expand on that, it isn't even limited to code, but also surrounding tasks. I have used it to round out documentation in various ways. Either by giving it a lot of the rough information and asking to write coherent documentation for me, or by giving me feedback.
The other way around it has helped me onboard on new projects by giving it bits and pieces of written text where I had trouble understanding what the document said.
In the same sense when dealing with management bullshit I have used it to just presenting it with what was asked, telling it my view on it and then asking it to come up with responses based around certain perspectives. Which meant I had to spend less mental bandwidth on inane stuff there as well.
And yes, a lot of it can also be achieved in a team with other people. But those people aren't always around and also have their own things to do. The advantage of tools like ChatGPT is that they don't get tired, so I can really channel my inner toddler and just keep asking "why" until I am satisfied with the answer.
Even if there are other people available to turn to for help, ChatGPT can also help in refining the questions you want to ask.
At this stage, it's like a junior. It's pretty useful for the boilerplate or mundane stuff and even with implementation of algorithms from human description to computer code.
It's also very good at converting a code from one language to another and getting stuff done when directed properly.
I can definitely see how large an impact it will have on the employment prospects of many. It's not replacing engineers yet, but those who specialise in a tech as an implementation specialist are screwed. Even the increased productivity alone will reduce the demand.
In most real world systems increased efficiency leads to more demand, not less - barring physical constraints. Usually because we can now apply the _thing_ to way more complex scenarios. I predict way more software will be created per year and more complex software will exist and evolve at a faster pace.
Once you make agriculture more efficient we can't really eat way more than today so we need less people working there. But if you make software easier to write, I think you'll just end up with way more software because the complexity of human needs and processes is unbounded unlike eating food.
I’ve spent a long time in public-sector digitalisation in Denmark, and where every other “no-code/low-code” tool (or whatever you call the tools that claim not to require programmers) has failed miserably, GPT is succeeding. Yes, yes, I know some of you sell these tools and some of you have even been very successful in that effort, but in my anecdotal experience from 98 municipalities it’s never once worked, not for long anyway.
By contrast we now have digitally inclined employees creating and automating things with the help of ChatGPT. Much of it is terrible in terms of longevity, the same way the previous RPA or workflow tools were, but unlike those, people can now also maintain them. At least as long as you keep the digitally inclined employees on board, because they’re still not software developers, and things like scaling, resource usage, documentation, error handling and so on aren’t being done. But they could be, at least for around 90% of it, which will frankly never matter enough to warrant an actual software developer anyway, because it mostly “frees” a few hours a month through its automation.
But with this ability, and the improvements in things like SharePoint Online (and likely its competition that I don’t know about), a lot of the stuff you’d need either staffed software developers or external consultants to handle can now be handled internally.
This isn’t the death of software engineering as such. Like I said, it doesn’t scale, and it’s also going to create a bunch of issues in the long term as more and more of this “amateur” architecture needs to function together. But at the same time, if you need to Google how to get some random lines of text from a dictionary, then I’m not sure you aren’t in danger either. And I really don’t say this to be elitist, but everything you google your way through can be handled fairly easily by GPT, and “good enough” that it’s simply going to happen more and more in our industry.
If you look into my history you’ll see that I’m both impressed and unimpressed by LLMs (well GPT, let’s be honest the others all suck). This is because it really hasn’t been able to help us develop anything in our daily work. It writes most of our documentation, and it’s scary good at it. It also does a lot of code-generation, like auto-generating types/classes/whatever you call them from Excel data mapping sheets + CRUD functionality. Sure, we did that before by writing some short CLI scripts, but now GPT does it for the most part. So it’s not like we don’t use it, but for actual well designed code that handles business logic in a way that needs efficiency? Yeah, it’s outright terrible. Maybe that will change over time, so far it’s not improved in the slightest.
But look at our industry as a whole. I know a lot of HN users work in American Tech or startups, but in the European non-tech Enterprise industry and the massive IT and consultant industry supporting it, there are a lot of developers who basically do what GPT excels at, and as these tools get better, we’re frankly just going to need a lot fewer software developers in general. Most of those people will likely transition into other jobs, like using GPT, but many won’t as other professions start doing their coding themselves.
What worries me the most, however, is that we still teach a lot of CS students exactly what GPT is good at. I’m an external examiner for CS students at academy level, and GPT can basically ace all of their curriculum because it’s mainly focused on producing a lot of “easy” code for businesses. I’m scared a lot of those students are going to have a rough time once LLMs really pick up, unless the curriculum changes. It won’t change in time though, because it’s already been sort of “outdated” for a decade because of how slowly our higher educations adapt to the real world here in Denmark.
What you're describing is really about empowering users and pushing code to where it was not possible before due to cost - so basically confirming the efficiency paradox.
> What I learned was that programming is not really about knowledge or skill but simply about patience, or maybe obsession. Programmers are people who can endure an endless parade of tedious obstacles.
This captures the reason I'm optimistic about AI-assisted programming.
The learning curve for getting started programming is horribly steep - and it's not because it's hard, it's because it's frustrating. You have to sweat through six months of weird error messages and missing semicolons before you get to the point where it feels like you're actually building things and making progress.
Most people give up. They assume they're "not smart enough" to learn to program, when really they aren't patient enough to make it through all of that muck.
I think LLMs dramatically impact that initial learning curve. I love the idea that many more people will be able to learn basic programming - I think every human being deserves to be able to use computers to automate tedious repetitive tasks in their lives.
>You have to sweat through six months of weird error messages and missing semicolons before you get to the point where it feels like you're actually building things and making progress.
Computers are rude and honest and humans prefer a pretty lie to an ugly truth. Programmers must appreciate the ugly truth in their day-to-day lives more than any other profession. (Physical engineering and construction workers and repairers also need this virtue, but less often since their feedback cycles are slower.)
More than doctors? Feedback cycles in the OR and ICU get pretty fucking short.
I do think the overall premise is silly; programming isn’t that special in this regard, in my opinion. Most professions are like this, they just might not be the most visible ones like politics, journalism, or show biz.
Software developers have more to learn from other professions than they often think (the old engineering professions understand this a bit better).
Honestly I think it's a step back. This is like saying Google Translate has made everyone fluent in Spanish. At the end of the day you still need to do some vetting and understanding of how code works to effectively use ChatGPT; the actual writing of the code was never the hard part of software development. If this thing is only speeding up developers by six months' time, then that's kind of a waste of compute compared to just offering computer science classes in schools, imo, plus you'd get a much stronger generation of engineers from the latter.
> At the end of the day you still need to do some vetting and understanding of how code works to effectively use ChatGPT; the actual writing of the code was never the hard part of software development
I completely agree with you on that. Most of being a good software engineer is skills that ChatGPT won't help you with.
But you can't even start to learn those skills if you quit in the first six months because of the vertical learning curve tied to all of that syntax trivia.
Definitely. And think about all the other ways LLMs can improve quality of life!
In any complex software project with lots of users there is guaranteed to be an effectively endless backlog of bug tickets that are, in effect, abandoned. I think a few months ago some bug got fixed in Firefox that was ~25 years old. In most compilers and frameworks there's going to be a pile of tickets of the form, "improve error message when X happens" that just never floats to the top because programmer time is too expensive to justify working on it. Over time the difference between a senior engineer and a junior becomes not so much intelligence or even actual experience but just the accumulated scar tissue of all the crap we have to wade through to make anything work, thanks to the giant backlogs of bugs and stupid usability problems that never get solved before the product reaches EoL anyway.
For AI to fully replace programmers is going to require quite a few more major breakthroughs, but setting one loose on a bug tracker and asking it to churn out trivial fixes all day is well within reach. That will both make human programming more fun, and easier to learn.
> I think LLMs dramatically impact that initial learning curve. I love the idea that many more people will be able to learn basic programming - I think every human being deserves to be able to use computers to automate tedious repetitive tasks in their lives.
This honestly made me feel so happy. I'm reminded of the average person who is bound by the limitations of the apps they use, the popularity of "How to Automate Using Python" books...
With this new tech, people are no longer bound by that limitation. I think that's pretty cool.
I just asked ChatGPT a thing, and then on a hunch I asked if there wasn't a built-in function that does the same, and it indeed remembered there is such a function.
What if that second question had been automatically and transparently asked? What if there is a finite list of such "quiet reflecting" questions that dramatically increases the value and utility of the LLM output?
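A sketch of that idea, with the reflection questions, the model name, and the client setup all made up for illustration: ask a fixed list of follow-up questions automatically, then ask for one consolidated answer.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    REFLECTION_QUESTIONS = [
        "Is there a built-in or standard-library function that already does this?",
        "Are there edge cases where the answer above is wrong?",
    ]

    def ask(messages):
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        return resp.choices[0].message.content

    def answer_with_reflection(question: str) -> str:
        history = [{"role": "user", "content": question}]
        history.append({"role": "assistant", "content": ask(history)})
        # Quietly ask each reflection question and keep the answers in context.
        for q in REFLECTION_QUESTIONS:
            history.append({"role": "user", "content": q})
            history.append({"role": "assistant", "content": ask(history)})
        history.append({"role": "user",
                        "content": "Given everything above, give one final, corrected answer."})
        return ask(history)

Whether a finite list of such questions buys a dramatic improvement is exactly the empirical question; the loop itself is trivial to build.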
I have been having the following debate with my friend who does AI and neural network stuff:
Him: Coding will soon be obsolete, it will all be replaced by chatgpt-type code gen.
Me: OK but the overwhelming majority of my job as a "senior engineer" is about communication, organizational leadership, and actually understanding all the product requirements and how they will interface with our systems. Yes, I write code, but even if most of that were augmented with codegen, that would barely even change most of what I do.
But now we introduce a junior engineer into the mix: _their_ job is none of those things, it's just to take the issues as filed, and implement them. They don't get the hard problems to solve, they get both the task and the acceptance criteria, and for them a future version of CodeGPT or whatever it'll be called will completely replace their programming skills. And then, 10 years later, they'll be the senior engineer. And then what?
Because today's seniors will be retired in a decade or two, and as they get replaced by people who actually benefited from automatic code generation, the concept of "coding" will (if this trend keeps up) absolutely become a thing that old timers used to do before we had machines to do it for us.
These junior engineers still will need to validate that whatever the LLM implemented works and fits the requirements. If it doesn't work they need to figure out why it doesn't work.
It might not be in the same way of current day developers, but I don't foresee a near future where developers don't learn to understand code to some degree.
For example, I know a lot of people who work in the low-code development sphere of things. A lot of the developers there barely see any code if any. Yet, when you talk with them they talk about a lot of the same issues and problem-solving but in slightly different terms and from a slightly different perspective. But, the similarities are very much there as the problems are fundamentally the same.
With generated code I feel like this will also be similarly true.
> communication, organizational leadership, and actually understanding all the product requirements
These problems sound like a result of working with people. Smaller but more capable teams because of AI will need fewer leaders and fewer meetings. Everything will become much more efficient. Say goodbye to all the time spent mentoring junior engineers, soon you won't have any
> Say goodbye to all the time spent mentoring junior engineers, soon you won't have any
Yeah... no. Not with LLMs as they currently are. They are great as an assisted tool, but still need people to validate their output and then put that output to work. Which means you need people who can understand that output, which are developers. Which also means that you need to keep training developers in order to be able to validate that output.
The more nuanced approach would be saying that the work of developers will change. Which I agree with, but is also has been true over the past few decades. Developers these days are already working with a hugely different tool chain compared to developers a decade ago. It is an always evolving landscape and I don't think we are at a point yet where developers will be outright replaced.
We might get there at some point, but not with current LLMs.
Yeah I agree with this - it's much of my experience as a professional developer, too. I'm trying to navigate the organization, connect with other teams, understand what needs to get done.
The code I write feels like a side-effect of what I actually do.
I think that if that were the case, the change would be brutal. First, because as a comment below suggests, fewer people would be involved, so coordination would be simplified. Second, because many more people could access these coordination positions, and I think it would be likely that other professions would take on those roles, professions or personality types that are not usually "good coders" but now wouldn't need to be, since the machine itself could explain, for example, the general functioning of what is being produced. Therefore, I would expect the job field to be radically affected and salaries severely reduced.
Just wait until product-gen AI emerges. No, seriously. Do folks here not see it's possible even today with a complex system based on LLMs? It's a matter of time.
No. I think those of us that work on enterprise software within massive orgs know the level of AI needed to do any portion of our job is leaps and bounds ahead of what is currently available. I can see some distant future where maybe this is possible, but I doubt we'll be using AI based on transformers by that point...
I read you. Connecting AI directly to a bank account and removing a human from the loop is a logical next step. It's a classical Paperclip Factory scenario though, i.e. playing with fire[0], yes, but it's nothing novel.
The CEO is your scapegoat for a bad quarter. Throw all your eggs in the AI basket and get a bad quarter; what's left to try? Companies don't like to admit they failed and walk things back to how they were. There are probably only a few off-the-shelf GPTs you can throw in to replace the one you sacked, compared with 8 billion potential CEOs on earth you can go through to make the shareholders happy with a blood sacrifice.
It feels like this article was not written by a programmer, and it feels like a number of the commenters are not professional engineers. What part of a programmer's job can AI realistically replace in the near term?
For the sake of argument, let’s say it could replace the coding part cost effectively. Can it still do all the other parts? Take ambiguous requirements and seek clarity from design, product, etc. (instructing an AI to a sufficient degree to build a complex feature could almost be a coding task itself). Code reviews. Deal with random build failures. Properly document the functionality so that other programmers and stakeholders can understand. Debug and fix production issues. And that’s just a subset.
Realistically, in the future there will be a long phase of programmers leveraging AI to be more efficient before there’s any chance that AI can effectively replace a decent programmer.
This will be an advantage to engineers who think on the more abstract side of the spectrum. The “lower level” programming tasks will be consumed first.
I suspect automated code reviews and doing high-quality automatic documentation (i.e. better than current standards in most projects) will be fully within the capabilities of LLMs soon. Fixing random build failures will probably follow...
So then the question is what % of a programmers job might be taken by this, and does the remaining % require a different skillset.
There are programmers that are great at coding, but complain loudly when the business gives slightly ambiguous requirements because they see their job as coding, not clarifying business rules. This group are more likely to be impacted than the programmers who will happily work in ambiguous situations to understand the business requirements.
Both code review and documentation require architectural knowledge to execute properly for a large app. This is not within the reach of current AI, and won’t be for a long time.
Like many other communities, programming has historically had its share of gatekeeping, and it's often easy to forget that "programming" spans a wide range of abilities and skill levels.
So, while GP might be technically correct in some narrow sense, I would be less quick to judge the OP article author. Some years hence, anyone who is not actively building (as opposed to using) one of these LLMs might be dismissed as "not a real programmer" (because by then, that will be the only form of programming in existence).
The thing about GPT is that it has approximate knowledge of nearly everything. It makes errors, but the errors are seemingly uncorrelated with human errors. And it knows stuff that I could take hours to search for, especially as Google becomes more and more useless.
Personally, I use it for scripting and as an executive function aid.
I haven’t seen an AI tool that can take a Figma mock up and some fuzzy requirements and turn that into real code in an existing codebase unaided by humans. Given what I’ve seen of current AI tools that still seems a long way away.
I think that sentence nails it. For the people who consider "searching Stack Overflow and copy/pasting" to be programming, LLMs will replace your job, sure. But software development is so much more: critical thinking, analysing, gathering requirements, testing ideas and figuring out which to reject, and more.
I wouldn't be so sure it will be very long before solving big, hard, and complex problems is within reach...
The nice thing about Stack Overflow is that it's self-correcting most of the time, thanks to
https://xkcd.com/386/
GPT not so much.
When I'm stumped, it's usually on a complex and very multi-faceted problem where the full scope doesn't fit into the human brain very well. And for these problems, GPT will produce some borderline unworkable solutions. It's like a jack of all trades and master of none in code. Its knowledge seems a mile wide and an inch deep.
Granted, it could be different for junior to mid programmers.
So my usage has mostly been for it to play a more advanced rubber duck to bounce ideas and concepts off of and to do some of the more tedious scripting work (that I still have to double check thoroughly).
At some point GPT and other LLMs might be able to replace what I do in large parts. But that's still a while off.
I think much of using it well is understanding what it can and can’t do (though of course this is a moving target).
It’s great when the limiting factor is knowledge of APIs, best practices, or common algorithms. When the limiting factor is architectural complexity or understanding how many different components of a system fit together, it’s less useful.
Still, I find I can often save time on more difficult tasks by figuring out the structure and then having GPT-4 fill in the blanks. It’s a much better programmer once you get it started down the right path.
Here's an example I used the other day: Our project had lost access to our YT channel, which had 350+ videos on it (due to someone's untimely passing and a lack of redundancy). I had used yt-dlp to download all the old videos, including descriptions. Our community manager had uploaded all the videos, but wasn't looking forward to copy-and-pasting every description into the new video.
So I offered to use GPT-4 to write a python script to use the API to do that for her. I didn't know anything about the YT API, nor am I an expert in python. I wouldn't have invested the time learning the YT API (and trying to work through my rudimentary python knowledge) for a one-off thing like this, but I knew that GPT-4 would be able to help me focus on what to do rather than how to do it. The transcript is here:
https://chat.openai.com/share/936e35f9-e500-4a4d-aa76-273f63...
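For a sense of the shape of the end result, here is a rough reconstruction (not the code from the transcript) that assumes the google-api-python-client library, an already-authorized creds object with the YouTube scope, and descriptions saved locally by yt-dlp:

    from googleapiclient.discovery import build  # pip install google-api-python-client

    def copy_description(youtube, video_id, description_path):
        """Copy a locally saved yt-dlp description onto an already-uploaded video."""
        # videos().update replaces the whole snippet, so fetch the current one first.
        item = youtube.videos().list(part="snippet", id=video_id).execute()["items"][0]
        snippet = item["snippet"]
        with open(description_path, encoding="utf-8") as f:
            snippet["description"] = f.read()
        youtube.videos().update(
            part="snippet", body={"id": video_id, "snippet": snippet}
        ).execute()

    # Usage, assuming OAuth credentials with the youtube scope:
    # youtube = build("youtube", "v3", credentials=creds)
    # copy_description(youtube, "VIDEO_ID", "old_videos/Some Title.description")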
By contrast, I don't think there's any possible way the current generation could have identified, or helped fix, this problem that I fixed a few years ago:
https://xenbits.xenproject.org/xsa/xsa299/0011-x86-mm-Don-t-...
(Although it would be interesting to try to ask it about that to see how well it does.)
The point of using GPT-4 should be to take over the "low value" work from you, so that you have more time and mental space to focus on the "high value" work.
The one time I've found ChatGPT to be genuinely useful is when I asked it to explain a bash script to me, seeing as bash is notoriously inscrutable. Still, it did get a detail wrong somehow.
I mean, raise your hand if debugging code that looks obviously correct is the part of programming you enjoy most.
I'm optimistic that we can find a better way to use large language models for programming: run one in a loop trying to pass a test suite, say, or have it deliver code together with a proof-assistant-verified correctness proof.
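Concretely, I'm imagining something like this minimal sketch, where ask_llm stands in for whatever model API you'd actually call and the test suite is just pytest run in a subprocess:

    import subprocess

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError  # placeholder for a real model call

    def generate_until_tests_pass(task: str, max_attempts: int = 5):
        feedback = ""
        for _ in range(max_attempts):
            code = ask_llm(f"Write solution.py for this task:\n{task}\n{feedback}")
            with open("solution.py", "w") as f:
                f.write(code)
            result = subprocess.run(["pytest", "tests/", "-q"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # all tests pass, keep this version
            # Feed the failures back into the next attempt.
            feedback = f"The tests failed with:\n{result.stdout}\nPlease fix the code."
        return None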
In a larger sense, though, I think I have looked for projects that allowed a certain artistic license, rather than the more academic kind of code whose worth is measured in cycles, latency, or some other quantifiable metric.
I have thought though for some time that the kind of coding that I enjoyed early in my career has been waning long before ChatGPT. I confess I began my career in a (privileged it seems now) era when the engineers were the ones minding the store, not marketing.
Lately, I have been using ChatGPT and the OpenAI API to do exactly that for a few projects. I used it to help me round out the design, brainstorm about approaches, tune database requirements, etc. I basically got to the point where I had a proof of concept for all the separate components in a very short amount of time. Then for the implementation it was a similar story. I already had a much more solid idea (technical and functional design, if you will) of how I wanted to implement things than I normally do. And, for most of the things where I would get slowed down normally, I could just turn to the chat. Then by just telling it what part I had trouble with, it would get me back on track in no time.
Having said all that, I couldn't have used it in such a way without any knowledge of programming. Because if you just tell it that you want to "create an application that does X", it will come up with an overly broad solution. All the questions and problems I presented to it came from a position where I already knew the language and platform and had a general sense of the requirements.
Why make something that produces low level code based off of existing low level code instead of building up meaningful abstractions to make development easier and ensure that low level code was written right?
Basically react and other similar abstractions for other languages did more to take "coding" out of creating applications than gpt ever will IMO.
The core of what we do never changes - get input from user, show error, get input again, save the input, show the input.
Now it just got more complicated, even though 20 years later most of this could be a dull Rails or a Django app.
And AI will probably do the decent CRUD part, but you will still need an expert for the hard parts of software.
But my favourite bit is refining and optimising the code!
Finding the patterns and abstractions I can make to DRY it out.
That's the bit I like :-)
Wrestling APIs and trying to understand inadequate documentation is the worst part!
There were probably a lot of loom weavers that felt the same about their tools. But the times, they are a-changing.
You're not the minority. You're the majority. The majority can't look reality in the face and see the end. They lie to themselves.
>While GPT4 is incredible, it fails OFTEN. And it fails in ways that aren’t very clear. And it fails harder when there’s clearly not enough training resources on the subject matter.
Everyone, and I mean everyone, knows that it fails often. Use some common sense here. Why was the article written despite the fact that everyone knows what you know? Because of the trendline. What AI was yesterday versus what it is today heralds what it will be tomorrow, and every tomorrow AI will be failing less and less and less until it doesn't fail at all.
>But even hypothetically if it was 20x better, wouldn’t that be a good thing? There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.
Ever the optimist. The reality is we don't know if it's good or bad. It can be both or it can weigh heavily in one direction. Most likely it will be both given the fact that our entire careers can nearly be replaced.
>Idk where I’m going with this but if coding is something you genuinely enjoy, AI isn’t stopping anyone from doing their hobby. I don’t really see it going away any time soon, and even if it is going away it just never really seemed like the point of software engineering
Sure. AI isn't going to end hobbies. It's going to end careers and ways of life. Hobbies will most likely survive.
This sentiment parrots Sam Altman's and Musk's insistence that "AI" is super-powerful and dangerous, which is baseless rhetoric.
When I’m writing business logic unique to this specific domain then please stop mumbling bs at me.
I cannot really envision a more empowering thing for the common person. It should really upset the balance of power.
I think we'll see, soon, that we've only just started building with code. As a lifelong coder, I cannot wait to see the day when anyone can program anything.
Automatic program generation from human language really feels like the same problem as machine translation between human languages. I have an elementary understanding of French, so when I see a passage machine translated into French (regardless of the software, Google Translate or DeepL) I cannot find any mistakes; I may even learn a few new words. But to the professional translator, the passage is full of mistakes, non-idiomatic expressions and other weirdness. You aren't going to see publishers publishing entirely machine translated books.
I suspect the same thing happens for LLM-written programs. The average person finds them useful; the expert finds them riddled with bugs. When the stakes are low, like tourists not speaking the native language, machine translation is fine. So will many run-once programs destined for a specific purpose. When the stakes are high, human craft is still needed.
I can ask exactly what I want in English, not by entering a search-term. A search-term is not a question, but a COMMAND: "Find me web-pages containing this search-term".
By asking exactly the question I want answered, I get real answers, and if I don't understand the answer, I can ask a follow-up question. Life is great and there's still an infinite amount of code to be written.
We tested CoPilot for a bit but for whatever reason, it sometimes produced nice boilerplate but mostly just made one-line suggestions that were slower than just typing if I knew what I was doing. It was also strangely opinionated about what comments should say. In the end it felt like it added to my mental load by parsing and deciding to take or ignore suggestions so I turned it off. Typing is (and has been for a while) not the hard part of my job anyway.
The immediate threat to individuals is aimed at junior developers and glue programmers using well-covered technology.
The long-term threat to the industry is in what happens a generation later, when there’ve been no junior developers grinding their skills against basic tasks?
In the scope of a career, current senior tech people have the least need to worry. Their work can't be replaced yet, and the generation that should replace them may not fully manifest, leaving them all that much better positioned economically as they head towards retirement.
But that is just one part of being a good software engineer. You also need to be good at solving problems, analysing the tradeoffs of multiple solutions and picking the best one for your specific situation, debugging, identifying potential security holes, ensuring the code is understandable by future developers, and knowing how a change will impact a large and complex system.
Maybe some future AI will be able to do all of that well. I can't see the future. But I'm very doubtful it will just be a better LLM.
I think the threat from LLMs isn't that they can replace developers. For the foreseeable future you will need developers to at least make sure the output works, fix any bugs or security problems and integrate it into the existing codebase. The risk is that they could be a tool that makes developers more productive, and therefore fewer of them are needed.
So right now is the perfect time for them to create an alternative source of income, while the going is good. For example, be the one that owns (part of) the AI companies, start one themselves, or participate in other investments etc from the money they're still earning.
I'm sure there was a phase where some old-school coders who were used to writing applications from scratch complained about all the damn libraries ruining coding -- why, all programmers do now is glue together code that someone else wrote! True or not, there are still programmers.
But the worst problem I ever had was a vice president (acquired when our company was acquired) who insisted that all programming was, should, and must by-edict be only about gluing together existing libraries.
Talk about incompetent -- and about misguided beliefs in his own "superior intelligence".
I had to protect my team of 20+ from him and his stupid edicts and complaints, while still having us meet tight deadlines of various sorts (via programming, not so much by gluing).
Part of our team did graphical design for the web. Doing that by only gluing together existing images makes as little sense as it does for programming.
But… we'd need far, far fewer programmers. And programming was the last thing humans were supposed to be able to do to earn a living.
And once 100% of the problems that can be solved with software are already solved with software... that's pretty much post-scarcity, isn't it?
I don't, however, think that we're anywhere near being replaced by the AI overlords.
Maybe it'll take the coding part of my job and hobbies away from me one day, but even then, I feel that is more of an opportunity than a threat - there are many hobby projects I'd like to work on that are too big to do from scratch, and LLMs are already helping make them more tractable as solo projects, and I get to pick and choose which bits to write myself.
And my "grab bag" repo of utility code that doesn't fit elsewhere has had its first fully GPT4 written function. Nothing I couldn't have easily done myself, but something I was happy I didn't have to.
For people who are content doing low level, low skilled coding, though, it will be a threat unless they learn how to use it to take a step up.
But I worry, because it is owned and controlled by a limited few who would likely be the sole beneficiaries of its value.
Open source may trail openai if they come out with a 20x improvement, but I'm not sure the dystopian future playing out is as likely as I would have thought it 1-2 years ago.
Ultimately, the most valuable coders who will remain will be a smaller number of senior devs that will dwindle over time.
Unfortunately, AI is likely to reduce and suppress tech industry wages in the long term. If the workers had a clue, rather than watching their incomes gradually evaporate and sitting on their hands, they should organize and collectively bargain even more so than the Hollywood actors.
I've come to state something like this as "programming is writing poetry for many of your interesting friends somewhere on the autistic spectrum". Some of those friends are machines, but most of those friends are your fellow developers.
The best code is poetry: our programming languages give a meter and rhyme and other schemes to follow, but what we do within those is creative expression. Machines only care about the most literal interpretations of these poems, but the more fantastic and creative interpretations are the bread and butter of software design. This is where our abstractions grow, from abstract interpretations. This is the soil in which a program builds meaning and comprehension for a team, becomes less the raw "if-this-then-that" but grows into an embodiment of a business' rules and shares the knowledge culture of the whys and hows of what the program is meant to do.
From what I've seen, just as the literal interpretations are the ones most of interest to machines, these machines we are building are best at providing literally interpretable code. There's obviously a use for that. It can be a useful tool. But we aren't writing our code just for the solely literal-minded among us, and there's so much creative space in software development that describes/needs/expands into abstraction and creative interpretation that, for now (and maybe for the conceivable future), it still makes so many of the differences between just software and good software (from the perspective of long-term team maintainability, if nothing deeper).
Er... it didn't get out, right? Right!?
The massive shared model could do better if it was fed on your company's private source-code... but that's something that probably isn't/shouldn't-be happening.
Only if you like technofeudalism—it’s not like you’re going to own any piece of that future.
Have you noticed AI becoming more and more open source like it still was at the start of the year, or has that kinda seized up? What gives?
It’s called a moat, it’s being dug, you’re on the wrong side of it.
If I could automate my own work, I would gladly switch to just being the PM for my LLM.
To be fair, there is an abstract worry that being smart will no longer be valuable in society if AI replaces all brain work. But I think we are far from that. And a world where that happens is so DIFFERENT from ours, I think I'd be willing to pay the price.
Losing that would be a real shame.
Despite that here on HN you have people cheering them on, excited for it. Tech is one of the last good paying fields and these people don't realize it's not a matter of changing career, because there won't be anything better to retrain in.
They are cheering on their own doom.
Now, we can just nonstop build and try everything. Yay.
Some work coding can be like that; but some is just wading through a mass of stuff to fix or improve something uninteresting.
Perhaps "prompt engineering" will be the higher-level language that sticks, or perhaps it will fail to find purchase in industry for the same reasons.
The same C++ or Java or Haskell code, run with the same inputs twice, will produce the same result[0]. This repeatability is the magic that enables us to build the towering abstractions that are modern software.
And to a certain mind (eg, mine), that's one of the deepest joys of programming. The fact that you can construct an unimaginably complex system by building up layer by layer these deterministic blocks. Being able to truly understand a system up to abstraction boundaries far sharper than anything in the world of atoms.
LLMs based "programming" threatens to remove this determinism and, sadly for people like me, devalue the skill of being able to understand and construct such systems.
[0]Yes, there are exceptions (issues around concurrency, latency, memory usage), but as a profession we struggle mightily to tame these exceptions back to being deterministic because there's so much value in it.
I will admit, when Copilot first became a thing in 2021, I had my own “I’m about to become obsolete” moment.
However, it’s become clear to me, both through my own experience and through research that has been conducted, that modern LLMs are fundamentally flawed and are not on the path to general intelligence.
We are stuck with ancient (in AI terms) technology. GPT 4 is better than 3.5, but not in a fundamental way. I expect much the same from 5. This technology is incredibly flawed, and in hindsight, once we have actual powerful AI, I think we’ll laugh at how much attention we gave it.
Not at all.
I was very impressed at first but it's gotten to the point where I can no longer trust anything it says other than very high level overviews. For example, I asked it to help me implement my own sound synthesizer from scratch. I wanted to generate audio samples and save them to wave files. The high level overview was helpful and enabled me to understand the concepts involved.
The code on the other hand was subtly wrong in ways I simply couldn't be sure of. Details like calculating the lengths of structures and whether something did or did not count towards the length were notoriously difficult for it to get right. Worse, as a beginner just encountering the subject matter I could not be sure if it was correct or not, I just thought it didn't look right. I'd ask for confirmation and it would just apologize and change the response to what I expected to hear. I couldn't trust it.
It's pretty great at reducing the loneliness of solo programming though. Just bouncing ideas and seeing what it says helps a lot. It's not like other people would want to listen.
It's really great for this.
I've found it useful for taking some pattern I've been cranking on with an extensive API and finishing the grunt work for me... it generally does a very good job if you teach it properly. I recently had to do a full integration of the AWS Amplify Auth library, and instead of grinding for half a day to perfect every method, it just spit out the entire set of actions and reducers for me with well-considered state objects. Again, it needs guidance from someone with a clue, so I don't fear it taking my job anytime soon.
Jaron Lanier has some ideas about the space in between the Turing test and Blade Runner.
The first filmgoers, watching simple black-and-white movies, thought they were uncanny. A train coming towards the screen would make audiences jump and duck. When people first heard gramophones, they reported that they were indistinguishable from a live orchestra.
As we learn a technology, we learn to recognize it and get a feel for its limitations and strengths. The ability to detect that technology is a skill, and the technology becomes less impressive over time.
It's hard not to be impressed when a thing does a thing that you did not think it could do.
We then move on to being unimpressed when the thing cannot do the thing we thought it would be able to do.
I am seeing people seriously using the "Please write an expression for me which adds 2 and 2" prompt in order to get the "2+2" expression they need – advocating that they got it with magical efficiency. In all honesty, I don't like writing too much, and writing code for me is always shorter and faster than trying to describe it in general-purpose language, that is why we need code in the first place.
I was also fooled and gave it too much credit, if you engage in a philosophical discussion with it it seems purpose-built for passing the turing test.
If LLMs are good at one thing, it's tricking people. I can't think of a more dangerous or valueless creation.
For a real example: once you start analyzing an AI image with a critical mind, you see that most of the image violates basic rules of composition, perspective and anatomy. The art is frankly quite trash, and once you see it it is hard to unsee.
I can do that with scaffolding, or by copy-pasting a template and changing it.
I haven't tried it, and I haven't seen anyone actually give GPT existing code and ask it to fix or change it. So that is something I'd try.
Also, the idea that we'll need fewer engineers is bogus. Technology doesn't reduce the amount of work we do, it just increases productivity and puts more strain on individuals to perform. With AI spitting out unmaintainable code nobody understands, I can only see more work for more engineers as the amount of code grows.
In aggregate, they are just the phenomena of an extremely high-risk, high-reward investment environment.
Most tech companies do not need cash to scale. There are few factories to be built. What they need is risk capital. The big successes (Alphabet, Facebook, Amazon...) are wins so big that they really do "justify" the bubbles.
Amazon alone arguably justifies the '90s dotcom bubble. The tens of billions invested into venture, IPOs... A balanced portfolio accrued over the period was probably profitable in the long term... especially if the investor kept buying through and after the crash.
IDK that anyone actually invests in risky startups that way, but just as a thought experiment...
What are you talking about? ChatGPT came out only a year ago, GPT-4 less than a year ago. That's the opposite of ancient technology, it's extremely recent.
It answers questions confidently but with subtle inaccuracies. The code that it produces is the same kind of nonsense that you get from recent bootcamp devs who've "mastered" the 50 technologies on their eight-page résumé.
If it’s gotten better, I haven’t noticed.
Self-driving trucks were going to upend the trucking industry in ten years, ten years ago. The press around LLMs is identical. It’s neat but how long are these things going to do the equivalent of revving to 100 mph before slamming into a wall every time you ask them to turn left?
I'd rather use AI to connect constellations of dots that no human possibly could, have an expert verify the results, and go from there. I have no idea when we're going to be able to "gpt install <prompt>" to get a new CLI tool or app, but it's not going to be soon.
It was one of the incidents that made me stop doing front-end development.
As an exercise, I recently asked ChatGPT to produce similar CSS and it did so flawlessly.
I’m certainly a middling programmer when it comes to CSS. But with ChatGPT I can produce stuff close to the quality of what the CSS masters do. The article points this out: middling generalists can now compete with specialists.
I use ChatGPT every day for many tasks in my work and find it very helpful, but I simply do not believe this.
> The article points this out: middling generalists can now compete with specialists.
I'd say it might allow novices to compete with middling generalists, but even that is a stretch. On the contrary, ChatGPT is actually best suited to use by a specialist who has enough contextual knowledge to construct targeted prompts & can then verify & edit the responses into something optimal.
I can't get ChatGPT to outperform a novice. And now I'm having candidates argue that they don't need to learn the fundamentals because LLMs can do it for them... Good luck, HTML/CSS expert who couldn't produce a valid HTML5 skeleton. Reminds me of the pre-LLM guy who said he was having trouble because he usually uses React... so I told him he could use React. I don't mean to rag on novices, but these guys really seemed to think the question was beneath them.
If you want to get back into front-end read “CSS: The Definitive Guide”. Great book, gives you a complete understanding of CSS by the end.
A big difference is that the expert asks different questions, off in the tails of the distribution, and that's where these LLMs are no good. If you want a canonical example of something, the median pattern, it's great. As the ask heads out of the input data distribution the generalization ability is weak. Generative AI is good at interpolation and translation, it is not good with novelty.
(What counts as an expert or a know-nothing is context-dependent here.)
One example: I use ChatGPT frequently to create Ruby scripts for this and that in personal projects. Frequently they need to call out to other tools. ChatGPT 4 consistently fails to properly (and safely!) quote arguments. It loves the single-argument version of system, which goes through the shell. When you ask it to consider quoting arguments, it starts inserting escaped quotes, which is still unsafe (what if the interpolated value itself contains a quote?). If you keep pushing, it might pull out Shell.escape or whatever it is.
I assume it reproduces the basic bugs that the median example code on the internet does. And 99% of everything being crap, that stuff is pretty low quality, only to be used as an inspiration or a clue as to how to approach something.
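For what it's worth, the same pitfall exists outside Ruby. In Python terms the difference is roughly this (a generic illustration, not ChatGPT output):

    import subprocess

    filename = "notes.txt; echo INJECTED"  # a value containing shell metacharacters

    # What a lot of example code (and, in my experience, the model) reaches for:
    # one interpolated string through the shell, so the metacharacters get interpreted.
    subprocess.run(f"wc -l {filename}", shell=True)   # also runs `echo INJECTED`

    # Safer: pass an argument list and skip the shell; no quoting needed at all.
    subprocess.run(["wc", "-l", filename])            # one literal filename argument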
They can maybe compete in areas where there has been a lot of public discussion about a topic, but even that is debatable as there are other tasks than simply producing code (e.g. debugging existing stuff). In areas where there's close to no public discourse, ChatGPT and other coding assistance tools fail miserably.
They can't, and aren't even trying to. It's OpenAI that's competing with the specialists. If the specialists go out of business, the middling generalists obviously aren't going to survive either so in the long term it is not in the interest of the "middling generalists" to use ChatGPT for code generation. What is in their interest is to become expert specialists and write better code both than ChatGPT currently can, and than "middling generalists". That's how you compete with specialists, by becoming a specialist yourself.
Speaking as a specialist occupying a very, very er special niche, at that.
I want to say that this has been the state of a lot of software development for a while now, but then, the problems that need to be solved don't require specialism, they require people to add a field to a database or to write a new SQL query to hook up to a REST API. It's not specialist work anymore, but it requires attention and meticulousness.
I've seen the similar claims made on Twitter by people with zero programming ability claiming they've used ChatGPT to build an app. Although 99% of the time what they've actually created is some basic boilerplate react app.
> middling generalists can now compete with specialists.
Middling generalists can now compete with individuals with a basic understanding assuming they don't need to verify anything that they've produced.
UX and UI are not some secondary concerns that engineers should dismiss as an annoying "state of our team" nuance. If you can't produce a high quality outcome you either don't have the skills or don't have the right mindset for the job.
This scenario reminds me of:
If a job's worth doing, do it yourself. If it's not worth doing, give it to Rimmer.
Except now it's "give it to ChatGPT"
I think at least in the short term, this is where AI's power will lie. Augmentation, not replacement.
I did try asking ChatGPT about system-related stuff several times and had given up since then. The answers are worthless if not wrong, unless the questions are trivial.
ChatGPT works if it needs to answer a question that was already answered before. If you are facing a genuinely new problem, then it's just a waste of time.
But the good news is that "simple generic CSS" is the kind of thing that most good programmers consider to be essentially busywork, and they won't miss doing it.
Great point. That's been my experience as well. I'm a generalist and ChatGPT can bring me up to speed on the idiomatic way to use almost any framework - provided it's been talked about online.
I use it to spit out simple scripts and code all day, but at this point it's not creating entire back-end services without weird mistakes or lots of hand holding.
That said, the state of the art is absolutely amazing when you consider that a year ago the best AIs on the market were Google or Siri telling me "I'm sorry I don't have any information about that" on 50% of my voice queries.
That being said, humans watch too much tv/movies. ;)
This is why you're going to get a ton of gatekeepers asking you to leetcode a bunch of obscure stuff with zero value to business, all to prove you're a "real coder". Like the OP.
Then use LaTeX and PDF. CSS is not for designing pixel-perfect documents.
Your experience is very different from mine anyway. I am a grumpy old backend dev who uses formal verification in anger when I consider it needed and who gets annoyed when things don't act logically. We are working with computers, so everything is logical, but no; I mean things like a lot of frontend stuff. I ask our frontend guy, "how do I center this text?", and he says "text align". Obviously I tried that, because that would be logical, but it doesn't work, because frontend is, for me, absolutely illogical. Even frontend people actually have to try and fail; they cannot answer simple questions without trying, like I can in backend systems.
Now, in this new world, I don't have to bother with it anymore. If copilot doesn't just squirt out the answer, then chatgpt4 (and now my personal custom gpt 'front-end hacker' who knows our codebase) will fix it for me. And it works, every day, all day.
https://chat.openai.com/share/4e958c34-dcf8-41cb-ac47-f0f6de...
finalAlice's Children have no parent. When you point this out, it correctly advises regarding the immutable nature of these types in F#, then proceeds to produce a new solution that again has a subtle flaw: Alice -> Bob has the correct parent... but Alice -> Bob -> Alice -> Bob is missing a parent again.
Easy to miss this if you don't know what you're doing, and it's the kind of bug that will hit you one day and cause you to tear your hair out when half your program has a Bob-with-parent and the other half has an Orphan-Bob.
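To make the failure mode concrete without clicking through: here is the same trap, sketched with frozen dataclasses in Python rather than F# records (an analogy, not the code from the transcript):

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass(frozen=True)
    class Person:
        name: str
        parent: Optional["Person"] = None
        children: Tuple["Person", ...] = ()

    # Bob has to exist before Alice can list him as a child,
    # and once created he can never be given a parent.
    bob = Person("Bob")
    alice = Person("Alice", children=(bob,))

    assert alice.children[0].parent is None  # the orphaned Bob described above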
Phrase the question slightly differently, swapping "Age: int" with "Name: string":
https://chat.openai.com/share/df2ddc0f-2174-4e80-a944-045bc5...
Now it produces invalid code. Share the compiler error, and it produces code that doesn't compile but in a different way -- it has marked Parent mutable but then tried to mutate Children. Share the new error, and it concludes you can't have mutable properties in F#, when you actually can, it just tried marking the wrong field mutable. If you fix the error, you have correct code, but ChatGPT-4 has misinformed you AND started down a wrong path...
Don't get me wrong - I'm a huge fan of ChatGPT, but it's nowhere near where it needs to be yet.
If you need to tweak your prompt until you get the correct result, then we still need coders who can tell that the code is wrong.
Ask Product Managers to use ChatGPT instead of coders and they will ask for 7 red lines all perpendicular to each other with one being green.
https://www.youtube.com/watch?v=BKorP55Aqvg
Rust, mostly because it's relatively new and there isn't a native YAML parser in Rust (there is a translation of libfyaml). Also, you can't bullshit your way out of Rust by making a bunch of void* pointers.
Super neat trick the first time you encounter it, feels like alien tech from the future.
Then you find all the holes. Use it for months or years and you notice the holes aren't really closing. The pace of improvement is middling compared to the gap between what it does and what the marketing/rhetoric promises. Eventually using them feels more like a chore than not using them.
It's possible some of these purely data driven ML approaches don't work for problems you need to be more than 80% correct on.
Trading algos that just need to be right 55% of the time to make money, recommendation engines that present a page of movies/songs for you to scroll, Google search results that come back with a list you can peruse, Spam filters that remove some noise from your inbox.. sure.
But authoritative "this is the right answer" or "drive the car without murdering anyone".. these problems are far harder.
I used to think about these things differently: I felt that because our models of reality are just models, they aren't really something humanity should be proud of that much. Nature is more messy than the models, but we develop them due to our limitations.
AI is a model, too, but of far greater complexity, able to describe reality/nature more closely than what we were able to achieve previously. But now I've begun to value these simple models not because they describe nature that well but because they impose themselves on nature. For example, law, being such a model, is imposed on reality by the state institutions. It doesn't describe the complexity of reality very well, but it makes people take roles in its model and act in a certain way. People now consider whether something is legal or not (instead of moral vs immoral), which can be more productive. In software, if I implement the exchange of information based on an algorithm like Paxos/Raft, I get provable guarantees compared to if I allowed LLMs to exchange information over the network directly.
They still do an alright job, but you get that exact situation of "eh, it's just okay".
It's about the ability to use those responses when they are good, and knowing when to move on from using an LLM as a tool.
It did a decent job at trivial things like creating function parameters out of a variable, though.
However, it seems only a matter of time before even this challenge is overcome, and when that happens the question will remain whether it's a real capability or just a data leak.
It's like the analysis and research phase of problem solving is just being skipped over in favor of not having to understand the mechanics of the problem you're trying to solve. Just reeks of massive technical debt, untraceable bugs, and very low reliability rates.
I dunno, maybe LLMs will get good enough eventually, but at the moment it feels plausible to me that there's some kind of an upper limit caused by its very nature of working from a collection of previous code. I guess we'll see...
When you have something that kind of works, tell ChatGPT what the problems are and ask for refinement.
IMHO currently the weak point of LLMs is that they can't really tell what's adequate for human consumption. You have to act as a guide who knows what's good and what can be improved and how can be improved. ChatGPT will be able to handle the implementation.
In programming you don't have to worry too much about hallucinations because it won't work at all if it hallucinates.
It hallucinates and it doesn't compile, fine. It hallucinates and flips a 1 with a -1; oops that's a lot of lost revenue. But it compiled, right? It hallucinates, and in 4% of cases rejects a home loan when it shouldn't because of a convoluted set of nested conditions, only there is no one on staff that can explain the logic of why something is laid out the way it is and I mean, it works 96% of the time so don't rock the boat. Oops, we just oppressed a minority group or everyone named Dave because you were lazy.
> In programming you don't have to worry too much about hallucinations because it won't work at all if it hallucinates.
You still have to worry for your job if you're unable to write a working program.
I ended up implementing the thing myself.
And around the same time, 3D printing was going to upend manufacturing; bankrupting producers as people would just print what they needed (including the 3D printers themselves).
It confidently gave me a correct answer.
Except that it was "correct," if you used an extended property that wasn't in the standard API, and it did not specify how that property worked.
I assume that's because most folks that do this, create that property as an extension (which is what I did, once I figured it out), so ChatGPT thought it was a standard API call.
Since it could have easily determined whether or not it was standard, simply by scanning the official Apple docs, I'm not so sure that we should rely on it too much.
I'm fairly confident that could change.
There's at least two companies (Waymo and Cruise) running autonomous taxi services in US cities that you can ride today.
There have been lots of incorrect promises in the world of self-driving trucks/cars/buses but companies have gotten there (under specific constraints) and will generalize over time.
[1] https://www.saam.swiss/projects/smartshuttle/
I remember we had spam filters 20 years ago, and nobody called them "AI", just ML. Todays "AI" is ML, but on a larger scale. In a sense, a million monkeys typing on typewriters will eventually produce all the works of Shakespeare. Does this make them poets?
That's what I am running into on an everyday basis.
I don't want my program to be full of bugs.
HN’s takes are honestly way too boomer-tier about LLMs.
At the risk of going off on a tangent, we already have the technology to allow self-driving trucks for a few decades now.
The technology is so good that it can even be used to transport multiple containers in one go.
The trick is to use dedicated tracks to run these autonomous vehicles, and have a central authority monitoring and controlling traffic.
These autonomous vehicles typically go by the name railway.
The only reason the trucks aren't out there gathering their best data, that's real world data, is regulation.
Businesses will hire consultants at a later stage to do risk assessment and fix their code base.
As someone currently looking for work, I'm glad to hear that.
About 6 months ago, someone was invited to our office and the topic came up. Their interview tests were all easily solved by ChatGPT, so I've been a bit worried.
Assuming an LLM gets 99% of the lines correct, after 70 lines the chance of having at least one of them wrong is already around 50%. An LLM effective enough to replace a competent human might be so expensive to train and gather data for that it will never achieve a return on investment.
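The arithmetic behind that 50% figure, for anyone who wants to check it or vary the assumptions:

    p_line_correct = 0.99
    lines = 70
    p_at_least_one_wrong = 1 - p_line_correct ** lines
    print(round(p_at_least_one_wrong, 3))  # 0.505, roughly a coin flip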
Last time I used ChatGPT effectively was to find a library that served a specific purpose. All of the four options it gave me were wrong, but I found what I wanted among the search results when I looked for them.
Code Interpreter does this a bit in Chat-GPT Plus with some success.
I don't think it needs much more than a GPT-4 level LLM, and a change in IDEs and code structure, to get this working well enough. Places where it gets stuck, it'll flag to a human for help.
We'll see though! Lots of startups and big tech companies are working on this.
Yes, ChatGPT can't pass a particular test of X to Y. But does that matter when ChatGPT is both the designer and the developer? How can it be wrong, when its answer meets the requirements of the prompt? Maybe it can't get from X to Y, but if its Z is as good as Y (to the prompter) then X to Y isn't relevant.
Sure there will be times when X to Y is required but there are plenty of other times where - for the price - ChatGPT's output of Z will be considered good enough.
"We've done the prototype (or MVP) with ChatGPT...here you finish it."
You can put even more data into it and refine the models, but the growth in capability has diminishing returns. Perhaps this is how far this strategy can bring us, although I believe they can still be vastly improved and what they can already offer is nevertheless impressive.
I have no illusions about the craft of coding becoming obsolete, however. On the contrary, I think the tooling for the "citizen developer" is becoming worse, as is the capacity for abstraction in common users, since they are fenced into candyland.
There is no AI that comes close to being able to design a new system or build a UI to satisfy a set of customer requirements.
These things just aren’t that smart, which is not surprising. They are really cool and do have legitimate uses but they are not going to replace programmers without at least one order of magnitude improvement, maybe more.
AI isn't capable of generating the same recipe for cookies as my grandma, she took the recipe to her grave. I loved her cookies they were awesome...but lots of people thought they were shit but I insist that they are mistaken.
Unfortunately, I can't prove I'm right because I don't have the recipe.
Don't be my grandma.
If you’re just trying to one-shot it - that’s not really how you get the most from them.
Vaguely: Questions that most people think they know the correct answers to but, in my experience, don’t.
Small consolation if it can nonetheless get lots of other cases right.
>It answers questions confidently but with subtle inaccuracies.
Small consolation if coding is reduced to "spot and fix inaccuracies in ChatGPT output".
Basically the gruntest of grunt work it can do. If I explain things perfectly.
If I'm honest though I'm most likely to use it for boring rote work I can't really be bothered with myself - the other day I fed it the body of a Python method, and an example of another unit test from the application's test suite, then asked it to write me unit tests for the method. GPT got that right on the first attempt.
This is a valid challenge we are facing as well. However, remember that ChatGPT which many coders use, is likely training on interactions so you have some human reinforcement learning correcting its errors in real-time.
What is it?
Edit: I see, they don't want it to be scraped (cf. https://news.ycombinator.com/item?id=38260496), though as another poster pointed out, submitting it might be enough for it to end up in the training data.
I’m one of those noob programmers and it has helped me create products far beyond my technical capabilities
Right, which means its a force multiplier for specialists, rather than something that makes generalists suddenly specialists.
ChatGPT is great at some well defined, already solved problems. But once you get to the messy real world, the wheels come off.
ROT13: https://chat.openai.com/share/ae7c311d-ab23-4425-bdfa-c2314e...
HEX: https://chat.openai.com/share/4b0740b7-53c0-4776-bb00-ab65b4...
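For anyone who wants to sanity-check the model's answers on this kind of task, the decoding is a couple of lines of standard-library Python:

    import codecs

    print(codecs.encode("Uryyb, jbeyq!", "rot_13"))              # ROT13 is its own inverse
    print(bytes.fromhex("48656c6c6f2c20776f726c6421").decode())  # hex for "Hello, world!"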
What kind of encoded string did you use?
There is stuff that it can do that appears magically competent at but it's almost always cribbed from the internet, tweaked with trust cues removed and often with infuriating, subtle errors.
I interviewed somebody who used it (who considered that "cheating") and the same thing happened to him.
The no code industry is massive. Most people don’t need a dev to make their website already. They use templates and then tweak them through a ui. And now you have Zapier, Glide, Bubble etc.
LLMs won’t replace devs by coding entire full stack web apps. They’ll replace them because tools will appear on the market that handle the 99% cases so well that there is just less work to do now.
This has all happened before of course.
My view is that I am about to enter the quantum productivity period of coding.
I am incredibly excited about AI assistance on my coding tasks, because it improves not only what I’m writing, but also helps me to learn as I go. I have never had a better time writing software than I have in the last year.
I've been writing software for a few decades. But now I'm able to overcome places where I get stuck, with almost a coach available to help me understand the choices I'm making and make suggestions constantly. And not just by wandering over to a fellow coder's desk to ask them about a problem I am facing, but by getting productive solutions that are genuinely inspirational to the outcome.
It’s amazing.
So why do people think that coding is coming to some kind of end? I don’t see any evidence that artificial intelligence coding assistants are about to replace coders, unless you… suck badly at building things, so what are people getting on about?
I feel like somebody came along and said, “foundations are now free, but you still get to build a house. But the foundations are free.”
I still have to build a house, and I get to build an entire house and architect it and design it and create it and socialize it and support it and advocate for it and explain it to people who don’t understand it but… I don’t have to build a foundation anymore so it’s easier.
Shoot me down. I’m not relating here at all.
If AI makes developers twice as productive (maybe a few years down the road with GPT-6), will this additional supply of developer capacity get absorbed by existing and new demand? Or will there be half as many developers? Or will the same number of developers get paid far less than today?
These questions arise even if not a single existing dev job can be completely taken over by an AI.
A secondary question is about the type of work that lends itself to AI automation. Some things considered "coding" require knowing a disproportionate number of tiny technical details within a narrowly defined context in order to effect relatively small changes in output. Things like CSS come to mind.
If this is the sort of coding you're doing then I think it's time to expand your skillset to include a wider set of responsibilities.
Nowadays, like the rest of the full-stack 'web' developers, I work on complex webapps that use Typescript, HTML, CSS, Kubernetes, Docker, Terraform, Postgres, bash, GitHub Actions, .NET, Node, Python, AWS, Git. And that isn't even the full list.
And it's not even a flex, all of the above is used by a relatively straightforward LoB app with some hairy dependencies, a CI/CD pipeline + a bit of real world messiness.
I need to have at least a passing familiarity with all those technologies to put together a working application and I'm sure I'm not alone with this uphill struggle. It's a staggering amount to remember for a single person, and LLMs have been a godsend.
Something to remember is that every new innovation in software development only raises the expectations of the people paying the software developers.
If developers are 3x as productive, then the goals and features will be 3x as big.
The reason for this is that companies are in competition, if they lag behind, then others will eat up the market.
The company that fires 50% of their staff because of “AI Assistance” is not going to be able to compete with the company that doesn’t fire their staff and still uses “AI Assistance”…
Increased developer productivity will lead to a lot more software development, at all levels, rather than less.
1. "In economics, the Jevons paradox occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced."
For example, deploying hardware to all subways to track their status and location with higher accuracy. I want a truly smart meal planner that can look in my fridge and my eating habits and tell me what I need to eat. I want a little plugin for Obsidian that will let me sync tasks with Linear.
There are tons of tiny little pieces of technology that would be useful but their economic value is low or unclear; if developers become more efficient at not only writing code but testing, deploying, getting feedback, these become possible. Or, the LLMs can’t become great at all of those things, and developers keep their jobs (and high pay). You can’t have it both ways.
Yes, it may harm certain professions, careers and people, but the only option is to accept and adapt. We can talk all day and night about how sad and unfair this is, but unfortunately there is nothing stopping such technological advancements.
Though a little blunt at times, the libertarian economist and journalist Henry Hazlitt talks about this in great detail in his book, Economics in One Lesson, which I highly recommend to anyone no matter what belief system. He also wrote many essays on it, including one I'll share here: https://fee.org/articles/the-curse-of-machinery/
It's because foundations are now free, but nobody understands how they work anymore - or soon won't, hence "waning" as opposed to "disappeared". There are whole levels that a coder needed to understand in the past that recent entrants to the field do not understand, levels that can still subtly affect how things work; if no one understands the things the craft depends on, then the craft is waning.
For anyone who started programming more than 13 years ago in the most widespread programming discipline for the past few decades (Web technologies), which in this career makes you super-old, the craft is waning because every year it becomes more and more difficult to understand the whole stack that the web depends on. Not even considering the whole stack of technologies that my first paragraph alluded to.
For frontend coders it is waning because it is ever harder to find out how someone else did something by looking at their code - modern build technologies mean that looking at a site's code is no longer worthwhile. And people were already complaining about that 13+ years ago.
If you have kids or responsibilities outside work, then combined with the need to produce things and the ever-increasing eating of the world by software (opening new problems and business areas one might need to be cognizant of), it becomes less possible for those not in school to hone their craft through purposeful work. For this reason it may be that the craft is waning.
Finally improving productivity is not necessarily something that correlates with improving the craft - industrialization may have improved productivity and made many products available to many people that did not have them before, but it is pretty well known that it was not beneficial to the crafts. Perhaps the feeling is the same here.
I learned HTML in the mid nineties and even then I don't honestly recall learning very much from View Source. HTML has never been particularly easy to read even when written by the rare breed who are fanatical about "semantic" markup (in quotes because even so-called semantic markup doesn't communicate much in the way of useful semantics). HTML lacks extremely basic abstraction features needed to make code readable, like (up until very recently) any kind of templating or components system, so even if you were trying to learn from Yahoo! in 1996 you'd be faced with endless pages of Perl-generated HTML boilerplate. So I think most of us learned the old fashioned way, from books, tutorials and copious amounts of fiddling.
Now that layer of craftsmen is gone. You are either an uber-expert in how computer hardware works, or you just buy the hardware and treat it like a kind of magic.
Traditionally to become an expert you went through an intermediate stage, showed your interest, and got trained. Your hobby could turn into a profession, since the step up was gentle and interested people were needed to do some of the supporting work.
Nowadays if you're going to work in a chip fab, it's not because you were a soldering iron kid. You go through a quite long academic process that doesn't really recruit from the ranks of the hobbyists, though of course you'd expect there to be some natural interest. But the jump isn't gentle, you end up learning some pretty advanced things that make soldering seem like the stone age.
Software still has this layer of craftsmen, though it is rapidly dying, and not just from LLMs. There's an enormous number of people who can more or less hook up a website and make it work, without knowing everything about how exactly it does that. There are also plenty of Excel people and Python scripting people who use code in their day-to-day, but can't tell you advanced concepts. There are a lot of modern services that make this sort of thing easier (WordPress etc.), and this level of skill is very learnable by people in the developing world. It's not like you can't become a real expert in those parts of the world, but economically it makes sense that a lot of the intermediately skilled are there.
What will happen with GPT and the like is the experts will not need as much help from the juniors. If you're a software architect in charge of a whole system, you won't just sketch out the skeleton of the system and then farm out the little tasks to junior coders. Maybe there are pieces that you'd farm out, but you will definitely save on the number of juniors because an LLM will give you what you need.
The result being that we'll get fewer people trained to the highest levels, but those who are will be much more productive. So those guys will indeed be entering the quantum age, but it strands a lot of people.
The real question for me is whether AI will put humans out of most intellectual work, not just programming. Then, even if AI is shared equitably and satisfies our every need, most of us become some kind of sponges that don't need to think for a living. Or human intelligence will remain but only as a peacock's tail, like physical strength is now.
Maybe a true "friendly AI" would recognize the human need to be needed, for human capabilities to stay relevant, and choose to put limits on itself and other AIs for that reason.
My experience rather is that such people often (though not always) are quite good programmers, but came to a different conclusion about how it makes sense to develop software than the direction the industry shifted towards (and often have good, though sometimes non-mainstream, reasons for their opinions). Thus, despite being good programmers, they "fell out of fashion" with "how you are (by today's fashion/hype) supposed to build software these days". So they became harder and harder to employ and thus more and more frustrated (just to be clear: in my experience they are often quite right in their opinions).
I still don’t feel that way about front-end frameworks for the web. Oh my God, what are people doing?
The author asserts that he is a professional programmer as one of his day jobs, as well as handling the “serious programming” in his hobby projects with a mostly-hardware guy who is also an ex-professional-programmer, but whose coding skills are out-of-date.
His LinkedIn has nothing more recent than Jan 2017 at Genius, and that role (one of only two jobs with what looks like an IC-like title, “Developer”, that lasted more than a couple of months) has a description indicating it wasn't mostly an IC position but drifted among recruiting, acting product lead, PM, and something like an assistant-to-the-CEO role.
He really has a resume that screams of decent communication skills, a solid professional network (and maybe management skills), and of wanting to be seen as a programmer, while consistently needing to find a way out of an environment, or into a different role in the same org, before the checks his communication skills wrote, and his tech skills couldn't cash, caught up with him.
The need for mastery of the craft will lessen, and so mastery of it will wane as people depend on AI instead. Then, if you ever need someone to do it without AI, you might no longer be able to find anyone with the craftsmanship know-how to do it.
It'll probably also bring down how good coding is as a career. As productivity enters this quantum state, you'll need fewer engineers, and they'll need fewer qualifications, which might translate to fewer jobs, worse pay, and an expectation of productivity that makes you dependent on AI, because you can no longer meet those expectations and compete with others unless you also fully leverage it.
What's the best starting point for my personal stuff? I am generally put off by YouTube tutorials promising to make me a better coder.
Bonus points for doing things like using GitHub Actions for CI and publishing to a package repository with guidance from ChatGPT.
It comes down to whether society's demand for programmers is fixed, or if it scales with how productive programmers are, or more likely some mix of these scenarios. If the demand is fixed, then you are just taking someone's job by being more productive.
You forgot the accounting, taxation, legal and compliance stuff, which at least in Germany takes a huge amount of the entrepreneur's time. :-(
1.) Apple’s programming environment being forbidding: no shit, Apple is well known for the hoops developers have to jump through to be allowed into their monopoly; why any self-respecting hacker would ever want to code for iOS (and now even MacOS, Android, Windows) is beyond me. I guess that's partially because they are US-based, where it's still somewhat respectable / not illegal?
2.) Visual C++: similar deal; just a few years later the bar for graphical interfaces would be significantly lowered by the likes of Python+Qt, Java, HTML+CGI... and even back then, for non-graphical interfaces, we had BASIC.
(The design and programming of graphical interfaces is its own distinct profession for a reason, jumbling it in with «coding» as a big indistinct mass just because the same person used to have to do both regardless of their level of proficiency is missing the point.)
3.) > When I got into programming, it was because computers felt like a form of magic. The machine gave you powers but required you to study its arcane secrets—to learn a spell language. This took a particular cast of mind. I felt selected. I devoted myself to tedium, to careful thinking, and to the accumulation of obscure knowledge. Then, one day, it became possible to achieve many of the same ends without the thinking and without the knowledge. Looked at in a certain light, this can make quite a lot of one’s working life seem like a waste of time.
It's not the LLMs that got rid of this; web search engines did that years ago! (It also takes skill to «prompt engineer» a search engine into finding you a good answer. Not to mention that one of the still ongoing issues - but for how long? - is the enshittification of search through Google's monopoly. It should get better once they are out of the picture.)
4.) Proposition 5 of Euclid’s Elements I
This seems to be another good example of how a lot of these issues stem from the lack of good teaching / documentation. Consider how much clearer it would be if you merely split it into two propositions!
It also seems to be a good example of an arbitrary goal of hardly any practical significance? (FizzBuzz, which a good fraction of CS college students are unable to grok, would be a much better example?) But perhaps I am wrong, and a mathematician / mathematics teacher can explain?
I have way better examples: I am still pissed that we keep teaching students pi instead of tau, and pseudovectors (papering over the difference with vectors!!) instead of geometric algebra. Imagine if we still had to do math with words, like before the modern era (sqrt() also comes to mind), instead of mathematical notation!
5.) There's also a whole big chunk about coders being (/ having been) a highly valued profession, and it's the likely loss of that that the article seems to be mostly deploring, but we shouldn't confuse it with the other things.
If I take a junior programmer's task, say creating CRUD endpoints: describing the requirement in a way that matches exactly what I want will probably take more time than doing the coding assisted by something like Copilot. Can we really imagine a non-technical user having an AI do development from A to Z? What if the generated code has a bug; can we really imagine that at no point someone will need to be in the loop? Even if a tech person intervenes in the case of a bug, how much time would be lost investigating what the AI wrote and trying to understand in retrospect what happened - the time or cost saved writing the code would be lost quickly. Writing code is a small part of the job after all. LLMs are good at generating code, but they are fundamentally not problem solvers.
The technology is amazing, but I think LLMs will just be another tool in the arsenal for devs. They're also an amazing tutor. They can save you from calling a developer for some self-contained problems (writing a script to scrape content from a web page, for example).
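As a concrete illustration of that kind of self-contained task, here is a minimal Python sketch, assuming the requests and BeautifulSoup libraries; the URL and CSS selector are placeholders invented for the example, not anything from the comment above:

    # Minimal sketch: fetch a page and pull out the text of matching elements.
    import requests
    from bs4 import BeautifulSoup

    def scrape(url: str, selector: str) -> list[str]:
        """Return the text of every element on the page matching the CSS selector."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        return [el.get_text(strip=True) for el in soup.select(selector)]

    if __name__ == "__main__":
        # Hypothetical example: print article titles from a news page.
        for title in scrape("https://example.com/news", "h2.article-title"):
            print(title)

This is roughly the scale of script a non-developer can now get working end to end with an LLM, warts and all.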
I commented about this somewhere else in this thread: https://news.ycombinator.com/item?id=38259425
But basically I have found that it is a really powerful general-purpose assistant/brainstorming pal for a lot of the things that would normally eat up a lot of time.
To expand on that, it isn't even limited to code, but also covers surrounding tasks. I have used it to round out documentation in various ways, either by giving it a lot of the rough information and asking it to write coherent documentation for me, or by having it give me feedback.
The other way around, it has helped me onboard onto new projects: I give it bits and pieces of written text when I have trouble understanding what a document says.
In the same sense, when dealing with management bullshit, I have used it by just presenting it with what was asked, telling it my view, and then asking it to come up with responses from certain perspectives. That meant I had to spend less mental bandwidth on inane stuff there as well.
And yes, a lot of this can also be achieved in a team with other people. But those people aren't always around and have their own things to do. The advantage of tools like ChatGPT is that they don't get tired, so I can really channel my inner toddler and just keep asking "why" until I am satisfied with the answer. Even if there are other people available to turn to for help, ChatGPT can also help in refining the questions you want to ask them.
It's also very good at converting code from one language to another and getting stuff done when directed properly.
I can definitely see how large an impact it will have on the employment prospects of many. It's not replacing engineers yet, but those who specialise in a tech as implementation specialists are screwed. Even the increased productivity alone will reduce demand.
Once you make agriculture more efficient, we can't really eat much more than we do today, so we need fewer people working there. But if you make software easier to write, I think you'll just end up with way more software, because the complexity of human needs and processes is unbounded, unlike eating food.
Also called the efficiency paradox https://en.m.wikipedia.org/wiki/Jevons_paradox
I’d argue that companies cutting their software budget will soon be overtaken by their competition. Software is never done, after all.
By contrast we now have digitally inclined employees creating and automating things with the help of ChatGPT. Much of it is terrible in terms of longevity, the same way the previous RPA or workflow tools were, but unlike those, people can now also maintain them. At least as long as you keep the digitally inclined employees on board, because they’re still not software developers, and things like scaling, resource usage, documentation, error handling and so on aren’t being done. But they could be, at least for the roughly 90% of it that will frankly never matter enough to warrant an actual software developer anyway, because it mostly “frees” a few hours a month through its automation.
But with this ability, and the improvements in things like SharePoint Online (and likely its competition that I don’t know about), a lot of the stuff you’d previously need either staffed software developers or external consultants to handle can be handled internally.
This isn’t the death of software engineering as such. Like I said, it doesn’t scale, and it’s also going to create a bunch of issues in the long term as more and more of this “amateur” architecture needs to function together. But at the same time, if you need to Google how to get some random lines of text from a dictionary, then I’m not sure you aren’t in danger either. And I really don’t say this to be elitist, but everything you program by googling can be handled fairly easily by GPT, and “good enough” that it’s simply going to happen more and more in our industry.
If you look into my history you’ll see that I’m both impressed and unimpressed by LLMs (well, GPT; let’s be honest, the others all suck). This is because it really hasn’t been able to help us develop anything in our daily work. It writes most of our documentation, and it’s scary good at it. It also does a lot of code generation, like auto-generating types/classes/whatever you call them from Excel data mapping sheets, plus the CRUD functionality around them. Sure, we did that before by writing some short CLI scripts, but now GPT does it for the most part. So it’s not like we don’t use it, but for actual well-designed code that handles business logic in a way that needs to be efficient? Yeah, it’s outright terrible. Maybe that will change over time, but so far it hasn’t improved in the slightest.
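For a sense of the kind of short generator script being replaced here, a rough Python sketch; the CSV layout and the "field"/"type" column names are invented for illustration, and a real mapping sheet would need an Excel reader plus more validation:

    # Read a data-mapping sheet (assumed exported to CSV) and emit a Python dataclass.
    import csv

    def generate_dataclass(csv_path: str, class_name: str) -> str:
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        lines = [
            "from dataclasses import dataclass",
            "",
            "@dataclass",
            f"class {class_name}:",
        ]
        for row in rows:
            # Each sheet row becomes one typed field on the generated class.
            lines.append(f"    {row['field']}: {row['type']}")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(generate_dataclass("customer_mapping.csv", "Customer"))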
But look at our industry as a whole. I know a lot of HN users work in American Tech or startups, but in the European non-tech Enterprise industry and the massive IT and consultant industry supporting it, there are a lot of developers who basically do what GPT excels at, and as these tools get better, we’re frankly just going to need a lot fewer software developers in general. Most of those people will likely transition into other jobs, like using GPT, but many won’t as other professions start doing their coding themselves.
What worries me the most, however, is that we still teach a lot of CS students exactly what GPT is good at. I’m an external examiner for CS students at academy level, and GPT can basically ace their entire curriculum, because it’s mainly focused on producing a lot of “easy” code for businesses. I’m scared a lot of those students are going to have a rough time once LLMs really pick up, unless the curriculum changes. It won’t change in time though, because it’s already been sort of “outdated” for a decade, given how slowly our higher education adapts to the real world here in Denmark.
This captures the reason I'm optimistic about AI-assisted programming.
The learning curve for getting started programming is horribly steep - and it's not because it's hard, it's because it's frustrating. You have to sweat through six months of weird error messages and missing semicolons before you get to the point where it feels like you're actually building things and making progress.
Most people give up. They assume they're "not smart enough" to learn to program, when really they aren't patient enough to make it through all of that muck.
I think LLMs dramatically impact that initial learning curve. I love the idea that many more people will be able to learn basic programming - I think every human being deserves to be able to use computers to automate tedious repetitive tasks in their lives.
Computers are rude and honest and humans prefer a pretty lie to an ugly truth. Programmers must appreciate the ugly truth in their day-to-day lives more than any other profession. (Physical engineering and construction workers and repairers also need this virtue, but less often since their feedback cycles are slower.)
I do think the overall premise is silly; programming isn’t that special in this regard, in my opinion. Most professions are like this, they just might not be the most visible ones, like politics, journalism, or show biz.
Software developers have more to learn from other professions than they often think (the old engineering professions understand this a bit better).
I completely agree with you on that. Most of being a good software engineer is skills that ChatGPT won't help you with.
But you can't even start to learn those skills if you quit in the first six months because of the vertical learning curve tied to all of that syntax trivia.
In any complex software project with lots of users there is guaranteed to be an effectively endless backlog of bug tickets that are, in effect, abandoned. I think a few months ago some bug got fixed in Firefox that was ~25 years old. In most compilers and frameworks there's going to be a pile of tickets of the form, "improve error message when X happens" that just never floats to the top because programmer time is too expensive to justify working on it. Over time the difference between a senior engineer and a junior becomes not so much intelligence or even actual experience but just the accumulated scar tissue of all the crap we have to wade through to make anything work, thanks to the giant backlogs of bugs and stupid usability problems that never get solved before the product reaches EoL anyway.
For AI to fully replace programmers is going to require quite a few more major breakthroughs, but setting one loose on a bug tracker and asking it to churn out trivial fixes all day is well within reach. That will both make human programming more fun, and easier to learn.
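A rough sketch of what "setting one loose on a bug tracker" could look like with today's tools, assuming a GitHub-hosted tracker and the OpenAI Python client; the repo name, label, and model are placeholders, and every suggestion still goes to a human reviewer:

    import requests
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def fetch_trivial_issues(repo: str, label: str = "good first issue") -> list[dict]:
        """List open issues carrying a given label via GitHub's public REST API."""
        resp = requests.get(
            f"https://api.github.com/repos/{repo}/issues",
            params={"state": "open", "labels": label},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    def suggest_fix(issue: dict) -> str:
        """Ask the model for a proposed minimal fix; the output is only a draft."""
        prompt = (
            f"Issue title: {issue['title']}\n"
            f"Issue body: {issue.get('body') or ''}\n\n"
            "Suggest a minimal code change that would resolve this issue."
        )
        chat = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return chat.choices[0].message.content

    if __name__ == "__main__":
        for issue in fetch_trivial_issues("example-org/example-repo")[:5]:
            print(f"--- #{issue['number']}: {issue['title']}")
            print(suggest_fix(issue))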
This honestly made me feel so happy. I'm reminded of the average person who is bound by the limitations of the apps they use, the popularity of "How to Automate Using Python" books...
With this new tech, people are no longer bound by that limitation. I think that's pretty cool.
LLMs writing code is the beginning, but low-code or no-code, with LLM assistance, is more ideal for most people.
I just asked ChatGPT a thing, and then on a hunch I asked if there wasn't a built-in function that does the same, and it indeed remembered there is such a function.
What if that second question had been automatically and transparently asked? What if there is a finite list of such "quiet reflecting" questions that dramatically increases the value and utility of the LLM output?
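As a toy version of that idea, a wrapper could quietly append a fixed list of reflection questions before showing anything to the user. A minimal sketch assuming the OpenAI Python client; the question list and model name are just examples:

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Invented examples of "quiet reflecting" follow-ups asked automatically.
    REFLECTION_PROMPTS = [
        "Is there a built-in or standard-library function that already does this?",
        "Is there a simpler or more idiomatic way to write this?",
    ]

    def _complete(messages: list[dict], model: str) -> str:
        reply = client.chat.completions.create(model=model, messages=messages)
        return reply.choices[0].message.content

    def ask_with_reflection(question: str, model: str = "gpt-4") -> str:
        """Ask the question, transparently run the follow-ups, return the final answer."""
        messages = [{"role": "user", "content": question}]
        answer = _complete(messages, model)
        for prompt in REFLECTION_PROMPTS:
            # Feed the previous answer back in and quietly ask the next follow-up.
            messages += [
                {"role": "assistant", "content": answer},
                {"role": "user", "content": prompt},
            ]
            answer = _complete(messages, model)
        return answer

    if __name__ == "__main__":
        print(ask_with_reflection("How do I reverse a list in Python without a loop?"))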
Him: Coding will soon be obsolete; it will all be replaced by ChatGPT-style code gen.
Me: OK but the overwhelming majority of my job as a "senior engineer" is about communication, organizational leadership, and actually understanding all the product requirements and how they will interface with our systems. Yes, I write code, but even if most of that were augmented with codegen, that would barely even change most of what I do.
Because today's seniors will be retired in a decade or two, and as they get replaced by people who actually benefited from automatic code generation, the concept of "coding" will (if this trend keeps up) absolutely become a thing that old timers used to do before we had machines to do it for us.
It might not be in the same way of current day developers, but I don't foresee a near future where developers don't learn to understand code to some degree.
For example, I know a lot of people who work in the low-code development sphere of things. A lot of the developers there barely see any code if any. Yet, when you talk with them they talk about a lot of the same issues and problem-solving but in slightly different terms and from a slightly different perspective. But, the similarities are very much there as the problems are fundamentally the same.
With generated code I feel like this will also be similarly true.
These problems sound like a result of working with people. Smaller but more capable teams, thanks to AI, will need fewer leaders and fewer meetings. Everything will become much more efficient. Say goodbye to all the time spent mentoring junior engineers; soon you won't have any.
Yeah... no. Not with LLMs as they currently are. They are great as an assistive tool, but they still need people to validate their output and then put that output to work. Which means you need people who can understand that output, which means developers. Which also means that you need to keep training developers in order to be able to validate that output.
The more nuanced take is that the work of developers will change. Which I agree with, but that has also been true over the past few decades. Developers these days are already working with a hugely different toolchain compared to developers a decade ago. It is an always-evolving landscape, and I don't think we are at a point yet where developers will be outright replaced.
We might get there at some point, but not with current LLMs.
and then slowly we run out of seniors with nobody to replace them
The code I write feels like a side-effect of what I actually do.
https://www.buzzmaven.com/old-engineer-hammer-2/
[0] Actually, playing with hypnodrones.
LLMs are best at doing the stuff senior engineers do that's NOT coding.
For the sake of argument, let’s say it could replace the coding part cost effectively. Can it still do all the other parts? Take ambiguous requirements and seek clarity from design, product, etc. (instructing an AI to a sufficient degree to build a complex feature could almost be a coding task itself). Code reviews. Deal with random build failures. Properly document the functionality so that other programmers and stakeholders can understand. Debug and fix production issues. And that’s just a subset.
Realistically, in the future there will be a long phase of programmers leveraging AI to be more efficient before there’s any chance that AI can effectively replace a decent programmer.
This will be an advantage to engineers who think on the more abstract side of the spectrum. The “lower level” programming tasks will be consumed first.
So then the question is what % of a programmer's job might be taken by this, and whether the remaining % requires a different skillset.
There are programmers that are great at coding, but complain loudly when the business gives slightly ambiguous requirements because they see their job as coding, not clarifying business rules. This group are more likely to be impacted than the programmers who will happily work in ambiguous situations to understand the business requirements.
So, while GP might be technically correct in some narrow sense, I would be less quick to judge the OP article author. Some years hence, anyone who is not actively building (as opposed to using) one of these LLMs might be dismissed as "not a real programmer" (because by then, that will be the only form of programming in existence).
Personally, I use it for scripting and as an executive function aid.