Frontend seems to involve visual design, which backend obviously doesn't (usually). Other than that, I don't see the difference, as soon as you're working on things more complex than simple static pages. Both will gradually become more efficiently done through smart use of AI tech.
But replacement? ChatGPT and GPT4 can't solve real world problems without heavy time investment of an engineer to prompt-engineer a semi-usable (if lucky) solution that still needs a lot of debugging (so coding yourself is still way faster for sufficiently complex things) and adjustment to your codebase.
Will people who don't adapt get left behind? For sure. That's always the case. If productivity is significantly increased by the use of such AI tools, then those who won't use them for whatever reason, including incompetence, will be let go eventually. All others will be fine, as always. This fear-mongering is actually good for those of us who can adapt; it makes us more valuable.
Let's talk in 2030 again. Then in 2035. Then in 2040. And so on until we have AGI that generates you a new revenue stream just by reading the thoughts of a non-technical middle-manager, by which time we will probably have completely different problems.
People afraid of AI automating things just don't know how it works. I mean, no one does really, but if you went to university and studied AI, statistics, machine learning, deep learning and some of their variations, then you'd know that that shit isn't intelligent or magic.
It predicts the most likely next tokens plus some randomness, based on what's in the training data. It can't do more than that. That it can conjure up well-known algorithms, and some variations of them that help solve specific problems that "look complex", is not a surprise - but alas, it's a mostly useless party trick that apparently scares some people.
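The "most likely next tokens + some randomness" step described above can be sketched as temperature-scaled sampling over a model's raw output scores. This is a minimal illustration, not any particular model's decoding code; all names are invented:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, seed=None):
    """Pick a next-token id from raw model scores (logits).

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches greedy argmax decoding.
    """
    rng = random.Random(seed)
    # Softmax with temperature: p_i is proportional to exp(logit_i / T)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token id according to those probabilities
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# At a very low temperature, the highest-scoring token (index 1) dominates:
greedy_pick = sample_next_token([1.0, 4.0, 0.5], temperature=0.01)
```

The "+ some randomness" in the comment is exactly the `temperature` knob: at 0 the model is deterministic, and higher values trade likelihood for variety.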
Frontend development involves a large number of fuzzy human bits that are constantly moving underneath you. Yes, your backend dependencies can change and you may need to update an API version, but in frontend you have browsers, screen readers, frameworks, and all sorts of massive icebergs largely outside of your control constantly moving.
In the backend, you probably have a language that is fairly stable. Write some unit tests, write some integration tests, but things are predictable and mockable.
Writing UI (properly) requires testing against several versions of several browsers, each with its own rendering quirks. UI tests - screenshots, whatever - are inherently flakier, and when something breaks it's harder to know for sure if it's your fault.
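That flakiness is why screenshot tests almost never compare pixel-for-pixel: they build in a tolerance for anti-aliasing and font-smoothing noise. A toy sketch of the idea (pure Python, pixels as RGB tuples; real tools like Playwright or pixelmatch are far more sophisticated):

```python
def screenshots_match(baseline, actual, per_channel_tol=8, max_diff_fraction=0.001):
    """Compare two same-sized screenshots given as flat lists of (r, g, b) pixels.

    A pixel "differs" if any channel deviates by more than per_channel_tol
    (absorbing sub-pixel rendering noise); the images still "match" if the
    fraction of differing pixels stays under max_diff_fraction.
    """
    if len(baseline) != len(actual):
        return False  # layout shifted enough to change dimensions
    differing = sum(
        1 for a, b in zip(baseline, actual)
        if any(abs(ca - cb) > per_channel_tol for ca, cb in zip(a, b))
    )
    return differing / len(baseline) <= max_diff_fraction

# Identical images match; a cluster of genuinely changed pixels does not.
base = [(255, 255, 255)] * 1000
assert screenshots_match(base, list(base))
assert not screenshots_match(base, [(0, 0, 0)] * 3 + base[3:])
```

The tolerances are exactly the problem: set them too tight and every font-rendering update breaks the build; too loose and real regressions slip through.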
You need to test against a variety of screen readers and accessibility tools. None of them are well documented, all of them parse the DOM (or the native accessibility stack, if you're not doing a web app) in a slightly different way, and the standards are constantly changing. As a bonus, you're legally obligated to get it "right", not that anyone can tell you precisely what "right" is.
Visual design isn't a walk in the park either. The designers will tell you to use color X. Its contrast ratio against colors Y, Z, and B on various pages will hopefully meet the contrast ratios, but then someone will point out that in exactly one of Windows' four different high contrast schemes this ends up being black on black and you will stare into a dark Lovecraftian abyss wondering what hellish algorithms Microsoft uses to pick colors. You'll fix HC Black and break HC white, you'll fix white and break black #2, and you will eventually throw your computer into the Marianas trench and go repair bicycles for a living.
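The contrast-ratio check alluded to above is at least mechanical: WCAG 2.x defines a relative luminance for sRGB colors and a contrast ratio from 1:1 to 21:1 (AA requires 4.5:1 for normal text). A small sketch of the standard formulas:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        # Linearize the sRGB gamma curve
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# White on black is the maximum possible ratio, 21:1 ...
assert round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1) == 21.0
# ... and "black on black" (the high-contrast-scheme failure above) is 1:1.
assert contrast_ratio((0, 0, 0), (0, 0, 0)) == 1.0
```

The formula is the easy part; the hard part, as the comment says, is that forced-color modes can substitute your carefully checked palette out from under you.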
These are valid differences, but none of them seem particularly challenging to an AI, assuming some equivalence between image and text recognition. To a human, sure, because it is more difficult to formally specify. Arguably an AI could benefit from the fuzziness, because the set of acceptable outputs is larger.
And I’d also say that the argument is a bit straw man-y because it considers (some of) the complexities of frontend, with a fairly ideal view of the backend. Once you bring concurrency, scale and state, backend can become fairly Lovecraftian as well.
> Frontend seems to involve visual design, which backend obviously doesn't (usually). Other than that, I don't see the difference
My thought, from a long career of full-stack development, is that front-end is very much about how people interact with things. It's more a matter of doing a thing with strong attention to the details of the display, interactions, and behaviors in different environments.
Back-end development is more about how machines interact with each other. Things are more deterministic, but instead of scrutinizing the minutiae of human-machine interaction, I'm thinking about how a machine will do this hundreds or thousands of times per second.
There seems like a disconnect between the breathless AI hype and reality. I tend to be on team-LLM (having exposure to it in my daily work), and my stance is that it's going to revolutionize some narrow domains and features of products, but the idea that it will replace white collar labor generally seems poorly substantiated by what we know so far.
What we see ChatGPT being good at is largely boilerplate. The folks who are like "I am 5x more productive now with ChatGPT" raise alarm bells in my head - it suggests that they were typing the programming equivalent of pablum for most of their day.
Yeah, if you ask ChatGPT to author some simple HTML it will do so with a refreshing level of accuracy (though far from being able to do without extensive supervision) - but who is writing plain HTML? Likewise it's cool that it can generate a sort function whole cloth... but who is writing their own sort functions?
The trick with boilerplate is that the industry has already spent the past two decades automating large amounts of it away via frameworks and baking much of the functionality into programming languages themselves. If you're writing boilerplate for so much of your day that a boilerplate-spitting AI massively increases your productivity, I'd argue you've got bigger problems!
The trend - across both backend and frontend dev - is to make things more declarative and close the gap between what the developer wants and the amount of code needed to implement it. Use these technologies - they have made people dramatically more productive, and they didn't require an LLM!
If you find yourself spending a significant portion of your day writing boilerplate, stop doing that! There are many options for you that allow you to more directly express what you want!
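One concrete instance of this "stop writing boilerplate" advice, sketched in Python (the class here is invented for illustration): the `@dataclass` decorator generates exactly the `__init__`/`__repr__`/`__eq__` pablum that an LLM is so good at typing out.

```python
from dataclasses import dataclass

# The hand-written version of this class would be ~15 lines of pure
# boilerplate: an __init__ assigning three fields, a __repr__, and an
# __eq__ comparing them all...
@dataclass
class User:
    name: str
    email: str
    active: bool = True

# ...all of which the decorator derives from the declarative field list.
u = User("ada", "ada@example.com")
assert u == User("ada", "ada@example.com")  # structural equality, for free
assert u.active
```

The declarative version is shorter, harder to get wrong, and stays correct when a field is added - which a pasted block of generated boilerplate would not.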
I find an engineer's value is in creating things that have not been created before. As the author of the article says, there are already lots of great tools for making simple sites. So far, I've found that LLMs are not good at making novel software; it's basically a glorified Stack Overflow. But that's still quite useful… for a programmer!
That's exactly it. Ultimately programming is just telling the computer what you want it to do - something that an LLM is unable to do. There may be a future where an AI is smart enough and demonstrates enough analytical, cognitive, and deductive ability that it can take over specifying what a computer should do - but it's not today and it's not immediately at hand. Nor is anything that can do so likely to emerge from LLMs, as opposed to a fundamentally different architecture of AI.
For most of our industry's history telling a computer what to do involved a lot of very painful and boring things that aren't really related to what you actually want from the computer. Moving registers around. Assigning variables. Managing memory.
But the entire time we've also been converging on doing less and less of this rote busywork. Advances in frameworks and programming languages have, in each iteration, gotten us closer and closer to simply telling the computer what you want, reducing the need for this rote busywork.
So now we have an AI that is quite good (though again: far from good enough to be unsupervised) at this busywork. For people whose jobs largely consist of this busywork this may be highly impactful - but your job as a programmer is to minimize this busywork to the maximum extent possible anyway.
Meta: Stuff like this doesn’t really fit in HN imho. Besides being low effort it just feels weird to read things like this on a site literally called hacker news. I feel like someone misspelled “tired techies complaining about the same thing over and over again”.
Is it that much harder to come up with a genuinely funny or constructive answer or, you know, go outside, and touch some grass instead of posting?
Flexbox makes it a lot easier, but it's not foolproof. I've had situations where even flex refused to center something because it was nested strangely (but out of my control), or because the container's size wasn't being inherited correctly (again because of parent elements out of my control).
Compared to something like Qt/QML’s anchors and layouts, flexbox still feels like a crude hack, to me.
These demos all give me the same vibe as "we prevented aging in mice". It's really impressive and interesting but I don't think we should be worried about our jobs disappearing any more than what living to 200 will be like.
I see posts like this comparing it to past attempts - in this article, even 1990s Homestead - and I immediately think this person does not get it.
Additionally all the statements that follow along the lines of "GPT today has a hard time with xyz" are all problematic too. Today is not tomorrow, and what it can do tomorrow is what we should be talking about.
The fact is that in just 3 months we've gone from fairly basic code reviews to super-helpful code reviews with code examples and intelligent, context-aware comments.
Comparing GPT to Homestead or any number of early attempts to translate mockup to HTML+JS is unhelpful. This is wildly different technology. Talking about what GPT can do today and landing at "it's not good enough" is also missing the point, we are nowhere near the limits of this tech and it's evolving so fast that these assessments are not insightful in the slightest.
Talking about GPT as being merely a probability evaluator is also demonstrating the author's limited insight. WE ARE ALL probability evaluators. Humans work on prior experience and employ heuristics to make decisions based on it. GPT, indeed all AI, is more or less based on that paradigm. The difference is that an AI can hold the whole of its learning in a perfectly memorized model. It doesn't forget what's in that model and can draw on billions of data points to make its probabilistic determinations. We pull on bias, false memories, misunderstandings, and so on. Now, that's not to say AIs won't also be afflicted by these comprehension errors, but that just doubles down on AI doing what humans do, which is astounding either way.
Jobs are going to be lost, it's a fact. Even if it's just that junior who was learning code reviews and supporting basic development, that job will be around for maybe a year or two at the rate we are moving.
> This is wildly different technology. Talking about what GPT can do today and landing at "it's not good enough" is also missing the point, we are nowhere near the limits of this tech and it's evolving so fast that these assessments are not insightful in the slightest.
Just as with AI's detractors who claim that it "merely" predicts the next word, I have to wonder whether those making these grandiose claims about the near future of AI have actually used GPT-4 and its predecessors in their work. There has to be some sort of logical fallacy in "well, [insert literally any limitation here] won't exist in GPT(n+1) at this rate."
> Jobs are going to be lost, it's a fact.
What percentage of a programmer's time have compilers automated? It's gotta be multiple orders of magnitude of efficiency gained. I could see a GPT-4 based GitHub Copilot being something like a 3-5x improvement on development in the best use cases (e.g. CRUD development built on public frameworks). That's a colossal impact, but it's hard to even put it on the same scale as other ways programmers have automated their work in the past. Every time programming has automated giant swaths of programmer time, the end result has been that programmers can generate far more economic value even faster. This end result has consistently (and paradoxically) led to far more work and jobs for developers than before.
> Even if it's just that junior who was learning code reviews and supporting basic development, that job will be around for maybe a year or two at the rate we are moving.
At least in my admittedly niche area of work, juniors just aren't all that helpful. The time they take away from seniors or other members of the team makes them a net negative for at least a few months. It's often a year or more before we reach break-even. Junior positions are investments in potential future independent contributors at a time when filling IC roles is still extremely difficult.
> I have to wonder if those making these grandiose claims about the near future of AI have actually used GPT-4 and its predecessors in their work
I have used GPT-3 a fair bit, and used GPT-4 today. The differences are impressive, scary impressive so far. I am basing my `GPT(n+1)` observation on the rapid rate the tech is filling in holes that previous detractors pointed out.
The difference in code reviews is amazing. GPT-3 basically explained the method and said it looked good; "add some comments" was its only suggestion. GPT-4, on the other hand, gave a code snippet to explain why JS "await" was unnecessary, suggested better variable names, and suggested moving some imports to a simpler file structure to reduce verbose import paths.
I think that's a beyond-expectations improvement. It's so impressive it beat out the humans who reviewed the code, and we implemented one of its suggestions.
If GPT-4 had ingested the entire code-base, code reviews would easily be automated; humans would still do their passes, but as a co-worker GPT would be invaluable. It would find swaths of missed efficiencies, fix slow SQL queries, look for testing gaps, and perhaps even assist in refactoring.
Personally I think we are underestimating the potential; I feel like we are where the world was in 1990, looking at the emerging internet. So many people underestimated the impact of the internet: many technology writers said it was hyperbole, that it couldn't replace brick-and-mortar stores, that email was never going to be for everyone, and so on. All of it wrong, profoundly missing the potential.
This stuff is beating expectations at every release, and I think that is reason enough to take it seriously and to be much more considered when downplaying the hype.
As for junior devs, we don't hire IC1 for the same reasons you outlined. However, small studios do because they are cheap and serve a purpose where it's just WordPress maintenance and HTML POCs. Those jobs will disappear.
> Jobs are going to be lost, it's a fact. Even if it's just that junior who was learning code reviews and supporting basic development, that job will be around for maybe a year or two at the rate we are moving.
And how is that not a disaster for those who are out of work?
I'm a dev using ChatGPT and CoPilot. I work more efficiently as a result.
Currently I don't think LLMs represent a risk to my job in that I work for a small company and personally handle many aspects of the development process on my own. However, at a larger company where those roles are spread out I could see some of them going away soon.
I work on a large WordPress theme. If an LLM were trained on that theme it could possibly have the global scope I need to make changes and understand their impact. It won't be long before we have software that allows you to easily train an LLM for your use. That could threaten my job. But then, my company would still need someone to manage and QA the results.
So I see fewer jobs, but not the end of the profession.
I agree. I simplify it as boilerplate coding versus maintenance coding.
I worked at a job where I built out UI templates for a larger system. I could see that job reducing to perhaps more of a QA or automation engineer role to manage the pipeline. Too bad, though, as that job was menial but gave me the long-term muscle memory for writing out HTML code.
Yes, it was semantically ugly but it %(#% worked every #%#% time.
For those of us who know we won't be replaced, we only benefit from this FUD