Some threads that come to mind:
- Are these language models better than current offshore outsourced coders? These can code too, sort of, and yet they don't threaten the software industry (much).
- What would SEs do if any layperson can say "hey AI slave, write me a program that..."? What would we, literally, do? Are there other, undersaturated professions we'd go into, where analytical thinking is required? Could we, ironically, wake up in a future where thinking skills are taken over by machines, and it's other skills - visual, physical labour, fine motor skills - that remain unautomated?
- Are we even the first ones in the firing line? Clearly, for now AI progress is mostly in text-based professions; we haven't seen a GPT equivalent for video comprehension, for example. Are lawyers at risk? Writers?
- What can SEs do, realistically, to protect themselves? Putting the genie back in the bottle is not, as discussed many times in other threads, an option.
- Or is it all bogus (with justification), and we're fine?
No doubt ChatGPT will chip in...
Even the test cases it generates can be deceptive. They look convincing, but on closer inspection they sometimes aren't really testing anything.
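A made-up illustration of the kind of thing I mean (the module and function names are hypothetical): both tests below look reasonable at a glance, but neither one actually verifies the implementation.

```typescript
// Hypothetical example of a convincing-looking but useless generated test.
import { formatPrice } from "./formatPrice"; // assumed module under test

describe("formatPrice", () => {
  it("formats a price with two decimals", () => {
    // The mock replaces the very function under test, so the
    // assertion checks the mock's canned value, not the real code.
    const formatPriceMock = jest.fn(() => "$10.00");
    expect(formatPriceMock(10)).toBe("$10.00"); // passes no matter what
  });

  it("handles zero", () => {
    const result = formatPrice(0);
    // Only asserts that *something* came back, not what it is.
    expect(result).toBeDefined();
  });
});
```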
One was to generate a webpage with a button to add textboxes. Each textbox should have a unique remove button. When the site gets down to only one textbox on the page, it should not allow the user to remove the last textbox.
After several iterations it wasn't able to do it, often with hilarious results! (I asked it to remove only the _selected_ textbox, but clicking remove would just delete every element on the page.)
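For reference, here's roughly the behavior I was after, hand-written rather than taken from its output: each remove button deletes only its own textbox, and removal is blocked once a single textbox remains.

```typescript
// Sketch of the intended behavior (element ids are assumed):
// each textbox gets its own remove button, and the last remaining
// textbox on the page cannot be removed.
const container = document.getElementById("textboxes") as HTMLDivElement;
const addButton = document.getElementById("add") as HTMLButtonElement;

function addTextbox(): void {
  const wrapper = document.createElement("div");
  const input = document.createElement("input");
  input.type = "text";

  const removeButton = document.createElement("button");
  removeButton.textContent = "Remove";
  removeButton.addEventListener("click", () => {
    // Only allow removal while more than one textbox remains.
    if (container.children.length > 1) {
      wrapper.remove(); // removes just this textbox, not all of them
    }
  });

  wrapper.append(input, removeButton);
  container.appendChild(wrapper);
}

addButton.addEventListener("click", addTextbox);
addTextbox(); // start with one textbox
```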
I think the real value for me would be using this to generate a starting point.
I have a text document on my computer that has a long list of small coding projects that I want to get to at some point. But the activation energy required to stop watching Youtube and start coding is high enough that several projects have languished in my inbox for years.
If I could just feed my ideas into chatGPT and get a starting point, it would be much easier to keep going and fix small errors/add additional features than to start a project from scratch.
As someone who has been using ChatGPT all week (30ish hours so far) and Copilot since I got access early on: this fits with my experience very well. It's like a junior programmer. You need the know-how to tell whether your junior programmer is doing it right.
This is mostly my take. We are at the stage of truck drivers ten years ago who might have been freaking out about self-driving taking their jobs, and here we are in 2022 with a truck driver shortage.
I don't think GPT can be useful to the point of replacing software engineering without a consistent mental model of itself, time, and the world, and I predict it will approach the limits of what advanced-search pattern-matching can do without getting anywhere near those AGI capabilities.
Software iterates a lot, and generated solutions can be quickly vetted. A truck driving to the wrong destination or locking up en route is a much larger issue than a few seconds spent determining that a generated solution has a bug or doesn't match a spec exactly.
If AI can stand on the shoulders of giants and people can vet its outputs, I’m fairly sure it will become more capable and safer to implement than self driving trucks, much faster.
The question I have is: if we can build more, faster, will we run out of work, or will more people simply make more things?
There are huge incentives for wealthy companies to run more and more code on their infrastructure. Can we do more business digitally? Will it scale to provide more programming work, even if it’s heavily AI-assisted?
Well sure. That's why we have Wordpress. Javascript frameworks. And ten thousand other things. All the plug-ins of the last ten years have made me a more productive developer. But it hasn't reduced the demand for developers.
ChatGPT+ will definitely have some effect on junior devs, but us more experienced folk should be fine... for now...
One thing about GPT is that it only knows what we know at the moment. That indicates to me that it won’t be great for learning new technologies until humans generate content it can regurgitate. That alone might give juniors an edge against it (assuming they are gradually replaced by a robot pooping out dumb logic) - they might be able to specialize in learning what models don’t yet know, or what they can’t be good at.
Just guessing here. I’d love to hear a rebuttal to get a sense of where people think things are going.
Though I don’t think GPT is “there” yet, I can see it getting there by 2030. I think it’s seriously worth considering: how will people learn to program in 10 years, how will they remain relevant through periods of their career where an AI can generate better solutions than they can, and how will more experienced engineers adapt to those changes?
It affects both. If a single team were split into 5 juniors and 5 seniors, ChatGPT significantly cuts that headcount: from 5 juniors to 0, and from 5 seniors to 2 or 3.
With many companies cutting costs and cheap money drying up, no one is safe. HN may not like it, but the same thing that happened to digital artists with Stable Diffusion, which was welcomed on this site, has now happened to programmers, and I see lots of worrying and frowns everywhere.
It appears that StackOverflow (which lots of juniors and senior developers use) has just become irrelevant.
Citation needed. I haven't heard of any massive disruption in the commission art market since Stable Diffusion went public, and I don't think something less impactful (a different way to search old Stack Overflow posts) is going to cause a massive disruption either.
Stack Overflow still beats ChatGPT in one area where it can't compete: coming up with new solutions to new questions. If all we needed answered were the same old questions, ChatGPT would be sufficient, since it's essentially a compressed version of our current knowledge. We don't really have a way to update it with "new knowledge" other than to train it again.
The point is exactly that most of those meetings are happening everywhere for the same reason and thus GPT25 might already know all the answers that you need.
Also given enough general framework skills, I'm pretty sure the AI will be able to build stuff like a good junior dev.
It can generate something that looks like what a person would have written based on its compressed probabilities, but that's very different from being an "artificial intelligence". At best it's a Chinese Room.
First you'll cut the bottom 10%, then the bottom 20% etc. The pie will only be shrinking.
There are a lot of sub-areas of expertise and practice that someone with a JD might choose to specialize in. I have some small personal experience in (technically) advising a failed/defunct startup that sought to solve the problem of patent search using an AI. This was years ago, now...maybe around 2018ish. The endeavor failed for various reasons, but it did provide some insight that's relevant to your question here.
As these language models become more advanced (and much more accurate), I think there will be a number of ways in which they will disrupt existing domains of human expertise. Note that I used the word disrupt and not displace. In the patent search space that I was lightly involved in for a short time, I basically learned how expensive and time-consuming a good patent search actually is. The plan was to leverage the machines to drastically reduce the time and cost of typical prior art searches, while still requiring human touch-points to interpret results and make final decisions/reports. I think this sort of use-case is much more in line with what will ultimately happen in any sort of foreseeable future. The AI will supplement and ease the previously-human-only task. It will not supplant/replace it.
Politicians make laws.
Once you're a more senior engineer there's a lot more than just writing code. Designing a system, worrying about maintainability, operational burden, scaling, etc. are where you might spend your time more.
I'd argue that even for "programming" the usefulness is debatable. These models spit out relatively correct code but mainly in the sense that they regurgitate something akin to a SO answer. There are plenty of subtle logical errors though and that debugging exercise often takes longer than just writing the code. Lots of code references other libraries, etc. and APIs do change frequently. So ensuring what's generated even works as expected is a fair amount of effort.
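To illustrate the kind of subtle error I mean, here's a made-up pagination helper (names and signature invented for the example) that reads like a plausible SO answer but quietly returns the wrong slice:

```typescript
// Looks reasonable, but the caller thinks of pages as 1-based while
// the arithmetic treats them as 0-based, so "page 1" is actually page 2.
function getPage<T>(items: T[], page: number, pageSize: number): T[] {
  const start = page * pageSize; // should be (page - 1) * pageSize for 1-based pages
  return items.slice(start, start + pageSize);
}

const items = ["a", "b", "c", "d"];
// Caller expects the first page here, but gets ["c", "d"].
console.log(getPage(items, 1, 2));
```

Nothing crashes, the types check, and a quick read-through looks fine; you only catch it by actually stepping through the indices.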
Still, the chasm between "engineering" work and "programming" work is only going to get bigger as a result of tooling like this. I expect a lot of what's currently outsourced to overseas IT consultancies can be replaced with half the staff leveraging these tools. The bottleneck has always been producing the exact requirements, tightly scoped tasks, etc. though. We're no closer here.
AI-assisted coding, or anything else, will not replace the professionals who have been doing it manually so far. It will only make them more productive. They can do more work in less time and charge more for the enhanced productivity.
These AIs seem so confident when they output BS that it makes you doubt yourself. Now imagine if some code looks coherent, but you find that each line does something slightly different from what the variable names and other method calls suggest. Now you can't trust the names to build a mental image of the code; you have to follow each method call to find out exactly what it does. It would be worse than looking at obfuscated names, because you may think you know what is going on.
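A contrived example of what I mean (the function and numbers are made up): the name promises a check in days, the body computes hours, and every caller who trusts the name is silently wrong.

```typescript
// The name says days, the arithmetic says hours.
function isOlderThanDays(createdAt: Date, days: number): boolean {
  const ageMs = Date.now() - createdAt.getTime();
  // Despite the name, this converts to hours, not days.
  return ageMs / (1000 * 60 * 60) > days;
}

const created = new Date("2024-01-01");
// Reads as "older than 7 days", actually means "older than 7 hours".
console.log(isOlderThanDays(created, 7));
```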
My knowledge of exact functions is poor. I might know that I can use Framer library to animate on-page elements, but I have little to no understanding of the exact function needed to animate an object from, say, left-to-right on hover.
My normal workflow was to either read the documentation or search StackOverflow for answers. I would then have to rework the function to fit my current use case.
Now, I've been asking chatGPT directly to build the exact function for me.
So far, it's been a massive timesaver. I'll probably learn more if I dig through the documentation, but since I'm a hobbyist, not a professional, it's much more convenient for me to just get the information I need, without digging through Stackoverflow or documentation.
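To give a sense of it, for the left-to-right-on-hover case above, the kind of snippet it hands back looks roughly like this (I'm assuming the framer-motion package and a React component here; treat it as a sketch, not its literal output):

```tsx
// Rough sketch of a hover animation with framer-motion: the box
// slides to the right while hovered and springs back afterwards.
import { motion } from "framer-motion";

export function SlidingBox() {
  return (
    <motion.div
      whileHover={{ x: 100 }} // move 100px to the right on hover
      transition={{ type: "spring", stiffness: 300, damping: 20 }}
      style={{ width: 80, height: 80, background: "tomato" }}
    />
  );
}
```

Getting from something like that to my actual use case is a much shorter hop than reworking a half-matching Stack Overflow answer.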
Obviously this doesn't matter if we think chatGPT is so good that you'll never need to read documentation yourself, but I think this is one of those situations where you need to be an expert before you're allowed to break the rules. Without experience, you won't know if chatGPT is really giving you everything you'd get from reading the docs yourself, or only a small and potentially inaccurate slice.
As a software dev of 10 years, I've done the "googling and reading documentation" a fair bit, which is kind of like stumbling around in the dark and feeling around to get a sense of where things are. For some well-defined, well-documented things, using ChatGPT to do the same is like having an overconfident junior-to-intermediate dev to pair with who's familiar with a stack that I'm not. I still have to guide it a fair bit, and adjust my expectations to account for that overconfidence. But it can absolutely guide me as well, and teach me new things.
Also, I'm senior and sometimes don't get to program for long periods of time. What I find is that when I don't program, I get worse at solving higher-level problems. The important part of programming is not knowing APIs etc. It is modeling a problem and its solution in a domain that forces you to be precise. For that reason I would say to junior developers: keep programming. It will make you a better problem solver, and it will make you better at the things that ChatGPT can't do.
My point is that it's making newbies like me way more productive than we have any right to be.
What really annoys me is that it will probably further train itself on this text I'm writing now. I am writing it in the spirit of exchange with other similar people. Not in the spirit of some mechanical turk worker for OpenAI.
Scale matters, and robot and human inspiration are not ethically equivalent even if you think they are mechanically equivalent.