Posted by u/rich_sasha 3 years ago
Discuss HN: Software Careers Post ChatGPT+
We've all seen it - ChatGPT genuinely solving coding puzzles. Clearly, that's a long way from building MVP products, designing new programming languages or writing "Hello World" in Haskell. But it's also come a long way since even GPT-3, never mind the status quo 10 years ago. It would be cool to discuss what a future looks like where "human operators" of programming are competing against a machine. I don't think that future is imminent, but equally I think it's less distant than I did a week ago.

Some threads that come to mind:

- Are these language models better than current offshore outsourced coders? These can code too, sort of, and yet they don't threaten the software industry (much).

- What would SEs do if any layperson can say "hey AI slave, write me a program that..."? What would we, literally, do? Are there other, undersaturated professions we'd go into, where analytical thinking is required? Could we, ironically, wake up in a future where thinking skills are taken over by machine, and it's other skills - visual, physical labour, fine motor skills - that remain unautomated?

- Are we even the first ones in the firing line? Clearly, for now AI progress is mostly in text-based professions; we haven't seen a GPT equivalent for video comprehension, for example. Are lawyers at risk? Writers?

- What can SEs do, realistically, to protect themselves? Putting the genie back in the bottle is not, as discussed many times in other threads, an option.

- Or is it all bogus (with justification), and we're fine?

No doubt ChatGPT will chip in...

hansonkd · 3 years ago
I've been using ChatGPT all weekend to generate code and what I found was this:

  * It's absurdly good at coding and following types. For example, if you change a type in Rust to be an Option, it will refactor the code to properly handle the Option in the places it's used. But it isn't perfect.
  * It gets most of the way there. It can generate test cases, so it's easy to check whether the code works.
But in the end, after hours and hours of trying to coax the AI, it was unable to do what I wanted: build a B-tree in Python. It built a binary tree just fine, but getting it to generalize to a B-tree was a problem.

  * It introduced many, many subtle errors, like variables not being initialized or children not being split correctly.
  * Its implementation worked when all keys were inserted in order, but not when they were out of order.
  * It would leave out variables.
  * It would frequently have index errors from trying to access lists out of bounds.
  * Writing the code in Rust was almost impossible; it would constantly have wrong types or move errors.
Overall I couldn't recommend this to anyone without a strong CS background. It introduces far too many subtle bugs, and they are almost impossible to review because the code it produces is so convincing that you go "hmm, maybe it knows what it is talking about," but in the end you have no idea what you should trust.

Even the test cases it generates can be deceptive. They look convincing, but upon closer inspection they sometimes aren't really testing anything.
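
For example, a check like the following would have caught the in-order-only failure. This is just a minimal Python sketch assuming a hypothetical BTree class with insert() and items() methods, not something the model produced:

    import random

    def check_tree(tree_cls, n=1000):
        # Insert keys in random order and verify an in-order traversal comes
        # back sorted. Tests that only insert ascending keys are exactly the
        # kind that hide a bad node split.
        keys = list(range(n))
        random.shuffle(keys)   # out-of-order insertion is what exposed the bug
        tree = tree_cls()      # hypothetical BTree class
        for k in keys:
            tree.insert(k)     # hypothetical insert() API
        assert list(tree.items()) == sorted(keys), "traversal lost or reordered keys"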

SxC97 · 3 years ago
I also tried to get it to generate some simple code examples.

One was to generate a webpage with a button to add textboxes. Each textbox should have a unique remove button. When the site gets down to only one textbox on the page, it should not allow the user to remove the last textbox.

After several iterations, it wasn't able to do it, often with hilarious results! (I asked it to only remove the _selected_ textbox, but if you clicked remove, it would just delete all the elements from the site)
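
Just to make the spec concrete, here is a minimal sketch of the intended behaviour in Python/tkinter (a desktop stand-in, since the actual request was for a webpage): one button adds textboxes, each row gets its own remove button, and the remove buttons are disabled once only one textbox remains.

    import tkinter as tk

    root = tk.Tk()
    rows = []

    def update_remove_buttons():
        # Disable every remove button while only one textbox is left.
        state = tk.NORMAL if len(rows) > 1 else tk.DISABLED
        for _, _, btn in rows:
            btn.config(state=state)

    def remove_row(row):
        # Remove only the selected row, never the others.
        rows.remove(row)
        row[0].destroy()
        update_remove_buttons()

    def add_row():
        frame = tk.Frame(root)
        entry = tk.Entry(frame)
        entry.pack(side=tk.LEFT)
        row = [frame, entry, None]
        btn = tk.Button(frame, text="Remove", command=lambda: remove_row(row))
        btn.pack(side=tk.LEFT)
        row[2] = btn
        frame.pack()
        rows.append(row)
        update_remove_buttons()

    tk.Button(root, text="Add textbox", command=add_row).pack()
    add_row()  # start with one textbox that cannot be removed
    root.mainloop()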

I think the real value for me would be using this to generate a starting point.

I have a text document on my computer that has a long list of small coding projects that I want to get to at some point. But the activation energy required to stop watching YouTube and start coding is high enough that several projects have languished on that list for years.

If I could just feed my ideas into chatGPT and get a starting point, it would be much easier to keep going and fix small errors/add additional features than to start a project from scratch.

lmarcos · 3 years ago
Seems like it has the same coding abilities I had when I started to write code many years ago. Wonder how fast it can become better... I bet: exponentially faster than me.
qualudeheart · 3 years ago
> Overall I couldn't recommend this to anyone without a strong CS background. It introduces far too many subtle bugs, and they are almost impossible to review because the code it produces is so convincing that you go "hmm, maybe it knows what it is talking about," but in the end you have no idea what you should trust.

As someone using ChatGPT all week (30ish hours so far) and Copilot since I got access early on, this fits my experience very well. It's like a junior programmer. You need the know-how to know if your junior programmer is doing it right.

rakejake · 3 years ago
Can you ask chatGPT why it wrote a certain line or why it made a certain decision? I still think explainability is missing in these models, but even if it is able to come up with something, what is the guarantee that the explanation is not bullshit?
joshuahedlund · 3 years ago
> Or is it all bogus (with justification), and we're fine?

This is mostly my take. We are at the stage of truck drivers ten years ago who might have been freaking out about self-driving taking their jobs, and here we are in 2022 with a truck driver shortage.

I don't think GPT can be useful to the point of replacing software engineering without a consistent mental model of itself, time, and the world, and I predict it will approach the limits of what advanced-search pattern-matching can do without getting anywhere near those AGI capabilities.

steve_adams_86 · 3 years ago
I think these are much different. Trucking depends on humans due to infrastructure and complex problems of law, vision, last mile details, etc. I agree about near term limitations of this kind of model, but I think highly contextual and refined models for software will prove to be powerful enough (edit: by powerful enough, I mean powerful enough to transform how people write software) in many cases.

Software reiterates a lot, and generated solutions can be quickly vetted. A truck driving to the wrong destination or locking up en route is a much larger issue than a few seconds spent determining that a generated solution has a bug or doesn’t match a spec exactly.

If AI can stand on the shoulders of giants and people can vet its outputs, I’m fairly sure it will become more capable and safer to implement than self driving trucks, much faster.

The question I have is: if we can build more, faster, will we run out of work, or will more people simply make more things?

There are huge incentives for wealthy companies to run more and more code on their infrastructure. Can we do more business digitally? Will it scale to provide more programming work, even if it’s heavily AI-assisted?

joshuahedlund · 3 years ago
> Software reiterates a lot,

Well sure. That's why we have Wordpress. Javascript frameworks. And ten thousand other things. All the plug-ins of the last ten years have made me a more productive developer. But it hasn't reduced the demand for developers.

gardenhedge · 3 years ago
I mentioned it in a previous comment. It's common for senior engineers not to write that much code. They spend their time on meetings, planning, creating architectures, presenting solutions, discussing solutions, triaging, keeping up-to-date with tech, clarifying business costs, working on waste avoidance, reviewing code, streamlining processes, vetting new tech/solutions and, in general, understanding everything that is going on.

ChatGPT+ will definitely have some effect on junior devs, but us more experienced folk should be fine... for now...

lordswork · 3 years ago
How do junior devs ever become senior devs if an AI can replace the work of all junior devs?
steve_adams_86 · 3 years ago
This is a great question.

One thing about GPT is that it only knows what we know at the moment. That indicates to me that it won’t be great for learning new technologies until humans generate content it can regurgitate. That alone might give juniors an edge against it (assuming they are gradually replaced by a robot pooping out dumb logic) - they might be able to specialize in learning what models don’t yet know, or what they can’t be good at.

Just guessing here. I’d love to hear a rebuttal to get a sense of where people think things are going.

Though I don’t think GPT is “there” yet, I can see it getting there by 2030. I think it’s seriously worth considering: how will people learn to program in 10 years, how will they remain relevant through periods of their career where an AI can generate better solutions than they can, and how will more experienced engineers adapt to those changes?

Gigachad · 3 years ago
They will probably manage. People manage to become programmers without having to learn how basic electronics, CPUs, operating systems, etc. work. You just skip over those solved problems.
rvz · 3 years ago
> ChatGPT+ will definitely have some effect on junior devs, but us more experienced folk should be fine... for now...

It affects both. If a single team were split into 5 juniors and 5 seniors, ChatGPT significantly reduces that headcount from 5 juniors to 0, and 5 seniors to 2 or 3.

With many companies cutting costs and the cheap money drying up, no one is safe. HN may not like it, but the same thing that happened to digital artists with Stable Diffusion, which was welcomed on this site, has now happened to programmers, and I see lots of worrying and frowns everywhere.

It appears that StackOverflow (which lots of juniors and senior developers use) has just become irrelevant.

Xelynega · 3 years ago
> ChatGPT significantly reduces that headcount from 5 juniors to 0, and 5 seniors to 2 or 3.

Citation needed. I haven't heard of any massive disruption in the commission art market since Stable Diffusion went public, and I don't think something less impactful (a different way to search old Stack Overflow posts) is going to cause a massive disruption either.

Stack Overflow still beats ChatGPT in one area where it can never compete: coming up with new solutions to new questions. If all we needed answered were the same questions, ChatGPT would be sufficient, since it's essentially a compressed version of our current knowledge. We don't really have a way to update it with "new knowledge" other than to train it again.

seydor · 3 years ago
I think the opposite: LLMs will be used to build the optimal high-level scaffolding and implementation, but low-level devs will be needed to check and verify the code. As we've seen so far, AI automates the brainy part, but not the long tail or the parts that need physical access (e.g. safety drivers, warehouse workers).
rakejake · 3 years ago
I'd argue that reading and verifying code correctness is the brainy job.
jhoelzel · 3 years ago
I strongly disagree.

The point is exactly that most of those meetings are happening everywhere for the same reasons, and thus GPT25 might already know all the answers that you need.

Also, given enough general framework skills, I'm pretty sure the AI will be able to build stuff like a good junior dev.

Xelynega · 3 years ago
The algorithm doesn't have any "general framework skills" though because it's an algorithm, not a person.

It can generate something that looks like what a person would have written, based on its compressed probabilities, but that's very different from being an "artificial intelligence". At best it's a Chinese Room.

gardenhedge · 3 years ago
In that case GPT25 can do all the tech work and all the business decisions and all the marketing work. It will just do everything.
rich_sasha · 3 years ago
Yeah, agreed. I wonder though if it will start nibbling away at the bottom of the pyramid.

First you'll cut the bottom 10%, then the bottom 20% etc. The pie will only be shrinking.

nicholasjarnold · 3 years ago
> Are lawyers at risk?

There are a lot of sub-areas of expertise and practice that someone with a JD might choose to specialize in. I have some small personal experience in (technically) advising a failed/defunct startup that sought to solve the problem of patent search using an AI. This was years ago, now...maybe around 2018ish. The endeavor failed for various reasons, but it did provide some insight that's relevant to your question here.

As these language models become more advanced (and much more accurate) I think there will be a number of ways in which they will disrupt existing domains of human expertise. Note that I used the word disrupt and not displace. In the patent search space that I was lightly involved in for a short time, I basically learned how expensive and time-consuming a good patent search actually is. The plan was to leverage the models to drastically reduce the time and cost of typical prior-art searches, but they would still require human touch-points to interpret results and make final decisions/reports. I think this sort of use-case is much more in line with what will ultimately happen in any sort of foreseeable future. The AI will supplement and ease the previously-human-only task. It will not supplant/replace it.

seydor · 3 years ago
Lawyers can always make a law to pay themselves; programmers can't.
adamckay · 3 years ago
Lawyers don't make laws, they argue about them.

Politicians make laws.

alfalfasprout · 3 years ago
Here's the problem-- this will automate the type of "programming" that's just looking something up on stackoverflow and more or less copy/pasting the answer. There's a lot of that out there.

Once you're a more senior engineer there's a lot more than just writing code. Designing a system, worrying about maintainability, operational burden, scaling, etc. are where you might spend your time more.

I'd argue that even for "programming" the usefulness is debatable. These models spit out relatively correct code but mainly in the sense that they regurgitate something akin to a SO answer. There are plenty of subtle logical errors though and that debugging exercise often takes longer than just writing the code. Lots of code references other libraries, etc. and APIs do change frequently. So ensuring what's generated even works as expected is a fair amount of effort.

Still, the chasm between "engineering" work and "programming" work is only going to get bigger as a result of tooling like this. I expect a lot of what's currently outsourced to overseas IT consultancies can be replaced with half the staff leveraging these tools. The bottleneck has always been producing the exact requirements, tightly scoped tasks, etc. though. We're no closer here.

yashg · 3 years ago
Let's say you have an idea for a mobile app, but you have never coded anything and you don't want to learn Android/iOS programming to make this app. You want to convert your idea into an actual app that is deployed on the app store, where people can download it to their phones. What would you do? Normally you'd hire a programmer and get them to build, test and deploy this app. Now that you have ChatGPT, are you going to use it to build and deploy this app yourself? No. You will still hire a programmer, who will probably use ChatGPT to write some or most of the code instead of writing it all manually. This will save them a lot of time, and since they can complete the project faster, they will charge you less; it may still be more on a per-hour basis, but it's a win-win for both you and the programmer.

AI-assisted coding, or anything else, will not replace the professionals who have been doing it manually so far. It will only make them more productive. They can do more work in less time and charge more for the enhanced productivity.

cranium · 3 years ago
I really fear for the day I need to debug some AI-generated legacy code. It's not really the algorithmic part that scares me, but the naming and code architecture.

These AI seem so confident when they output BS that it makes you doubt yourself. Now imagine some code that looks coherent, but where each line does something slightly different from what the variable names and other method calls suggest. Now you can't trust the names to build a mental image of the code; you have to follow each method call to find out exactly what it does. It would be worse than looking at obfuscated names, because you may think you know what is going on.
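
A contrived Python sketch of that failure mode (hypothetical, not generated code): the name and the body quietly disagree, so anyone who trusts the name builds the wrong mental model.

    def remove_duplicates(items):
        # The name promises "same list, duplicates removed", but the body also
        # sorts the result and silently drops falsy values, so callers relying
        # on order or on keeping 0 / "" get subtly wrong results.
        return [x for x in sorted(set(items)) if x]

    print(remove_duplicates([3, 1, 0, 3, 2]))  # [1, 2, 3]: the 0 is gone and the order changed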

djmips · 3 years ago
Some of the legacy code I have to debug makes me wonder if someone already had a GPT 5 years ago... Seriously - it's alien code - at the very least this person doesn't think like me at all.
zTehRyaN · 3 years ago
That is a really useful insight! I share your fear about human-conducted debugging of AI-generated code.
spaceman_2020 · 3 years ago
A perspective from a non-professional who has been teaching himself to code:

My knowledge of exact functions is poor. I might know that I can use the Framer library to animate on-page elements, but I have little to no understanding of the exact function needed to animate an object from, say, left to right on hover.

My normal workflow was to either read the documentation or search StackOverflow for answers. I would then have to rework the function to fit my current use case.

Now, I've been asking chatGPT directly to build the exact function for me.

So far, it's been a massive timesaver. I'll probably learn more if I dig through the documentation, but since I'm a hobbyist, not a professional, it's much more convenient for me to just get the information I need, without digging through Stackoverflow or documentation.

burkaman · 3 years ago
FYI this is probably not a good habit if you're trying to teach yourself, rather than just trying to get some task done. Reading documentation and searching StackOverflow are genuinely useful skills that take practice to get good at. Asking chatGPT is equivalent to just asking a friend for the answer, which is fine if you want to be efficient but not ideal for learning.

Obviously this doesn't matter if we think chatGPT is so good that you'll never need to read documentation yourself, but I think this is one of those situations where you need to be an expert before you're allowed to break the rules. Without experience, you won't know if chatGPT is really giving you everything you'd get from reading the docs yourself, or only a small and potentially inaccurate slice.

pcthrowaway · 3 years ago
ChatGPT generally goes into a lot of details about its decisions, and provides detailed explanations. You still have to fact-check it, or verify by running the code, because it will make mistakes, but if that happens you can say "Hey, this isn't quite right because ..., how do I actually do this" and it will usually figure it out.

As a software dev of 10 years, I've done the "googling and reading documentation" a fair bit, which is kind of like stumbling around in the dark and feeling around to get a sense of where things are. For some well-defined, well-documented things, using ChatGPT to do the same is like having an overconfident junior-to-intermediate dev to pair with who's familiar with a stack that I'm not. I still have to guide it a fair bit, and adjust my expectations to account for that overconfidence. But it can absolutely guide me as well, and teach me new things.

discreteevent · 3 years ago
I think that sometimes just copying and pasting from stack overflow is not much better than using chatGPT. But I agree with you about reading documentation. When you read the docs you build up a model of the system in your head. You can then play with this model in your head and come up with good solutions. This seems to be exactly what chatGPT can't do.

Also, I'm senior and sometimes don't get to program for long periods of time. What I find is that when I don't program, I get worse at solving higher-level problems. The important part of programming is not knowing APIs etc. It is modeling a problem and its solution in a domain that forces you to be precise. For that reason I would say to junior developers: keep programming. It will make you a better problem solver and it will make you better at the things that chatGPT can't do.

spaceman_2020 · 3 years ago
I understand that and I'm fine with it, especially since I'm using it for a hobby project, and mostly looking up non-core libraries that I'll likely not use often again (such as framer motion).

My point is that it's making newbies like me way more productive than we have any right to be.

discreteevent · 3 years ago
It will be interesting if it replaces Stack Overflow, considering that it was probably trained on a lot of the questions and answers. On the one hand it's not much different than training on GitHub, or how Google put translators out of business by using their translations. But it is just a more direct connection that demonstrates how these guys are funneling the wealth generated by other people's work up to themselves. Before Stack Overflow, the state of questions and answers on the web was really bad and full of noise. They took a risk and put a lot of effort and engineering knowledge into building it.

What really annoys me is that it will probably further train itself on this text I'm writing now. I am writing it in the spirit of exchange with other similar people. Not in the spirit of some mechanical turk worker for OpenAI.

burkaman · 3 years ago
I agree and I think this is similar to some people's very legitimate objections to Stable Diffusion and DALL-E. When people put artwork up on the internet they were expecting a handful of human beings to draw some enjoyment and maybe inspiration from it. They were not expecting billions of identical robots to ingest it in a nanosecond and remember and build off of it for eternity.

Scale matters, and robot and human inspiration are not ethically equivalent even if you think they are mechanically equivalent.