Long story...
I have been using ChatGPT for a while, and moved to the Plus subscription for their GPT-4 model, which, I must say, is quite good.
1. ChatGPT makes us very productive. Personally, in my early 40s, I feel my brain is back in its 20s.
2. I no longer feel the need to hire juniors. This is a short-term positive and maybe a long-term negative. [[EDIT: I may have implied the wrong meaning. To clarify - nobody is going yet because of ChatGPT. It is just raising the bar higher and higher. What took me years to learn, this thing can already do, and much more. And I cannot predict the financial future of OpenAI or the markets in general.]]
A lot of the stuff I used to delegate to fellow humans is now being delegated to ChatGPT. And I can get the results immediately, at any time I want. I agree that it cannot operate on its own; I still need to review and correct things. I have to do that even when working with other humans. The only difference is that I can start trusting a human to improve, but I cannot expect ChatGPT to do so. Not that it is incapable, but because it is restricted by OpenAI.
And I have gotten better at using it. Calling myself a prompt-engineer sounds weird.
With all the good, I am now experiencing the cons - stress and burnout:
1. Humans work 9-5 (or some schedule), but ChatGPT is always available and works instantly. Now, when I have some idea I want to try out, I start working on it immediately with the help of AI. Earlier, I would just put a note in the todo list and stash it for the next day.
2. ChatGPT's outputs come so fast that my "review load" is too high. At times it feels like we are working for ChatGPT and not the other way around.
3. ChatGPT has the habit of throwing new knowledge back at you. Google does that too, but this feels like 10x Google. Sometimes it is overwhelming. The good thing is we learn a lot; the bad thing is that it often slows down our decision making.
4. I tried putting myself on a schedule for using it - but when everybody has access to this tech, I have a genuine fear of missing out.
5. I have zero doubt that AI is setting the bar high, and it is going to take away a ton of average-joe desk jobs. GPT-4 itself is quite capable, and organisations have yet to embrace it.
And not least, it makes me worry about what lies ahead with future models. I am not a layman when it comes to AI/ML - I worked in the field until the past few years, in the pre-GPT era.
Has anybody experienced these issues? And how do you deal with them?
* I could not resist asking ChatGPT about the above - a couple of strategies it suggested were to "Seek Support from Others" and "Participating in discussions or groups focused on ethical AI". *
> A lot of the stuff I used to delegate to fellow humans is now being delegated to ChatGPT. And I can get the results immediately, at any time I want. I agree that it cannot operate on its own; I still need to review and correct things. I have to do that even when working with other humans. The only difference is that I can start trusting a human to improve, but I cannot expect ChatGPT to do so. Not that it is incapable, but because it is restricted by OpenAI.
I think this point bears repeating.
The threat of these models isn't that they'll go all Skynet and kill everyone, it's that they'll cause a lot of economic devastation to people who make a living through labor requiring skill and knowledge, especially future generations of skilled labor. Then there will be a decision point: either the senior-level people who thought they were safe get replaced by a more-advanced model, or they don't and there's a future society-level shortage because the pipeline to produce more senior-level people has been shut down (like the OP is doing).
The only people who will come out (relatively) unscathed are the ownership class, like always.
Of course, this is inevitable because it's impossible to question or change our society's ideological assumptions. They must be played out until they utterly destroy society.
Or: for every junior that isn't hired by a business that can't expand its portfolio to exploit greater productivity, or can't figure out how to effectively use LLMs across the experience spectrum, two will be hired in shops that can do those things. And, as with previous software-dev productivity increases, greater productivity in the field will mean a broader range of viable applications and more total jobs across all experience levels.
And everybody also gets a pony! Win-win-win situation!
Previous "software dev productivity increases" happened as computing saturation itself increased from a hanful of mainframes to one in every office, then at every desk, then a few in every home, and later one in every hand. Now it's at 100% or close.
It also still required computer operators. LLMs are not mere increased productivity for a human computer operator, but the automation of that productivity, so that it can happen without an operator (or with far fewer).
Moreover, all this "increased productivity" still left wages stagnant for 40 years (with basic costs like housing, education, and healthcare skyrocketing). It's not like more of it, in the same old corporatist context, bodes better for the future...
Leading up to 2008, you'd think the market would optimize for lenders that checked who they were giving loans to. But that's not what happened. The idiots kept giving out shit loans until the entire market burned down, taking out good and bad lenders alike in the aftermath.
As an American parent of young children, I keep being told that college is a scam and I should steer my kids toward the trades. 90+% of the time, I am being told this by a white-collar worker who went to college themselves, and is just bloviating.
When we reach a real crisis point, severe enough to actually consider granting skilled tradespeople access to a fraction of the privilege enjoyed by white-collar workers, then I might consider nudging my kids toward electrician or plumbing work. But under the current social caste system, of course I am going to do everything possible to give my kids access to college and steer them that way.
I believe that virtually everyone, white-collar and blue-collar alike, quietly feels likewise. We make a pretense of giving contrary advice, but mostly just in hopes that other people will move in that direction for us. To take the bullet and help with this imbalance, and also to relieve the intense competition our own kids face.
He does not have paid vacation, good sick leave policies, or good health insurance through his employer. He has witnessed a bunch of on-the-job injuries and one near-fatality, largely caused by his employer pushing hard for the team to complete jobs as fast as possible. He is paid alright, but less than the norm for the people I know with college degrees even after we exclude everybody in software. His job is also physically demanding and may cause problems later in life.
Not exactly a "hey, pick this job and you'll have a great career" story.
I'm surprised someone hasn't replaced politicians with an LLM. Imagine not having to pay their salaries when ChatGPT can send "thoughts and prayers" to Maui over Twitter 24/7.
In my opinion this is the most optimistic of the realistic possible outcomes. In the past, when automation put a factory worker out of a job, they were just told to go back to school or "learn to code", which isn't actually a solution for most people. These LLMs disproportionately impact people further up the socioeconomic ladder than prior waves of automation did. Maybe our uneven society means that this wave, disrupting a more powerful group of people, will be more likely to cause an actual change in how we organize society.
Their whole job is to 'represent' their constituents. An LLM can poll the sentiments of the people far more effectively than they can. I'm sure it could be programmed to accept bribes too, to weigh rich people's opinions higher. I'd love to see votes done by 100 different LLMs instead of Senators (a hyperbolic, non-literal statement, but interesting as a thought experiment, I hope).
Politicians should still propose new and altered legislation, but the actual voting, and being informed enough to vote, could be massively improved.
Head out of the developed world and you can see this type of society everywhere.
This meme of AI -> upheaval -> basic income utopia has got to die. It’s wishful thinking. It’s “clean coal” for programmers.
Yes. This is pretty much my only concern about these models, and I'm powerfully concerned about this. It's hard to see how this will lead to a good place. It seems more likely that this will lead to increased poverty and multiple socioeconomic crises.
I am even more concerned that very few people are talking about this, and none of the power players in this space are, except for occasional mentions in passing of fantasies like UBI.
People have been talking about the threat of automation since the very beginning of the industrial revolution. It just never plays out nearly that badly, and short-term disruptions are always outweighed by long-term efficiency gains within ~5 years or so; even those who experience the worst career disruption tend to end up better off within that time frame.
I certainly would not like for my career to be disrupted for ~5 years, but the alternative would be worse.
If everything you do for money goes in and out over a wire, be very afraid.
And a part of their role will morph into prompting GPT4 (much like this senior engineer has started doing).
If GPTx ends up in the narrow area where it's universally smarter than junior engs but definitely not capable of being a senior eng, then junior engs will just shift to the little remaining work for senior engs and shadow them for months to years, like an apprenticeship.
Of course, in that case the total number of engineers needed will also decrease (already only a small percentage ever get good enough to be considered truly senior), so there will be a selection bias toward more intelligent engineers who are a step above GPTx. If none are left, then the profession will be gone and there will be no problem.
That's bunk. The OP literally "no longer feel[s] the need to hire juniors" because he can ChatGPT that work. How are they going to learn a job much faster when they won't be given the opportunity to have it?
> If GPTx ends up in the narrow area where it's universally smarter than junior engs but definitely not capable of being a senior eng, then junior engs will just shift to the little remaining work for senior engs and shadow them for months to years, like an apprenticeship.
That doesn't make much sense. That kind of apprenticeship would be pure charity, so it's not going to happen. No one is going to learn to be a senior engineer in "months," and no one (except someone's rich parents) is going to pay for someone to sit around unproductively in an office for years while they learn. Even interns are required to produce output that adds value. They do that by successfully completing junior-level tasks that need to be done well.
I hear that from a friend in the legal business. Less need for paralegals. Unclear yet if the need for new lawyers will be reduced.
Why pay a law school graduate as a paralegal when you can hire an associate-degree grad with ChatGPT to do the same work?
Occasionally I would see clips from or read reactions to Idiocracy and be left scratching my head, because somehow, somewhere, there have to be people who are thinking. The whole conceit of the film is that there are no smart, curious people because those traits are being bred out of the population. That never made sense to me, because you still have to have some smart, curious, creative people somewhere to keep things moving. Our society is quite dependent on the people who silently keep things running in the background.
I can, however, envision a world where early curiosity is discouraged and supplanted by a technology that can fill the holes left by entry-level smart people. When everyone is discouraged from starting, and the existing participants age out, then maybe you get a world where there are no new smart, curious people.
Regarding Idiocracy, one of the background conceits of the film is that those kinds of people set up automation to keep things going before they died out (for the reasons clearly explained at the start of the movie). If you pay attention, everything in that world is automated: a diagnostic machine with a Playskool interface (https://www.youtube.com/watch?v=hmUVo0xVAqE) is what's actually doing the doctor's job, a major company is run by a computer the CEO doesn't understand (https://www.youtube.com/watch?v=jBFREFtFEgs), etc.
I am already seeing this: companies are desperate for senior developers, but at the same time they don't want to hire juniors.
If a task can be completed satisfactorily by an automated computer program, was the task really "skilled labor"?
I ask this sincerely, because some of the occupations being replaced/evicted (e.g., copywriting) were clearly given more skill value than they should have been.
What tasks are you delegating to ChatGPT that were previously done by humans? Most of my input from others is regarding current information specific to the task at hand. I don't see how ChatGPT would have any idea what I'm talking about.
Do you have some specific examples you could share?
A few more:
- "Write a Python script with no extra dependencies which can take a list of URLs and use a HEAD request to find the size of each one and then add those all up" https://simonwillison.net/2023/Aug/3/weird-world-of-llms/#us...
- "Show me code examples of different web frameworks in Python and JavaScript and Go illustrating how HTTP routing works - in particular the problem of mapping an incoming HTTP request to some code based on both the URL path and the HTTP verb" https://til.simonwillison.net/gpt3/gpt4-api-design
- "JavaScript to prepend a <input type="checkbox"> to the first table cell in each row of a table" https://til.simonwillison.net/datasette/row-selection-protot...
- "Write applescript to loop through all of my Apple Notes and output their contents" https://til.simonwillison.net/gpt3/chatgpt-applescript
Programmers are paid not to bang out code, but rather to figure out the mess and crap of an existing codebase and how to selectively add one or two lines that change the system's behavior while keeping it stable.
Fair enough, but I also don't really feel this is threatening anybody's job.
* I'm always using it to munge/generate tables/csv/markdown/json - you can throw in basically any copy-and-paste from a random PDF that's some weird gobbledygook of tabs, spaces, and newlines and get something cleanly formatted (a sketch of the kind of script involved follows this list). On the one hand, it seems like a waste of computation; on the other hand, it's way cheaper than my time, and there are so many tasks that require using poorly formatted output. Even better, CI will of course write awk/sed for you if you need to do any automation.
* I'm always forgetting the syntax for named byobu sessions (it happily wrote a script to help with that), but I've also been staging some dev servers, and it was able to generate the scripts to create new named sessions and windows, attaching/creating when necessary, handling whether the processes were running, and creating the systemd units for spinning these up.
* On this same project it wrote some python scripts for managing SSH tunnels and reverse tunnels, including filtering/logging of error messages, handling jump servers, etc. This is all stuff I've done years ago (and even written lots of docs for), but it was actually way faster for ChatGPT to generate these than digging those out.
* I've been running into issues w/ some HTML5 audio output and needed to swap to websocket streaming w/ webmedia output (which I wasn't familiar with at all). ChatGPT gave me the code to swap into my FastAPI server and into the frontend code I had, without my having to do any further research. Great.
* I hate Docker setups, and I had issues w/ Nvidia containers and GPUs not showing up w/ my docker config. I was able to pass it the various error messages and get my problems fixed without spelunking/hair-pulling. Same with figuring out some cross-container network hijinx.
* There's a bunch of one-offs that I might just not have bothered doing that I can now ask it to do as well - eg, I've previously written code for poisson distributions and the like, so I knew what to ask for, but it would have been a huge PITA to dig out exactly how to do it; it took like no effort to just ask GPT4 CI to figure out a one-off I just wouldn't have done otherwise: https://chat.openai.com/share/80fa7bc0-e099-4577-bad9-d026e7...
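For the first bullet, a minimal sketch of the kind of normalizer I mean (the input format and delimiters are assumptions - real PDF paste jobs vary, which is exactly why it's handy to have CI write a fresh one each time):

```python
# Collapse the mixed tabs/spaces/newlines of a PDF copy-paste into clean CSV.
# Assumes one record per line, columns split on tabs or runs of 2+ spaces.
import csv
import re
import sys

def normalize(raw: str, out=sys.stdout) -> None:
    writer = csv.writer(out)
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blanks left over from page breaks
        writer.writerow(re.split(r"\t\s*|\s{2,}", line))

normalize("Widget A\t12.50   in stock\nWidget B\t 3.99   backordered")
```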
I really wanted to know what vague things OP's humans were doing, but they haven't responded to anything.
I've been reading your blog since the dark ages. Thanks for the great content over the years!
If it was actually a life altering tool (and it might be one day) there wouldn’t need to be an entire industry of people trying to convince everyone that with just one small trick Google doesn’t want you to know, you can quadruple your productivity.
At the very least, it's a much more powerful Google (don't nitpick my comparison, I realize it hallucinates). Getting an answer in the EXACT context of your question is something generalized search/articles online will NEVER give you, and you can read hundreds of pages of docs all day. That is good for certain things, but not when you want to know just a single setting or an atomic piece of information. I want the smallest amount of accurate information, specific to my problem, as I'm programming many hours per day on my own companies as a one-man show.
My ChatGPT search history includes a few examples:
- specific ways SOLID principles could be applied to Go, which is a non-OOP language
- helping me quickly learn the nuances of Lua for configuring neovim, specifically weird syntax, things annoying to google (i.e. what does # mean), or what a specific error means within the context of the configuration
- more efficient top-k algorithms than the one I was building for learning purposes (see the sketch after this list)
- asking it to break down the big-O complexity of certain types of sort functions and whether they differ from n log n
- helping me learn enough Rust to do a bug-fix PR that was annoying me
- x vs s in neovim config for keymap modes
- figuring out why Ruby doesn’t implement descending ranges
Etc etc etc
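On the top-k one, the gist of the answer - reconstructed from memory, not a transcript - was to use a bounded heap instead of sorting everything:

```python
# Heap-based top-k is O(n log k) vs O(n log n) for a full sort.
import heapq
import random

values = [random.randint(0, 10**6) for _ in range(100_000)]

top_sorted = sorted(values, reverse=True)[:10]  # full sort: O(n log n)
top_heap = heapq.nlargest(10, values)           # bounded heap: O(n log k)

assert top_sorted == top_heap
```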
Add to this the limited usefulness for generating code that's contextual - writing some method deep inside a component tree that needs to reference a service class, pick some DOM elements to mutate, etc. That requires knowledge of, and reasoning about, the project and overall code structure.
I don't understand how folks are using it as a productivity booster, unless maybe as something like a better StackOverflow?
Where I have found LLMs useful is in generating text. Where I used to use a thesaurus, I now use an LLM to find words to name things in themed UX. But it's not great at function or variable names; it tends to pick names that look good but don't precisely describe what something is. LLMs are also great at generating text for role play.
Indeed because ChatGPT is excellent at writing text. And because I know exactly what I want to see even if I have a hard time putting it into words myself, I can easily catch the mistakes and hallucinations.
I don't get why there is so much focus on code-generating AIs and so little on code analysis. Have AIs do code reviews, write tests and analyze the results, etc. LLMs are awesome at reviewing code; they are able to tell you what's unexpected, and what is unexpected has a good chance of either being a bug or some key element of the code that needs attention. I think I have seen a single article about that, out of hundreds about code generation.
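As a rough sketch of what that could look like, assuming the OpenAI Python client (pre-1.0 API) and prompt wording of my own - not a product, just the shape of it:

```python
# Sketch: pipe the latest diff to a chat model and ask it to flag
# anything unexpected. Assumes the `openai` package (pre-1.0 API) and
# an OPENAI_API_KEY in the environment; the prompt wording is made up.
import subprocess
import openai

diff = subprocess.run(
    ["git", "diff", "HEAD~1"],
    capture_output=True, text=True, check=True,
).stdout

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a careful code reviewer. List anything "
                    "unexpected: likely bugs, edge cases, odd naming."},
        {"role": "user", "content": diff},
    ],
)
print(response.choices[0].message.content)
```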
Largely I use CGPT for work that's boilerplate/LOC-heavy but architecture-light, things like writing first drafts of React hooks and the like. It's quite good with constraints like "use TypeScript" or "use X function to do Y".
I usually give it about two goes if it goes in the wrong direction on the first try. If it seems to not conceptually understand what I'm asking I generally just write it directly rather than tinkering with prompts for 20 minutes.
I also have a couple of longer system prompts saved for converting Vue components to React using the house style and things like that using the playground.
It does fairly well for architecture, if you don't expect too many specifics. It, at least, works as a reasonable sanity check/brainstorm.
All of these LLMs become less expert the finer the resolution at which you take the context. Keep it high-level, and you still have a relatively expert assistant.
Instead I paste the JavaScript and tell ChatGPT to add type definitions. Mostly it gets it right. If it doesn't, it gets me closer.
I don't use it for JS in general because I'm particular about how I write stuff. Though occasionally I'll lean on Copilot to fill out a utility function.
Even for things that I've done before, it's often much easier to ask ChatGPT how to do something than to look through my projects to find how I did it previously. It might sound lazy, but if it takes me several minutes to search through various projects to find that one time I did something, why bother when I can just ask ChatGPT and know in seconds?
I will say that yes, ChatGPT can hallucinate APIs that don't exist, and that can be annoying, but even if it does it 20% of the time, it's still incredibly valuable in the time savings the other 80% of the time it does hit.
Same for SQL, if you are not familiar with SQL.
It could probably be the same with Splunk SPL, Kibana KQL, Prometheus PromQL, or any other DSL that you are not familiar with.
I want to contribute while being fully aware of what I'm contributing. This doesn't lend itself to that.
This sounds like the root of your problem, and entirely on your ability to enforce boundaries (which you may or may not have set for yourself). No judgment here; I think we all have struggled with this at one time or another. Or, you know, constantly...
> 4. I tried putting myself on a schedule for using it - but when everybody has access to this tech, I have a genuine fear of missing out.
I definitely know that feeling. I think the likely outcome writ large is that this FOMO feeling will eventually subside. The economy for years has needed more developers than were available; ChatGPT and friends will result in individuals being able to do more and soak up demand that way instead of increasing supply. The long-term negative effect of this is more likely to be depressed wages instead of massive unemployment in the tech sector.
> 5. I have zero doubt that AI is setting the bar high, and it is going to take away a ton of average-joe desk jobs. GPT-4 itself is quite capable, and organisations have yet to embrace it.
Another way of looking at it is that it's going to create a number of desk jobs, but those who can't adapt to the tools on the market will suffer, in the same way that people who couldn't adapt to the use of spreadsheets, word processors, etc., certainly had fewer job opportunities than those who did. Some people are going to get left behind, no doubt - this is why I'm in favor of a robust social safety net. But even with questionable public support for those people, I don't think anyone today would suggest we retreat to an economy without such basic tools as spreadsheets and word processors.
I'm wondering why you no longer feel the need to hire juniors because of GPT-4. Is it because GPT-4 has taken up the cognitive-load capacity you need for mentoring juniors, or do you feel like GPT "obsoletes" less experienced people?
I think ChatGPT's advice is on the right track. It sounds to me like your experience of using it is kind of like my experience of pairing with someone else of equal-ish ability: productive, but draining, due to the need to constantly pay attention. If so, why not treat it similarly? Most people don't pair all day every day, probably because of the aforementioned cognitive load of doing so.
Last, but not least, while this may seem obvious, you should remember that you are human and not a machine. You need to separate yourself from this thing for at least some portion of your day. The constant stress (and, yes, that dopamine rush you feel when you use it is a kind of stress -- stress isn't always a purely negative thing) will take its toll on you eventually. That's the "burnout" you're perceiving, and the only way to prevent it is to just not let it happen.
Take care of yourself. Socialize and interact with humans, especially close friends and/or SO's as applicable. If you have a pet, spend some time with them. Take a walk.
But, most of all, remember that GPT-x, as smart as it may appear, can't actually learn anything from experience. It can only learn from an expensive and labor-intensive process, and once its training is done, it's frozen in time forever (modulo some fine-tuning, which is essentially an extension of said labor-intensive training process). And, at the end of the day, that just makes it a very versatile, very expensive, and very useful tool, but a tool nonetheless.
To me it feels exactly like finding Wikipedia in 2005, or getting an iPhone + Wikipanion in 2008. The frontiers of my mind have been unleashed. A real bicycle for the mind.
Here are some tactics I use to "turn off gpt":
1. It'll be there tomorrow. The great thing about their threaded model is you can easily find the convo and continue it tomorrow. Remind yourself of that consciously (or tape it to your monitor!)
2. You're not behind, you're ahead. 80% of Americans haven't tried chatgpt. 95% of the world maybe.
3. Don't worry about juniors. They'll still be hired because now they'll ramp up faster and produce better code, using the same tool you're using. Same thing that happened when stackoverflow became popular and junior devs stopped "reading the source code" or "reading man pages."
For all the limitations of GPT4, it truly is great at coding. Exciting times.
I don't know if anyone realistically compares themselves to the abstract, nebulous "everyone". It's more likely in regard to their own socioeconomic band.
So maybe the seniors should be worried: since we/they don't have much barrier to entry, that means much more competition.
Transitioning from junior to medior (for example) is much more than writing x% better code. It's the process of falling down and getting back up; being stumped and learning when to ask for help (and not just technical help - what if the spec is 'wrong'?).
I definitely worry that we are leaving future generations in the dust and that there'll be an experience gap. It's a disservice to take away something from them that we enjoyed ourselves.
No sane company should run on juniors, they're an investment.
ChatGPT will likely be added to the list of dead things that were supposed to "kill" the software developer. I've noticed this pervasive attitude among what I can only term as people who actually enjoy LinkedIn. If you understand what I'm saying, you can probably already picture the annoying, over-the-top, buzzword-laden below-the-fold post that feels like it's only designed to steal brain cells. ChatGPT might be able to kill the CRUD developer like WYSIWYG killed HTML programmers. There will be plenty of jobs no one wants ChatGPT to touch; finance, medicine, and the military are some I can imagine without much thinking. "No Code" is on its, what, 4th iteration and still hasn't killed programming. We are more likely to lose our jobs to overseas outsourcing than to a stupid rock we tricked into thinking.
I am actually annoyed reading this Ask HN. The level of smugness reminds me of wantrepreneur bros. Woe is me, I'm burned out from being so productive. Gag. I'm an actual professional developer, and ChatGPT does not provide oodles of value to me. A lot of our juniors and mids use it, and I often find problems with the way they copy-and-paste garbage. Admittedly, the copy-and-paste is better. However, to me it reduces to the same StackOverflow problem. Maybe if they were better "prompt engineers" (lol) they might get better output. Or they could take the 30 hours needed to figure out prompts and instead simply get better at writing code.
Welcome to the future, where AI subscriptions (self- or employer-provided) are required for employment, with the majority of your work being management and high-level input, where you guide and answer questions for the* AI.
*Probably "The" AI, since there will be one obvious choice for your problem space, which not using would put you at a severe disadvantage.
Seriously though, I've been feeling this somewhat too, lately. The "investment" part of ROI has shifted significantly for the "junior" side of things, where I can now do "boring" things I normally wouldn't. So I find myself doing more boring tasks, with a definite net-positive outcome, but also everything negative that you described.
The problem is that this ROI covers only the "junior" end of the problem space, so I'm working on more junior problems than I was before.
I think we're somewhat proving that juniors are still needed, to take these tasks. They have been empowered the most, and will still learn and feel creative, working on these problems. More senior people won't. I understand I'm saying this from a point of extreme privilege, but I think most of us need to feel creative, and "enjoy" what we're doing. That means harder problems.
Maybe it's best to still let the juniors continue to do the junior things. There's someone out there that would love to spend all day doing what's burning you out.