People compared the loss of what.cd to the burning of the Library of Alexandria. While I do think that was an exaggeration at the time, I worry about the loss of effort and 'thought-power' when the Discord walls get even taller than they are now -- or worse yet -- when the gardens disappear without ever having had the chance to be archived somewhere.
I was forced almost immediately to go onto YouTube and find a swift crash course. I'm sure I could've kept prompting it to explain things, and I probably could've extrapolated how to implement the code, but with my experience, a 30-60 min condensed video seems more helpful to me, idk.
I guess my point is that there will still be a need for those with technical knowledge. AI isn't at a point where it can generate fully functional apps, and I don't believe that's coming anytime soon. GPTs are relatively new in the AI space, and there's already plenty of concern around their future viability.
I think a lot of the fear is just bad-faith marketing from OpenAI and all the wrapper startups that have sprung up like weeds recently, all of which love to claim that their wrapper is coming for SWE jobs. If I see one more "RIP software engineers," I might literally laugh myself out of my chair.
I saw one hilarious demo on Twitter from GPT-engineer. It claimed to be able to generate entire projects, and all it actually produced was a handful of project files with ~20 lines of boilerplate code. It's pretty comical that we're all afraid of this tool, myself included.
And to everyone saying "well, what about GPT-8": GPTs' fundamental flaw is hallucination, meaning GPT-8 will still suffer from most, if not all, of the same issues as GPT-4. I don't believe any version of a GPT will threaten SWE jobs en masse. I've also been seeing some interesting articles warning that training AIs on AI-generated output will effectively kill the models within a couple of generations, so let's see if "AI is the worst it will ever be today" actually holds true.
Regardless, I don't think you have much to worry about. It sounds like you're at the senior/principal level already, and AI poses the least threat to anyone at those higher levels. Rest easy, you'll be alright.
My own eyes? The hundreds of thousands of scientific papers, blog posts, news reports, and discussion threads that have covered this ever since ChatGPT appeared, and especially in the last two months as GPT-4 rolled out?
At this point I'd reconsider whether the experts you listened to are in fact experts.
Seriously. It's like saying the Manhattan Project wasn't a massive breakthrough in experimental physics or military strategy.
Recent developments in AI only further confirm that the logic of the message is sound, and it's just that people are afraid of the conclusions. Everyone has a limit for how far they'll extrapolate from first principles before giving up and believing what they'd like to be true. It seems that for a lot of people in the field, AGI X-risk now falls below that extrapolation limit.