I’ve always been the kind of developer that aims to have more red lines than green ones in my diffs. I like writing libraries so we can create hundreds of integration tests declaratively. I’m the kind of developer that disappears for two days and comes back with a 10x speedup because I found two loop variables that should be switched.
There is no place for me in this environment. It's not that I couldn't use the tools to make so much code, it's that AI use makes the metric for success speed-to-production. The solution to bad code is more code. AI will never produce a deletion. Publish or perish has come for us and it's sad. It makes me feel old, just like my Python programming made the mainframe people feel old. I wonder what will make the AI developers feel old…
AI can definitely produce a deletion. In fact, I commonly use AI to do this. Copy some code and prompt the AI to make the code simpler or more concise. The output will usually be fewer lines of code.
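For a concrete, entirely hypothetical illustration of that prompting approach, here is the kind of before/after such a "make this simpler" request typically produces; the function names and data shape are made up for the example:

```python
# "Before": the verbose style you might paste in alongside a prompt like
# "make this simpler and more concise".
def active_names_verbose(users):
    result = []
    for user in users:
        if user.get("active"):
            result.append(user["name"])
    return result

# "After": the shorter equivalent an LLM will usually suggest.
def active_names(users):
    return [u["name"] for u in users if u.get("active")]
```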
Unless you meant that AI won’t remove entire features from the code. But AI can do that too if you prompt it to. I think the bigger issue is that companies don’t put enough value on removing things and only focus on adding new features. That’s not a problem with AI though.
I'm no big fan of LLM generated code, but the fact that GP bluntly states "AI will never produce a deletion" despite this being categorically false makes it hard to take the rest of their spiel in good faith.
As a side note, I've had coworkers disappear for N days too and in that time the requirements changed (as is our business) and their lack of communication meant that their work was incompatible with the new requirements. So just because someone achieves a 10x speedup in a vacuum also isn't necessarily always a good thing.
I messed around with Copilot for a while and this is one of the things that actually really impressed me. It was very good at taking a messy block of code, and simplifying it by removing unnecessary stuff, sometimes reducing it to a one line lambda. Very helpful!
So it's rather that AI amplifies the already existing short-term incentives, increasing long-term costs that are harder to attribute and easier to ignore.
The one actual major downside to AI is that PMs and higher-ups are now looking for problems to solve with it. I haven't really seen this much with technology before, except when cloud first became a thing, and maybe sometimes with Microsoft products.
u/justonceokay wrote:
> The solution to bad code is more code.
This has always been true, in all domains.
Gen-AI's contribution is further automating the production of "slop". Bots arguing with other bots, perpetuating the vicious cycle of bullshit jobs (David Graeber) and enshittification (Cory Doctorow).
u/justonceokay wrote:
> AI will never produce a deletion.
I acknowledge your example of tidying up some code. What Bill Joy may have characterized as "working in the small".
But what of novelty, craft, innovation? Can Gen-AI moot the need for code? Like the oft-cited example of -2,000 LOC? https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Can Gen-AI do the (traditional, pre-2000s) role of quality assurance? Identify unnecessary or unneeded work? Tie functionality back to requirements? Verify the goal has been satisfied?
Not yet, for sure. But I guess it's conceivable, provided sufficient training data. Is there sufficient training data?
You wrote:
> only focus on adding new features
Yup.
Further, somewhere in the transition from shipping CDs to publishing services, I went from developing products to just doing IT & data processing.
The code I write today (in anger) has a shorter shelf-life, creates much less value, is barely even worth the bother of creation much less validation.
Gen-AI can absolutely do all this @!#!$hit IT and data processing monkey motion.
> Unseen were all the sleepless nights we experienced from untested sql queries and regexes and misconfigurations he had pushed in his effort to look good. It always came back to a lack of testing edge cases and an eagerness to ship.
If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.
>If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.
If at every company I work for, my managers average 7-8 months in their role as _my_ manager, and I am switching jobs every 2-3 years because companies would rather rehire their entire staff than give out raises that are even a portion of the market growth, why would I care?
Not that the market is currently in that state, but that's how a large portion of tech companies were operating for the past decade. Long term consequences don't matter because there are no longer term relationships.
AI writes my unit tests. I clean them up a bit to ensure I've gone over every line of code. But it is nice to speed through the boring parts, and without bringing declarative constructs into play (imperative coding is how most of us think).
If the company values that 10x speedup, there is absolutely still a place for you in this environment. Only now it's going to take five days instead of two, because it's going to be harder to track that down in the less-well-structured stuff that AI produces.
Why are you letting the AI construct poorly structured code? You should be discussing an architectural plan with it first and only signing off on the code design when you are comfortable with it.
If you've ever had to work alongside someone who has, or whose job it is to obtain, all the money... you will find that time to market is very often the ONLY criterion that matters. Turning the crank to churn out some AI slop is well worth it if it means having something to go live with tomorrow as opposed to a month from now.
LevelsIO's flight simulator sucked. But his payoff-to-effort ratio is so absurdly high, as a business type you have to be brain-dead to leave money on the table by refusing to try replicating his success.
>>AI use makes the metric for success speed-to-production
> Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?
This reminds me of an old software engineering adage.
When delivering a system, there are three choices stakeholders have:
You can have it fast,
You can have it cheap,
You can have it correct.
Pick any two.
Claude Code removed an npm package (and its tree of deps) from my project and wrote its own simpler component that did the core part of what I needed the package to do.
Wholeheartedly agree. I also feel like I'm sometimes reliving the King Neptune vs Spongebob meme equivalent of coding. No room for Think, Plan, Execute... Only throw spaghetti code at wall.
You're describing the kind of developer who builds foundations, not just features. And yeah, that kind of thinking gets lost when the only thing that's measured is how fast you can ship something that looks like it works
I think there will still be room for "debugging AI slop-code" and "performance-tuning AI slop-code" and "cranking up the strictness of the linter (or type-checker for dynamically-typed languages) to chase out silly bugs", not to mention the need for better languages / runtimes that give better guarantees about correctness.
It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.
You have to go lower down the stack. Don't use AI but write the AI. For the foreseeable future there is a lot of opportunity to make the AI faster.
I am sure assembly programmers were horrified at the code the first C compilers produced. And I personally am horrified by the inefficiency of python compared to the C++ code I used to write. We always have traded faster development for inefficiency.
C was specifically designed to map 1:1 onto PDP-11 assembly. For example, the '++' operator was created solely to represent auto-increment instructions like TST (R0)+.
C solved the horrible machine code problem by inflicting programmers with the concept of undefined behavior, where blunt instruments called optimizers take a machete to your code. There's a very expensive document locked up somewhere in the ISO vault that tells you what you can and can't write in C, and if you break any of those rules the compiler is free to write whatever it wants.
This created a league of incredibly elitist[0] programmers who, having mastered what they thought was the rules of C, insisted to everyone else that the real problem was you not understanding C, not the fact that C had made itself a nightmare to program in. C is bad soil to plant a project in even if you know where the poison is and how to avoid it.
The inefficiency of Python[1] is downstream of a trauma response to C and all the many, many ways to shoot yourself in the foot with it. Garbage collection and bytecode are tithes paid to absolve oneself of the sins of C. It's not a matter of Python being "faster to write, harder to execute" as much as Python being used as a defense mechanism.
In contrast, the trade-off from AI is unclear, aside from the fact that you didn't spend time writing it, and thus aren't learning anything from it. It's one thing to sacrifice performance for stability; versus sacrificing efficiency and understanding for faster code churn. I don't think the latter is a good tradeoff! That's how we got under-baked and developer-hostile ecosystems like C to begin with!
[0] The opposite of a "DEI hire" is an "APE hire", where APE stands for "Assimilation, Poverty & Exclusion"
[1] I'm using Python as a stand-in for any memory-safe programming language that makes use of a bytecode interpreter that manipulates runtime-managed memory objects.
The AI companies probably use Python because all the computation happens on the GPU and changing Python control plane code is faster than changing C/C++ control plane code
Not quite true though - I've occasionally passed a codebase to DeepSeek to have it simplify, and it does a decent job. Can even "code golf" if you ask it.
But the sentiment is true: by default, current LLMs produce verbose, overcomplicated code.
And if it isn't already false it will be false in 6 months, or 1.5 years on the outside. AI is a moving target, and the oldest people among you might remember a time in the 1750s when it didn't talk to you about code at all.
It can absolutely be used to refactor and reduce code, simply asking "Can this be simplified" in reference to a file or system often results in a nice refactor.
However I wouldn't say refactoring is as hands free as letting AI produce the code in the first place, you need to cherry pick its best ideas and guide it a little bit more.
Had a funny conversation with a friend of mine recently who told me about how he's in the middle of his yearly review cycle, and management is strongly encouraging him and his team to make greater use of AI tools. He works in biomedical lab research and has absolutely no use for LLMs, but everyone on his team had a great time using the corporate language model to help write amusing resignation letters as various personalities: pirate resignation, dinosaur resignation, etc. I don't think anyone actually quit, but what a great way to absolutely nuke team morale!
I've been getting the same thing at my company. Honestly no idea what is driving it other than hype. But it somehow feels different than the usual hype; so prescribed, as though coordinated by some unseen party. Almost like every out of touch business person had a meeting where they agreed they would all push AI for no reason. Can't put my finger on it.
Rather than some conspiracy, my suspicion is that AI companies accidentally succeeded in building a machine capable of hacking (some) people's brains. Not because it's superhumanly intelligent, or even has any agenda at all, but simply because LLMs are specifically tuned to generate the kind of language that is convincing to the "average person".
Managers and politicians might be especially susceptible to this, but there's also enough in the tech crowd who seem to have been hypnotized into becoming mindless enthusiasts for AI.
> strongly encouraging him and his team to make greater use of AI tools
I've seen this with other tools before. Every single time, it's because someone in the company signed a big contract to get seats, and they want to be able to show great utilization numbers to justify the expense.
AI has the added benefit of being the currently in-vogue buzzword, and any and every grant or investment sounds way better with it than without, even if it adds absolutely nothing whatsoever.
Has your friend talked with current bio research students? It’s very common to hear that people are having success writing Python/R/Matlab/bash scripts using these tools when they otherwise wouldn’t have been able to.
Possibly this is just among the smallish group of students I know at MIT, but I would be surprised to hear that a biomedical researcher has no use for them.
I'm taking a course on computational health laboratory. I do have to say Gemini is helping me a lot, but someone who knows what's happening is going to be much better than us. Our professor told us it is of course allowed to make things with LLMs, since in the field we will be able to do that. However, I found they're much less precise with bioinformatics libraries than with others...
I do have to say that we're just approaching the tip of the iceberg and there are huge issues related to standardization, dirty data... We still need the supervision and help of one of the two professors to proceed, even with LLMs.
I generally have one-shot success asking ChatGPT for bash/Python scripts and one-liners that would otherwise take an hour to a day to figure out on my own (and I'd probably use one of my main languages), or that I might not even bother trying. That's great for productivity, but over 90% of my job doesn't need throw-away scripts and one-liners.
That is both hilarious and depressingly on-brand for how AI is being handled in a lot of orgs right now. Management pushes it because they need to tick the "we're innovating" box, regardless of whether it makes any sense for the actual work being done
Our org does seem to get some benefit from being sped up by AI tools for code generation (much of it is CRUD or layout stuff). However, at times colleagues ask me for help, the first thing I do is Google the issue and find the answer, and I get an "Oh right, you can Google too," because they had only been trying to figure it out with ChatGPT or similar.
Gemini loves to leave poetry on our reviews, right below the three bullet points about how we definitely needed to do this refactor but also we did it completely wrong and need to redo it. So we mainly just ignore it. I heard it gives good advice to web devs though.
I really hope that if someone does quit over this, they do it with a fun AI-generated resignation letter. What a great idea!
Or maybe they can just use the AI to write creative emails to management explaining why they weren’t able to use AI in their work this day/week/quarter.
If you are not building AI into your workflows right now you are falling behind those that do. It's real, it's here to stay and it's only getting better.
I teach compilers, systems, etc. at a university. Innumerable times I have seen AI lead a poor student down a completely incorrect but plausible path that will still compile.
I'm adding `.noai` files to all projects going forward:
https://www.jetbrains.com/help/idea/disable-ai-assistant.htm...
> Yes, and where do you suppose experienced developers come from?
Strictly speaking, you don't even need university courses to get experienced devs.
There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed, simply because they'll have just that much more experience from trying various stuff.
Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.
> People like that will always be more effective at their job once employed
This is honestly not my experience with self taught programmers. They can produce excellent code in a vacuum but they often lack a ton of foundational stuff
In a past job, I had to untangle a massive nested loop structure written by a self taught dev, which did work but ran extremely slowly
He was very confused and asked me to explain why my code ran fast and his ran slow, because "it was the same number of loops"
I tried to explain Big O, linear versus quadratic complexity, etc, but he really didn't get it
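A minimal illustration of that distinction (hypothetical code, not the code from this story): both functions below "use loops", but the nested scan does quadratic work while the set-based version stays linear.

```python
def common_nested(a, b):
    # Nested scan: for every element of a, walk all of b.
    # Roughly len(a) * len(b) comparisons -> O(n * m), quadratic growth.
    return [x for x in a if any(x == y for y in b)]

def common_with_set(a, b):
    # Build a set once; each membership check is then O(1) on average.
    # Roughly len(a) + len(b) work -> O(n + m), linear growth.
    b_set = set(b)
    return [x for x in a if x in b_set]
```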
But the company was very impressed by him and considered him our "rockstar" because he produced high volumes of code very quickly
> There will always be individuals that enjoy coding and do so without any formal teaching.
We're talking about the industry responsible for ALL the growth of the largest economy in the history of the world. It's not the 1970s anymore. You can't just count on weirdos in basements to build an industry.
> There will always be individuals that enjoy coding and do so without any formal teaching.
That's not the kind of experience companies look for though. Do you have a degree? How much time have you spent working for other companies? That's all that matters to them.
> Yes, and where do you suppose experienced developers come from?
Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Don't get me wrong, it will take huge social upheaval to replace the current economic system.
But at least it's an honest assessment -- criticizing the humans that are using AI to replace workers, instead of criticizing AI itself -- even if you fear biting the hands that feed you.
> criticizing the humans that are using AI to replace workers, instead of criticizing AI itself
I think you misunderstand OP's point. An employer saying "we only hire experienced developers [therefore worries about inexperienced developers being misled by AI are unlikely to manifest]" doesn't seem to realize that the AI is what makes inexperienced developers. In particular, using the AI to learn the craft will not allow prospective developers to learn the fundamentals that will help them understand when the AI is being unhelpful.
It's not so much to do with roles currently being performed by humans instead being performed by AI. It's that the experienced humans (engineers, doctors, lawyers, researchers, etc.) who can benefit the most from AI will eventually retire and the inexperienced humans who don't benefit much from AI will be shit outta luck because the adults in the room didn't think they'd need an actual education.
1. How it's gonna be used and how it'll be a detriment to quality and knowledge.
2. How AI models are trained with a great disregard to consent, ethics, and licenses.
The technology itself, the idea, what it can do, is not the problem; how it's made and how it's going to be used will be a great problem going forward, and none of the suppliers say that it should be used in moderation or that it will be harmful in the long run. Plus, the same producers are ready to crush/distort anything to get their way.
... smells very similar to tobacco/soda industry. Both created faux-research institutes to further their causes.
> Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
This was pretty consistently my viewpoint, and many others', since 2023. We were assured many times over that this time it would be different. I found this unconvincing.
> I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Something very similar can be said about the issue of guns in America. We live in a profoundly sick society where the airwaves fill our ears with fear, envy and hatred. The easy availability of guns might not have been a problem if it didn't intersect with a zero-sum economy.
Couple that with the unavailability of community and social supports and you have a recipe for disaster.
Companies need to be aware of the long-term effects of relying on AI. It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.
I just spent a week fixing a concurrency bug in generated code. Yes, there were tests; I uncovered the bug when I realized the test was incorrect...
My strong advice is to digest every line of generated code; don't let it run ahead of you.
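As an illustration of the kind of bug that slips past a shallow test, here is a sketch (hypothetical names, not the code from the story above) of a classic check-then-act race that a single-threaded unit test will happily pass:

```python
import threading

class Inventory:
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, n):
        # Check-then-act with no lock: two threads can both see enough
        # stock before either subtracts, so the item gets oversold.
        if self.stock >= n:
            self.stock -= n
            return True
        return False

def test_reserve():
    # This test is "green" but incomplete: it never exercises concurrency,
    # so the race above is invisible to it.
    inv = Inventory(stock=1)
    assert inv.reserve(1) is True
    assert inv.reserve(1) is False
```

The fix (a lock around the check and the update) is trivial once seen, which is exactly why unreviewed generated code plus generated tests can hide it for a week.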
It is absolutely terrifying to watch tools like Cursor generate so much code. Maybe not a great analogy, but it feels like driving with Tesla FSD in New Delhi in the middle of rush hour. If you let it run ahead of you, the amount of code to review will be overwhelming. I've also encountered situations where it is unable to pass tests for code it wrote.
Like TikTok, AI coding breaks human psychology. It is ingrained in us that if we have a tool that looks right enough and is highly productive, we will over-apply it to our work. Even diligent programmers will be lured into accepting giant commits without diligent review, and they will pay for it.
Of course yeeting bad code into production with a poor review process is already a thing. But this will scale that bad code as now you have developers who will have grown up on it.
> It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.
I think this is the biggest risk. You sometimes get stuck in a cycle in which you hope the AI can fix its own mistake, because you don’t want to expend the effort to understand what it wrote.
It’s pure laziness that occurs only because you didn’t write the code yourself in the first place.
At the same time, I find myself incredibly bored when typing out boilerplate code these days. It was one thing with Copilot, but tools like Cursor completely obviate the need.
100% agree with you, my sentiment is the same. Some time ago I considered making the LLM create tests for me, but decided against it. If I don't understand what needs to be tested, how can I write the code that satisfies this test?
We humans have way more context and intuition to rely on to implement business requirements in software than a machine does.
When LLMs came out I suppressed my inner curmudgeon and dove in, since the technology was interesting to me and seemed much more likely than crypto to be useful beyond crime. Thus, I have used LLMs extensively for many years now and I have found that despite the hype and amazing progress, they still basically only excel at first drafts and simple refactorings (where they are, I have to say, incredibly useful for eliminating busy work). But I have yet to use a model, reasoning or otherwise, that could solve a problem that required genuine thought, usually in the form of constructing the right abstraction, bottom-up style. LLMs write code like super-human dummies, with a tendency to put too much code in a given function and with very little ability to invent a domain in which the solution is simple and clearly expressed, probably because they don't care about that kind of readability and it's not much in their data set.
I'm deeply influenced by languages like Forth and Lisp, where that kind of bottom-up code is the cultural standard and I prefer it, probably because I don't have the kind of linear intelligence and huge memory of an LLM.
For me the hardest part of using LLMs is knowing when to stop and think about the problem in earnest, before the AI generated code gets out of my human brain's capacity to encompass. If you think a bit about how AI still is limited to text as its white board and local memory, text which it generates linearly from top to bottom, even reasoning, it sort of becomes clear why it would struggle with genuine abstraction over problems. I'm no longer so naive as to say it won't happen one day, even soon, but so far its not there.
My solution is to _only_ chat. No auto completion, nothing agentic, just chats. If it goes off the rails, restart the conversation. I have the chat window in my "IDE" (well, Emacs) and though it can add entire files as context and stuff like that, I curate the context in a fairly fine-grained way through either copy and pasting, quickly writing out pseudo code, and stuff like that.
Any generated snippets I treat like StackOverflow answers: Copy, paste, test, rewrite, or for small snippets, I just type the relevant change myself.
Whenever I'm sceptical I will prompt stuff like "are you sure X exists?", or do a web search. Once I get my problem solved, I spend a bit of time to really understand the code, figure out what could be simplified, even silly stuff like parameters the model just set to the default value.
It's the only way of using LLMs for development I've found that works for me. I'd definitely say it speeds me up, though certainly not 10x. Compared to just being armed with Google, maybe 1.1x.
This story just makes me sad for the developers. I think especially for games you need a level of creativity that AI won't give you, especially once you get past the "basic engine boilerplate". That's not to say it can't help you, but this "all in" method just looks forced and painful. Some of the best games I've played were far more "this is the game I wanted to play" with a lot of vision, execution, polish, and careful craftspersonship.
I can only hope endeavors (experiments?) like this extreme one fail fast and we learn from it.
Asset flips (half arsed rubbish made with store bought assets) were a big problem in the games industry not so long ago. They're less prevalent now because gamers instinctively avoid such titles. I'm sure they'll wise up to generative slop too, I've personally seen enough examples to get a general feel for it. Not fun, derivative, soulless, buggy as hell.
AI is the latest "overwhelmingly negative" games industry fad, affecting game developers. It's one of many. Most are because nine out of ten companies make games for the wrong reason. They don't make them as interactive art, as something the developers would like to play, or to perfect the craft. They make them to make publishers and businessmen rich.
That business model hasn't been going so well in recent years[0], and it's already been proclaimed dead in some corners of the industry[1]. Many industry legends have started their own studios (H. Kojima, J. Solomon, R. Colantonio, ...), producing games for the right reasons. When these games are inevitably mainstream hits, that will be the inflection point where the old industry will significantly decline. Or that's what I think, anyway.
[0] https://www.matthewball.co/all/stateofvideogaming2025
[1] https://www.youtube.com/watch?v=5tJdLsQzfWg
I don't share your optimism, I think as long as there are truly great games being made and the developers earning well from them, the business people are going to be looking at them and saying "we could do that". What those studios lack in creativity or passion they more than make up for in marketing, sales, and sometimes manipulative money extraction game mechanics.
It's not so much optimism as facts. Large AAA game companies have driven away investors[0] and talent[1]. The old growth engines (microtransactions, live service games, season passes, user-generated content, loot boxes, eSports hero shooters, etc.) also no longer work, as neither general players nor whales find them appealing.
AI is considered a potential future growth engine, as it cuts costs in art production, where the bulk of game production costs lie. Game executives are latching onto it hard because it's arguably one of the few straightforward ways to keep growing their publicly-traded companies and their own stock earnings. But technologists already know how this will end.
Other games industry leaders are betting on collapse and renewal to simpler business models, like self-funded value-first games. Also, many bet on less cashflow-intensive game production, including lower salaries (there is much to be said about that).
Looking at industry reports and business circle murmurs, this is the current state of gaming. Some consider it optimistic, and others (especially the business types without much creative talent) - dire. But it does seem to be the objective situation.
[0] VC investment has been down by more than 10x over the last two years, and many big Western game companies have lost investors' money in the previous five years. See Matthew Ball's report, which I linked in my parent comment, for more info.
[1] The games industry has seen more than 10% sustained attrition over the last 5 years, and about 50% of employees hope to leave their employer within a year: https://www.skillsearch.com/news/item/games---interactive-sa...
Very selective data in that presentation. The worst figures are always selected for comparisons: in one it's since 2019, then since 2020, then since 2022, then since 2020, then 2019, and on and on.
There is nothing wrong with making entertainment products to make money. That's the reason all products are made: to make money. Games have gone bad because the audience has bad taste. People like Fortnite. They like microtransactions. They like themepark rubbish that you can sell branded skins for. It is the same reason Magic: the Gathering has been ruined with constant IP tie-ins: the audience likes it. People pay for it. People like tat.
In Norway, there was a recent minor scandal where a county released a report on how they should shut down some schools to save money, and it turned out half the citations were fake. Quite in line with the times. So our Minister of Digitizing Everything says "It's serious. But I want to praise Tromsø Municipality for using artificial intelligence." She's previously said she wants 80% of public sector to be using AI this year and 100% by 5 years. What does that even mean? And why and for what and what should they solve with it? It's so stupid and frustrating I don't even
> I wonder what will make the AI developers feel old…
When they look at the calendar and it says May 2025 instead of April.
They will not feel old because they will enter into bliss of Singularity(TM).
https://en.wikipedia.org/wiki/Technological_singularity
I think we'll be okay and likely better off.
> AI will never produce a deletion.
I'm currently reading an LLM-generated deletion. It's hard to get an LLM to work with existing tools, but not impossible.
I suspect he is pretty unimpressed by the code that LLMs produce given his history with code he thinks is subpar, but what do I know
> The tech-debt problems will come home to roost in a year or two.
The market can remain irrational longer than you can remain solvent.
Use LLM to write Haskell. Problem solved?
Ah yes, maintenance, the most fun and satisfying part of the job. /s
That, right here, is a world-shaking statement. Bravo.
Prior hype, like blockchain, was more abstract and therefore less useful to people who understand managing but not the actual work.
AI may be somewhat useful for experienced devs but it is a catastrophe for inexperienced developers.
"That's OK, we only hire experienced developers."
Yes, and where do you suppose experienced developers come from?
Again and again in this AI arc I'm reminded of the sorcerer's apprentice scene from Fantasia.
You get experienced devs from inexperienced devs that get experience.
[edit: added "degrees" as intended. University was mentioned as the context of their observation]