People are absolutely insane with their takes on AI replacement theory. The complexity of our stacks has grown exponentially since the 70s. Very few people actually comprehend how many layers of indirection, performance, caching, etc. are between their CRUD web app and bare metal these days.
AI is going to increase the rate of complexity tenfold by spitting out enormous amounts of code. This is where the job market is for developers. Unless you 100% solve the problem of feeding it every single third-party monitoring tool, log stream, compiler output, and system stat down to the temperature of the RAM, and then make it actually understand how to fix said enormous system (it can't do this even if you did give it the context, by the way), AI will only increase the number of engineers you need.
> AI is going to increase the rate of complexity tenfold by spitting out enormous amounts of code.
This is true, and I am (sadly, I'd say) guilty of it. In the past, for example, I'd be much more wary about having too much duplication. I was working on a Go project where I needed multiple levels of object mapping (e.g. entity objects to DTOs, etc.), and the LLM just spat out the answer in seconds (correct, I'd add), even though it was lots and lots of code where in the past I would have written a more generic solution to avoid writing so much boilerplate.
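To make that concrete, the kind of code I mean is field-by-field mapping like the sketch below (the type and field names are made up for illustration, not from the actual project):

    // Hypothetical example of the mapping boilerplate an LLM will happily
    // generate: one hand-written function per entity/DTO pair, repeated
    // for every type in the system.
    package mapping

    import "time"

    type UserEntity struct {
        ID        int64
        Email     string
        CreatedAt time.Time
    }

    type UserDTO struct {
        ID      int64  `json:"id"`
        Email   string `json:"email"`
        Created string `json:"created"`
    }

    // ToUserDTO copies each field across by hand; multiply this by every
    // entity in the project and you get the volume of code I'm describing.
    func ToUserDTO(e UserEntity) UserDTO {
        return UserDTO{
            ID:      e.ID,
            Email:   e.Email,
            Created: e.CreatedAt.Format(time.RFC3339),
        }
    }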
I see where the evolution of coding is going, and as a late-middle-aged developer it has made me look for the exits. I don't disagree with the business rationale of the direction, and I certainly have found a ton of value in AI (e.g. I think it makes learning a new language a lot easier). But it makes programming so much less enjoyable for me personally. I feel like it's transformed the job from "author" to "editor", and for me, the nitty-gritty details of programming were the fun part.
Note I'm not making any broad statement about the profession generally, I'm just stating with some sadness that I don't enjoy where the day-to-day of programming is heading, and I just feel lucky that I've saved up enough in the earlier part of my career to get out now.
I don't only do programming in the small, and I still feel that AIs leave plenty of room for architecture, design, and refactoring. For me it's been an absolute boon; I'm enjoying building more than ever. At any rate it's undeniably transformative, and I can see many people not enjoying the end state.
Really? I sort of feel the opposite. I'm mid-career as well and HIGHLY TIRED of writing yet another pile of boilerplate to do a thing, or chasing down some syntax error in the code, and the fact that AI will now do this for me has given me a lot more energy to focus on the higher-level thinking about how it all fits together.
I do not look forward to the amount of incompetence and noise that increasing adoption of these tools will usher in. I've already had to deal with a codebase in which it was clear that the author fundamentally misunderstood what a trie data structure was. I was also having a difficult time trying to talk to them about the implementation and their misconceptions. Lo and behold, I eventually found out the reason they chose this data structure was because they asked ChatGPT what to do, and they never actually understood, conceptually, what they were doing or using. This made the whole engagement with the code and the process of fixing things way harder. Not only did I now have to fix the bunk code, I also had to spend significant time disabusing the author of their own misunderstandings...
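(For anyone unfamiliar: a trie is just a tree keyed one character at a time on prefixes. A minimal sketch of the idea, not the code from that engagement:)

    // Minimal prefix tree (trie) sketch, only to illustrate what the
    // structure is actually for: cheap prefix lookups over a set of words.
    package trie

    type node struct {
        children map[byte]*node
        terminal bool // a complete word ends at this node
    }

    func newNode() *node { return &node{children: map[byte]*node{}} }

    type Trie struct{ root *node }

    func New() *Trie { return &Trie{root: newNode()} }

    // Insert walks the word one byte at a time, creating nodes as needed.
    func (t *Trie) Insert(word string) {
        n := t.root
        for i := 0; i < len(word); i++ {
            c := word[i]
            if n.children[c] == nil {
                n.children[c] = newNode()
            }
            n = n.children[c]
        }
        n.terminal = true
    }

    // HasPrefix reports whether any inserted word starts with prefix --
    // the operation a trie is actually good at.
    func (t *Trie) HasPrefix(prefix string) bool {
        n := t.root
        for i := 0; i < len(prefix); i++ {
            n = n.children[prefix[i]]
            if n == nil {
                return false
            }
        }
        return true
    }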
That’s called consultancy and you can bill chunky rates by the hour. You should be rubbing your hands with glee!
And then work out how to do code review and fixing using AI, lightly supervised by you so that you can do it all whilst walking the dog or playing croquet or something.
I've yet to see an LLM response or an LLM-generated diff that suggests removing or refactoring code. Every AI solution is additive: new functions, new abstractions added at every step. Increased complexity is all but baked into the system.
Software engineering jobs involve working in a much wider solution space - writing new code is but one intervention among many. I hope the people blindly following LLM advice realize their lack of attention to detail and "throw new code at it" attitude comes across as ignorant and foolish, not hyper-productive.
The complexity has grown but not the quality. We went from writing Ada code with contracts and all sorts of protections, with well-thought-out architectures, to random crap written in ReactJS on websites that now weigh more than a full install of Windows 95.
I’m really ashamed of what SWE has become, and AI will increase that tenfold as you say. We shouldn’t cheer that on, especially since I will have to debug all that crap.
And if it does increase the number of engineers, they won’t be good ones, due to a lack of education (I already experience this at work). But anyway, I don’t believe it; managers will not waste more money on us, as that would go against modern capitalism.
Oh yes, I'm with you. I didn't say I liked it. I am a low-level munger and I like it that way - the lowest, oldest layers of the stack tend to be the pieces that are well written and stand the test of time. Where I see AI hitting is the upper, devil-may-care layers of the application stack, which will be an absolute hellscape to deal with as a competent engineer.
I expect pretty much the opposite to happen: it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI. If a machine can act more reliably on simplified inputs, at a fraction of the cost of the equivalent human labour, the system will adjust to accommodate the machine; it always has.
The most obvious example of this already happening is in how function calling interfaces are defined for existing models. It's not hard to imagine that principle applied more generally, until human intervention to get a desired result is the exception rather than the rule as it is today.
I spent most of the past 2 years in "AI cope" mode and wouldn't consider myself a maximalist, but it's impossible not to see already from the nascent tooling we have that workflow automation is going to improve at a rapid and steady rate for the foreseeable future.
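To make the function-calling point concrete: the interface the model sees is already a simplified, machine-readable contract rather than prose. A rough sketch of the shape such a tool definition takes today (field names approximate one vendor's current format, and the tool itself is invented for illustration):

    // Rough sketch of a "tool"/function definition handed to a current
    // chat-completion model. Field names approximate the OpenAI-style
    // format; other vendors differ slightly, and get_weather is purely
    // hypothetical.
    package tools

    type ToolDef struct {
        Type     string   `json:"type"` // "function" for tool definitions
        Function Function `json:"function"`
    }

    type Function struct {
        Name        string         `json:"name"`
        Description string         `json:"description"`
        Parameters  map[string]any `json:"parameters"` // a JSON Schema object
    }

    var getWeather = ToolDef{
        Type: "function",
        Function: Function{
            Name:        "get_weather",
            Description: "Return the current weather for a city.",
            Parameters: map[string]any{
                "type": "object",
                "properties": map[string]any{
                    "city": map[string]any{"type": "string"},
                },
                "required": []string{"city"},
            },
        },
    }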
> it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI
The theoretical advance we're waiting for in LLMs is auditable determinism. Basically, the ability to take a set of prompts and have a model recreate what it did before.
At that point, the utility of human-readable computer languages sort of goes out the door. The AI prompts become the human-readable code, the model becomes the interpreter and it eventually, ideally, speaks directly to the CPUs' control units.
This is still years--possibly decades--away. But I agree that we'll see computer languages evolving towards auditability by non-programmers and reliability in parsing by AI.
You're missing the point: there are specific reasons why these stacks have grown in complexity. Even if you introduce "API for AI interface" as a requirement, you still have to balance that with performance, reliability, interfacing with other systems, and providing all of the information necessary to debug when the AI gets it wrong. All of the same things that humans need apply to AI - the claim for AI isn't that it deterministically solves every problem it can comprehend.
So now we're looking at a good several decades just to get our human-facing systems to adapt themselves to AI, while they still require all the complexity they already have. The end result is more complexity, not less.
I wonder if AI is going to reduce the amount of JS UIs. AI bots can navigate simple HTML forms much more easily than crazy React code with 10 layers of divs for a single input. It's either that, or people create APIs for everything and document how they relate and interact.
I'd really like to know what the parameters are. I hear claims like, "it saves me an hour a day," or, "I'm 30% more productive with AI." What do these figures mean? They seem like proxies for fuzzy feelings.
When I see boring, repetitive code that I don't want to look at my instinct isn't to ignore it and keep adding more boring, repetitive code. It's like seeing that the dog left a mess on your carpet and pretending you didn't see it. It's easier than training the dog and someone else will clean it... right?
My instinct is to fix the problem causing there to be boring, repetitive code. Too much of that stuff and you end up with a great surface area for security errors, performance problems, etc. And the fewer programmers that read that code and try to understand it the more likely it becomes that nobody will understand it and why it's there.
The idea that we should just generate more code on top of the code until the problem goes away is alien to me.
Although it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.
So really, is AI a tool to make us more productive or a tool to remove our bargaining power?
> So really, is AI a tool to make us more productive or a tool to remove our bargaining power?
Don't you notice how it makes you more productive, that you can solve problems faster? It would be really odd if not.
And regarding the bargaining power: that's not the other side of the scale, it's a different problem. If your code monkey now gets as good as your average developer, the average developer will have lost some relative value, unless he also upped his game by using AI.
If everyone gets better, why would you see this as something bad, something that makes us lose "bargaining power"? Because you can no longer put in the minimum effort your employer expects from you? Even then: it's not like AI makes things harder, it makes them better. At least for me, software development has become more enjoyable.
While 5 years ago I was asking myself if I really want to do this for the rest of my career, I now know that I want to do this, with this added help, which takes away much of the tedious stuff like looking up solution-snippets on Stack Overflow. Plus, I know that I will have to deal less and less with writing code, and more and more with managing code solutions offered to me.
>it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.
Amazing. You think that the only reason people are using AI is because it's being forced on them?
I honestly feel kinda bad for some people in this thread who don't see the freight train coming.
Well, to give a concrete example:
I use it to write test cases for the CRUD applications that I sometimes have to work on. Some test cases already exist, and I feed the tests and actual code, plus additional instructions, into a model and get relatively decent output. We also use a code review bot that we feed repository-relevant instructions to, and it produces decent basic PR comments. It even caught an edge case that 3 other developers didn't consider.
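For what it's worth, the output is nothing exotic -- mostly table-driven tests of the usual shape. A representative sketch (the handler, route and cases here are hypothetical, not our actual code):

    // Hypothetical example of the kind of table-driven test an LLM
    // produces for a CRUD endpoint; the handler is a stand-in.
    package crud

    import (
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    // createUser stands in for the real handler under test.
    func createUser(w http.ResponseWriter, r *http.Request) {
        if !strings.Contains(r.Header.Get("Content-Type"), "application/json") {
            http.Error(w, "expected JSON", http.StatusUnsupportedMediaType)
            return
        }
        w.WriteHeader(http.StatusCreated)
    }

    func TestCreateUser(t *testing.T) {
        cases := []struct {
            name        string
            contentType string
            body        string
            wantStatus  int
        }{
            {"valid json", "application/json", `{"email":"a@b.c"}`, http.StatusCreated},
            {"wrong content type", "text/plain", "email=a@b.c", http.StatusUnsupportedMediaType},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                req := httptest.NewRequest(http.MethodPost, "/users", strings.NewReader(tc.body))
                req.Header.Set("Content-Type", tc.contentType)
                rec := httptest.NewRecorder()
                createUser(rec, req)
                if rec.Code != tc.wantStatus {
                    t.Fatalf("got status %d, want %d", rec.Code, tc.wantStatus)
                }
            })
        }
    }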
I think AI can be yet another tool that takes some repetitive tasks off my hands. I still obviously check all the code it generated.
Sort of off-topic, but is there any truly generative AI for code? From my limited understanding, the model is trained on human-written code and adapts it to whatever most closely matches.
What I'm curious about is, can it find innovative ways to solve problems? Like the infamous Quake 3 inverse-sqrt hack? Can it silently convert (read: optimize) a std::string to a raw char* pointer if doing so has no harmful side effects? (I don't mean "can you ask it to do that for you?", I mean can it think to do that on its own?) Can it come up with trippy shit we've never even seen before to solve existing problems? That would truly impress me.
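(For reference, the trick in question -- the Quake III fast inverse square root -- transliterated from the original C into Go:)

    // The Quake III fast inverse square root, transliterated from the
    // original C for reference: reinterpret the float's bits, apply the
    // "magic" constant, then refine with one Newton-Raphson step.
    package fastmath

    import "math"

    func InvSqrt(x float32) float32 {
        half := 0.5 * x
        i := math.Float32bits(x)     // view the float as raw bits
        i = 0x5f3759df - (i >> 1)    // magic initial guess
        y := math.Float32frombits(i) // back to a float
        return y * (1.5 - half*y*y)  // one refinement step
    }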
Take a bloated Electron app, analyze the UI, and output the exact same thing but in C++ or Rust. Work with LLVM and find optimizations a human could never see. I remember seeing a similar concept applied to physical structures (like a small plane fuselage or a car) where the AI "learns" to make a lighter, stronger design, and it comes out looking bizarre: no right angles, lots of strange rounded connections that look almost like a growth of mold. Why can't AI "learn" to improve the state of the art in CS?
> Take a bloated Electron app, analyze the UI, and output the exact same thing but in C++ or Rust. Work with LLVM and find optimizations a human could never see. I remember seeing a similar concept applied to physical structures (like a small plane fuselage or a car) where the AI "learns" to make a lighter, stronger design, and it comes out looking bizarre: no right angles, lots of strange rounded connections that look almost like a growth of mold. Why can't AI "learn" to improve the state of the art in CS?
So such things already exist, and for me, the most frustrating thing about LLMs is that they just suck the oxygen out of the room for talking about anything AI-ish that's not an LLM.
The term for what you're looking for is "superoptimization," which adapts the principles of mathematical nonconvex optimization that AI pioneered to the problem of finding optimal code sequences. And superoptimization isn't new--it's at least 30 years old at this point. It's mature enough that if I were building a new compiler framework from scratch, I'd design at least the peephole optimizer around superoptimization and formal verification.
(I kind of am putting my money where my mouth is there--I'm working on an emulator right now, and rather than typing in the semantics of every instruction, I'm generating them using related program synthesis techniques based on the observable effects on actual hardware.)
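At its core the idea is exhaustive (or heavily pruned) search over instruction sequences, checked against a specification. A deliberately toy sketch of the brute-force version -- real superoptimizers prune the search space aggressively and verify candidates formally rather than against a handful of test inputs:

    // Toy superoptimizer sketch: enumerate short straight-line programs
    // over a tiny instruction set, shortest first, and keep the first one
    // that matches a reference function on a set of test inputs.
    package main

    import "fmt"

    type op struct {
        name string
        fn   func(x int32) int32
    }

    // Negation is deliberately left out of the instruction set, so the
    // search has to rediscover that -x can be built from other ops.
    var ops = []op{
        {"not", func(x int32) int32 { return ^x }},
        {"inc", func(x int32) int32 { return x + 1 }},
        {"shl1", func(x int32) int32 { return x << 1 }},
    }

    // spec is the behaviour we want a cheap instruction sequence for.
    func spec(x int32) int32 { return -x }

    var tests = []int32{0, 1, -1, 7, -13, 1 << 20}

    // matches reports whether running seq agrees with spec on every test input.
    func matches(seq []op) bool {
        for _, x := range tests {
            y := x
            for _, o := range seq {
                y = o.fn(y)
            }
            if y != spec(x) {
                return false
            }
        }
        return true
    }

    // search enumerates candidate sequences shortest-first and returns the
    // names of the first match, or nil if none exists up to maxLen.
    func search(maxLen int) []string {
        var rec func(seq []op, remaining int) []string
        rec = func(seq []op, remaining int) []string {
            if len(seq) > 0 && matches(seq) {
                names := make([]string, len(seq))
                for i, o := range seq {
                    names[i] = o.name
                }
                return names
            }
            if remaining == 0 {
                return nil
            }
            for _, o := range ops {
                if found := rec(append(seq, o), remaining-1); found != nil {
                    return found
                }
            }
            return nil
        }
        for n := 1; n <= maxLen; n++ {
            if found := rec(nil, n); found != nil {
                return found
            }
        }
        return nil
    }

    func main() {
        // Prints [not inc]: the classic identity -x == ^x + 1.
        fmt.Println(search(3))
    }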
> To do so, Mr. Giorgi has his own timesaving helper: an A.I. coding assistant. He taps a few keys and the software tool suggests the rest of the line of code. It can also recommend changes, fetch data, identify bugs and run basic tests. Even though the A.I. makes some mistakes, it saves him up to an hour many days.
> Still, nearly two-thirds of software developers are already using A.I. coding tools, according to a survey by Evans Data, a research firm.
> So far, the A.I. agents appear to improve the daily productivity of developers in actual business settings between 10 percent and 30 percent, according to studies. At KPMG, an accounting and consulting firm, developers using GitHub Copilot are saving 4.5 hours a week on average and report that the quality of their code has improved, based on a survey by the firm.
We're in for a really dire future where the worst engineers you can imagine are not only shoveling out more garbage code, but making it much harder to assess that code for problems or issues.
> We're in for a really dire future where the worst engineers you can imagine are not only shoveling out more garbage code, but making it much harder to assess that code for problems or issues
It will probably still be more productive. IDEs, Stack Exchange... each of these prompted the same fears and realised some of them. But the benefits of having more code, quicker and cheaper, even if more flawed, outweighed those of quality. The same way the benefits of having more clothes and kitchenware and even medicine, quicker and cheaper, outweighed those of the high-quality bespoke wares that preceded them. (Where they don't, and where someone can pay, we have artisans.)
In the meantime, there should be an obsolescence premium [1] that materialises for coders who can clean up the gloop. (Provided, of course, that young and cheap coders of the DOGE variety stop being produced.)
[1] https://www.sciencedirect.com/science/article/abs/pii/S01651...
The problem with 'more code, quicker and cheaper' is that once you fall below a baseline of quality, it actually ends up costing you and your business significantly. Companies learned this the hard way during the outsourcing booms, and the use of AI amplifies this problem tenfold, much like it's doing with spam.
> At KPMG, an accounting and consulting firm, developers using GitHub Copilot are saving 4.5 hours a week on average and report that the quality of their code has improved, based on a survey by the firm.
I don't have any specific experience with KPMG, but considering the other "big name" firms' work I've encountered, there's, uh, lots of room for improvement.
What did/do you call the services you offer? I sincerely love debugging (fixing tech of any kind, digital or analog). I never thought I could offer services just fixing things instead of building from scratch...
> Now, let’s talk about the real winners in all this: the programmers who saw the chaos coming and refused to play along. The ones who didn’t take FAANG jobs but instead went deep into systems programming, AI interpretability, or high-performance computing. These are the people who actually understand technology at a level no AI can replicate.
> And guess what? They’re about to become very expensive. Companies will soon realize that AI can’t replace experienced engineers. But by then, there will be fewer of them. Many will have started their own businesses, some will be deeply entrenched in niche fields, and others will simply be too busy (or too rich) to care about your failing software department.
> Want to hire them back? Hope you have deep pockets and a good amount of luck. The few serious programmers left will charge rates that make executives cry. And even if you do manage to hire them, they won’t stick around to play corporate politics or deal with useless middle managers. They’ll fix your broken systems, invoice you an eye-watering amount, and walk away.
The increase in productivity means you need fewer inexperienced and/or bad engineers on a project. On the other hand, they may be retained to go after bolder, more numerous targets.
I don’t think that future will happen, because eventually someone will realize there is a competitive advantage in building a truly good product with people who actually know what they’re doing, and when other companies catch on they will start doing that, and the bad prompt-kiddie engineers will be gone.
> because eventually someone will realize there is a competitive advantage in building a truly good product with people who actually know what they’re doing
Doesn't / Shouldn't that competitive advantage inherently exist already? But don't we still see a small group of big players that put out broken / mediocre / harmful tech dominating the market?
The incentives are to make the line keep going up – nothing else. Which is how we get search engines that don't find things, social media that's anti-social, users that are products, etc.
I'm not at all hopeful that an already entrenched company that lays off 50% of its workers and replaces them with an AI-slop-o-matic will lose to a smaller company putting out well-made principled tech. At least not without leveraging other differentiating factors.
(I say all of this as someone that is excited about the possibilities AI can now afford us. It's just that the possibilities I'm excited about are more about augmentation, accessibility, and simplicity rather than replacement or obsolescence.)
The problem, so far, is that they're still... quite unreliable, to say the least. Sometimes I can feed the model files, and it will read and parse the data 100 times out of 100. Other times, the model seems clueless about what to do, and just spits out code for how to do it manually, with some vague "sorry, I can't seem to read the file", multiple times, only to start working again.
And then you have the cases where the models seem to dig themselves into some sort of terminal state, or oscillate between 2-3 states, that they can't get out of - until you fire up a new model and transfer the code to it.
Overall they do save me a ton of time, especially with boilerplate stuff, but very routinely even the most SOTA models will have their stupid moments, or keep trying to do the same thing.
I always thought hacking scenes in sci-fi were unrealistic, but if you're cooking up AI-fortified code lasagna at your endpoints, there is going to be a mishmash of vulnerabilities: expert, robust thought will be spread very thin by the velocity that systemic forces push developers toward.
> Mark Zuckerberg, Meta’s chief executive, stirred alarm among developers last month when he predicted that A.I. technology sometime this year would effectively match the performance of a midlevel software engineer
Either Meta has tools an order of magnitude more powerful than everyone else, or he's drinking his own koolaid.
> I see where the evolution of coding is going, and as a late-middle-aged developer it has made me look for the exits.
People with your attitude will be the first to be replaced.
Not because your code isn't as good as an AI's; maybe it's even better. But because your personality makes you a bad teammate.
> I've yet to see an LLM response or an LLM-generated diff that suggests removing or refactoring code.
Ask for multiple refactors and their trade-offs
But I think I would rather just end my career instead of transitioning into fixing enormous codebases written by LLMs.
I used to dream about starting a company from scratch and spent a good amount of time trying to manipulate management to let me start codebases over.
> I honestly feel kinda bad for some people in this thread who don't see the freight train coming.
There are a few loud people who think AI programming is the best thing since sliced bread.
What’s the freight train?
> In the meantime, there should be an obsolescence premium that materialises for coders who can clean up the gloop.
It was lucrative cleaning up shit code from Romania and India.
I'm hoping enough people churn out enough hot garbage that needs fixing now that I can jack up my day rate.
I remember when the West would have no coders because Indian coders are cheaper.
I remember when no-code solutions would replace programmers.
I remember.
> And then you have the cases where the models seem to dig themselves into some sort of terminal state, or oscillate between 2-3 states, that they can't get out of - until you fire up a new model and transfer the code to it.
It’s insane how similar non-deterministic software systems already are to biological ones. Maybe I’ve been wrong and consciousness is a computation.