Outsourcing dev work to India because it's "cheaper" already played out, as far as it could, decades ago.
So if your theory were correct, there'd be almost no Western developers by now. And yet there they are, making half a million a year working for big tech in California.
The only way your position passes even a basic sense check is if you think these companies are paying 5x just to see their devs in person.
> already played out, as far as it could, decades ago
We just laid off ~4,000 employees and are replacing them with hires in India. Your notion that it already played out decades ago is wrong.
Second, you can look at it differently.
AI is going to do most of the coding in a very short time. That's a fact. The opportunities are going to come to those who know how to prompt the AI and hold it accountable.
So yes, you're no longer going to be the hero for the code you type out.
But you CAN be the hero as the Senior Developer or Project Manager who knows what needs to be done and knows how to get the AI to do it right the first time.
I actually got out of coding a number of years ago because I was tired of keeping up with the latest changes in languages, standards, best practices, etc.
When AI became a thing over the past couple of years, I decided to try again ... and I'm actually enjoying it a whole lot more. I make a lot more progress a lot faster, which means I see results sooner.
You can't control the direction that coding is going. It will go where it goes. But you can control how you think and feel about it. So what choice will you make?
Great evidence. Add "full stop." to really drive the point home.
No amount of parallelization will make your program faster than its slowest non-parallelizable path. You can be as clever as you want, and it won't matter one bit unless you fix the bottleneck.
This extends to all types of optimization, and even to teamwork. Just make the slowest part faster. Really.
1. Decide if optimization is even necessary.
2. Then optimize the slowest path (a quick sketch of the math follows below).
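To make that concrete, here's a minimal Python sketch of Amdahl's Law; the 95%-parallelizable workload is just an assumed illustration:

    # Amdahl's Law: if a fraction p of the work parallelizes across
    # n workers, the serial remainder (1 - p) caps the speedup.
    def speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the program parallelized, a million workers
    # buy only ~20x, because the 5% serial path is the bottleneck.
    for n in (2, 8, 64, 1_000_000):
        print(f"n={n:>9,}: {speedup(0.95, n):5.2f}x speedup")

No number of workers can ever beat 1 / (1 - p); fixing the serial path is the only way to raise that ceiling.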
> It sounds obvious when spelled out but it blew my mind.
I think there's a weird thing that happens with stuff like this. Cliches are a good example, and I'll propose an alternative definition for them: a cliche is a phrase so obvious that everyone innately understands it, yet so obvious that no one internalizes it, which forces the phrase to be repeated ad nauseam.
At least, it works for a subset of cliches, like "road to hell," "read between the lines," Goodhart's Law, and I think even Amdahl's Law (though certainly not for others; some are bastardized, like Premature Optimization or "blood is thicker than water"). Essentially they are "easier said than done": they require System 2 thinking to resolve, but we act as if System 1 will catch them.

Like Amdahl's Law, many of these take a surprising amount of work to prove despite the result sounding so obvious. The big question is whether it was obvious a priori or only post hoc. We often confuse the two, and that gets us into trouble. I don't think the genius of the statement hits until you really dig into proving it and try to take your measurements in a nontrivially complex parallel program. I think that's true of a lot of things we take for granted.
In its original context it means the opposite of how people use it today.
Otherwise, yes, you'll continue to be irritated by AI hype, maybe up until the point where our civilization starts going off the rails.
I've tried everything. I have four AI agents. They still have an accuracy rate of about 50%.
I’m happy to have read this, which is reason enough to publish it. But it's also clearly generating debate, so it seems like a very good thing to have published.
tptacek has always come across as arrogant, juvenile, opinionated, and difficult to work with.
These kinds of articles that heavily support LLM usage in programming seem designed to FOMO you, or at least to weakly suggest that "you are using it wrong," just to push contrary or conservative opinions out of the discussion. It's pure rhetoric wrapped around an empty argument.
I use these tools every day and every hour, in strange loops (between at least Cursor, ChatGPT, and now Gemini), because I do see some value in them, even if only to simulate a peer or rubber duck to discuss ideas with. They are extremely useful to me because of my ADHD: they actually support me through my executive dysfunction and analysis paralysis, even if they produce shitty code.
Yet I'm still an AI skeptic, because I've seen enough failure modes in my daily usage. I don't know how to feel when faced with these ideas, because I fall outside the false dichotomy (I pay for them and use them every day, but I don't consider them as valuable as the average AI bro does). What's funny is that I have yet to see an article that actually shows LLMs' strengths and weaknesses in a serious manner, with actual examples. If you're going to defend a position, do it seriously, ffs.
It's just "AI did stuff really good for me" as the proof that AI works
Why haven't we seen an explosion of new start-ups, products, or features? Why do we still see hundreds of bug tickets on every issue tracker? Have you noticed anything different in any changelog?
I invite tptacek, or any other chatbot enthusiast around, to publish project metrics and show some actual numbers.
Videogame speedrunning has this problem solved. Livestream your 10x-engineer LLM usage, with each git commit annotated with the prompt that produced it. Then everyone will see the result.
This doesn't seem like an area of debate. No complicated diagrams required. Just run the experiment and show the result.
The article provides zero measurements, zero examples, zero numbers.
It's pure conjecture with no data or experiment to back it up. Unfortunately, conjecture rises to the top on Hacker News; a well-built study of LLM effectiveness would fall off the front page quickly.