I still defend that as the best name in English. Rock needs to go first. Rock beats Scissors beats Paper. It's the most straightforward order.
Has your productivity objectively, measurably improved, or does it just feel like it has? Recall the METR study, which caught programmers self-reporting that they were 20% faster with AI when they were actually 20% slower.
For simple changes I actually found smaller models better because they're so much faster. So I shifted my focus from "best model" to "stupidest I can get away with".
I've been pushing that idea even further. If you give up on agentic workflows, you can go surgical. At that point even models 100x smaller can handle it: just tell the model what to do and have it give you the diff.
Also, I found the "fumble around my filesystem" approach stupid at my scale, where I can mostly fit the whole codebase into the context. So I just dump src/ into the prompt. (Other people's projects are a lot more boilerplatey, so I'm testing ultra-cheap models like gpt-oss-20b for code search. For that, I think you can go even cheaper...)
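To make the surgical approach concrete, here's a minimal sketch of the "dump src/ and ask for a diff" loop. It uses the OpenAI Python client as a stand-in; the model name, the task string, and the src/ layout are placeholders, and any chat-style API would work the same way:

    import pathlib
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Concatenate the whole codebase into one prompt blob; this is viable
    # precisely because src/ fits comfortably in the context window.
    files = sorted(pathlib.Path("src").rglob("*.py"))
    codebase = "\n\n".join(f"# FILE: {f}\n{f.read_text()}" for f in files)

    task = "Rename the parse() function to parse_expr() everywhere."  # placeholder

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: the stupidest model that gets it right
        messages=[
            {"role": "system", "content": "Reply with a unified diff only."},
            {"role": "user", "content": f"{codebase}\n\nTask: {task}"},
        ],
    )

    # Review it, then apply with `git apply` or `patch -p1`.
    print(resp.choices[0].message.content)

No agent loop, no tool calls, no filesystem fumbling: one request in, one diff out.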
Patent pending.
My own very naive and underinformed sense: OpenAI doesn't have other revenue paths to fall back on the way Google does. The GPT-5 strategy really makes sense to me if I look at it as a market-share play. They want to scale out like crazy, in a way that's affordable to them. If it's that cheap, they must have put a ton of work into some scaling effort that the other vendors just don't care about as much, whether because of loss-leader economics or VC funding. It really makes me wonder if OpenAI is sitting on something much better that also just happens to be much, much more expensive.
Overall, I'm weirdly impressed, because if that really was their move here, it's slight evidence that somewhere down in their guts they really do care about their original mission. For people other than power users, this might actually be a big step forward.
It's just a complex abstraction over a fundamentally trivial concept. The only problem it solves is bringing your own tools to an existing chatbot, and I haven't had that problem yet.
But it doesn't have any semantic understanding, because it's not an LLM.
So connecting an LLM to my API via MCP means that I can do things like "can you semantically analyze the argument?" and "can you create any counterpoints you think make sense?" and "I don't think premise P12 is essential for lemma L23, can you remove it?" And it will, and I can watch it on my frontend to see how the argument evolves.
So in that sense - combining semantic understanding with tool use to do something that neither can do alone - I find it very valuable. However, if your point is that something other than MCP can do the same thing, I could probably accept that too (especially if you suggested what that could be :) ). I've considered just having my backend use an API key to call models, but it's sort of a different pattern that would require me to write a whole lot more code (and pay more money).
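For reference, the MCP side of this pattern is pretty thin. Here's a minimal sketch using FastMCP from the official Python SDK, with a toy in-memory argument standing in for my real backend; the tool names and the data shape are made up for illustration:

    import json
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("argument-mapper")

    # Toy stand-in for the real backend: lemma id -> list of premise ids.
    ARGUMENT = {"L23": ["P11", "P12", "P13"]}

    @mcp.tool()
    def get_argument() -> str:
        """Return the current lemma -> premises structure as JSON."""
        return json.dumps(ARGUMENT)

    @mcp.tool()
    def remove_premise(lemma_id: str, premise_id: str) -> str:
        """Detach a premise (e.g. P12) from a lemma (e.g. L23)."""
        ARGUMENT[lemma_id].remove(premise_id)
        return json.dumps(ARGUMENT)

    if __name__ == "__main__":
        mcp.run()  # stdio transport; the chat client discovers and calls the tools

Point a chat client at this server and "remove P12 from L23" becomes a remove_premise call; the frontend just re-renders from whatever the backend now holds.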