I don't think vibe coders know the difference, but often when I ask AI to add a feature to a large code base, I already know how I'd do it myself, and the answer Claude comes up with is more often than not the one I would have chosen. Codex and Gemini have burned me too many times, and I keep going back to Claude. I trust its judgment. Anthropic models have always been a step above OpenAI and Google; it was like that even two years ago, so it must be something fundamental.
For me, Codex does well at pure coding tasks, but the moment a task involves product judgement, design, or writing – which a lot of my tasks do – I need to pull in Claude. It's like Claude was trained on product management and design, not just coding.
I'm there with you, though I've only been using it a couple of months now. I find that as long as I spend a fair amount of time with Claude specifying the work before starting, it tends to go really well. I have a general approach to how I want to run/build the software in development, and it goes pretty smoothly with Claude. I do have to review what it does and sanity-check things... I've tended to find bugs where I expect to see bugs, just from experience.
I keep using the analogy of working with a disconnected overseas dev team over email... since I've had to do this before. The difference is turn around in minutes instead of the next day.
On a current project, I just have it keep expanding on the TODO.md as it works through the details... I'd say it's going well so far... a Deno driver for MS-SQL using a Rust+FFI library. Still have some sanity checks around pooling, and I need to test a couple of Windows-only features (SSPI/Windows Auth and FILESTREAM) in a Windows environment, and then I'll be ready to publish... About 3-4 hours of initial planning, 3 hours of initial iteration, then another 1:1:1:1 hours of planning/iteration working through features, etc.
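The pooling sanity check is the kind of thing I'd pin down with a tiny self-contained sketch first; everything here (`Pool`, `FakeConn`, the limits) is illustrative, not the actual driver's API:

```typescript
// Rough sketch of the pooling invariant worth sanity-checking: a
// fixed-size pool never hands out more connections than its max,
// and released connections get reused before new ones are created.
// FakeConn/Pool are illustrative stand-ins, not the real driver's types.
class FakeConn {
  closed = false;
}

class Pool {
  private idle: FakeConn[] = [];
  private total = 0;
  constructor(private readonly max: number) {}

  acquire(): FakeConn {
    const conn = this.idle.pop();
    if (conn) return conn; // reuse an idle connection before creating one
    if (this.total >= this.max) {
      throw new Error("pool exhausted");
    }
    this.total += 1;
    return new FakeConn();
  }

  release(conn: FakeConn): void {
    this.idle.push(conn); // return to the idle list for reuse
  }

  get size(): number {
    return this.total; // should never exceed max
  }
}
```

A real pool would queue or time out instead of throwing on exhaustion, but even this much is enough to assert the never-more-than-max invariant.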
Aside: I have noticed that a few times a day, particularly west-coast afternoon and early evening, the entire system seems to go at a third the speed... I'm guessing that's when the load on Anthropic's network as a whole peaks.
1) I don't want to give OpenAI my money. I don't like how they are spending so much money to shape politics to benefit them. That seems to fly in the face of this being a public benefit. If you have to spend money like that because you're afraid of what the public will do, what does that say?
2) I like how Claude just gives me straight text on one side, examples on the other, and nothing else. ChatGPT and Gemini tend to go overboard with tables, lists, emojis, etc. I can't stand it.
3) A lot of technical online conversation seems to have been hollowed out in recent years. The number of people making blog posts explaining how to use something new has basically tanked.
Wow, I'd always considered Claude more of a software tool and never really gave it a chance at regular chat, but yeah, after one session I think I'm a convert, for exactly #2.
I'm fine with charts, but ChatGPT is so long-winded and redundant. "When would I use such-and-such pattern?" "That's exactly the right question to ask! ... What you're really asking ... Why that's interesting ... Why some people find it critical ... Option 1 ... Option 2 ... Consideration ... Table comparing to so-and-so ... The deep reason ... What it all boils down to ... The one-line answer (tight!) ... The next thing you need to know ... I can also draw a useless picture for you. Would you like me to do that?"
I like Claude a lot for general and technical questions... I like xAI for some of that as well, especially for very current event summaries.
I will say that with Claude I often have to make sure to give it the context of a question for it to give me the answers I'm looking for. In general it does better when you include the whys behind a decision or question.
There is also the very lame auto-win category that I happen to fall into...
I don't trust OpenAI or Google. Google has more than proven they aren't trustworthy, well before the LLM coding tool era. I am legitimately not even giving them a chance.
Sadly, I assume Anthropic will at some point lose my trust too, but for now they just feel like the obvious choice for me.
So obviously I am a terrible overall observer, but I am sure I am not alone in the auto-win portion of devs choosing Anthropic.
That was exactly why I had been a paying Anthropic customer as well – I trusted them more than I trusted OpenAI or Google. But I canceled my subscription this morning after the news that they've ditched their core safety promise [†], and they look likely to fold to the Pentagon's demands on autonomous weapons/surveillance as well.
The sometimes hot-garbage AI results I've gotten for technical questions over the past year or so have me not even considering them from the start... I've tried GitHub Copilot (whatever the default engine is) and OpenAI and just found them annoying. Claude is the first one I've felt was more productive than annoying, and I've just started using it.
I've been using ChatGPT (Thinking). I like how it has learned how I do stuff, and keeps that in mind. Yesterday, I asked it to design an API, and it referenced a file I had sent in, for a different server, days earlier, in order to figure out what to do.
I'm not using it in the same way that many folks do. Maybe if I get to that point, I'll prefer Claude, but for my workflow, ChatGPT has been ideal.
I guess the best part is that it seems to be the absolute best at interpreting my requirements, including accounting for my human error.
Oof. I turned the history referencing off. I use ChatGPT for wildly diverging topics and it will bring things up that have zero relevance to what I'm currently looking for if history is on.
Yup. I can see that being a problem. My use is pretty linear, so that isn't an issue. I think that you might be able to establish different "contexts," though I haven't tried.
I will see whether or not I can “wipe the slate clean,” between projects.
I would also NEVER use it for any confidential/proprietary stuff that can’t leak.
The project I’m working on is not open-source, but we don’t really care too much if it leaks. I don’t send in any credentials; just source.
Right now, it’s helping me design tests in Postman, to validate the API.
I like this feature and rely on it too. I get that some people hate it and that it can make some pretty insidious mistakes when it uses it, but I’ve found it valuable for providing implicit context when I have multiple queries for the same project.
Worth noting that Claude also has a memory feature and uses it intelligently like this, sometimes more thoughtfully than cgpt does (fewer “out of left field” associations, smoother integration).
The past few weeks, Claude has started doing that as well, i.e., recognizing my preference to use Deno for scripting or React+mui when scaffolding a UI around something.
I've been using the browser/desktop for planning sessions on pieces of a larger project I'm putting together and it's been connecting the dots in unexpected ways from the other conversations.
I think the one disappointment is that I can't seem to resume a conversation from the web/desktop interface to the code interface... I have to have it generate a zip I can extract then work from.
Model aside, the harness of Claude Code is just a much better experience. Agent teams, liberal use of tasks, and other small ergonomics make it a better dev tool for me.
A lot of people seem to prefer OpenCode to Claude Code, myself included. Having tried both, I find myself having a much better time in OpenCode. Have you tried it?
I'll admit it lacks on the agent teams side but I tend to use AI sparingly compared to others on my team.
I’ve been using Claude Code for about six months and evaluated OpenCode on my Windows work laptop a few weeks ago. Found 3 dealbreakers that sent me back:
1. No clipboard image paste. In Claude Code I constantly paste screenshots – a broken layout, an error dialog, a hand-drawn schema – and just say “fix this.” OpenCode on Windows Terminal can’t do that without hacky workarounds (save to file, drag-and-drop, helper scripts). I honestly don’t understand how people iterate on UI without this.
2. Ctrl+C kills the process instead of copying. And you can’t resume with --continue either, so an accidental Ctrl+C means losing your session context. Yes, I know about Ctrl+Ins/Shift+Ins, but muscle memory is muscle memory. I also frequently need to select a specific line from the output and say “this part is wrong, do it differently” – that workflow becomes painful.
3. No step-by-step approval for individual edits. Claude Code’s manual edit mode lets me review and approve each change before it’s applied. When I need tight control over implementation details, I can catch issues early and redirect on the spot. OpenCode doesn’t have an equivalent.
All three might sound minor in isolation, but together they define my daily workflow. OpenCode is impressive for how fast it’s moving, but for my Windows-based workflow it’s just not there yet.
> Half their agentic usage is coding. When that's your reality, you train for it. You optimize the tool use, the file editing, the multi-step workflows - because that's what your paying users are actually doing. Google doesn't have that same pressure.
I wonder if this is a strategic choice: Anthropic has decided to go after developers, a motivated but limited market, whereas the general populace might be more attracted to improved search tools, allowing Google/OpenAI/etc. to capture that larger market.
They are heavily dogfooding. Coding is needed to orchestrate the training of the next Claude model, data processing, RL environments, evals, scaffolding, UI, APIs, automated experiments, cluster management, etc etc. This allows them to get the next model faster and then get the next one etc.
Making a model that's great at other kinds of knowledge/office work is coincidental; it doesn't feed back directly into improving the model.
- Limiting model access when not using Claude Code
- Claude Code is a poorly made product: inefficient, buggy, etc. It shows they don't read the code
- Thousands of open GitHub issues, regressions introduced constantly
- Dev-hostile changes like the recent change to hide what the agent is actually doing
However, they are very good at marketing and hype. I'd recommend everyone give pi or OpenCode a try. My guess is Anthropic actually wants vibe coders (a much broader market).
It's more likely that Anthropic feels that if they can crack just programming, then their agents can rapidly do the legwork of surpassing the other labs.
Hmm, that makes some sense to me as well, like buying eco upgrades early in an RTS game. But that assumes better programming leads to more-generally-competent AI, which I think is tenuous. The thing that sparked this recent AI summer was not better programming, so why would better programming lead to the next/ongoing big improvements?
I doubt it. Gemini is heavily used internally for coding, with integrations across Google's developer tooling. gemini-cli is not meaningfully different from Claude Code.
Could it be tooling like Claude Code? I just used Claude Code with qwen3.5:35b running locally to track down two obscure bugs in new Common Lisp code I wrote yesterday.
I do something similar: I connect Claude Code to a LiteLLM router that dispatches model requests to different providers: Bedrock, OpenAI, Gemini, OpenRouter, and Ollama for open-source models. I have a special slash command and script that collects information about the session, the project, and observed problems into an evaluation dataset. I can re-evaluate prompts and find models that do the job in a particular agent faster/cheaper, or use automated prompt optimization to eliminate problems.
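Stripped of the LiteLLM specifics, the dispatch itself boils down to a prefix-to-provider lookup on the "provider/model" naming convention; a rough illustrative sketch (not LiteLLM's actual API):

```typescript
// Illustrative prefix-based routing in the style of a LiteLLM-like
// router: the part of the model name before "/" picks the backend.
// Provider names here match the ones mentioned above; the function
// itself is a hypothetical stand-in, not LiteLLM's real API.
const PROVIDERS = ["bedrock", "openai", "gemini", "openrouter", "ollama"] as const;
type Provider = (typeof PROVIDERS)[number];

function pickProvider(model: string): Provider {
  const prefix = model.split("/")[0];
  const hit = PROVIDERS.find((p) => p === prefix);
  if (!hit) {
    throw new Error(`no provider configured for model "${model}"`);
  }
  return hit;
}
```

Everything else (the slash command, the eval dataset) is just logging what went through the router per session so the same prompts can be replayed against a different backend later.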
Gemini is supposed to have this huge context, yet Gemini CLI (paid) often forgets by the next prompt whatever the previous one was about and starts doing something completely different, often switching natural or programming language. I use Codex, and with 5.3 it is better, but not there compared to Claude Code, for us anyway; it just goes looking for stuff, draws the most bizarre conclusions, and quite often ends up lost, doing the wrong things. Mistral works quite well on smaller issues. Cerebras GLM rocks at quick analysis; if it had a bigger token allowance and less rate limiting, it would probably be what I'd use all the time. Unfortunately, on a large project, I hit a 24-hour block in less than an hour of coding. It does do a LOT in that time, of course, because of its bizarre speed.
On the metric of project complexity vs. getting lost and confused, Claude does a lot better than anything else I've tried; that's it.
[†] https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-...