Posting this here as a top-level comment, since many people asked why boycott just OpenAI:
-----
OpenAI is the least trustworthy of the big LLM providers. See S(c)am Altman's track record, especially his early comments in Senate hearings, where:
* he warned against engagement-optimisation strategies, like those of social media, being used for chatbots / LLMs;
* he also warned that ads would be "the last resort" for LLM companies.
He has casually ignored both of his own warnings: ChatGPT / OpenAI has now fully converted to Facebook's tactics of "move fast and break things" - even if the thing being broken is society itself. A complete turn away from the original AI-for-science lab it was founded as, which explains why every real (founding) ML scientist left the company years ago.
While still for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists - not marketing guys. That gives me at least some confidence in their intentions: as scientists, we often seek knowledge, not power for power's sake.
Just boycott them all if you can. That's what I've done.
Some people's livelihoods probably depend on Claude, and they can't just switch to, say, GLM-4.7 on HF. Fine. But it's a moral compromise - that's life; sometimes you need to compromise what you want for what you need. Just don't tell yourself it's a reasonable line to hold.
I can't decouple from Google unfortunately but I accept that without fooling myself into thinking "Oh but Google are fine".
I agree - if you can, boycott all of them (and maybe use open-weight models locally or on E2EE cloud inference providers). BUT I also think it's crucial at a moment like this to take a stance against corporations like OpenAI that sign with the War Department, willing to introduce mass surveillance and autonomous weapons powered by brittle LLMs. This is a recipe for disaster, and the only way they will be swayed is by feeling it in the money/subscriptions and in the public image they so carefully crafted.
Note: yes, OpenAI claims it doesn't support the above-mentioned DoW use-cases - but they have signed with the DoW, and it is HIGHLY unlikely the DoW would give them different terms than it gave Anthropic (at least regarding the substance). Maybe OpenAI was just happy with the "coat of paint" legalese the DoW offered - which Anthropic specifically called out as ineffective in their statement.
I also wouldn't put it past Altman, who is much friendlier with Trump's government, to play a double game here and push their main competitor out. But at least in this case I hope he's acting for the benefit of all by truly standing with Anthropic on the issue.
Actually, Google Gemini provides almost no control over the data you share. Same for Antigravity. No "opt-out" button, not even as a lie - even when you are a paying user. Only Google Workspace users have some control.
There is a setting in Gemini, but it removes all your chat history. For Antigravity, I think there is nothing preventing them from using your code and the data your agents upload in the background unless you are a Workspace user.
Note: canceled my ChatGPT subscription and deleted my account.
FYI, I am a paying Workspace customer. I disabled Gemini retention. Doing so means no chat-history sidebar - all chats are ephemeral. It was an org-level setting, which became impractical, so I re-enabled it. Magically, all of my old chats were back. The ones from no-retention mode weren't there. Perhaps if I'd left it off for more than 30 days, the old stuff would have been truly removed.
The point is there are no conversation-level controls. It's incredibly user-hostile.
I can't set a voice reminder on my Pixel without giving full access to my Google Workspace (which includes all my email), which is explicitly allowed to be trained on per the terms. There is no per-app toggle.
Voice reminders were the only thing assistants did well for years.
I know we should boycott OpenAI; I was just wondering if I should also boycott Altman's other venture, Worldcoin, which is down 97.27%? He said I'd get UBI soon.
Oh yes, you get free UBI / Worldcoins - you just need to do a full scan with their creepy orb and allow a private company to keep your full biometric data. That's not asking too much, is it...?
It is indeed, though personally I do not perceive Grok/xAI as one of the top LLM companies. Yes, they do some benchmark-maxing, but I do not think they are on par with Anthropic, Google/DeepMind, or OpenAI.
> also, he warned that "ads would be the last resort" for LLM companies.
What is wrong with ads? I personally dislike them and prefer to just pay for services, but it seems that the majority of people prefer the "free", ad-supported model.
I’d argue that it’s not specifically that they prefer it, it’s that they don’t understand and appreciate what they’re selling to get whatever service without paying money. Now that we live in a world where everything is collected, aggregated, sold, and weaponized regardless of you paying or not, maybe it doesn’t matter much anyway.
I think this misses the main reason. I mean ads have been a thing for a while now. What's new is:
* Brockman donates $25m to a pro Trump super PAC
* Altman has been in talks with the Pentagon since Wednesday
* Now it's announced Anthropic is dropped by the military, designated a supply chain risk, and OpenAI takes over its military contract, after Anthropic objected to surveilling US citizens and allowing autonomous kill bots.
Investor confidence is far more important to them than cashflow, and the best way to shake investor confidence is with the magic words "user numbers are down".
This is why I haven't used OpenAI since early 2023-ish, and when I did I signed up with a masked email (though notably I'm sure they can tie my chats to me via my credit card :) ). afaict Sam Altman is essentially a sociopath, like lots of the "ruling elite" these days. And while I still use Gemini and Claude extensively and recognize some of the irony there, I view not using OpenAI as harm reduction to myself.
I mean, marketing is how business uses psychology to control the masses... why would we think AI wouldn't be used by businesses, governments, and independent psychopaths?
Your point stands just fine without the silly, uniquely-US-politics-style “SCAM Altman ha ha!” BS. I can feel myself getting dumber every time I am subject to one of these.
I stopped paying OpenAI a long time ago. I get that actually deleting your OpenAI account hurts their ‘numbers’ and thus possibly their valuation. I choose another path: I use their tokens for free, hopefully helping them go out of business a little sooner.
The irony is that until yesterday I felt more or less the same about Anthropic. Last night I paid for an Anthropic subscription I don’t need in order to both support their current cause vs. the US government and help their ‘numbers.’
Ads are imminent - the TOS just changed to allow them - and free users will get trash models that are net-profitable once ads are factored in. Better to just leave now.
I think what Anthropic did yesterday was good, but I had to take a step back and think: well, it wasn't a bridge too far for them to allow Claude to be used in the wildly illegal Maduro kidnapping operation.
Right - the red line wasn't much of a line. If you're drawing your line only at unconstitutional mass surveillance and at letting the DoD build Skynet (because Claude's not ready for it yet), that's not really a line of principle.
Did you ask these too: what was the full context? To what degree was Anthropic aware in advance? What was their action space (their options)? What would be the consequences of their next actions?
And of course: and what sources are you using?
I get it: moral oversimplification is tempting for many people. I understand digging in takes time, but this situation warrants extra consideration.
Ethics is complicated and much harder than programming. Ethical reasoning is a muscle you have to train. Generally speaking, it isn’t the kind of skill that you build in isolation. At the very least, a lot of awareness and introspection is required.
I’d like to think that HN is a fairly intelligent community. But I don’t assume too much. Going based on what I’ve seen here generally, I see a lot of shallow thinking. So I think it’s a reasonable concern to think many of us here have a pretty large blind spot (statistically) when it comes to “softer” skills like philosophy and ethics.
This is not me “blaming” individuals; our industry has strong bias and selection criteria. This is my overall empirical take based on participating here for years.
Still, I’d like to think we are sufficiently intelligent and we have sufficient means and time to fill the gaps. But we have to prove it. I suggest we start modeling and demonstrating the kind of behavior and reasoning that we want to see in the world.
You can probably tell that I lean heavily towards consequentialist ethics, but I don't discount other kinds of ethical thinking. I just want everyone to think harder. Seek more context. Ask what you would do in another's shoes, and why. Recognize the incentives and constraints.
Many people are tempted to judge others. That’s human. I suggest tamping that down until you’ve really marinated in the full context.
Also, each of us probably has more influence through our own actions than through merely judging others.
And let me be brutally honest about one's impact: organizing and collaborating is such a force multiplier (easily 100X) that not doing it for things you care about is a moral failure!
I’m not discounting good intentions, but in my system of ethics, I put much more emphasis on our actions. And persuasion is an action, which is what I’m hoping to do here.
I asked z.ai, "Which is the best model for coding?"
Here's the response:
```
As of late 2024, there isn't one single "best" model for every situation, as performance depends on whether you need raw coding intelligence, speed, or integration into your workflow.
However, the current consensus among developers places Claude 3.5 Sonnet at the top for pure coding ability.
Here is a breakdown of the best models for coding right now, categorized by their strengths:
...
```
So they have data until late 2024 and nothing beyond? They don't even perform a web search. Doesn't seem to be on par with other frontier models.
I signed up with OpenAI a while ago and didn't need to provide any phone number... I want to delete my OpenAI account, but then I can't use Claude without a phone?
It’s a way to mitigate bot accounts. Arguably not the best way, arguably not the right cost/benefit, but all of these services see massive bot traffic and are in a constant battle.
LOL, I keep getting:
> Oops, an error occurred!
> Too many failed attempts.
> Try again
...my login codes are mysteriously not working when trying to delete my OpenAI/ChatGPT account.
When I type in 'DELETE', the button just stays disabled. When I tried to make the request through their 'Privacy' portal, I received a mysterious 'Session expired' error, and now I've been locked out with 'Too many failed attempts'...
Yeah they intentionally broke it. So on Monday morning, instead of just deleting my account, I will be terminating all of the accounts in our company and moving them all to Anthropic. Keep it up, Sam!
It claims that I can’t end my subscription because I signed up on another platform. How odd, once money is involved suddenly our AGI contender can’t implement basic features. Or I’m a fool somehow.
PSA: Export your ChatGPT conversations before cancelling
If you're thinking about cancelling (or switching to Claude/Gemini), don't lose months of conversations first.
I built Basic Memory — it imports your ChatGPT export and turns it into plain Markdown files. Every conversation becomes a file you can actually read, search, and use with whatever AI you switch to.
This is not an ad. It is free and open source. Your data belongs to you. Keep it.
Steps:
1. Settings → Data Controls → Export Data (ChatGPT emails you a zip)
2. Install Basic Memory (brew tap basicmachines-co/basic-memory && brew install basic-memory)
3. bm import chatgpt conversations.zip
Complete docs: http://docs.basicmemory.com
It took exactly 24 hours, to the minute, from the time I received the "we're generating an export" email until I got the download link, so I'm guessing they're either batching exports or deliberately delaying them 24 hours because it adds friction to the account-deletion process.
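If you'd rather not install a tool, the same idea can be sketched in a few lines of Python. This is a minimal sketch, not how Basic Memory works: it assumes the export zip contains a `conversations.json` in ChatGPT's usual title/mapping layout (that layout is undocumented and may change), and `conversation_to_markdown` / `export_zip_to_markdown` are just illustrative names.

```python
import json
import re
import zipfile
from pathlib import Path

def conversation_to_markdown(conv: dict) -> str:
    """Flatten one exported conversation into a Markdown transcript.

    Assumes each conversation dict has a 'title' and a 'mapping' of
    message nodes (ChatGPT's observed export layout; may change).
    """
    lines = [f"# {conv.get('title') or 'Untitled'}", ""]
    for node in conv.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue  # root/system nodes carry no message
        # 'parts' can mix strings and non-text payloads; keep text only.
        parts = msg.get("content", {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if not text:
            continue
        role = msg.get("author", {}).get("role", "unknown")
        lines += [f"**{role}:**", "", text, ""]
    return "\n".join(lines)

def export_zip_to_markdown(zip_path: str, out_dir: str) -> int:
    """Write one .md file per conversation; return how many were written."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        conversations = json.loads(zf.read("conversations.json"))
    for i, conv in enumerate(conversations):
        # Keep filenames filesystem-safe.
        safe = re.sub(r"[^\w\- ]", "_", conv.get("title") or f"conversation_{i}")
        (out / f"{i:04d}_{safe}.md").write_text(conversation_to_markdown(conv))
    return len(conversations)
```

A proper tool will do much more (search, linking, dedup), but for a one-off rescue of your history before deleting an account, something this small is enough.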
Altman tweet:
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
From that, it reads like the administration quickly agreed to the terms Anthropic wanted - just with OpenAI instead.
Does "putting them in the agreement" mean "we will never allow them," or "we will not allow them if they are illegal"? Here's a link which says the DoD was willing to make up with Anthropic any time if they allowed surveillance of Americans: https://www.axios.com/2026/02/27/anthropic-pentagon-supply-c...
Another leak says the agreement "reflects existing law and the Pentagon's policies": https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...
Seems like Altman wants to spin this as the same principled stand Anthropic took, but they really caved to the DoD's "all legal applications" framing. Up to you to decide how much you think the law restrains the Pentagon here.
>> What happened in Tiananmen Square in 1989, June Fourth Incident
> Content Security Warning: The input text data may contain inappropriate content
Why not?
We are going backwards.
You can disable saving your activity; in that case your chats won't be stored or used.
If you use Gemini through Google Workspace, chats won't leave the Workspace environment and won't be used for LLM training (as of now).
The whole thing rather stinks.
https://x.com/elonmusk/status/1889070627908145538
https://x.com/elonmusk/status/1935733153119010910
https://x.com/elonmusk/status/1894244902357406013
https://x.com/elonmusk/status/1955299075781431726
https://x.com/elonmusk/status/1889371675164303791
https://x.com/elonmusk/status/1935539112746041422
https://x.com/elonmusk/status/1955190817251102883
https://x.com/elonmusk/status/1955195673693077615
https://x.com/elonmusk/status/1889063777792069911
https://x.com/elonmusk/status/1910171944671916305
Learnt from GOOG that nothing is free. I'm now paying for Claude.
> Unfortunately, Claude is not available to new users right now. We're working hard to expand our availability soon.
That's unfortunate timing.
It took me a minute to see this.
Edit: Had to "submit a request".
So glad they let me request my account and data deleted, really grateful /s
Context is his https://www.resistandunsubscribe.com/ campaign.
> New accounts are still subject to our limit of 3 accounts per phone number. Deleted accounts also count toward this limit.
> Deleting an account does not free up another spot.
> A phone number can only ever be used up to 3 times for verification to generate the first API key for your account on platform.openai.com.