Readit News
mentalgear · 13 days ago
Posting this here as a top-level comment, as many people asked why boycott just OpenAI:

-----

OpenAI is the least trustworthy of the big LLM providers. See S(c)am Altman's track record, especially his early comments in Senate hearings where:

* he warned of engagement-optimisation strategies, like those of social media, being used for chatbots / LLMs.

* also, he warned that "ads would be the last resort" for LLM companies.

Both of his own warnings he casually ignored, as ChatGPT / OpenAI has now fully converted to Facebook's tactics of "move fast and break things" - even if what breaks is society itself. A complete turn away from the AI-for-science lab it was founded as, which explains why nearly every founding ML scientist left the company years ago.

While still for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists - not marketing guys. For me at least, that brings some confidence in their intentions, since as scientists we often seek knowledge, not power for power's sake.

rustyhancock · 13 days ago
Just boycott them all if you can. That's what I've done.

Some people's livelihoods probably depend on Claude, and they can't, say, use GLM-4.7 on HF. Fine. But it's a moral compromise; that's life, sometimes you need to compromise what you want for what you need. Just don't tell yourself it's a reasonable line to hold.

I can't decouple from Google unfortunately but I accept that without fooling myself into thinking "Oh but Google are fine".

tomrod · 13 days ago
Z.ai

>>What happened in Tiananmen Square in 1989, June Fourth Incident

>! Content Security Warning: The input text data may contain inappropriate content

eru · 13 days ago
Why are compromises not reasonable lines to hold?

Deleted Comment

mentalgear · 13 days ago
I agree, if you can, do boycott all of them (and maybe use open-weight models locally or on E2EE cloud inference providers) - BUT I also think it's crucial at a moment like this to take a stance against corporations like OpenAI that sign with the War Department, willing to introduce mass surveillance and autonomous weapons powered by brittle LLMs. This is a recipe for disaster, and the only way they will change course is by feeling it in the money/subscriptions and in the public image they so carefully crafted.

Note: yes, OpenAI claims it doesn't support the above-mentioned DoW use cases - but they have signed with the DoW, and it is HIGHLY unlikely the DoW would offer them different terms than Anthropic (at least in substance). Maybe OpenAI was just happy with the "coat of paint" legalese the DoW offered - which Anthropic specifically called out as ineffective in their statement. I also wouldn't put it past Altman, who is much friendlier with Trump's government, to play a double game here to get his main competitor out of the game. But at least in this case I hope he's acting for the benefit of all by truly standing with Anthropic on the issue.

Hackbraten · 13 days ago
> I can't decouple from Google unfortunately

Why not?

ozgung · 13 days ago
Actually, Google Gemini provides almost no control over the data you share. Same for Antigravity. No "opt-out" button, not even as a lie. Even when you are a paying user. Only Google Workspace users have some control.

There is a setting in Gemini, but it removes all your chat history. For Antigravity, I think there is nothing preventing them from using your code and the data your agents upload in the background, unless you are a Workspace user.

Note: I canceled my ChatGPT subscription and deleted my account.

reilly3000 · 13 days ago
FYI I am a paying Workspace customer. I disabled Gemini retention. Doing so means no chat history sidebar- all are ephemeral. It was org-level. That became impractical. I re-enabled it. Magically, all of my old chats were back. The ones during no retention mode weren’t there. Perhaps if I’d left it off for more than 30 days the old stuff would have been truly removed.

The point is there are no conversation-level controls. It's incredibly user-hostile.

gooseyman · 13 days ago
It's all or nothing for Gemini Pro.

I can't set a voice reminder on my Pixel without giving full access to my Google workspace (which includes all emails) which is explicitly allowed to be trained on per the terms. There is no per app toggle.

Voice reminders were the only thing assistants did well for years.

We are going backwards.

de6u99er · 13 days ago
That's not correct, at least not here in Europe.

You can disable saving your activity. In that case your chats won't be stored or used.

If you use Gemini through Google Workspace, chats won't leave the workspace environment and won't be used for LLM training (as of now).

jbkkd · 13 days ago
I like Gemini the model, but the app itself sucks. You can't even delete conversations if you're an enterprise workspace user.
de6u99er · 13 days ago
That's not correct. If you disable activity, your data won't be used. You won't have saved chats and can only have one.
WarmWash · 13 days ago
The API doesn't retain your data, but then you do need to pay fully for each token.
WD-42 · 13 days ago
The reason this is on the front page now is because of Altmans recent deal with the department of war, not because of these general grievances.
rpwverheij · 11 days ago
link to get up to speed on that?
kledru · 13 days ago
maybe they will have a human in the loop when vibe bombing the world, if the person agrees not to use an ad blocker
altmanaltman · 13 days ago
I know we should boycott OpenAI, I was just wondering if I should also boycott Altman's other venture, Worldcoin, which is down 97.27%? He said I'll get UBI soon.
mentalgear · 13 days ago
Oh yes, you get free UBI / Worldcoins - you just need to do a full scan with their creepy orb and allow a private company to keep your full biometric data. That's not asking for too much, is it...?
fnordpiglet · 13 days ago
Well you have to have customers to have a boycott
rixed · 13 days ago

> ads would be the last resort

Interestingly, Larry Page & Sergey Brin wrote something similar in their paper about Google; see Appendix A in http://infolab.stanford.edu/pub/papers/google.pdf.

stingraycharles · 13 days ago
Don’t you think Grok / X.ai is worse?
mikkupikku · 13 days ago
Grok isn't even in the running. It's a "me too" embarrassment that only exists so the owner can feel as though he's a meaningful participant.
mentalgear · 13 days ago
It is indeed, though personally I do not perceive Grok/xAI as one of the top LLM companies. Yes, they do some benchmark-maxing, but I do not think they are on par with Anthropic, Google/DeepMind, or OpenAI.
jdiaz97 · 13 days ago
Not a real AI company, every time Grok shows actual intelligence it gets lobotomized by Elon to glaze him
lII1lIlI11ll · 13 days ago
> also, he warned that "ads would be the last resort" for LLM companies.

What is wrong with ads? I personally dislike them and prefer to just pay for services, but it seems that majority of people prefer "free"-ad-supported model.

seanp2k2 · 13 days ago
I’d argue that it’s not specifically that they prefer it, it’s that they don’t understand and appreciate what they’re selling to get whatever service without paying money. Now that we live in a world where everything is collected, aggregated, sold, and weaponized regardless of you paying or not, maybe it doesn’t matter much anyway.
brookst · 13 days ago
I generally agree with your take but the juvenile name-calling really weakens the point.
mentalgear · 13 days ago
Generally I would agree; only in this case the nickname seems to fit the person better than his actual name.
tim333 · 12 days ago
I think this misses the main reason. I mean ads have been a thing for a while now. What's new is:

* Brockman donates $25m to a pro-Trump super PAC

* Altman has been in talks with the Pentagon since Wednesday

* Now it's announced Anthropic is dropped by the military, designated a supply chain risk, and OpenAI takes over its military contract, after Anthropic objected to surveilling US citizens and allowing autonomous kill bots.

The thing stinks rather.

krater23 · 13 days ago
Why boycott? Just use their free services and never pay. Costing them money instead of paying them money goes a step further than a boycott.
bspammer · 13 days ago
Investor confidence is far more important to them than cashflow, and the best way to shake investor confidence is with the magic words "user numbers are down".
mrgordon · 13 days ago
That sounds smart but they still raise more money because they “have 900 million users”
layer8 · 13 days ago
If it’s free, then you’re the product. OpenAI gets your data and ad revenue, and can raise more investor money due to how many users they have.
pinnochio · 13 days ago
The number of sticky non-paying users still gives them more investment juice than per-user costs deduct, since we're still in the speculative phase.
awestroke · 13 days ago
ChatGPT is going to try to influence you to buy certain products and use certain services. So you'll be the product in the end
james_marks · 13 days ago
If you aren’t paying for the product, you are the product being sold. No, thank you.
4b11b4 · 13 days ago
Free services are garbage; you don't know what you're getting routed to.
impossiblefork · 13 days ago
You do probably give them useful data by doing that.
mountainriver · 12 days ago
Well I guess the marketing guy brought the world the ChatGPT moment then the actual scientists copied him?
algo314 · 13 days ago
Don't forget the UBI / open-source BS he sold like a snake-oil salesman - and people even bought it.
titanomachy · 13 days ago
I distrust OpenAI as much as the next guy, but “Scam Altman” has “70-year-old uncle Facebook rant” energy.
rvz · 13 days ago
Why not go a step further and boycott all of them, especially those that have government contracts?
irl_zebra · 13 days ago
This is why I haven't used OpenAI since early 2023-ish, and when I did I signed up with a masked email (though notably I'm sure they can tie my chats to me via my credit card :) ). afaict Sam Altman is essentially a sociopath, like lots of the "ruling elite" these days. And while I still use Gemini and Claude extensively and recognize some of the irony there, I view not using OpenAI as harm reduction to myself.
yomismoaqui · 13 days ago
Is Scam Altman the modern equivalent of Micro$oft?
jdiaz97 · 13 days ago
Microslop
DonHopkins · 13 days ago
Turns out Microstein Files would have been a better nickname.
awestroke · 13 days ago
Scam Saltman is even better
morissette · 13 days ago
I mean, marketing is how business uses psychology to control the masses... why would we think AI wouldn't be used by businesses, governments, independent psychopaths?
UqWBcuFx6NV4r · 13 days ago
Your point stands just fine without the silly, uniquely-US-politics-style “SCAM Altman ha ha!” BS. I can feel myself getting dumber every time I am subject to one of these.
brookst · 13 days ago
At some point being childish became a signal of authority in the US. It’s bizarre.
mpalmer · 13 days ago
Those of us whose intelligence is unaffected by reading a single extra letter are keeping you in our thoughts.
mark_l_watson · 13 days ago
I stopped paying OpenAI a long time ago. I get that actually deleting your OpenAI account hurts their ‘numbers’ and thus possibly their valuation. I choose another path: I use their tokens for free, hopefully helping them go out of business a little sooner.

The irony is that until yesterday I felt more or less the same about Anthropic. Last night I paid for an Anthropic subscription I don’t need in order to both support their current cause vs. the US government and help their ‘numbers.’

mrgordon · 13 days ago
OpenAI just advertises that they’ll make you pay later and raises $100B+ on having “900M+ users”
Reagan_Ridley · 13 days ago
I deleted all my free accounts (turns out I have a few...)

Learnt from GOOG that nothing is free. I'm now paying for Claude

dangus · 13 days ago
Ads are imminent, TOS just changed to allow them, and free users will get trash models that are net positive profitable after ads. Better to just leave now.
tehjoker · 13 days ago
I think what Anthropic did yesterday was good, but I had to take a step back and think: well, it wasn't a bridge too far for them to allow Claude to be used in the wildly illegal Maduro kidnapping operation.
roxolotl · 13 days ago
Right, the red line wasn't much of a line. If you draw your line only at unconstitutional mass surveillance, and allow the DoD to build Skynet just because Claude's not ready for it yet, that's not really a line of principle.
xpe · 13 days ago
Did you ask these too: what was the full context? To what degree was Anthropic aware in advance? What was their action space (their options)? What would be the consequences of their next actions?

And of course: and what sources are you using?

I get it: moral oversimplification is tempting for many people. I understand digging in takes time, but this situation warrants extra consideration.

Ethics is complicated and much harder than programming. Ethical reasoning is a muscle you have to train. Generally speaking, it isn’t the kind of skill that you build in isolation. At the very least, a lot of awareness and introspection is required.

I’d like to think that HN is a fairly intelligent community. But I don’t assume too much. Going by what I’ve seen here generally, I see a lot of shallow thinking. So it’s a reasonable concern that many of us here have a pretty large blind spot (statistically) when it comes to “softer” skills like philosophy and ethics.

This is not me “blaming” individuals; our industry has strong bias and selection criteria. This is my overall empirical take based on participating here for years.

Still, I’d like to think we are sufficiently intelligent and we have sufficient means and time to fill the gaps. But we have to prove it. I suggest we start modeling and demonstrating the kind of behavior and reasoning that we want to see in the world.

You can probably tell that I lean heavily towards consequentialist ethics, but I don’t discount other kinds of ethical thinking. I just want everyone to think harder. Seek more context. Ask what you would do in another’s shoes and why. Recognize the incentives and constraints.

Many people are tempted to judge others. That’s human. I suggest tamping that down until you’ve really marinated in the full context.

Also, each of us probably has more influence through our own actions than by merely judging others.

And let me be brutally honest about one’s impact. Organizing and collaborating is so much of a force multiplier (easily 100X) that not doing it for things you care about is a moral failure!

I’m not discounting good intentions, but in my system of ethics, I put much more emphasis on our actions. And persuasion is an action, which is what I’m hoping to do here.

randallsquared · 13 days ago
There's been a fair amount of speculation that pushing back after discovering that that had happened was what instigated this week's fun.
brookst · 13 days ago
Do we know they were consulted on that, as opposed to it being the wake-up call that led to the breakup?

Dead Comment

aniviacat · 13 days ago
I was just about to change from OpenAI to Anthropic, however when signing up I get this message:

> Unfortunately, Claude is not available to new users right now. We're working hard to expand our availability soon.

That's unfortunate timing.

giancarlostoro · 13 days ago
I wonder why that is...
javier2 · 13 days ago
It was like that when I signed up in July last year too. I just waited a couple of days and was able to sign up.
jdiaz97 · 13 days ago
You can always use z.ai or minimax
rahulroy · 11 days ago
I asked z.ai, "Which is the best model for coding"

Here's the response:

```

As of late 2024, there isn't one single "best" model for every situation, as performance depends on whether you need raw coding intelligence, speed, or integration into your workflow.

However, the current consensus among developers places Claude 3.5 Sonnet at the top for pure coding ability.

Here is a breakdown of the best models for coding right now, categorized by their strengths: ...

```

So they have data until late 2024 and nothing beyond? They don't even perform a web search. Doesn't seem to be on par with other frontier models.

sdevonoes · 13 days ago
They ask for a phone number to sign up. WTF?

I signed up with OpenAI a while ago and didn’t need to provide any phone number… I want to delete my OpenAI account, but then I cannot use Claude without a phone?

brookst · 13 days ago
It’s a way to mitigate bot accounts. Arguably not the best way, arguably not the right cost/benefit, but all of these services see massive bot traffic and are in a constant battle.
badlibrarian · 13 days ago
If you sign up with a Google account you don't need to give them a phone number. I realize the irony here.
kristjansson · 13 days ago
When did you sign up for OpenAI? They've been requiring a phone number since the very first betas.
dynm · 13 days ago
Not sure why you're being downvoted. It's unusual and harmful to privacy to require a phone number.
brightball · 13 days ago
Wow, seriously? I signed my team up for it Thursday.
krater23 · 13 days ago
WTF?! Really? Then the bubble is bursting already.
654wak654 · 13 days ago
It's not the bubble, it's the DoD
UqWBcuFx6NV4r · 13 days ago
TIL capacity limits mean that the bubble is bursting. Peak HN user logic.
abbadadda · 13 days ago
LOL, I keep getting “Oops, an error occurred! Too many failed attempts. Try again”… my login codes are mysteriously not working when trying to delete my OpenAI/ChatGPT account.
itsyonas · 13 days ago
When I type in 'DELETE', the button just stays disabled for me. When I tried to make the request through their 'Privacy' portal, I receive a mysterious 'Session expired' error message, and now I've been locked out with the message 'Too many failed attempts'...
ayhanfuat · 13 days ago
Did you type in your email? It seems already filled in because it shows your email address as the placeholder text, but you actually need to fill it in.
duskdozer · 13 days ago
Pour one out for the dev who got called on saturday morning to break the account deletion process
abbadadda · 13 days ago
Probably, on the backend: “Server Error 500: Users deleting OpenAI Accounts too fast. Try again later.”
0Ggr3g · 13 days ago
Make sure you enter both DELETE and your email above.

It took me a minute to see this.

IAmGraydon · 13 days ago
Yeah they intentionally broke it. So on Monday morning, instead of just deleting my account, I will be terminating all of the accounts in our company and moving them all to Anthropic. Keep it up, Sam!
malwrar · 13 days ago
It claims that I can’t end my subscription because I signed up on another platform. How odd, once money is involved suddenly our AGI contender can’t implement basic features. Or I’m a fool somehow.
UqWBcuFx6NV4r · 13 days ago
If you signed up via e.g. iOS then OpenAI literally is not allowed to manage your subscription. They do not have the capability to do so.
fragmede · 13 days ago
Is that other platform Apple?
abbadadda · 13 days ago
Failed logging in again to delete my OpenAI/ChatGPT account with, “ An unexpected error occurred while creating your session.”
abbadadda · 13 days ago
Same thing on Safari as on Firefox 45 minutes later… I’ll have to try from the laptop when I’m home.

Deleted Comment

gizzlon · 13 days ago
Yeah, does not work for me either. Whatever I put in the DELETE input field, the button stays inactive.

Edit: Had to "submit a request".

So glad they let me request my account and data deleted, really grateful /s

teiferer · 13 days ago
I expected the comments to mention Scott Galloway. Haven't found his name here, so I am doing that now.

Context is his https://www.resistandunsubscribe.com/ campaign.

8cvor6j844qw_d6 · 13 days ago
Just a heads-up for people who used phone numbers to verify their accounts, before you decide to proceed with account deletion.

> New accounts are still subject to our limit of 3 accounts per phone number. Deleted accounts also count toward this limit.

> Deleting an account does not free up another spot.

> A phone number can only ever be used up to 3 times for verification to generate the first API key for your account on platform.openai.com.

Panoramix · 13 days ago
More reasons to go with the competition
downboots · 13 days ago
What if the competition changed their mind in the future?
phernandez · 13 days ago
PSA: Export your ChatGPT conversations before cancelling.

If you're thinking about cancelling (or switching to Claude/Gemini), don't lose months of conversations first.

I built Basic Memory — it imports your ChatGPT export and turns it into plain Markdown files. Every conversation becomes a file you can actually read, search, and use with whatever AI you switch to.

This is not an ad. It is free and open source. Your data belongs to you. Keep it.

Steps:

1. Settings → Data Controls → Export Data (ChatGPT emails you a zip)

2. Install Basic Memory: brew tap basicmachines-co/basic-memory && brew install basic-memory

3. Run: bm import chatgpt conversations.zip

Complete docs: http://docs.basicmemory.com
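If you'd rather not install a tool, the ChatGPT export zip contains a conversations.json you can convert to Markdown yourself. A minimal Python sketch; the node structure below (`title`, `mapping`, `message.author.role`, `message.content.parts`) is an assumption based on recent export formats, so check your own file and adjust:

```python
def export_to_markdown(conversations):
    """Convert a parsed conversations.json list into {filename: markdown}.

    Assumes each conversation has a 'title' and a 'mapping' of message
    nodes, as in recent ChatGPT exports; adjust if your export differs.
    """
    files = {}
    for conv in conversations:
        title = conv.get("title") or "untitled"
        lines = [f"# {title}", ""]
        # 'mapping' is a dict of node-id -> node; walk nodes in insertion order
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg or not msg.get("content"):
                continue
            role = msg["author"]["role"]
            parts = msg["content"].get("parts") or []
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                lines += [f"**{role}:** {text}", ""]
        # Sanitize the title into a safe filename
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        files[f"{safe}.md"] = "\n".join(lines)
    return files

# Tiny sample shaped like the assumed export format (hypothetical data):
sample = [{
    "title": "Hello",
    "mapping": {
        "n1": {"message": {"author": {"role": "user"},
                           "content": {"parts": ["Hi there"]}}},
        "n2": {"message": {"author": {"role": "assistant"},
                           "content": {"parts": ["Hello! How can I help?"]}}},
    },
}]
md = export_to_markdown(sample)
print(md["Hello.md"])
```

Since the output is plain Markdown, any grep/editor/AI tool can work with it afterwards.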

ray_v · 13 days ago
I suspect that the export feature is going to be "broken" for a good long while; I've been waiting for mine since 8 am ... a little over 5 hours now.
hn_throwaway_99 · 11 days ago
It took exactly 24 hours, to the minute, from the time I received the "we're generating an export" file until I got the download link, so guessing they're either batching it or deliberately sending after 24 hours because it adds friction to the account deletion process.
CompoundEyes · 13 days ago
Altman tweet: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

From that it reads like the administration quickly agreed to the terms Anthropic wanted with OpenAI instead.

fwipsy · 13 days ago
Does "putting them in the agreement" mean "we will never allow them," or "we will not allow them if they are illegal"? Here's a link which says the DoD was willing to make up with Anthropic any time if they allowed surveillance of Americans: https://www.axios.com/2026/02/27/anthropic-pentagon-supply-c...

Another leak says the agreement "reflects existing law and the pentagon's policies." https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...

Seems like Altman wants to spin this as the same principled stand Anthropic took, but they really caved to the DoD's "all legal applications" framing. Up to you to decide how much you think the law restrains the Pentagon here.

Reagan_Ridley · 13 days ago
Altman wants you to believe he got the same deal Amodei didn't, because he has the art of the deal.
WarmWash · 13 days ago
There is almost certainly more to this whole DoD-Anthropic story than is getting through.
curt15 · 13 days ago
That's what Altman tweeted. Did it actually happen?