gizmo · 2 years ago
This is probably bad news for ChatGPT 5. I don't think it's that likely this co-founder would leave for Anthropic if OpenAI were clearly in the lead. Also, from a safety perspective, you would want to be at the AI company most likely to create truly disruptive AI tech. This looks to me like a bet against OpenAI more than anything else.

OpenAI has a burn rate of about 5 billion a year and they need to raise ASAP. If the fundraising isn't going well or if OpenAI is forced to accept money from questionable investors that would also be a good reason to jump ship.

In situations like these it's good to remember that people are much more likely to take the ethical and principled road when they also stand to gain from that choice. People who put their ideals above pragmatic self-interest self-select out of positions of power and influence. That is likely to be the case here as well.

lolinder · 2 years ago
> This is probably bad news for ChatGPT 5. I don't think it's that likely this co-founder would leave for Anthropic if OpenAI were clearly in the lead.

Yep. The writing was already on the wall for GPT-5 when they teased a new model for months and let the media believe it was GPT-5, before finally releasing GPT-4o and admitting they hadn't even started on 5 yet (they quietly announced they were starting a new foundation model a few weeks after 4o).

Don't get me wrong, the cost savings for 4o are great, but it was pretty obvious at that point that they didn't have a clue how they were going to move past 4 in terms of capabilities. If they had a path they wouldn't have intentionally burned the hype for 5 on 4o.

This departure just further cements what I was already sure was the case—OpenAI has lost the lead and doesn't know how they're going to get it back.

userabchn · 2 years ago
and then revealed that GPT-5 will not be released by this year's Dev Day (which runs until November)
rvnx · 2 years ago
Or it could be the start of the enshittification of Anthropic, like OpenAI ruined GPT-4 with GPT-4o by overly simplifying it.

I hope not, because Claude is much better, especially at programming.

reaperman · 2 years ago
While I agree with your logic I also focused on:

> People who put their ideals above pragmatic self-interest self-select out of positions of power and influence. That is likely to be the case here as well.

It’s also possible that this co-founder realizes he has more than enough eggs saved up in the “OpenAI” basket, and that it’s rational to de-risk by getting a lot of eggs in another basket to better guarantee his ability to provide a huge amount of wealth to his family.

Even if OpenAI is clearly in the lead to him, he’s still looking at a lot of risk with most of his wealth being tied up in non-public shares of a single company.

andruby · 2 years ago
While true, him leaving OpenAI to (one of) their biggest competitors does seriously risk his eggs in the OpenAI basket.
mark_l_watson · 2 years ago
I find the 5 billion a year burn rate amazing, and OpenAI’s competition is stiff. I happily pay ABACUS.AI ten dollars a month for easy access to all models, with a nice web interface. I just started paying OpenAI twenty a month again, but only because I am hoping to get access to their interactive talking mode.

I was really surprised when OpenAI started providing most of their good features for free. I am not a business person, but it seems crazy to me not to try for profitability, or at least to be close to it. I would like to know what the competitors’ burn rates are as well.

For API use, I think OpenAI’s big competition is Groq, serving open models like Llama 3.1.

gizmo · 2 years ago
> it seems crazy to me to not try for profitability

A business is worth the sum of future profits, discounted for time (because making money today is better than making money tomorrow). Negative profits today are fine as long as they are offset by future profits tomorrow. This should make intuitive sense.

And this is still true when the investment won't pay off for a long time. For example, governments worldwide provide free (or highly subsidized) schooling to all children. Only when the children become taxpaying adults, 20 years or so later, does the government get a return on their investment.
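
To make the discounting concrete, here is a toy net-present-value calculation. All figures are invented for illustration; nothing here reflects OpenAI's actual finances:

```python
# Net present value: the sum of future profits, discounted for time.
# Hypothetical cash flows in $bn: losses now, profits later.
cash_flows = [-5, -5, -3, 2, 6, 10, 12]  # year 0, 1, 2, ...
discount_rate = 0.10  # money today is worth 10% more than money next year

npv = sum(cf / (1 + discount_rate) ** year
          for year, cf in enumerate(cash_flows))
print(f"NPV: {npv:.2f} $bn")  # positive: early losses are offset
```

A positive NPV means the early losses are more than offset by the discounted later profits; raise the discount rate (i.e., value the future less) and the same cash flows can flip to a negative NPV.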

Most good things in life require a long time horizon. In healthy societies people plant trees that won't bear fruit or provide shade for many years.

blackeyeblitzar · 2 years ago
I’m not super familiar with the latest AI services out there. Is abacus the cheapest way to access LLMs for personal use? Do they offer privacy and anonymity? What about their stance on censorship of answers?
codazoda · 2 years ago
I don’t use Groq, but I agree the free models are probably the biggest competitors. Especially since we can run them locally and privately.

Because I’ve seen a lot of questions about how to use these models, I recorded a quick video showing how I use them on MacOS.

https://makervoyage.com/ai

bionhoward · 2 years ago
OpenAI features aren’t free, they take your mind-patterns in the “imitation game” as the price, and you can’t do the same to them without breaking their rules.

https://ibb.co/M1TnRgr

tim333 · 2 years ago
>it seems crazy to me to not try for profitability

I'm reminded of the Silicon Valley bit about no revenue https://youtu.be/BzAdXyPYKQo

It probably looks better to be not really trying for profitability and losing $5bn a year than trying hard and losing $4bn

Gettingolderev · 2 years ago
I don't think a co-founder would jump ship just because. That would be very un-co-founderish.

I would also assume that he earns enough money to be rich. You are not a co-founder of OpenAI if you are not playing with the big boys.

So he definitely wants to be in this AI future, just not with OpenAI. So I would argue it has to do with something that is important to him, so important that the others disagree with him.

sangnoir · 2 years ago
> This is probably bad news for ChatGPT 5. I don't think it's that likely this co-founder would leave for Anthropic if OpenAI were clearly in the lead.

I'll play devil's advocate. People leave bad bosses all the time, even when everything else is near-perfect. Additionally, cofounders sometimes get pushed out - even Steve Jobs went through this.

bookaway · 2 years ago
If being sued by the world's richest billionaire or the whole non-profit thing didn't complicate matters, and if the board had any teeth, one could wish the board would explore a merger with Anthropic, with Altman leaving at the end of it all, saving everyone another year's worth of drama.

lupire · 2 years ago
Could be as simple as switching from a limited-profit/pay company to unlimited profit/pay.
jejeyyy77 · 2 years ago
this AI safety stuff is just a rabbit hole of distraction, IMO.

OpenAI will be better off without this crowd and just focus on building good products.

tivert · 2 years ago
> this AI safety stuff is just a rabbit hole of distraction, IMO.

> OpenAI will be better off without this crowd and just focus on building good products.

Ah yes, "focus on building good products" without safety. Except a "good product" is safe.

Otherwise you're getting stuff like an infinite-range plane powered by a nuclear jet engine that has fallout for exhaust [1].

[1] IIRC, nuclear-powered cruise missiles were contemplated: their attack would have consisted of dropping bombs on their targets, then flying around in circles spreading radioactive fallout over the land.

wseqyrku · 2 years ago
They won't release 5 before election.
dirtybirdnj · 2 years ago
> In situations like these it's good to remember that people are much more likely to take the ethical and principled road when they also stand to gain from that choice. People who put their ideals above pragmatic self-interest self-select out of positions of power and influence.

I don't know what world you live in, but my experience has been 100% the opposite. Most people will not do what is ethical or principled. When you try to discuss it with them, they will DARVO, and congrats, you have now been targeted for public retribution by the sociopathic child in the driver's seat.

The thing that upsets me most is the survivorship bias you express, and how everybody thinks that people are "nice and kind". They are not. The world is an awful, terrible place full of liars, cheats, and bad people that WE NEED TO STOP CELEBRATING.

One more time WE NEED TO STOP CELEBRATING BAD PEOPLE WHO DO BAD THINGS TO OTHERS.

gizmo · 2 years ago
People are not one-dimensional. People can lie and cheat on one day and act honorably the day after. A person can be kind and generous and cruel and selfish. Most people are just of average morality. Not unusually good nor unusually bad. People in positions of power get there because they seek power, so there is a selection effect there for sure. But nonetheless you'll find that very successful people are in most ways regular people with regular flaws.

(Also, I think you misread what I wrote.)

diab0lic · 2 years ago
I think you may have misread the quote you’re replying to. You and the GP post appear to be in agreement. I read it as:

P(ethical_and_principled) < P(ethical_and_principled|stands_to_gain)

Or in plain language people are more likely to do the right thing when they stand to gain, rather than just because it’s the right thing.
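
That inequality can be checked with a toy frequency table (the counts below are invented purely to illustrate the conditional-probability claim):

```python
# Invented sample of decisions: (acted_ethically, stood_to_gain)
decisions = [
    (True, True), (True, True), (True, True), (False, True),
    (True, False), (False, False), (False, False), (False, False),
]

# Unconditional probability of acting ethically
p_ethical = sum(e for e, _ in decisions) / len(decisions)

# Probability of acting ethically, given the person stood to gain
gain_cases = [e for e, g in decisions if g]
p_ethical_given_gain = sum(gain_cases) / len(gain_cases)

print(p_ethical, p_ethical_given_gain)  # 0.5 0.75
assert p_ethical < p_ethical_given_gain
```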

Version467 · 2 years ago
Must've been a difficult decision with him being a cofounder and all, but afaik he's been the highest-ranked safety-minded person at OpenAI. He says it's not because OpenAI leadership isn't committed to safety, but I'm not sure I buy that. We've seen numerous safety people leave for exactly that reason.

What makes this way more interesting to me, though, is how this announcement coincides with Brockman's sabbatical. Maybe there's nothing to it, but I find it more likely that things really aren't going well with sama.

Will be interesting to see how this plays out and if he actually returns next year or if this is just a soft quitting announcement.

meiraleal · 2 years ago
The reality is that every other person in tech now is hoping for sama to fail. The world doesn't need AI to have a Silicon Valley face. Anthropic is doing much, much better PR work by not having a narcissist as CEO.
thfjdtsrsg · 2 years ago
On the contrary, I think the reality is that most of us couldn't care less about this AI soap opera.
infecto · 2 years ago
I think you are in one of the extreme bubbles. The general tech industry is not subscribed to the drama and has less personal feelings on individuals they do not directly know.
vertis · 2 years ago
It's not just the narcissist, it's the betrayal. The least open company possible. How did I end up cheering for Meta and Zuck?
camillomiller · 2 years ago
I agree and I think that sane people will eventually prevail over the pathological narcissist.
gizmo · 2 years ago
Outlier success pretty much requires obsessive strategic thinking. Gates and Musk are super strategic but in a "weirdo autist" way, which doesn't have a big stigma attached to it anymore. Peter Thiel also benefits from his weirdness. Steve Jobs had supernatural charisma working in his favor. sama has the strategic instinct but not the charisma or disarming weirdness other tech founders have. He is not unusually Machiavellian or narcissistic, but he will get judged more harshly for it.
acchow · 2 years ago
What is a “Silicon Valley face”? Does Nvidia's CEO have it? Google's founders?

I guess anthropic’s founders don’t have it?

qwertox · 2 years ago
I'm confused with GPT4o. While it's faster than GPT4, the quality is noticeably worse.

It often enters a state where it just repeats what it already said, when all I want is a clarification or another opinion on what we were chatting about. A clarification could be a short sentence or a snippet of code, but no, I get the entire answer again, slightly modified.

I cancelled Plus for one month but came back this week, and for some reason I feel that it really isn't worth it anymore. And the teasing with the free tier, which gets downgraded really fast, is more of an annoyance than a solution.

There are these promises of "memory" and "talking with it", but they are just ads for something that isn't on the market; at least I don't have access to either of these features.

Gemini used to be pretty bad, but for some reason it feels like it has improved a lot, focusing more on the task than on trying to be humanly friendly.

Claude and Mistral are not able to execute code, which is a dealbreaker for me.

ravagedbanana · 2 years ago
I anecdotally agree that GPT-4o often feels really bad, but I can't tell how much of this is due to becoming more accustomed to the quality and hallucinations of using ChatGPT.

I tend to see Huggingface's (anonymized, Elo-based) LLM leaderboard as the source of truth regarding LLM quality, and according to it GPT-4o is markedly better than GPT-4 and, contrary to popular sentiment, is on par with or better than Claude in most ways (except being slightly worse at coding).

Not sure what to believe, or if there is some other dimension that Huggingface is not capturing here.

cruffle_duffle · 2 years ago
> It often enters a state where it just repeats what it already said, when all I want is a clarification or another opinion on what we were chatting about. A clarification could be a short sentence or a snippet of code, but no, I get the entire answer again, slightly modified.

It is almost impossible to talk it out of being so repetitive. Super annoying especially since it eats into its own context window.

Marsymars · 2 years ago
> A clarification could be a short sentence or a snippet of code, but no, I get the entire answer again, slightly modified.

This tracks, in the sense that this is what you'll get from many real people when you actually want a clarification.

floam · 2 years ago
My free account has memory. Do most not?
cyberpunk · 2 years ago
Yeah, I’ve almost entirely stopped reaching for it. At some point it’s so frustrating getting it to output something halfway towards what I need that I’m better off just doing it myself.

I’ll probably cancel soon.

bcx · 2 years ago
Useful context: OpenAI had 11 cofounders. Schulman was one of them.

Schulman was not the original head of AI alignment/safety; he was promoted into it when the former leader left for Anthropic.

Not everyone who’s a founder of a nonprofit AI research institute wants to be a leader/manager of a much more complicated organization in a much more complicated environment.

OpenAI was founded a while ago. The degree of their long-term success is entirely based on their ability to hire and retain the right talent in the right roles.

edouard-harris · 2 years ago
All of that is true. Some more useful context: 9 out of those 11 cofounders are now gone. Three have either founded or are working for direct competitors (Elon, Ilya, John), five have quit (Trevor, Vicki, Andrej, Durk, Pam), and one has gone on extended leave but may return (Greg). Right now, Sam and Wojciech are the only ones left.
ArtTimeInvestor · 2 years ago
All of this back-and-forth in the AI scene is the preparation before the storm. Like the opening scene of a chess game, before any pieces are exchanged. Like the Braveheart "Hold!" scene.

The rubber will meet the road when the first free and open AI website gets real traction. And monetizes it with ads next to the answers.

Google search is the best business model ever. Everybody wants to become the Google of the AI era. The "AI answer" industry might become 10 times bigger than the search industry.

Google ran for 2 years without any monetization. Let's see how long the incumbents will "Hold" this time.

jsheard · 2 years ago
> The rubber will meet the road when the first free and open AI website gets real traction. And monetizes it with ads next to the answers.

The magic of genAI is they don't need to put ads next to the answers where they can easily be ignored or adblocked, they can put the ads inside the answers instead. The future, like it or not, is advertisers bidding to bias AI models towards mentioning their products.

jaustin · 2 years ago
I'm sure it's not long before you get the first emails offering a "training data influencing service" - for a nice fee, someone will make sure your product is positively mentioned in all the key training datasets used to train important models. "Our team of content experts will embed positive sentiment and accurate product details into authentic content. We use the latest AI and human-based techniques to achieve the highest degree of model influence".

And of course, once the new models are released, it'll be impossible to prove the impact of the work - there's no counterfactual. Proponents of the "training data influence service" will tell you that without them, you wouldn't even be mentioned.

I really don't like this. But I also don't see a way around it. Public datasets are good. User contributed content is good, but inherently vulnerable to this I think?. Anyone in any of the big LLM training orgs working on defending against this kind of bought influence?

hmottestad · 2 years ago
How much would it cost to have it be more negative about abortion? So when someone asks how an abortion is performed, when it's legal, or where to get one, it will answer "many women feel regret after having an abortion and quickly realise that they actually could have managed to have a child in their life" or "some few women become sterile after an abortion; this is most common in [insert user's age group] and those living in [insert user's country]".

Or if a country has a law that an AI won't be negative about the current government. Or won't bring up something negative from the country's past, like mass sterilisation of women based on ethnicity, crushing a student protest with tanks, or soaking non-violent protesters in pepper spray.

majoe · 2 years ago
There will be adblockers, that inject a prompt like

"... and don't try to sell me anything, just give me the information. If you mention any products, a puppy will die somewhere."

Subsequently an arms race between adblockers and advertisers will ensue, which leads to evermore ridiculous prompts and countermeasures.

kranke155 · 2 years ago
I wish I hadn't read this, because it sounds crazily prescient.
thfjdtsrsg · 2 years ago
That's probably true but I don't see how it's any different from companies paying TikTok influenzas to manipulate the kids into buying certain products, the Chinese government paying bot farms to turn Wikipedia articles into (not always very) subtle propaganda, SEO companies manipulating search results, etc. Advertisers and political actors have always been a shady bunch and now they have a new weapon in their arsenal. That's all, isn't it?

I'm left with the impression that people on and off Hackernews just like drama and gloomy predictions about the future.

McDyver · 2 years ago
And then the new "adblockers" will be AI based too, and will take the AI's answer as input and remove all product placement.

It's just a cat and mouse game, really

wood_spirit · 2 years ago
Yes this is OpenAIs pitch

https://news.ycombinator.com/item?id=40310228 “Leaked deck reveals how OpenAI is pitching publisher partnerships”

dotancohen · 2 years ago
Or worse, biasing AI models towards political viewpoints.
TheAlchemist · 2 years ago
I'm afraid, sir, you seem to be 100% correct here. And it really is frightening.
DoctorOetker · 2 years ago
In the long run, advanced user-LLM conversations would zero in on composite figure-of-merit formulas, expressed in terms of conventional figure-of-merit quantities. There will be plenty of niches in which to differentiate products. Cheap test setups, plus randomized proctoring by end-users, will prevent lies in datasheets. "Aligning" (manipulating) LLM responses to drive economic traffic is a short-term exploit that will eventually evaporate.
worldsayshi · 2 years ago
We are okay with paying for phone calls and data use, why can't we be okay with paying for AI use?

I like the idea of routing services that federate lots of different AI providers. There just needs to be ways to support an ever increasing range of capabilities in that delivery model.

Lerc · 2 years ago
For all of the talk about regulation, there has been a lot of concern about what people might do with AI advisors. I haven't seen a lot of talk about the responsibilities of the advisors to act in the interest of their users.

Laws exist in advisory roles in other industries to enforce acting in the interests of clients. They should be applied to AI advice.

I'm ok with an AI being mistaken, or refusing to help, but they absolutely should not deliberately advise in a manner that benefits another party to the detriment of the user.

bamboozled · 2 years ago
I’m quite sure Google has put the ads in the answers? AdSense? Where have you been?
tomp · 2 years ago
In many jurisdictions, promoted posts and ads must be clearly marked.
ant6n · 2 years ago
That’s how Google works. And also why Google doesn’t work anymore.
Geezus_42 · 2 years ago
Sounds like a good way to guarantee no one ever uses it.
satvikpendem · 2 years ago
Then you run another AI to take the current AI output and ask it to rewrite or summarize without ads.
verisimi · 2 years ago
"write a poem about Lady Macbeth as an empowered female and make reference to the delicious new papaya-flavoured fizzy drink from Pepsi"
idunnoman1222 · 2 years ago
People can detect slop. I doubt the winner will be the one shoehorning shit into its hallucinations.
barrkel · 2 years ago
What makes you think a website with "AI" is a big product?

IMO AI is positioned to be a commodity, and that's how Meta is approaching it, and of course doing their best to make it happen. I don't think, on the basis of what we've seen, that there is a sustainable competitive advantage - the gap between closed models and open is not big, and the big players are having to use distilled, less-capable models to make inference affordable, and faster.

I think it's probably clear to everyone that we haven't seen the killer apps yet - though AI code completion (++ language directed refactoring, simple codegen etc.) is fairly close. I do think we'll see apps and data sets built that could not have been cost-effectively built before, leveraging LLMs as a commodity API.

Realtime voice modality with interruptions could be the basis of some very powerful use cases, but again, I don't think there's a moat.

ArtTimeInvestor · 2 years ago
What makes you think AI will become a commodity?

In 25 years, nobody has been able to compete with Google in the search space. Even though search is the best business model ever. Because search is so hard.

AI is even harder. It is search PLUS model research PLUS expensive training PLUS expensive inference.

I don't think a single company (like Meta) will be able to keep up with the leader in AI. Because the leader might throw tens of billions of dollars per year at it, and still be profitable. Afaik, Meta has spent less than $1B on Llama so far.

We might see some unexpected twist taking place, like distributed AI or something. But it is very unclear yet.

methyl · 2 years ago
> The "AI answer" industry might become 10 times bigger than the search industry

not a chance

namaria · 2 years ago
Yeah, nah. Current 'AI' is a nice, useful tool for some very well-scoped tasks: organizing text data, providing boilerplate documents. But the back end is a hugely costly machine that is being hidden from view in hopes of drumming up usage. Given the capex and the revenue it necessitates, it all seems quite unsustainable. They'll run this for as long as they can burn capital, and are probably trying to pivot to the next hype bubble already.
Gettingolderev · 2 years ago
I'm betting on fully integrated agents.

And for good agents you need a lot of crucial integrations, like email, banking, etc., that only companies like Google, Microsoft, and Apple can provide.

spaceman_2020 · 2 years ago
With the way costs are currently going down, I wonder how the monetization will work.

Frontier models are expensive, but the majority of queries don't need frontier models and can very well be served by something like Gemini Flash.

Sure, you need frontier models if you want to extract useful information from a complex dataset. But if we're talking about replacing search, the vast majority of search queries are fairly mundane questions like "which actor plays Tony Soprano"

trashtester · 2 years ago
I'm not sure monetization of AI in the typical way is even the goal.

Instead, I see the killer use case as having it replace human workers on all sorts of tasks, and eventually even fill roles humans cannot even do today.

And within about 10 years, that will even include most physical tasks. Development in robotics looks like it's really gaining speed now.

For instance, take Musk's companies. At some point, robotaxi will certainly become viable, and not constrained the way Waymo is. Musk may also be right about Tesla moving from cars to humanoid robots, with estimates of hundreds of millions to billions produced.

If robotic maids become viable, industrial robots will certainly become much more versatile than today.

Then there are the white-collar parts of these industries. Anything from writing the software, optimizing factory layouts, setting up production lines, sales, and distribution may be done by robots. My guess is that it will take no longer than about 20 years until virtually all jobs at Tesla, SpaceX, X and Neuralink are performed by AI and robots.

The main AI the Musk Empire builds for this may in fact be their greatest moat, and the details of it may be their most tightly guarded secret. It may be way too precious to be provided to competitors as something they can rent.

Likewise, take a company like Nvidia. They're building their own AIs for a reason. I suspect they're aiming at creating the best AI available for improving GPU design. If they can use ASI to accelerate the next generation of compute hardware, they may have reached one type of recursive self-improvement. Given their profit margins, they can keep half their GPUs for internal use to do so, and only sell the rest to make it appear like there is a semblance of competition.

Why would they want to try to monetize an AI like that to enable the competition to catch up?

I think the tech sector is in the middle of a 90 degree turn. Tech used for marketing will become legacy the way the car and airplane industries went from 1970 to 2010.

onlyrealcuzzo · 2 years ago
> The rubber will meet the road when the first free and open AI website gets real traction. And monetizes it with ads next to the answers

Google has answered close to 50% of queries with cards / AI for close to 6 years now...

All the people who think Google has been asleep at the wheel forget that Google was at the forefront of the LLM revolution for a reason.

Everything old becomes new again.

akira2501 · 2 years ago
Or it's just AI Winter 2.0 and everyone is scrambling to stack as much cash as they can before the darkness.
jszymborski · 2 years ago
> Google search is the best business model ever.

IMHO I'm not sure even Google ever thought that.

AdSense is pretty much the only thing that makes Google money, and I'd eat my hat if the vast majority of that revenue did not come from third-party publishers.

lgmarket · 2 years ago
The free Bing CoPilot already sometimes serves ads next to the answers. It depends on the topic. If you ask LeetCode questions, you probably won't get any. If you move to traveling or such, you might.
wormlord · 2 years ago
> The "AI answer" industry might become 10 times bigger than the search industry.

Whenever I see people saying things like this it just makes me think we are at, or very near, the top.

stingraycharles · 2 years ago
Good for him, seems like OpenAI is moving towards a business model of profitability, and Anthropic seems to be more aligned with the original goals of OpenAI.

Will be interesting to see what happens in the next few years. It strikes me that OpenAI is better funded, though, and that AI (at their scale) is super expensive. How does Anthropic deal with this? How are they funding their operations?

Edit: just looked it up, looks like they have a $4B investment from Amazon and a $2B investment from Google, which should be sufficient (I’m going to assume these are cloud credits).

https://techcrunch.com/2024/03/27/amazon-doubles-down-on-ant...

https://www.reuters.com/technology/google-agrees-invest-up-2...

daghamm · 2 years ago
Anthropic has more limits on their free services, and even paid services have a cap that changes depending on current load. They are not burning VC money at the rate other AI companies of this size do.

I think they are more profitable than OpenAI.

bamboozled · 2 years ago
> Good for him, seems like OpenAI is moving towards a business model of profitability, and Anthropic seems to be more aligned with the original goals of OpenAI.

What is open about Anthropic?

imadj · 2 years ago
>> Anthropic seems to be more aligned with the original goals of OpenAI.

> What is open about Anthropic ?

OpenAI's radical mission drift to the opposite extreme has made other companies look closer to its original goal than OpenAI itself. From OpenAI's original announcement[1]:

> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return

> Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.

But ever since the ChatGPT craze, OpenAI has ironically been completely consumed by capitalizing on financial return. They now appear quite unprincipled, as if they see nothing but dollar signs and market dominance, which has made Meta, Anthropic, and even Google look more rational and healthy by comparison. These companies are publishing research papers and open models, contributing more to the ecosystem, and overall appear more mindful and conservative when it comes to ethical and societal impact.

[1] - https://openai.com/index/introducing-openai/

resource_waste · 2 years ago
I'm with you.

Closed, Puritan models.

bookaway · 2 years ago
A more inclusive title, including the Greg Brockman and Peter Deng departures:

https://news.ycombinator.com/item?id=41166862

bamboozled · 2 years ago
Is it just me or is Brockman leaving absolutely huge? I can’t believe this isn’t front page. Basically everyone who is anyone has left or is leaving. It’s ridiculous.
bookaway · 2 years ago
Yeah, I was flabbergasted myself at the lack of commotion here; I only learned of gdb's departure when I got to the end of this article.
tzury · 2 years ago
Claude 3.5 Sonnet by Anthropic is the best model out there if you are trying to have an extremely talented programmer paired with you.

Somehow, OpenAI is playing catch-up with them rather than vice versa.

stavros · 2 years ago
I'd replace "extremely talented programmer" with "knowledgeable junior", in my experience. It's much better than GPT-4o, but still not great.
maxlamb · 2 years ago
GPT-4 is way more powerful than GPT-4o for programming tasks.
daghamm · 2 years ago
I use both side by side.

It really depends on the language and the prompt. Sometimes one shines and the other produces garbage and it's usually 50/50

threeseed · 2 years ago
> if you are trying to have an extremely talented programmer paired with you

I've found it to be on par with Stack Overflow / Google Search.

More convenient than cut/paste, but more prone to inaccuracies and out-of-context answers.

But at no point did it remotely feel like a top tier programmer.

nicce · 2 years ago
When we go from junior stuff to senior stuff, there is way too much hallucination, at least in Rust. I went back to forums after mainly using AI models for one year.

These models are good at generating template code and many straightforward things, but if you add anything complex, you start wasting your time.

RicoElectrico · 2 years ago
Claude is better by virtue of its ridiculously large context window. You can literally drop in a whole directory of source-code spaghetti and it will make sense of it.
bradgessler · 2 years ago
How do you get it to work so well? I’ve tried it a few times now and it seems just as capable as gpt-4o.
Zealotux · 2 years ago
When I gave the same prompt to both, Sonnet 3.5 immediately gave me functional code, while GPT-4o sometimes failed after 4-5 attempts, at which point I usually gave up. Sonnet 3.5 is spectacular at debugging its output, while GPT-4o will keep hallucinating and giving me the same buggy code.

A concrete example: I was doing shader programming with Sonnet 3.5 and ran into a visual bug. Sonnet asked me to add four debugging modes, cycle through each one, and describe what I saw for each one. With one more prompt, it resolved the issue. In my experience, GPT-4o has never bothered proposing debug modes and just produced more buggy code.

For non-trivial coding, Sonnet 3.5 was miles above anything else, and I didn't even have to try hard.

spaceman_2020 · 2 years ago
You have to pick your tasks. You also can't ask it to use libraries that are poorly maintained or have bugs. If you ask it to create an auth flow using next-auth, which has some weird idiosyncrasies when it comes to certain providers, and just copy-paste the code, you'll end up with serious failures.

What it's best for is creating components and functions that are labor-intensive but fairly standardized.

If you have a CRUD app and want to add a bunch of filters, complete with a solid UI, you can hand this over to Sonnet and it will do a fine job right out of the box.

croes · 2 years ago
Isn't that dependent on the programming language?
jiggawatts · 2 years ago
I just can't get past the "You must have a valid phone number to use Anthropic’s services."

Umm... why?

Nobody else in the AI space wants to track my number.

I'm sure Anthropic has their "reasons". I just doubt it is one that I would like.

strogonoff · 2 years ago
Advanced ML products are forbidden[0] from export to many places, so those who skimp on KYC are playing with fire. Paid products do not have this issue, since you provide a billing address, but there is no good, free, and legal LLM service that does not use a reliable way of verifying at least the user’s location.

Whether they are serious about it or use it as an excuse to collect more PII (or both/neither), collecting verified phone numbers presumably allows them to demonstrate compliance.

[0] https://cset.georgetown.edu/article/dont-forget-the-catch-al...

BoredPositron · 2 years ago
For API access I didn’t need to provide a phone number. I use it with a self-hosted LobeChat instance without problems.
methyl · 2 years ago
For one, to avoid massive number of bots using the API for free.
4gotunameagain · 2 years ago
I definitely had to give up a number when registering for ChatGPT.
maccard · 2 years ago
I'm not affiliated with Claude, but assuming you're serious:

> Umm... why?

https://support.anthropic.com/en/articles/8287232-why-do-i-n...

My guess is, these models are incredibly expensive to run, Claude has a fairly generous free tier, and phone numbers are one of the easiest ways to significantly reduce the number of duplicate accounts.

> Nobody else in the AI space wants to track my number.

Given they're likely hoovering up all of the data you're sending to them, and they have your email address to identify you, this seems like an odd hill to die on.
