onlyrealcuzzo · 8 months ago
Non-paywall link: https://archive.is/ucguC
highfrequency · 8 months ago
Private market valuations are funky and hard to reason about. In the public markets, valuation represents the current equilibrium of supply and demand - very roughly an average of a large number of opinions.

In private markets, especially with the recent trend of selling a tiny portion of the company at a massive price, the valuation represents something much closer to the maximum that any investor in the world thinks the company is worth.

onlyrealcuzzo · 8 months ago
Especially when BARTERING a tiny portion of the company for resources from another company - that massively benefits a growth area.

Amazon can "buy" $2B worth of Anthropic to guarantee $2B of spending on AWS - to report that as growth under AWS in their earnings - to juice their stock price.

They also get to report that their investment in the previous round is up massively.

ramraj07 · 8 months ago
Given the current valuation of Tesla, it doesn't look very different from private market valuations. If anything private valuations seem more sane than the public market to me.
JumpCrisscross · 8 months ago
> the valuation represents something much closer to the maximum that any investor in the world thinks the company is worth

This is all before accounting for the preference stack, which makes multiplying a Series F per-share price (itself derived from dividing compute time by some magic number) by employee common stock a bit silly.

boulos · 8 months ago
I doubt their preferences are >1x since they've always had high demand. In that case, the preference stack would just be the total raised over time (~$10B).
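The point about the preference stack can be made concrete with a toy liquidation waterfall. All figures below (a ~$10B 1x non-participating stack, a 20% common-stock fraction) are hypothetical illustrations, not Anthropic's actual cap table:

```python
# Toy liquidation waterfall: 1x non-participating preferences ahead of common.
# All numbers are hypothetical, not Anthropic's actual cap table.

def common_payout(exit_value: float, preference: float, common_frac: float) -> float:
    """Value flowing to common stock after a 1x preference stack is repaid.

    Simplification: preferred converts to common whenever its pro-rata
    share beats taking the preference off the top.
    """
    preferred_frac = 1.0 - common_frac
    as_converted = exit_value * preferred_frac  # preferred's payout if it converts
    if as_converted >= preference:
        # Preferred converts; everyone shares pro-rata.
        return exit_value * common_frac
    # Preferred takes its 1x preference first; common gets the remainder, if any.
    return max(0.0, exit_value - preference)

stack = 10e9   # ~$10B total raised (the "preference stack")
common = 0.20  # hypothetical common-stock fraction of the cap table

for exit_value in (60e9, 15e9, 8e9):
    payout = common_payout(exit_value, stack, common)
    print(f"${exit_value / 1e9:.0f}B exit -> common gets ${payout / 1e9:.1f}B")
```

At the headline valuation common's pro-rata share holds, but in a downside exit the preference stack absorbs everything before common sees a dollar, which is why per-share price times common stock overstates what employees would actually receive.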


minimaxir · 8 months ago
I'm far, far more bullish on Anthropic than OpenAI at this point. It has relatively less hype and media attention but better downstream products, which in the end is what will determine the winner in this LLM ecosystem.
yieldcrv · 8 months ago
and a standard operating structure, with no complications in dealing with it. just a classic case study in why not to do anything other than a Delaware C-Corporation and share grants
monitorlizard · 8 months ago
I believe Anthropic is a PBC not a C-Corp
aerhardt · 8 months ago
People surely love Anthropic around these parts and on Reddit, but as a paying customer of both Claude and ChatGPT, I'll take the latter any time of day. I trust o1 much more than Sonnet, and I really like the ChatGPT native Mac client. I hope you guys are right though because the last thing we need is a quasi-monopoly in OpenAI.
singularity2001 · 8 months ago

   > better products downstream
except for the standalone app which is half as usable as OpenAI's

erulabs · 8 months ago
I really love Anthropic, I'm building a business around it, and chat with Claude almost every time I go out for a beer after a long day... But having a valuation equal to _Stripe_ seems a bit insane. Either they've got some secret sauce or AI hype is a bit much or Stripe is undervalued.

Especially considering I pay for Claude, the beer, and my business _thru Stripe_!

bbor · 8 months ago
TBF Stripe is an (effective!) middleman for (very common!) transactions, whereas Anthropic is selling a service with the potential to replace full-time employees. It takes a lot of transaction margins to add up to how much a corporation will pay to shrink a department of 10 employees earning $100K down to one person... So in this context I don't think frequency of use is very illuminating!

P.S. I'm so glad you're able to derive some joy from these new technologies, but I would also offer a soft suggestion to watch Season 3 of Westworld. It's probably not as good aesthetically as the previous two, but it's also pretty separated and deals throughout with the concept of AI therapists/friends, and how they might be a short-term comfort but a long-term threat to our individuality. Obviously a chat here and there with current LLMs is nowhere near that yet, but thought you might find it interesting!

vallode · 8 months ago
Do I understand correctly that you chat with Claude as a form of social relief, or are you talking about drinking beer and chatting with it about work/ideation etc.? Either way, just curious!

On topic though, I wholeheartedly agree that this valuation seems rather... unrealistic. I do think "hype" is how we currently handle a lot of valuations, for the worse in my opinion.

treebeard901 · 8 months ago
All models can hallucinate pretty well... Which ones are best at being an intoxicated friend?
dingnuts · 8 months ago
>and chat with Claude almost every time I go out for a beer after a long day...

with all due respect, how can you take yourself seriously doing this? I tried to use an LLM as a "cheap therapist" exactly one time and it felt so phony I quit almost instantly, after about four messages.

The bot pretends to feel compassion for you! How does that not induce rage? It does for me. False empathy is way worse than nothing.

And on top of it, you're talking to a hosted LLM! I hope you are not divulging anything personal to the random third party you're sending your thoughts to..

This stuff is going to be such a huge boon to authoritarian governments.

falseAss · 8 months ago
My experience with Claude as a therapist is that it's consistently better than the human therapists i've met (well, maybe i haven't met a good human therapist yet) in terms of usefulness. And i can be completely honest & decide how much of the context i want to share.
erulabs · 8 months ago
I don’t take myself very seriously. I’d rather be happy than protected. If my conversations are used to build some future AI, good; I’d like to be immortalized somehow. I like the idea that my children’s children’s children might be able to ask about me and get an intimate answer. By the way, this conversation right now is auth free on the public net, so if anything this is worse wrt your concerns.

You’re not wrong, but you’re not right either. For whatever it’s worth, I absolutely plan on self hosting (nvidia Digit!) my future conversation partner. But nothing about me is terribly private. I love my wife and my sons and sometimes I wonder if I’ll be forgotten or if we’re about to give birth to the overman. It’s nothing I wouldn’t tell a stranger.

Lastly, no therapist I’ve ever met can talk about my favorite authors with me for hours, ad-hoc, on-demand, for pennies a day. All my real friends are sick of Dostoyevsky, but not Claude!

riku_iki · 8 months ago
I think the benefits of therapy are:

- build structure into the conversation

- identify root causes of issues

- provide advice about behavior/thinking changes which could mitigate those root causes

2-3-7-43-1807 · 8 months ago
is there some initializing prompt for setting up a therapy session? maybe even a specific type?
beachtaxidriver · 8 months ago
I mean... what do you think human therapists are doing. I'm sure they have empathy and do care for their patients... but it's definitely empathy-for-money too.
bnchrch · 8 months ago
Theres two futures I see.

1. We create near AGI

2. We create ASI (Artificial Super Intelligence)

In the first scenario, an investment in any AI model company that does not own its own compute is like buying a tar pit instead of an oil well. In other words, this future has AI as a commodity, and the entity who wins is the one who can produce it at the lowest cost.

In the second scenario, an investment in any AI model company is like buying a lottery ticket. If that company is the first to create ASI then you've won the game as we know it.

I think the minor possibility of the second scenario makes this a good investment. But it definitely feels like all or nothing instead of a sustainable business.

chrisldgk · 8 months ago
There’s a third one as well. AI will never get past its current autocomplete on steroids state and mainstream people will see through the facade at some point. AGI always was and always will be a pipe dream with the technology we currently have.

All AI stocks come crashing down from fantasy amounts of money to what they’re actually worth as they did with every previous tech hype until people find a new thing to throw their money at.

bbor · 8 months ago
The stock market has crashes, yes, but I don't think that's proof that all previous tech was overhyped nonsense. I'm not just talking computers here, I'm talking steam, metal, agriculture. What proof do we have that we live at the end of history, and that no such shocking, world-changing developments might occur in our lifetimes?

Certainly not claiming this is impossible, ofc. As someone who's spent the past year+ working full time on the philosophy of this technology, I think you're going against a pretty clear scientific consensus among AI experts, but perhaps you have your reasons.

I think we already have AGI anyway, so I'm either a loon or a pedant ;)

boringg · 8 months ago
It would be the nuclear fusion reactor of technology: tantalizingly close but always a decade out.
JumpCrisscross · 8 months ago
> AI will never get past its current autocomplete on steroids state

I’m honestly betting on robotics. Tokenising words is intuitive. But the parameter space for tokenised physical inputs is both better understood and less solved for.

dingnuts · 8 months ago
>mainstream people will see through the facade at some point

do you read reddit? lots of people see through the facade now. the only people excited are shills

jszymborski · 8 months ago
You can't imagine a world where tomorrow's chat bot is only marginally better than today's?
jsheard · 8 months ago
That would imply that the people pumping infinity dollars into this technology are suckers, which is impossible. The minds who put $120M into Juicero would never err in their judgement.
0000000000100 · 8 months ago
I can't personally. OpenAI's o3 aside, the rate of progress in the past two years has been eye watering to say the least.

It's tricky since the future of AI isn't something anyone can really prove / disprove with hard facts. Doomers will say that the rate of improvement will slow down, and anti-doomers will say it won't.

My personal belief is that with enough compute, anything is possible. And our current rate of progress in both compute and LLM improvement has left Doomers on shaky ground when discounting the eventuality of AGI being developed. This just leaves ASI as a true question mark in my mind.

admissionsguy · 8 months ago
I have been living in it for a couple of years now.
throwaway314155 · 8 months ago
They can only see two futures at a time.
aoanevdus · 8 months ago
Isn’t that scenario 1?
tshaddox · 8 months ago
I can see more futures. For example

3. We create incrementally better versions of generative AI models which are incrementally better at making predictions based on their training sets, are incrementally more efficient and cheaper to run, and are incrementally better integrated into products and interfaces.

cheema33 · 8 months ago
> 3. We create incrementally better versions of generative AI models...

In my opinion, this seems more likely than some of the other wilder scenarios being predicted.

hansonkd · 8 months ago
> this future has AI as a commodity

That's why I like the idea of OpenRouter so much. Next-token prediction is a perfect place to run a marketplace and compete on price / speed.

It's hard for me to see a future where the long term winners aren't just the chipmakers and energy companies.

With the o3 benchmarks it's becoming apparent that the primary thing keeping the current generation of models from getting smarter is a lack of processing power. If somehow chips got 10x faster tomorrow, it still wouldn't be fast enough. Even at 1000x current performance, those $3k o3 queries would cost $3, which is still too much.

If you invest in a data center, any amount of billions you put in will not future proof it because faster chips are always around the corner and you will be at the mercy of suppliers.

If you invest in a model, even if you invested billions, in 1-10 years people will be able to run equivalently powerful models on consumer hardware.

I'm loving the competitive ecosystem LLMs have created by having a unified API interface, allowing you to swap out models and providers with single lines of code.
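The one-line swap works because most providers and aggregators (OpenRouter included) accept OpenAI-style chat payloads at an OpenAI-style endpoint. A minimal stdlib-only sketch of that idea; the base URLs and model identifiers below are illustrative and may change, so check each provider's docs:

```python
# Sketch of the "unified API" idea: swapping models or providers is a
# one-line change to the model string and base URL. Model names and
# endpoints below are illustrative, not guaranteed to be current.
import json
from urllib.request import Request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> Request:
    """Build an OpenAI-compatible chat-completion request (not yet sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Swapping providers/models is a single line each:
req = chat_request("https://openrouter.ai/api/v1", "KEY",
                   "anthropic/claude-3.5-sonnet", "hi")
# req = chat_request("https://api.openai.com/v1", "KEY", "gpt-4o", "hi")
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) is identical regardless of which provider the one line points at, which is what makes the marketplace competition on price/speed possible.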

cpldcpu · 8 months ago
>In the first scenario, an investment in any AI model company that does not own its own compute is like buying a tar pit instead of an oil well. In other words, this future has AI as a commodity and the entity who wins is the one who can produce it at the lowest cost.

Why do you assume that AI will become a commodity that is only metered by access to compute?

Right now (since June 2024), Anthropic has been ahead of the field in the quality of their product, especially when it comes to programming. Even if o1/o3 beat them on benchmarks, they are still nowhere close when normalized for compute needs.

Can they sustain this? I don't know, but in the end this is very similar to known software or even SaaS business models. They are also in a somewhat symbiotic relationship with Amazon.

Did office software ever become commoditized? Google and many more tried hard, but the same company that led in the '90s is still in the lead.

riku_iki · 8 months ago
> especially when it comes to programming

there is this sentiment on the internet, but in my personal experience, GPT-4 hallucinates APIs and usage examples way less, and after trying to get Claude working, I switched to GPT as the first step in my coding workflow.

jakeinspace · 8 months ago
Definitely disagree with your binary of options, but if I do accept them, why would scenario 2 justify investing? That scenario sounds to me like a likely societal collapse, rendering any investment moot. Even if it's not some terminator or paperclip maximizer situation, and the first team to build ASI is able to keep a handle on it, I'm not sure our economy or civilization could handle that much intelligence, let alone concentrated in the hands of a for-profit corporation. The knowledge of such a thing existing would make securities markets collapse instantly - can't really compete with an ASI agent there, so might as well move everything to real estate and gold.

More alarmingly, such a thing would be more dangerous than a nuclear weapon, and might reasonably merit a nuclear first strike.

optimalsolver · 8 months ago
I guess I should expect it on a VC-tied forum, but it seems strange to talk about company valuations and investment outlooks in the event of an artificial superintelligence operating in the world.
ethbr1 · 8 months ago
If we invented ASI today, it'd still take a lot of economic digestion time, especially on the physical side.

ASI doesn't mean it can magically reverse-infer a digital process from an existing hodge-podge of automation and manual steps.

ASI doesn't mean there are instantly enough telerobotics or sensors to physically automate all processes.

Similarly, even ASI will by definition be non-deterministic and make mistakes. So processes will have to be reengineered to be compatible with it.

norir · 8 months ago
Your comment raised a question for me: what makes you certain that there isn't already an artificial superintelligence operating in the world? I am not sure that it is possible to know when the threshold has been crossed.
personjerry · 8 months ago
Reminds me of Vault-Tec from the Fallout universe trying to "win the capitalism game" and "optimize shareholder value" after a nuclear apocalypse, going so far as to facilitate nuclear war in order to raise their value as a "defense company"
InsideOutSanta · 8 months ago
"Invest into our biotech company. Imagine how rich you will be when the next pandemic wipes out humanity!"
YetAnotherNick · 8 months ago
I just don't understand how the commoditization argument is stated as a truism on Hacker News; in most AI posts it's the top-voted comment. If anything, I see perfect non-ASI AI as a superset of a search engine, and no search engine has matched Google's quality even after two decades.
jncfhnb · 8 months ago
I don’t understand how folks think the economy will survive the advent of AGI. Imagine if you could spin up white collar workers at whim for a fraction of the cost… how do we think the economy is going to work if that happens?
bdangubic · 8 months ago
back in _____ folk were wondering how the economy would survive now that we don't need everyone to do farming, since we have tractors and other farming machinery... as with any other previous "revolution" (industrial, information...) society will move on. might be rocky for a bit though :)
qwertox · 8 months ago
> If that company is the first to create ASI then you've won the game as we know it.

You can have as much intelligence in a model/system as you want, and it may well be ASI, but as long as this intelligence doesn't have the resources it needs to run at its full potential, or at least at one higher than the competition, you still haven't won.

Ultimately those companies or countries which will have the most resources available for an ASI to shine will be the ones which win.

Once ASI has figured out how to obtain energy (and ICs?) "for free", or at least cheaper than the competition, it will have won.

qoez · 8 months ago
ASI that can't design its own superior chips surely isn't real ASI
bossyTeacher · 8 months ago
I think if we create ASI, the companies that invest in it won't necessarily see huge returns, because I am 100% sure that the tech will be declared a national security matter and be heavily regulated. Since you need lots of compute to run those models, most people won't be able to run them without a cloud provider.
bnchrch · 8 months ago
Not before the companies' leaders, boards of directors and investors use it for some very profitable trading first.
zerotolerance · 8 months ago
They don't actually need to achieve AGI or ASI. All they need to do is pump expectations around growth to the point where the public believes all the BS about their terminal growth rate, then dump their equity on the open market.
griomnib · 8 months ago
Putting aside the absurdist notion of “AGI” to start with, we live in a world of finite resources; we’ve used up our carbon budget and are burning down the planet to run LLMs right now.

The new “reasoning” models get very marginal improvements in output for a huge increase in token count and energy use.

None of this is sustainable, and eventually, and soon, crops will start to fail en masse due to climate disasters.

Then the real fun starts.

Right now “AI” is more harmful than helpful on a species level balance sheet. These are real problems, today.

duxup · 8 months ago
>But it definitely feels like all or nothing instead of a sustainable business.

That is kinda startup funding in general, isn't it?

airstrike · 8 months ago
OT: FWIW I think "Anthropic in Advanced Talks to Raise $2B, Valuing It at $60B" makes for a better headline and is basically a shorter version of the first line of the article. It's a pretty factual edit so I don't think it qualifies as editorializing.
datadrivenangel · 8 months ago
"The startup’s annualized revenue—an extrapolation of the next 12 months’ revenue based on recent sales—recently hit about $875 million, one of the knowledgeable people said. Most of that has come from sales to businesses."

Anthropic's team plans are actually pretty pleasant to use. It does seem strange/funny, though, that a significant part of the evaluation choice from a business perspective is that Anthropic is more trustworthy than OpenAI.

awongh · 8 months ago
Crazy that Anthropic has a fraction of the AI model market share that OpenAI has, but a valuation that's larger than its proportional share relative to OpenAI....

OpenAI's last valuation was $157 billion: Anthropic would be valued at 1/3 of OpenAI while having 1/10th of the market share....

HarHarVeryFunny · 8 months ago
But their API usage market shares are much more comparable. Where OpenAI has a huge lead is in chatbot subscriptions (ChatGPT vs Claude). At the end of the day it seems that API usage - business use - is where the huge potential usage is.

Also, Anthropic's Sonnet 3.5 seems to be widely preferred as a developer tool, even over OpenAI's newer o1, and developer use is one of the current leading use cases for AI.

awongh · 8 months ago
I found this: https://www.tanayj.com/p/openai-and-anthropic-revenue-breakd...

Which says Anthropic has about 1/2 the API revenue of OpenAI and is growing fast. But OpenAI actually has 5x the revenue overall and 18x the chat product revenue. (This is from Oct, not sure how much would have changed.)

henry-j · 8 months ago
Great point — I see this as evidence that this investment is more "who gets AGI first" speculation than "who has more chat subscriptions".
awongh · 8 months ago
Afaik this isn't just chat website user base, this also includes API calls through different platforms- Amazon, Azure, etc.
nbardy · 8 months ago
The market is betting the best model wins to some extent, and Anthropic is not slowing down.
maliker · 8 months ago
People can't get into the OpenAI round so they buy Anthropic maybe?