ctoth · 2 months ago
> There are certain bullsh*t jobs out there — some parts of management, consultancy, jobs where people don’t check if you’re getting it right or don’t know if you’ve got it right.

Market Analyst, perhaps?

zippyman55 · 2 months ago
I suggest AI is cover to rein these jobs in. All those people who had a nice-paying job but did about two hours of work a day: AI is coming for them. In some respects, management previously looked the other way, but that is becoming less frequent, and it's easy to blame the reduction on AI.

1vuio0pswjnm7 · 2 months ago
"Earlier this month, Garran published a report claiming that we are in "the biggest and most dangerous bubble the world has ever seen.""

Here is the report

https://www.youtube.com/watch?v=uz2EqmqNNlE

jimbo808 · 2 months ago
This may just be wishful thinking, but is it reasonable to hope that it won't hit the middle class as hard this time around? It seems like most of the people holding the AI bags are the very wealthy, and these AI companies don't seem to employ a huge number of people.
dist-epoch · 2 months ago
When everybody agrees about something in finance, it's typically the other way around.

Reminds me of the "everybody knows Tether doesn't have the dollars it claims and its collapse is imminent" line that was parroted here for years.

grogers · 2 months ago
The argument about Tether wasn't that they didn't have any assets backing the coins. It was that the assets they held were riskier than the boring <1-month-maturity Treasuries they should have been holding. Just because Tether didn't implode doesn't mean implosion wasn't a very real possibility. It's not very different from "the market can stay irrational longer than you can stay solvent".
bdangubic · 2 months ago
Every penny I've made in the market over the last 30 years can, in some way or entirely, be attributed to exactly this. But it has to be backed by fundamentals, and the fundamentals are weakening… This is a good read on the recent OpenAI mess, though it's industry-wide: https://www.wheresyoured.at/openai400bn/
GenerWork · 2 months ago
People here are still in denial that crypto will ever have a use case, meanwhile you have Larry Fink saying that he wants to tokenize the financial ecosystem.
singularity2001 · 2 months ago
People are in denial that crypto (at least the current iteration) can go completely to zero, which is impossible for AI tech.
Yizahi · 2 months ago
Tokens do have use cases, obviously. We can see countless use cases with our own eyes. What tokens don't have is any use case that is both legal and competitive; that was the argument. All of those castle-in-the-sky constructs: property deeds on the blockchain (technically and legally impossible), game assets on the blockchain (also technically impossible, plus no game studio would ever be interested), solving ticket scalping on the blockchain (technically possible, but no ticket vendor is interested because they are the ones who benefit from scalpers), and so on. The list goes on. All of those legal use cases were duds, because it is simply a shitty technology.

But to reiterate, there is a great and massive actual use case for tokens, yes. No one would argue against that :) . We just think it is a bad one.

AaronAPU · 2 months ago
Number go up isn’t a use case.

This seems to be the disconnect.

Yizahi · 2 months ago
And they did not, in fact, have the dollars to back them up. They went without them for a few years continuously. The lesson: never bet even on a surefire stake if market corruption is involved, or mafia money. In the case of Tether it was both.

It was a good lesson for me personally: always check the wider picture and consider unknown factors.

dcre · 2 months ago
“At the heart of the note is a golden rule I’ve developed, which is that if you use large language model AI to create an application or a service, it can never be commercial.

One of the reasons is the way they were built. The original large language model AI was built using vectors to try and understand the statistical likelihood that words follow each other in the sentence. And while they’re very clever, and it’s a very good bit of engineering required to do it, they’re also very limited.

The second thing is the way LLMs were applied to coding. What they’ve learned from — the coding that’s out there, both in and outside the public domain — means that they’re effectively showing you rote learned pieces of code. That’s, again, going to be limited if you want to start developing new applications.”

Frankly kind of amazing to be so wrong right out of the gate. LLMs do not predict the most likely next token. Base models do that, but the RLed chat models we actually use do not: RL optimizes for reward, and the unit being rewarded is larger than a single token. On the second point, approximately all commercial software consists of a big pile of chunks of code that are themselves rote and uninteresting on their own.

They may well end up at the right conclusion, but if you start out with false premises as the pillars of your analysis, the path that leads you to the right place can only be accidental.

criticalfault · 2 months ago
Can you explain a bit more on the topic of what happens after the base model?
dcre · 2 months ago
The base model is a pure next token predictor. It just continues whatever prompt you give it — if you ask it a question, it might just keep elaborating the question. To turn these models into something that can actually chat (and more recently, that can do things like tool calls) they do a second phase of training, including reinforcement learning, which teaches the model to maximize some kind of reward signal meant to represent good answers of various kinds. This reward signal applies at the level of the whole response (or possibly parts of the response) so it is not predicting the most likely next token. I don’t know in an absolute sense how much this ends up changing the base model weights, and it’s surprisingly hard to find discussions of this, I guess because the state of the art is quite secret. But it’s clear that RL is important for getting the models to become useful.

This is a reasonable explanation, though as a non-expert I can’t vouch for the formal parts: https://www.harysdalvi.com/blog/llms-dont-predict-next-word/

There are other posttraining techniques that are not strictly speaking RL (again, not an expert) but it sounds to me like they are still not teaching straightforward next token prediction in the way people mean when they say LLMs can’t do X because they’re merely predicting the most likely next token based on the training corpus.
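The distinction can be sketched with a toy two-token "model" (every probability and the reward rule below are invented purely for illustration, not taken from any real system):

```python
# Toy two-step language model: P(first token) and P(second | first).
# All numbers here are made up for illustration only.
first = {"a": 0.55, "b": 0.45}
second = {"a": {"x": 0.6, "y": 0.4}, "b": {"x": 0.9, "y": 0.1}}

def greedy_decode():
    """Base-model style: pick the single most likely *next* token each step."""
    t1 = max(first, key=first.get)
    t2 = max(second[t1], key=second[t1].get)
    return (t1, t2)

def best_whole_sequence():
    """Scoring whole sequences by joint probability can pick a different answer."""
    seqs = {(t1, t2): first[t1] * second[t1][t2]
            for t1 in first for t2 in second[t1]}
    return max(seqs, key=seqs.get)

def reward(seq):
    """RL-style reward defined over the whole response, not per token
    (invented rule: only answers ending in 'y' count as 'good')."""
    return first[seq[0]] * second[seq[0]][seq[1]] if seq[1] == "y" else 0.0

def best_by_reward():
    seqs = [(t1, t2) for t1 in first for t2 in second[t1]]
    return max(seqs, key=reward)

print(greedy_decode())        # ('a', 'x') -- greedy next-token choice
print(best_whole_sequence())  # ('b', 'x') -- highest joint probability
print(best_by_reward())       # ('a', 'y') -- highest sequence-level reward
```

Three different objectives, three different answers, which is the point: once the training signal applies at the response level, "predicts the most likely next token" stops being an accurate description of what the model was optimized to do.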

BenFranklin100 · 2 months ago
All I know is that I'm looking forward to picking up deep-learning programmers for biomed applications in about nine months' time.
bitwize · 2 months ago
I've quipped a lot here about s/AI/statistics/g, but the applications where that is most straightforwardly true are probably the most solid that are going to produce a lot of value over the long term.

Before computers came along, we really couldn't fit curves to data much beyond simple linear regression. Too much raw number crunching to make the task practical. Now that we have computers—powerful ones—we've developed ever more advanced statistical inference techniques and the payoff in terms of what that enables in research and development is potentially immense.
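As a small concrete example of that kind of computer-enabled statistics (the data here is synthetic, generated from a known curve, not from any real study):

```python
import numpy as np

# Generate noisy samples from a known quadratic, then recover the
# coefficients by ordinary least squares -- routine on a computer,
# impractical by hand at any real scale.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.0 * x**2 - 3.0 * x + 1.0 + rng.normal(0, 5, size=x.size)

# polyfit solves the least-squares problem; coefficients come back
# highest degree first.
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)  # close to [2, -3, 1]
```

Trivial today, but exactly the sort of number crunching that was out of reach before cheap compute, and scaling the same idea up is a fair one-line description of much of modern machine learning.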

BenFranklin100 · 2 months ago
Yep. Right now it's hard for biomed companies to compete on salary because of the AI craze, but if the bubble bursts, salaries will come back down to earth. Deep/machine learning will, imo, prove to have large societal benefits over the next decade.
lukeschlather · 2 months ago
The bubble referenced in the article is $1 Trillion, compared to Google's $3 trillion market cap. And OpenAI / Anthropic legitimately compete with Google Search. I feel weirdly like AI's detractors are somehow drinking too much of the AI Kool-Aid. All AI has to do to justify these valuations is capture 1/3rd of Google. Unless Google is wildly overvalued, which it may be, but that's not a phenomenon that has anything to do with AI hype.

And there are legitimately applications beyond search, I don't know how big those markets are, but it doesn't seem that odd to suggest they might be larger than the search market.

singularity2001 · 2 months ago
NVIDIA has a market capitalization of about US$4.3–4.6 trillion. Still not overly excessive compared to Google, considering it's a hardware company.
octoberfranklin · 2 months ago
Most of Google's value is the moat they've built around the things that bring in money... their advertising market, google play store, vertical integration, etc. See also Doctorow's Chokepoint Capitalism.

Building even a tiny fraction of those moats is mind-bogglingly difficult. Building a third of that moat is insanely hard. To claim that the AI industry's "expected endgame moat size" is one-third of Google's current moat is a ludicrous prediction. You'd be better off playing the lottery than making that bet.

I would be happy to bet against this if I could do it without making a Keynes-wager (that I can remain solvent longer than markets remain irrational), but I see no way to do so. Put options expire, futures can be force-liquidated by margin calls, and short sales have unlimited downside risk.

exsomet · 2 months ago
> All AI has to do to justify these valuations is capture 1/3rd of Google.

Is that all? It really is that easy huh.

bdangubic · 2 months ago
> And OpenAI / Anthropic legitimately compete with Google Search

They compete legitimately with Google Search as I compete legitimately with Jay-Z over Beyonce :)

OrvalWintermute · 2 months ago
Just ask yourself this question though...

Is there a reason why AI cannot be far better than Google at providing results to queries?

Inherently, they are in the same business, but I am not aware of any AI aimed squarely at Google's business... though it is completely logical that one would be.

Furthermore, it appears that Google just sells off placement to the highest bidder, and these AIs could easily beat that by giving free AI access in exchange for nibbling at the queries and adding a 'sponsored results' tab.

Agingcoder · 2 months ago
What I want to know is whether people who believe in a bubble actually short AI/tech-related stocks.
Esophagus4 · 2 months ago
Usually not - the people writing these comments have neither the understanding nor the courage of their convictions to bet based on their own analysis.

If they did, the articles would look less like “wow, numbers are really big,” and more like, “disclaimer: I am short. Here’s my reasoning”

They don’t even have to be short for me to respect it. Even being hedged or on the sidelines I would understand if you thought everything was massively overvalued.

It’s a bit like saying you think the rapture is coming, but you’re still investing in your 401k…

Edit: sorry to respond to this comment twice. You just touched on a real pet peeve of mine, and I feel a little like I’m the only one who thinks this way, so I got excited to see your comment

Terr_ · 2 months ago
That sounds like a variation on: "If you're so smart, why aren't you rich?" which rests on some very shaky (yet comforting) set of assumptions in a "just world."

Heck, just look at yesterday: Myself and several million other people wouldn't have needed to march if smart people reliably ended up in charge.

I think it's more valuable to flip the lens around, and ask: "If you're so rich, why aren't you smart?"

ng12 · 2 months ago
Usually not, because shorting a broad chunk of market is very hard. "Markets can remain irrational longer than you can remain solvent".
sitzkrieg · 2 months ago
Or you could sell a single broad-market ETF, lol. Or buy a short ETF. It hasn't been hard to selectively expose yourself to dang near any slice of equities since the ETF boom.
sph · 2 months ago
I feel like any naive question about investing can be answered with "markets can remain irrational longer than you can remain solvent".

The bubble is the manifestation of this concept. Things should be falling apart, yet they keep going up for longer than is reasonable; at some point, bearish investors lose so much money that they decide it's better to just ride the wave up, growing the bubble even further, until it bursts and everybody loses.

There is a reason investors flock to gold during these times. The best move is not to play (though you don't want to hold too much cash either)

AviationAtom · 2 months ago
It's a bit hard to short private companies, which most AI companies have chosen to remain in order to avoid scrutiny from shareholders.
paulpauper · 2 months ago
They do and the majority lose everything. The few winners who happen to time the top are praised for their genius.
Yizahi · 2 months ago
A market bubble is essentially a gambling event gone wrong. Shorting stock is widely recognized by people smarter than me as high-risk gambling, due to multiple factors. So now please tell me: why would people concerned about gambling gone wrong voluntarily engage in reverse gambling themselves? Imagine football and a spectator who is moderately in the know about the sport. He sees multiple people gambling large sums on a team he deems likely to lose. Why would such a person go and bet unreasonable sums on the opposite team, even if it's a likely win? It's still gambling, and still not a reasonably defined event.

tl;dr: it is really tiring reading these "clever" quips about "why won't you short, then?", mainly because they are neither clever nor in any way new. We heard "why won't you short BTC, then?" for a decade. You are not original.

brazukadev · 2 months ago
I'm against sports betting; should I bet against it?
lelanthran · 2 months ago
> What I want to know is whether people who believe in a bubble actually short AI/tech-related stocks.

Why? What does that tell you?

brippalcharrid · 2 months ago
Stated preference vs. revealed preference
ThrowawayTestr · 2 months ago
The common phrase is "putting one's money where their mouth is"
Esophagus4 · 2 months ago
On a more degenerate forum, the policy you’re referring to would be “positions or ban”
givemeethekeys · 2 months ago
This is also why all online stock pundits are full of shit. None of them will publicly disclose their P&Ls from trading, because they make most of their money from YouTube and peddling courses.
AndrewDucker · 2 months ago
I moved my pension in to an index that doesn't include the big AI companies.
AviationAtom · 2 months ago
The whole market is propped up by AI stocks, though. So realistically you'd have to move out of the markets entirely to avoid exposure.
dragonwriter · 2 months ago
"I believe this is a bubble and it will pop"

and

"I believe this is a bubble and it will pop and I believe I can time it well enough to be worth putting money on when it will pop"

Are...not the same belief.