Finally, the C-suite is getting it.
Some subset of the population likes to pretend the workforce is a pure cost that provides less than zero value or utility, and that all the value and utility comes from shareholders.
But if that isn't true, and collective skill has real value, then telling everyone they can have that skill with AI puts at least some headwind on your share price - which is all they care about.
Does that offset a potential tailwind from slightly higher margins?
I don't think any established company should be cheerleading that anyone can easily upset their monopoly with a couple of carefully crafted prompts.
It was always kind of strange to me; it seemed as though they were telling everyone, "our moat is gone, and that is good."
If you really believed anyone could do anything with AI, then the risk of P/Es collapsing would be high, which would be bad for the capital class. You'd have to constantly guess the next best thing to keep your ROI, instead of just parking it in safe havens - like FAANG.
There are literally books that make this argument from insider perspectives (which doesn't mean it's true, but it is possible, and does happen regularly).
A basketball team can be great even if their coach sucks.
You can't attribute everything to the person at the top.
Apparently it's better to pay $100 million each for 10 people than $1 million each for 1,000 people.
So it depends on the type of problem you're trying to solve.
If you're trying to build a bunch of Wendy's locations, it's clearly better to have more construction workers.
It's less clear that, if you're trying to build AGI, you're better off with 1,000 people than 10.
It might be! But it might not be, either. Who knows for certain until after the fact?
Do I have this timeline correct?
* January: announce a massive $65B AI spend
* June: buy Scale AI for ~$15B and go on a massive AI hiring spree, reportedly paying millions per year for low-level AI devs
* July: announce some of the biggest data centers ever, which will cost billions and use all of Ohio's water (hyperbolic)
* August: hiring freeze - it's a bubble!
Someone please tell me I've got it all wrong.
This looks like the Metaverse all over again!
It also improves brand reputation by actually paying attention to what customers are saying and responding in a timely manner with expert-level knowledge - unlike typical customer service reps.
I've used LLMs to help me fix Windows issues using pretty advanced methods, where MS employees would have just told me to either re-install Windows or send them the laptop and pay hundreds of dollars.
99% seems like a pulled-out-of-your-butt number and hyperbolic, but, yes, there's clearly a non-trivial percentage of customer support that's absolutely terrible.
Please keep in mind, though, that a lot of customer support by monopolies is intended to be terrible.
AI seems like a dream for some of these companies - a way to offer even worse customer service.
Where customer support is actually important or it's a competitive market, you tend to have relatively decent customer support - for example, my bank's support is far from perfect, but it's leaps and bounds better than AT&T or Comcast.
That depends on whether the AI successes depended much on the leading edge of LLM development, or whether most of the value was just "low-hanging fruit".
If the latter, that would imply the utility curve is levelling out, because new developments are not proving instrumental enough.
I'm thinking of an S curve: slow improvements through the 2010s, then a burst of activity as the tech became good enough to do something "real", followed by more gradual wins in efficiency and accuracy.
And regardless, I still see this as very positive for society - and don't care as much about whether or not this is an AI bubble.
5% are succeeding. People are trying AI for just about everything right now. 5% is pretty damn good, when AI clearly has a lot of room to get better.
The good models are quite expensive and slow. The fast & cheap models aren't that great - unless very specifically fine-tuned.
Will it get good enough that the success rate of pilots grows from 5% to 25% in 5 years, or in 20? Who knows, but it almost certainly will grow.
It's hard to tell how much better the top foundation models will get over the next 5-10 years, but one thing that's certain is that the cost will go down substantially for the same quality over that time frame.
Not to mention all the new use cases people will keep trying over that timeline.
If, in 10 years' time, AI is succeeding in 2x as many use cases, that might not justify current valuations, but it will make for a much better future - and a necessary one if we're planning on having ~25% of the population retired / not working by then.
Without AI replacing a lot of jobs, we're gonna have a tough time retiring all the people we promised retirements to.
So: 15% of your wages garnished for the rest of your life if you cannot afford to pay the loans off, or until you can leave the country and get beyond their reach (if that's an option). The reality for many is that they will never have enough income potential to pay off this debt, so the best course of action is to bail on it and optimize for quality of life, if you'll never be able to pay it back. The developed world is hungry for young, educated talent.
https://www.cnbc.com/2019/05/25/they-fled-the-country-to-esc...
That seems like a losing strategy, like shooting yourself in the face to save your foot.
I mean, there are millions of people with student debt - there are bound to be a few edge cases, but that really isn't relevant.
Are you also implying that it's not just an edge case to say, "oh, I'll just not pay my student loans and then leave the country"?
Why would that be any more an option than people doing that for credit card debt?
I mean, sure, there's some very small percentage of defaults in some table, factored into the interest rate.
Everyone else is either paying or getting their wages garnished (unless something in the future changes).
Is it not?