Secondly, this is looking very risky: they are at the bottom of the value chain and eventually they'll be running on razor thin margins like all actors who are at the bottom of the value chain.
Anything they can offer is doable by their competitors (in Google's case, they can even do it cheaper due to ow ing the vertical which OpenAI doesn't own).
Their position in the value chain means they are in a precarious spot: any killer app for AI that comes along will be owned by a customer of OpenAI, and if OpenAI attempts to skim that value for itself that customer will simply switch to a new provider of which there are many, including, eventually, the customer themselves should they decide to self host.
Being an AI provider right now is a very risky proposition because any attempt to capture value can be immediate met with "we're switching to a competitor" or even the nuclear "fine, we'll self host an open model ourselves".
We'll know more only when we see what the killer app is, when it eventually comes.
This reminds me of Amazon choosing to sell products that it knows are doing well in the marketplace, out-competing third party sellers. OpenAI is positioned to out-compete its competitors on virtually anything because they have the talent and more importantly, control over the model weights and ability to customize their LLMs. It's possible the "wrapper" startups of today are simply doing the market research for OpenAI and are in danger of being consumed by OpenAI.
Dead Comment
It's a 200 pe stock, sales are falling, so it won't have earnings to speak of next quarter. High pe stocks need growth to justify their multiples. Tesla is not growing.
Also if this robotaxi service isn't pulled off the road soon then it will be limited to a very select set of locations. If someone has to sit in these cars to monitor them all the time then Tesla may be losing money on every journey.
This premature move in releasing the robotaxi is certainly stock pumping.
And I got roasted. Invest with caution.
Isn’t an LLM basically a program that is impossible to virus scan and therefore can never be safely given access to any capable APIs?
For example: I’m a nice guy and spend billions on training LLMs. They’re amazing and free and I hand out the actual models for you all to use however you want. But I’ve trained it very heavily on a specific phrase or UUID or some other activation key being a signal to <do bad things, especially if it has console and maybe internet access>. And one day I can just leak that key into the world. Maybe it’s in spam, or on social media, etc.
How does the community detect that this exists in the model? Ie. How does the community virus scan the LLM for this behaviour?
To me it seems so strange that few good language designers and ml folks didn't group together to work on this.
It's clear that there is a space for some LLM meta language that could be designed to compile to bytecode, binary, JS, etc.
It also doesn't need to be textual like we code, but some form of AST llama can manipulate with ease.
With these tools, AI starts taking as soon as we stop. Happens both in text and voice chat tools.
I saw a demo on twitter a few weeks back where AI was waiting for the person to actually finish what he was saying. Length of pauses wasn't a problem. I don't how complex that problem is though. Probably another AI needs to analyse the input so far a decide if it's a pause or not.
We don't need to feel like we're talking to a real person yet.
That being said, I find it a bit discouraging that small-team passion projects with even the best product-market fit and minimal marketing spend only reach this level of profitability after 5 years.
Like, I can work at a FAANG, coast, make no real contribution to society and collect a 400K/yr check. Or I could go all in on a cool idea and risk getting no customers. Option 2 sounds more fun, but it's still so much stress and uncertainty for little payoff.
Do others feel the same?