You also seem to be implying in your comment that the Orion glasses displayed at Connect last year were a last-minute pivot, which is a ludicrous statement.
You cannot determine it's a waste if the effort isn't completed, and if you have no insight into their progress.
Altman is desperately trying to use OpenAI's inflated valuation to buy some kind of advantage, which is why he's buying ads, paying $6.5 billion in stock to Jony Ive, and $3 billion for a VSCode fork created in a few months.
Almost anything makes sense when you see your valuation going to zero unless you can figure something out.
Core to OpenAI's strategy is that they control not just the models, but also the entrypoints to how these models are used. Don't take it from me, this is explicitly their strategy according to internal documents (https://x.com/TechEmails/status/1923799934492606921).
Some important entrypoints are:
- Entrypoints for layman consumers: They already control this entrypoint through the ChatGPT app. They have a limited moat here because they are at the whims of the platform owners, primarily Apple and Google. This is why they are purchasing Ive's startup.
- Entrypoints for developers: They acquired Windsurf, and are actively working on cloud development interfaces such as the new codex product.
- Entrypoints for enterprise: They have the codex products described above, as well as Operator, and are actively working on more cloud-based agents.
A rebuttal I anticipate to the above goes something like this: "If they have so much capital and dev experience, why are they acquiring these businesses instead of building internal competitors? This is a demonstration of their failure to execute."
The current AI boom is one of the most competitive tech races that has ever occurred. Because of this, and particularly because they are so well capitalised, it makes sense to acquire instead of build. They simply cannot afford to waste time building these products internally when they can purchase products much further along in their development and then attach them to their capital and R&D engine.
What you're saying is either an outright lie you are telling us, or a lie you have been told and are choosing to believe, either to feel better about working for them or to avoid putting yourself at risk by digging deeper and finding adverse information (which, once you know it, may require you to blow the whistle or leave you legally complicit in the matter).
Basic economics suggests there is no reason Meta (then Facebook) would pay what it paid for WhatsApp for "just" a messaging app, especially before it was as entrenched as it is now (when it could still have been trivially dethroned by Facebook's own offering). They did so because, unlike Facebook, people trusted WhatsApp with access to their contacts, and that information is extremely important for Meta.
Source: also worked at Meta and had full access to WhatsApp's codebase, like most engineers at Meta.
These numbers can be plotted as points in a space, and embeddings of things with similar meanings are plotted close to each other. So things like "exam preparation" would have embeddings close to things like "top study tips".
Say you have created embeddings for a large corpus of text (in this case, all YouTube captions) once. If you then create an embedding for a user query, you can search for corpus embeddings close to it, and these will be "semantically" similar to the query.
The advantage is that unlike traditional full-text search, the user doesn't need a query that includes words present in the text.
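For concreteness, here is a minimal sketch of that pipeline in Python, assuming the sentence-transformers library; the model name and the handful of stand-in "caption" snippets are illustrative placeholders, not anything from the actual system.

```python
# Minimal sketch of embedding-based semantic search.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

# Embed the corpus once (stand-in caption snippets).
corpus = [
    "top study tips for final exams",
    "how to cook a perfect steak",
    "exam preparation strategies that actually work",
]
corpus_embeddings = model.encode(corpus, normalize_embeddings=True)

# Embed the user query and rank corpus entries by cosine similarity
# (dot product equals cosine similarity on normalized vectors).
query = "how should I get ready for my test next week?"
query_embedding = model.encode([query], normalize_embeddings=True)[0]
scores = corpus_embeddings @ query_embedding

for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```

Note that the query shares almost no words with the top-ranked snippets, which is exactly the advantage over full-text search described above.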