Seems like the Nigerian Prince Bayesian model as analysed by Microsoft: because there are so many false positives in the pool of thousands of potential responders, the scammers emit a signal that only a genuinely easy victim would fall for, which reduces the cost of their final filtering step.
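(To make that filtering argument concrete, here is a minimal back-of-the-envelope sketch in Python, with made-up numbers rather than figures from the Microsoft paper: an implausible pitch sacrifices a few true positives but slashes false positives, so the share of responders worth the costly manual follow-up goes way up.)

    # Toy numbers, purely illustrative; not from the Microsoft paper.
    recipients = 1_000_000      # emails sent
    p_gullible = 0.001          # fraction of recipients who would ever pay

    # Assumed reply rates under two pitches: a plausible pitch also hooks
    # many skeptics, who cost money to follow up and never pay; an absurd
    # "Nigerian prince" pitch mostly does not.
    plausible = {"gullible": 0.5, "skeptic": 0.05}
    absurd    = {"gullible": 0.4, "skeptic": 0.001}

    def responder_precision(rates):
        """Fraction of responders actually worth the expensive follow-up."""
        true_pos  = recipients * p_gullible * rates["gullible"]
        false_pos = recipients * (1 - p_gullible) * rates["skeptic"]
        return true_pos / (true_pos + false_pos)

    print(f"plausible pitch: {responder_precision(plausible):.2%}")  # ~0.99%
    print(f"absurd pitch:    {responder_precision(absurd):.2%}")     # ~28.6%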
“The company’s mission is to understand the true nature of the universe”
- There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
Considering what's going on at Tesla, I don't think it makes sense to assume they'll be constraining themselves to text/LLMs.
But on the philosophical side, if an understanding can’t be communicated, does it exist? We humans only have various movements and vibrations of flesh, sensing those, text, and images to communicate.
And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
For this reason I don't think LLMs are going to be good filmmakers, for instance. Sure, an LLM will be able to spit out the screenplay for the next action movie; those already seem to be automatically generated anyway. But making a film that resonates with humans takes a lot that can't be formulated in language.
>“The company’s mission is to understand the true nature of the universe” - There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
I disagree. The day is coming when some *BIG* problem is solved by AI just because someone jokingly asks about it.
These people always exist. They pick up whatever is en vogue and sell it to investors. What happens later is of secondary importance, what matters is that money changed hands.
It kinda reminds me of the James Bond film Diamonds Are Forever, where the main scientist is convinced Blofeld is doing the right thing until the very bitter end.
To play devil's advocate, he lists "truthful" as a goal, which is emphatically missing from OpenAI, Google, Microsoft, and Facebook. Google even removed "don't be evil". Elon is greedy and truthful (although obviously with plenty of self-deceit where conflicts of interest are involved…). But how far can you really go with truth when no one wants the truth: not the West, not the East, and not the Middle East. And your allies and investors are in it for the greed part, not so much the truth part. Trump tried the same thing with Truth Social… the problem is that all the greed and shadiness undermines the credibility of the truth part as well.
We don’t give a s%#* about people wanting to use AI to write SEO spam, automate their customer support or generate content to keep the kids quiet. We want to use this tech as a tool to solve real world problems in a way that, looking back 500 years from now, people will see this as a time of innovation, rather than a time of decline.
Whether he'll succeed is a different question, of course. But such a direction is clearly missing from the other players. They are just too eager to cater to the laziest segment of the economy of bits. They're about changing pixels on other people's screens.
This is probably the response Elon is looking for when he writes something vague enough to engage the imagination and particular worldview of any applicant.
> xAI will continue on this steep trajectory of progress over the coming months, with multiple exciting technology updates and products soon to be announced.
There is a lot of potential for using AI in drug discovery and development, biotech more broadly and chemistry/material science. Pharma is investing heavily in this right now. If useful, the output here could potentially also support Neuralink and even SpaceX.
Coupled with the line about the "true nature of the universe", I guessed this was really about entering that space.
But when you look at the careers page [https://x.ai/careers#open-roles], they are only hiring AI engineers. No biochemists, MDs, materials scientists, or anyone from the other natural-science domains. So, if natural-science discovery is actually on the roadmap, either:
- it is in the long term future
- they have no idea what they are doing
More likely, they are not going for natural science and this is basically just a play to compete with OpenAI. And, in that case, I don't understand how they convinced investors to put 6 billion dollars into it.
The “true nature of the universe” bit is that Elon believes that competing LLMs are too neutered because they disallow certain terms etc. (his words are much more politically charged and I do not agree with his take on this and many other things)
Therefore he believes that Grok can be an LLM trained on the voices of the people using his alleged free-speech platform, X.
Elon’s vision of free speech is a world where you can say anything you want as long as it isn’t mean towards Elon or Alt-right ideology. Which is actually pretty hilarious to think about in the context of a training dataset for a generative model…it’s literally gonna be a bullshit generator.
Is the assumption here that we can somehow understand the nature of the universe if we stop censoring the common man and have an unmuzzled LLM that talks like him instead of like the Bay Area AI elites? My uneducated guess is we'll learn more about the true nature of the common clay^w man.
I agree this appears to be Musk's opinion of LLMs in particular.
However, as Musk has already got AI in his cars and was interested in the topic well before LLMs (founding investor in OpenAI when they were doing reinforcement learning), I'd be extremely disappointed if he had forgotten all of that in the current LLM-gold-rush.
The thing about "figuring out the true nature of the universe" is that you *have to do experiments*. It's non-negotiable. There's no amount of really hard thinking or parameters or GPUs that will let you know the secrets of the universe. It's astonishing to me that the AI maximalists and the AI doomers both seem unaware of this basic, fundamental fact of science.
Szegedy and some others were working on science-related (math and natural science) projects at Google prior to leaving. This is probably just piggybacking on their prior work, without any commitment going forward.
Of course they're not going to make any fundamental contributions to natural science or mathematics (or likely even LLM training/understanding).
The capacity to keep solving real-world problems looks very blurry. Boring is a joke. He is not a founder of Tesla, and they are mostly relying on a 10-year-old product (the new product is a joke) that will maybe only be saved by tariffs; it's mostly a meme-stock company by now. SpaceX is a great success, but his job there was mainly providing money through government contracts and having the luck of finding Gwynne Shotwell, which seems hard to replicate consistently. Add an ongoing mental breakdown and full-on lies (FSD, humans on Mars, Hyperloop, ...) and it doesn't look that good (AI is fairly complex, and it's hard to imagine someone in his state still having the intellectual capacity to really handle it). But yeah, a new meme stock can still be a good bet for investors.
Nothing. That's just the money he needs to buy compute and staff in today's market. He's boasting about 100k GPUs; that's $60k per unit, and it has to cover staff, power, racks, repairs, upgrades, failures, development, etc. It doesn't even cover costs.
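(Rough arithmetic behind that figure, assuming, as a guess, that the whole reported raise has to stretch across the claimed 100k GPUs:)

    # Assumed inputs: the reported $6B round and the "100k GPUs" boast.
    raise_usd = 6_000_000_000
    gpu_count = 100_000

    budget_per_gpu = raise_usd / gpu_count
    print(budget_per_gpu)  # 60000.0 -> roughly $60k per GPU, which also has to
                           # cover staff, power, racks, networking, repairs, etc.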
The Elon Musk Bubble: I really hope so. That's so much capital and attention that could be spent on actual research instead of over-hyped and over-promised gimmicks.
I wonder if the investors are just like crypto bros and pyramid schemers: knowing it's bullshit, but hoping the next dumbass will come along tomorrow, next week, etc., to buy it off them at a profit...
Considering the price of e.g. BTC, maybe thinking "People with money can't be this dumb!" is the dumbass move...
That’s when SBF told Sequoia about the so-called super-app: “I want FTX to be a place where you can do anything you want with your next dollar. You can buy bitcoin. You can send money in whatever currency to any friend anywhere in the world. You can buy a banana. You can do anything you want with your money from inside FTX.”
Suddenly, the chat window on Sequoia’s side of the Zoom lights up with partners freaking out.
“I LOVE THIS FOUNDER,” typed one partner.
“I am a 10 out of 10,” pinged another.
“YES!!!” exclaimed a third.
What Sequoia was reacting to was the scale of SBF’s vision. It wasn’t a story about how we might use fintech in the future, or crypto, or a new kind of bank. It was a vision about the future of money itself—with a total addressable market of every person on the entire planet.
“I sit ten feet from him, and I walked over, thinking, Oh, shit, that was really good,” remembers Arora. “And it turns out that that fucker was playing League of Legends through the entire meeting.”
“We were incredibly impressed,” Bailhe says. “It was one of those your-hair-is-blown-back type of meetings.”
How is it that some people get 6B to understand the true nature of the universe? It’s not like they have a track record of doing anything other than absolutely devastating a previously successful company…
If I ask someone to give me 6B to understand the true nature of the universe they’d laugh in my face, but I sort of assume I’d have an even chance of doing better.
Yeah, surely a better group of people to understand the universe would be, I don't know, a team of astrophysicists who didn't buy a social media company so they could push their unfiltered opinions onto the masses. Just a hunch.
Agreed, Elon has failed at everything he’s attempted and is basically broke these days, along with his companies teetering on the edge of bankruptcy. People need to stop giving him money because all he does is lose it
I mean, other than SpaceX, which created the world’s most reliable rocket, and delivers more mass to orbit than anyone else. Dunno how anyone could consider that a failure.
I still don't understand why elmo is not being investigated for fraud over all these obscenely unreal claims and promises he has made over the years. This is a clear-cut case of fraud, for which the "female Steve Jobs" and another e-trucking clown CEO are doing their time. Why not elmo?
Unique data set. And Elon. And with Elon comes a great set of talent. From https://x.ai/about
> Our team is led by Elon Musk, CEO of Tesla and SpaceX. Collectively our team contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, μTransfer, and SimCLR. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.
Does it, though? That was probably true pre-X™, but it seems like the primary selection metric has gone from “competence” to “doesn’t ever contradict Elon”
But we know from Google that unless you can definitively solve the "is this sentence real or a joke" problem, datasets like Twitter, Reddit, etc. are going to be more trouble than they are worth.
And Elon's recent polarising behaviour, and the callous way in which he disbanded the Tesla Supercharger team, mean that truly talented people aren't going to be as attracted to him as in his early days. They are only going to be there for the money.
It's $6B down the drain. Saying Grok 1.5 is competitive is a joke; if it were any good it would be ranked well in the Chatbot Arena (https://chat.lmsys.org/). Elon is a master at hyping underperforming things, and this is no exception.
I mean, he can try. The world already has a number of AI corporations headed up by totalitarian megalomaniacs though, the market may eventually reward some other course of action.
The real bull case: Elon doesn't kowtow to mentally ill basement nerds and the media/politicians trying not to lose power.
Can you imagine someone running in to tell Elon the fat nerds on HN are in a tizzy about Grok telling people to eat rocks?
The other bull case: he's obviously siloing Twitter for unique training data. Reddit can only ask nicely that you don't train off them.
Twitter with a good AI could become quite strong. I'm not as bullish on this, but... Twitter is all the cutting-edge news. ChatGPT was happy to be years out of date.
No one cares that Russia has finally manned up and launched a tactical nuke 24 hours after it happens; something new will be trending. This is Twitter's strength: to-the-minute data. One of the AIs will have to specialize in this.
I would be asking the same question if another company formed in the past year raised $6B to train LLMs. For example, Mistral raised a significantly smaller round at a much lower valuation. Just trying to learn how others see this.
It's probably better for both companies that he isn't. Better for him to torch a $6B Series B and Twitter than mess up SpaceX and Tesla more than he already has.
I yawned so hard my jaw unlocked.
Can't wait to see groundbreaking... checks notes... "advancements in various applications, optimizations, and extensions of the model".
Do these companies only hire yes men?
https://www.microsoft.com/en-us/research/wp-content/uploads/...
The all-encompassing nature of it seems befitting a company producing increasingly general-purpose AI.
The output will very simply tell you how 'truthful' the AI actually is.
> Therefore he believes that Grok can be an LLM trained on the voices of the people using his alleged free-speech platform, X.
For example: saying “cis” or “cisgender” flags your post as abusive and limits visibility. Saying the 6-letter (or 3-letter) f-slur does not.
> However, as Musk has already got AI in his cars and was interested in the topic well before LLMs (founding investor in OpenAI when they were doing reinforcement learning), I'd be extremely disappointed if he had forgotten all of that in the current LLM gold rush.
(That's not a "no"; he's disappointed before).
AIs like AlphaFold are hardly in the news compared to OpenAI and its competitors.
Did x.ai just become worth more than x.com? We must be nearing the bubble popping...
Their founder has a track record of making his investors a lot of money while solving really hard, important problems.
Current valuations:
- Tesla $570B
- SpaceX $180B
- Neuralink $5B
- Boring $5.6B
Boring and Neuralink are just valuations with an unknown revenue base and future, which may not be justified themselves, and they are weak proof for justifying yet another valuation.
> Considering the price of e.g. BTC, maybe thinking "People with money can't be this dumb!" is the dumbass move...
Humans with money are mostly just humans that are much less likely to face real consequences if they eff it up.
And it's certainly 6 billion dollars down the drain, funneled into staving off the collapse of X/Twitter and into Musk paying his dues.
> If I ask someone to give me 6B to understand the true nature of the universe they'd laugh in my face, but I sort of assume I'd have an even chance of doing better.
He's also transformed two industries and become a dominant player in each. Just do that and the investors will give you money.
https://www.ft.com/content/2a96995b-c799-4281-8b60-b235e84ae...
They are already competitive despite the late start: https://x.ai/blog/grok-1.5v
So they hired some guys from other AI companies.
As do Amazon and Apple, who aren't just sitting back doing nothing.
So I think even 4th place is putting it nicely. Far more likely to be 6th at best.
NVIDIA must be happy: $5.9B will go to it.
He's been actively pushing for more control of Tesla for exactly this: https://electrek.co/2024/05/20/elon-musk-confirms-threat-giv...
What mess is afoot at SpaceX that is his doing?