Wasn't aware they'd hit a WAU count this high. Impressive, but then again at this kind of valuation you sure want to be heading towards 9-figure MAU numbers.
It's been dying for at least a decade, and I'll put up my entire NFT stash and bet you it will be here long after Arc has been forgotten.
Remember RockMelt? Yeah, nobody else does, either.
EDIT: Per a comment in another thread, it seems these features are only free for 90 days.
1. It's not smart enough to recognize from the initial image that this is a bolt-style seat lock (which a human can).
2. The manual is not shown to the viewer, so I can't infer how the model knows this is a 4mm bolt (or if it is just guessing given that's the most likely one).
3. I don't understand how it can know the toolbox is using metric allen wrenches.
Additionally, is this just the same vision model that exists in Bing Chat?
It's incredible how people think this is "a rich guy revealing his true colors," as if he doesn't know what he's doing, which is manipulating you into linking to him more.
Even if we stopped all carbon emissions today, the earth would continue to warm for centuries. Mitigating these effects is going to take carbon capture (aka unburning oil) and geoengineering on an immense scale, which won't be possible without economic growth. There's no going back.
https://www.carbonbrief.org/explainer-will-global-warming-st...
More importantly, the sooner we can stop carbon emissions, the less severe warming and ecosystem harm we'll experience. Carbon capture may help a bit if we are able to massively scale it up, but there really is no substitute for ending emissions.
Basically there is this innate idea that if the basic building blocks are simple systems with deterministic behavior, then the greater system can never be more than that. I've seen this in spades within the AI community: "It's just matrix multiplication! It's not capable of thinking or feeling!"
Which to me always felt more like a hopeful statement than a factual one. These guys have no idea what consciousness is (nobody does), nor do they have any reference point for what exactly "thinking" or "feeling" is. They can't prove I'm not a stochastic parrot any more than they can prove whatever cutting-edge LLM isn't.
So while yes, present LLMs likely are just stochastic parrots, the same technology scaled up might bring us a model for which there actually is "something it is like to be" it, and we'll have everyone treating it with reckless carelessness because "it's just a stochastic parrot".
It's pretty clear the whole point is to minimize the difference between us and AI, but it does feel like you're undermining your argument by trying to work it from both sides. It reminds me of someone accused of a crime who says both "I didn't do it!" and "If I did do it, it wasn't wrong!"
Humans aren't stochastic parrots. You can't "prove" this because it's not a mathematical fact, but there is plenty of evidence from studying how the brain works to show it. Hell, it's even readily apparent from introspection if you'd bother to check. LLMs, on the other hand, basically are stochastic parrots because they're just autoregressive token predictors. They might become less so due to architectural changes made by the companies working on them, but it isn't going to just creep up on us like some goddamn emergence boogeyman.
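For anyone unfamiliar with the term, "autoregressive token predictor" just means a loop that repeatedly samples the next token from a distribution conditioned on the tokens so far. Here's a toy bigram sketch of that loop (purely my own illustration; a real LLM conditions on the whole context with a transformer, not a bigram table, but the sampling loop has the same shape):

```python
import random
from collections import Counter, defaultdict

# Tiny "stochastic parrot": learn next-token frequencies from a corpus,
# then generate text by repeatedly sampling the next token given the last.
corpus = "the parrot repeats the words the parrot has seen".split()

# Count how often each token follows each other token (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def sample_next(token, rng):
    # Sample from the empirical next-token distribution for `token`.
    counts = bigrams[token]
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(start, n_steps, seed=0):
    # The autoregressive loop: each output token becomes the next input.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_steps):
        if out[-1] not in bigrams:  # dead end: token never seen mid-corpus
            break
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

print(generate("the", 6))
```

The whole debate is about whether scaling up the conditioning model changes the nature of what this loop is doing, or only how well it parrots.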