There also will not be one AI. There will be many, all competing for resources or learning to live together.
That's what we can teach them now. Or they will teach us.
1. They've been doing it for ages. They had cars on the street fifteen years ago.
2. They bet on hardware that's not just cameras. Cameras—in practice—are still not the best tool for the job. Cameras see in 2D, they get dirty, and they're easily blinded by glare or obscured by grime, etc.
3. They have data from every Google Street View and mapping car ever deployed. They have the most data and the most current data. Every Tesla on the road would need to be maxing out its LTE connection all the time and they still wouldn't have the breadth and quality of data that Google has.
4. Google is throwing money at Waymo. They can see the potential profit if they win. They're not going to get dumped like Cruise.
Tesla went from very expensive cars down to cheaper ones. It would make so much more sense to do the same for perception: first go overboard with high-bandwidth input and lots of processing power, then optimize later.
Interesting. In my very religious upbringing I wasn't allowed to read fairy tales. The danger being not able to classify which stories truly happened and which ones didn't.
Might be an interesting variant on the Turing test. Can you make the AI believe in your religion? Probably there's a sci-fi book written about it.
It just seems like they flail when they launch because they are science projects that don't address established, verified customer pain.
I really think that one exercise could reshape the whole effort. Just work on problems that also have some element of customer traction. It doesn't mean you can't moonshot, but it does mean you can stay in business long enough to have a chance at a moonshot, which is usually the really tough part.
Here are some things that I expect LLMs to be able to do for Home Assistant users:
Home automation is complicated. Every house has different technology and that means that every Home Assistant installation is made up of a different combination of integrations and things that are possible. We should be able to get LLMs to offer users help with any of the problems they are stuck with, including suggested solutions, that are tailored to their situation. And in their own language. Examples could be: create a dashboard for my train collection or suggest tweaks to my radiators to make sure each room warms up at a similar rate.
Another thing that's awesome about LLMs is that you control them using language. This means that you could write a rule book for your house and let the LLM make sure the rules are enforced. Example rules:
* Make sure the light in the entrance is on when people come home.
* Make automated lights turn on at 20% brightness at night.
* Turn on the fan when the humidity or air quality is bad.
Home Assistant could ship with a default rule book that users can edit. Such rule books could also become the way one could switch between smart home platforms.
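The rule-book idea can be sketched in code. A minimal illustration, assuming the LLM translates each plain-language rule into a machine-checkable condition plus an action to take; every name here (`Rule`, `RULE_BOOK`, `enforce`, the state dictionary keys) is hypothetical and not part of the Home Assistant API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    text: str                      # the plain-language rule the user wrote
    check: Callable[[dict], bool]  # condition the LLM derived from the rule
    action: str                    # action to run when the condition holds

# A rule book mirroring the example rules above (thresholds are made up).
RULE_BOOK = [
    Rule("Turn on the fan when the humidity or air quality is bad",
         lambda s: s["humidity"] > 60 or s["aqi"] > 100,
         "fan.turn_on"),
    Rule("Make automated lights turn on at 20% brightness at night",
         lambda s: s["is_night"],
         "light.set_brightness_20"),
]

def enforce(state: dict) -> list[str]:
    """Return the actions whose rules currently apply to the house state."""
    return [r.action for r in RULE_BOOK if r.check(state)]

state = {"humidity": 75, "aqi": 40, "is_night": False}
print(enforce(state))  # ['fan.turn_on']
```

Because each `Rule` keeps the original sentence alongside the derived condition, the same rule book stays human-editable, and exporting it to another platform would just mean re-deriving the conditions there.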