God help us if companies start relying on LLMs for life-or-death stuff like insurance claim decisions.
"UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges" Also "The use of faulty AI is not new for the health care industry."
There's currently no driver-assistance (DA) system other than Tesla's FSD available in the US that works on both city streets and highways.
Or, if you want to loosely define "work", Ernst Dickmanns had self-driving in the 80s, and put it on the autobahn in the 90s. I'd rather define it more tightly as "statistically at least as safe to be in _and_ to be near, as a human driver".
Tesla claims to have achieved that, but I don't believe them. That's because the data they report 1) omits a fair bit of critical info, and 2) frequently changes definitions. Both serve to make comparisons difficult. If it were clearly safe, I think they'd put effort into making the comparison transparent.
Bear in mind that Musk has been claiming "Full Self-Driving" since at least 2016, and people involved have asserted that he wasn't wrong, he was lying.