If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.
AI is not the problem; laziness and negligence are. There need to be serious social consequences for this kind of thing, otherwise we are tacitly endorsing it.
This reminds me of the discourse about the gun problem in the US: "guns don't kill people, people kill people", etc. It's rhetoric used solely to avoid doing anything and to avoid addressing the underlying problem.
So no, you're wrong - AI IS THE PROBLEM.
These statements are so out of touch with reality that I genuinely wonder where YC will be in 5-10 years.
Most countries have them and it's not for no reason.
Whether we can trust our government, though, is a different matter.
(FYI: not a UK citizen.) But it does matter. If I go to a protest against the government, will I be denied all those services because someone flagged my ID?
> Those ARM instructions are just hallucinated, and the reality is actually the other way around: ARM doesn’t have a way of hard-coding ‘predictions’, but x86 does.
This made me chuckle. Thanks.
Why not a HIBP (Have I Been Pwned)-style site to check whether your number is in the database?
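For what it's worth, the mechanism HIBP uses for its Pwned Passwords lookups is a k-anonymity range query: the client hashes the secret, sends only the first few hex characters of the hash, and matches the returned candidate suffixes locally, so the server never sees the full value. A minimal sketch of that client-side logic (the phone-number use case here is hypothetical; HIBP's actual range API covers passwords):

```python
import hashlib


def hash_prefix_query(secret: str, prefix_len: int = 5) -> tuple[str, str]:
    """Split the SHA-1 hex digest of a secret into the short prefix
    that would be sent to the server and the suffix kept locally."""
    digest = hashlib.sha1(secret.encode("utf-8")).hexdigest().upper()
    return digest[:prefix_len], digest[prefix_len:]


def suffix_in_range_response(suffix: str, range_response: str) -> bool:
    """Check the locally kept suffix against the 'SUFFIX:COUNT' lines
    a range endpoint returns for the queried prefix."""
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False
```

The privacy property comes from the prefix being short: many hashes share any given 5-character prefix, so the server learns almost nothing about which number was checked.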
But it'll be based on risks introduced by preventable human error: hubris, etc.
All it will take is some viral video of a Tesla running over a child or something terrible like that.
We already have self-driving cars: look at Waymo, or at Chinese ride-hailing companies. What we won't have are private-use self-driving cars: a regular person won't be able to buy one.
Most people are just lazy and eager to take shortcuts, and this time it's blessed or even mandated by their employer. The world is about to get very stupid.
[1] https://arstechnica.com/gadgets/2024/08/do-not-hallucinate-t...