The Xbox was less common, since the original Xbox never really sold all that well. But Halo came out for the PC as well, and many people played it there.
It seems very short-sighted.
I think of it more like self-driving cars: I expect the error rate to quickly become lower than that of humans.
Maybe in a couple of years we’ll consider it irresponsible not to write security- and safety-critical code with frontier LLMs.
Very quickly he went straight to, "Fuck it, the LLM can execute anything, anywhere, anytime, full YOLO".
Part of that is his risk appetite, but it's also partly because anything else is just really frustrating.
Someone who doesn't themselves code isn't going to understand what they're being asked to allow or deny anyway.
To the pure vibe-coder, who doesn't just not read the code but couldn't read it if they tried, there's no difference between "Can I execute grep -e foo */*.ts" and "Can I execute rm -rf /".
Both are meaningless to them. How do you communicate real risk? Asking vibe-coders to understand the commands isn't going to cut it.
So people just go full allow-all and pray.
That's a security nightmare: it's back to the kind of default-allow, permissive environment we haven't really seen on mass-use, general-purpose, internet-connected devices since Windows 98.
The wider PC industry has got very good at UX, to the point where most people don't need to worry about how their computer works at all; it successfully hides most of the security trappings and still keeps things secure.
Meanwhile, the AI/LLM side is so rough that it basically forces the layperson to open a huge hole they don't understand just to make it work.
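To make that gap concrete, here's a rough sketch in TypeScript of the kind of translation a harness would need to do before a permission prompt means anything to a non-coder. The risk buckets and command patterns are entirely my own made-up assumptions, not how any real agent tool works:

    // Purely illustrative: map a proposed shell command to a human-meaningful
    // risk bucket instead of showing the raw command string.
    type Risk = "read-only" | "modifies files" | "destructive" | "unknown";

    // Made-up pattern table; a real classifier would need proper shell parsing
    // and would have to reason about flags, pipes, redirects, network access...
    const rules: Array<[RegExp, Risk]> = [
      [/^rm\s+-rf\b/, "destructive"],
      [/^(rm|mv|chmod|chown|dd)\b/, "modifies files"],
      [/^(grep|ls|cat|find|head|tail|wc)\b/, "read-only"],
    ];

    function classify(command: string): Risk {
      for (const [pattern, risk] of rules) {
        if (pattern.test(command)) return risk;
      }
      return "unknown";
    }

    // "May I run a read-only search?" carries some meaning for a non-coder;
    // the raw command string carries none.
    console.log(classify("grep -e foo */*.ts")); // read-only
    console.log(classify("rm -rf /"));           // destructive

Even that only moves the problem, since the classification itself can be wrong, but it's at least a question a non-coder has some chance of answering.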
In my opinion, that isn't something that should be allowed or encouraged.
It didn't help that the LLM was confidently incorrect.
The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.
In the human world, with legacy stuff, you can get into a situation where "everyone knows" the foo setting is actually the setting for Frob; an LLM, though, will happily try to configure Frob directly or, worse, try to implement Foo from scratch.
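A made-up illustration of that kind of trap, sketched in TypeScript (all names hypothetical):

    // The config key is "foo", but the thing it actually drives is Frob.
    // Only tribal knowledge connects the two; nothing in the code says so.
    interface Config { foo: number }        // legacy name nobody ever renamed
    const config: Config = { foo: 5 };

    class Frob {
      constructor(public retries: number) {}
    }

    // "Everyone knows" foo is really the Frob retry count.
    const frob = new Frob(config.foo);

    // An LLM reading this cold tends to either invent a "frob" setting that
    // nothing reads, or scaffold a Foo class that nothing uses.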
I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.
With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).