The problem with this analogy is that it makes no sense.
LLMs aren’t guns.
The problem with using them is that a human still has to review the output for accuracy. That gets tiresome, because the whole point of the LLM is to save you the time and effort of doing the work yourself. So naturally people tend to stop checking and just assume the output is correct, "because the LLM is so good."
Then you get false citations and bogus claims everywhere.
If someone performs a negligent discharge, they are responsible, not Glock. The pistol does have other safety mechanisms to prevent accidental discharges that don't result from a trigger pull.
Also, it's not as powerful as you think. In the past I've spent a lot of time looking at /new and upvoting stories that I think should be surfaced. The vast majority of them still never get anywhere near the front page.
It's a real shame, because some of the best and most relevant submissions don't seem to make it.