If you just say, "here is what the LLM said," then when it turns out to be nonsense you can say something like, "I was just passing along the LLM response, not my own opinion."
But if you take the LLM response and present it as your own, at least there is slightly more ownership over the opinion.
This is kind of splitting hairs but hopefully it makes people actually read the response themselves before posting it.
I have a coworker who does this somewhat often and... I always just feel like saying, well, that's great, but what do you think? What is your opinion?
At the very least the copy-paster should read what the LLM says, interpret it, fact-check it, then write their own response.
"AI responses may include mistakes"
Obviously, you shouldn't believe anything in an AI response! Also, here is an AI response for any and every search you make.
You could argue a lot of semantics, but the majority of fantasy and sci-fi books are not blending the two.
By the numbers, Star Wars is far more grounded as science fiction than Star Trek, but people will insist the former is at best merely "science fantasy." It's really all just vibes.
The best rage bait I've seen in years.
But I'm sure they will sort that out, as I don't have that issue with other Anthropic models.
Meanwhile, articles such as this one (https://news.ycombinator.com/item?id=45412263) get spammed with people yelling for “examples”, such as exactly what’s here.
One thing AI has changed for me (beyond, you know, everything) is making me really depressed about the state of the HN community. It seems HN itself hasn't been immune to the severe social media toxicity pandemic going around... it's merely doing better than the gen-pop alternatives.