If the "legal advisor" detects a potential legal problem, ChatGPT can issue a legal disclaimer and a warning rather than abruptly terminating the conversation. Of course, it could also do plenty of other things, such as lowering the temperature or raising the BS-detection threshold, to adjust the flow of the conversation.
It can work, and it would be better than a hard-coded filter, wouldn't it?
This name thing is an additional layer on top of that, probably because retraining the model from scratch for each name (or fine-tuning, or stuffing the system message with an ever-growing list of names it must not leak) is not very practical.
[1] https://platform.openai.com/docs/guides/moderation/overview
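The layered approach described above can be sketched roughly like this. This is purely a hypothetical illustration, not ChatGPT's actual implementation: the `legal_advisor` keyword check is a toy stand-in for a real classifier such as OpenAI's moderation endpoint [1], and all names and thresholds here are made up.

```python
from dataclasses import dataclass

# Toy stand-in for a real legal-risk classifier (hypothetical).
LEGAL_TRIGGERS = {"lawsuit", "defamation", "libel"}

@dataclass
class GenSettings:
    temperature: float = 0.8

def legal_advisor(text: str) -> bool:
    """Return True if the draft reply looks legally risky (toy heuristic)."""
    return any(word in text.lower() for word in LEGAL_TRIGGERS)

def respond(draft: str, settings: GenSettings) -> tuple[str, GenSettings]:
    """Soften risky replies instead of terminating the conversation."""
    if legal_advisor(draft):
        # Prepend a disclaimer and cool down the sampling temperature
        # for the rest of the conversation, rather than hard-stopping.
        settings.temperature = min(settings.temperature, 0.3)
        return "Disclaimer: this is not legal advice.\n\n" + draft, settings
    return draft, settings

reply, settings = respond("He could face a lawsuit over this.", GenSettings())
```

The point is that the check is a soft intervention layered on top of generation, which is why it degrades more gracefully than a hard-coded filter that simply refuses.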
EDIT: "Advent of Code" is a registered trademark and you should change your name. https://adventofcode.com/2024/about
The thing I’m tired of is elites stealing everything under the sun to feed these models. Funny how copyright matters when it protects elites but not when a billion thefts are committed by LLM folks. It's a poor incentive for creators to create anything if it just gets stolen and replicated by AI.
I’m hungry for more lawsuits. The biggest theft in human history by this gang of thieves should be held to account. I want a waterfall of lawsuits to take back what’s been stolen. It’s in the public’s interest to see that happen.
If there's an algorithmic penalty against the news for whatever reason, that may be a flaw in the HN ranking algorithm.
The mod itself is over 10 years old now, and I think the original developers are gone, which explains why no one was interested in fixing it when Ryan reported it. But this means the mod is now effectively unusable: no one is going to want to risk a full-privilege exploit taking over their PC.
Hopefully this article reaches someone who's a bit more interested in patching the mod.
As a user, it feels like the race has never been as close as it is now. Perhaps dumb to extrapolate, but it makes me lean more skeptical about the hard take-off / winner-take-all mental model that has been pushed.
Would be curious to hear the take of a researcher at one of these firms: do you expect the AI offerings across competitors to become more competitive and clustered over the next few years, or less so?