Only 9 unique email addresses have contributed to the "reply-all" thread within the past couple of hours. I don't know whether any estimates can be extrapolated from that, though.
I can virtually guarantee that there's been a nontrivial amount of GPT-generated content on HN that has not been caught by mods since ChatGPT launched, and likely since GPT-2/3 as well. Dang (and the other mod whose tag I can't recall) already have their hands full trying to keep the tone civil across thousands (tens of thousands?) of comments a day - it's impossible for them to catch every ML-generated comment (some humans actually do write like these newer language models, after all). More than likely they're missing a decent number of them - through no fault of their own, it's just an extremely hard problem.
The three solutions that I'm shilling for this problem are (1) invite trees for HN (like Lobsters has, which make the community much less open but also much more resistant to abuse), (2) webs of trust (not cryptographic, just databases of how much users trust each other) overlaid onto HN and other sites, and (3) people actually reading the content of comments very carefully, upvoting logically sound arguments and downvoting illogical or emotionally manipulative ones. All of these require a lot of effort and social buy-in, though.
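To make option (2) concrete, here's a minimal sketch of how a non-cryptographic web of trust could score users: each user records direct trust in others, and trust propagates transitively with decay per hop. All names and parameters here are hypothetical, just to illustrate the idea:

```python
from collections import deque

def trust_score(graph, source, target, decay=0.5, max_hops=3):
    """Estimate how much `source` trusts `target` by walking direct-trust
    edges breadth-first, attenuating by `decay` per hop and keeping the
    best score found within `max_hops`."""
    best = {source: 1.0}
    queue = deque([(source, 1.0, 0)])
    while queue:
        user, score, hops = queue.popleft()
        if hops == max_hops:
            continue
        for neighbor, direct in graph.get(user, {}).items():
            new_score = score * direct * decay
            if new_score > best.get(neighbor, 0.0):
                best[neighbor] = new_score
                queue.append((neighbor, new_score, hops + 1))
    return best.get(target, 0.0)

# Toy web of trust: alice trusts bob fully, bob trusts carol at 0.8.
web = {"alice": {"bob": 1.0}, "bob": {"carol": 0.8}}
print(trust_score(web, "alice", "carol"))  # 1.0 * 0.5 * 0.8 * 0.5 = 0.2
```

A site could then weight votes or flag low-trust accounts using scores like this, though the hard part is the social layer (getting people to maintain trust ratings at all), not the graph math.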
Wouldn't it be more accurate to say these newer language models actually write like humans? Or is there a subset of the population intentionally trying to write the way these language models write?