"The actual correct way" is an overstatement that misses jfaganel99's point. There are always tradeoffs, and EUDI is no exception: it sacrifices full anonymity to prevent credential sharing. The site can't learn your identity, but it can recognize you across visits and build a behavioral profile under your pseudonym.
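To make the tradeoff concrete, here is a minimal sketch of how pairwise pseudonymous identifiers typically work - this is an illustration of the general technique, not the actual EUDI wallet construction. A wallet-held secret is mixed with the relying party's identifier, so one site always sees the same pseudonym (enabling profiling under it) while two sites cannot link their pseudonyms to each other:

```python
import hmac
import hashlib

def pairwise_pseudonym(user_secret: bytes, relying_party: str) -> str:
    """Derive a stable per-site pseudonym: the same user gets the same
    identifier on every visit to one site, but the identifiers shown to
    different sites cannot be correlated without the wallet secret."""
    return hmac.new(user_secret, relying_party.encode(), hashlib.sha256).hexdigest()

# Hypothetical wallet secret and site names, for illustration only.
secret = b"wallet-held user secret"
forum_id = pairwise_pseudonym(secret, "forum.example")
shop_id = pairwise_pseudonym(secret, "shop.example")

assert forum_id == pairwise_pseudonym(secret, "forum.example")  # stable across visits
assert forum_id != shop_id  # unlinkable across different sites
```

The stability across visits is exactly what enables the per-site behavioral profile described above.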
> it can recognize you across visits and build a behavioral profile under your pseudonym
is the default Discord experience for users with an account, long before age verification entered the chat.
It doesn't really matter what your stance on AI is; the problem is the increased review burden on OSS maintainers.
In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort in your PRs, otherwise they would be easily dismissed at a glance. That is no longer the case, as LLMs can quickly generate PRs that look superficially correct. Effort can still have been put into those PRs, but there is no way to tell without spending time reviewing in more detail.
Policies like this help decrease that review burden by outright rejecting what can be identified as LLM-generated code at a glance. That probably covers a fair bit today, but it will likely get harder over time, so I suspect we will eventually see a shift toward more trust-based models, where you cannot submit PRs unless you have somehow been approved in advance.
Even if we assume LLMs would consistently generate code of good enough quality, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.
Wait but under that assumption - LLMs being good enough - wouldn't the maintainer also be able to leverage LLMs to speed up the review?
Often feels to me like the current state of the argument is missing something.