It's highly recommended to remove the rust...
I even like AI somewhat, some of the things it produces. Pretty pictures, I guess. Sci-fi. But still, it's not engaging anymore; you get saturated very quickly when things are always available.
The internet has seemed pretty much dead for a long, long time already, with a few bastions here and there of maybe-real people talking. I had some minor hope that AI/ML might actually improve things and get rid of some of the bubbles caused by algorithms, but it has actually gotten much, much worse.
AI is not the cause of the decline or rot, but it's definitely accelerating it.
A lot of the things I cared about have been taken over by the loud-and-stupid bunch who yell only in blanket statements and never seem able to produce any sound reasoning or evidence for their claims. I call them bots, despite them likely being confused humans...
The worst thing is that it now seems more and more actual people in the real world are mimicking this behavior: trying to say smart things about topics they know nothing about, because if ChatGPT can produce some smart-sounding lines, why shouldn't I be able to? I am a technology researcher, and I've lost count of the number of times people have handed me vibe-coded or vibe-written total garbage (papers, experiments, etc.) to review or fix. Their capacity to think and reason is diminishing fast, and they get fierce and toxic if you raise this as a concern or point at any of their hallucinations.
I've already experienced a few times that people, like a group of zombies, gang up on me in a debate or argument and all pile onto claims that are trivially proven incorrect. Even if you prove them wrong in front of them, they will just try to eat your brain / prove you wrong by talking louder, etc. - and these are 'highly educated individuals'.
I'm considering leaving my research job, after about 12 years of working my way into such a position, and just going back to where I started: driving a forklift. It's more likely I'd be working with real humans there. And if it's a robot, at least it's a real fuckin' robot, not one of these infiltrator units...
What happens when a team of humans plays against a team of AIs that play under the same conditions, with network lag etc., from a client computer's perspective...
And the AIs consistently beat their human counterparts by being faster to respond, never making mistakes, and never getting tired?
Eventually it could kill all MMOs: fill them up with AI players, "farm" with AI that never sleeps, ruin Counter-Strike-type games online, etc. Another arms race?
For games like CS it's less useful because it will be blatantly aimbotting, and as it gets better it will only be more obvious. You might be able to train it to mimic human mistakes, but I think it will ultimately be easily spotted by other players. For games without the hand-eye coordination, like turn-based games or games with 'global cooldowns', MMORPGs, etc., it will be much harder to identify.
I think normal subtle cheats like ESP, when 'done right', do far more to kill esports than this would.
Also, you can even install Windows on the box; it's actually one of its selling points... if you really want to...
Kernel-level anti-cheat is generally not even needed, so perhaps those companies will now consider rolling proper anti-cheat themselves rather than relying on third-party rubbish that no one asked for.
What I also like about this console development is that it might open the door for other, smaller players to create consoles in the form of mini-PCs with Linux and a gaming layer on top. Maybe there will be (OEM?) partners for Valve that make beefier machines, machines with alternate OSes (Windows + a skin), etc.
It's a different angle that will hopefully open up many things and make it less of an exclusive market between essentially three parties.
I have to say that as a Dutch person it always pains me to see products launch internationally with a Dutch name; for some reason it just feels super cringy. Do other people whose native language isn't English have that as well with their own language?
This was discussed in some detail in the recently published Attacker Moves Second paper*. ML researchers like using Attack Success Rate (ASR) as a metric for model resistance to attack, while for infosec, any successful attack (ASR > 0) is considered significant. ML researchers generally use a static set of tests, while infosec researchers assume an adaptive, resourceful attacker.
* https://arxiv.org/abs/2510.09023
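
To make the difference concrete, here's a minimal sketch (with made-up attack outcomes, not data from the paper) of how the same results get read two ways: the ML-style ASR as an average over a static test set, versus the infosec view that a single success means the defense is broken.

```python
# Toy illustration (hypothetical data) of ASR vs. the "any success" view.

# Each entry is True if a given attack attempt bypassed the model's defenses.
# These values are made up purely for illustration.
attack_succeeded = [False, False, True, False, False, False, False, False]

# ML-researcher framing: Attack Success Rate over a static test set.
asr = sum(attack_succeeded) / len(attack_succeeded)
print(f"ASR = {asr:.1%}")           # 12.5% -- looks like a fairly robust model

# Infosec framing: an adaptive attacker only needs one working exploit.
broken = any(attack_succeeded)
print(f"Defense broken: {broken}")  # True -- the defense is considered defeated
```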