specifically, the GPT-5.3 post explicitly leans into "interactive collaborator" language and steering mid-execution
OpenAI post: "Much like a colleague, you can steer and interact with GPT-5.3-Codex while it’s working, without losing context."
OpenAI post: "Instead of waiting for a final output, you can interact in real time—ask questions, discuss approaches, and steer toward the solution"
Claude post: "Claude Opus 4.6 is designed for longer-running, agentic work — planning complex tasks more carefully and executing them with less back-and-forth from the user."
On further prompting, it did the next step and then terminated early again after printing how it would proceed.
It's most likely just a bug in GitHub Copilot, but it seems weird to me that they add models that clearly don't even work with their agentic harness.
As much as I dislike anti-cheat in general (why incorporate it instead of just having proper moderation and/or private servers? Do you need a sketchy third-party kernel-level driver to police you into "browsing the internet properly in a way that is compliant with company XYZ's policies", or to watch you run other software like a photo editor, word processor, or anything else? It's _your_ software that you bought.), something similar is already happening with, e.g., Widevine bundled in browsers for DRM-ed video streaming.
I agree that having some first-party or reputable anti-cheat driver or system is probably preferable to having different studios roll out their own anti-cheat drivers. (I am aware there are already studio-level or common third-party anti-cheat solutions, such as Denuvo or Vanguard, but I would prefer something better.)
I've heard that with Denuvo, reverse-engineering work needs to be done for each individual target to unprotect it, but I'm not sure whether the same would hold for a first-party anti-cheat driver.