I think that's the wrong guess. Even with chat control, some of the earlier proposals came off the back of lobbying. One such case was Ashton Kutcher's startup: https://www.ftm.eu/articles/ashton-kutchers-non-profit-start...
The more recent chat control proposals were drafted by closed-door "high-level groups" whose membership wasn't revealed to the public: https://mullvad.net/en/why-privacy-matters/going-dark
If a project is important enough to require C or x86 assembly, where memory management and undefined behavior have real consequences, then it’s important enough to warrant a real developer who understands every line. It shouldn’t be vibe coded at all.
Python’s “adorable concern for human problems” isn’t a bug here, it’s a feature. The garbage collection, the forgiving syntax, the interpreted nature: these create a sandbox where vibe-coded solutions can fail safely. A buggy Python script throws an exception. A buggy C program gives you memory corruption or security holes that show up three deployments later.
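To make that concrete, here’s a minimal C sketch (a deliberately contrived, hypothetical example; the variable names are made up) of the failure mode: an off-by-one loop that C happily compiles, where the equivalent Python mistake raises an IndexError on the spot.

    #include <stdio.h>

    int main(void) {
        int buf[4] = {1, 2, 3, 4};
        int checksum = 0;

        /* Off-by-one: i <= 4 writes one element past the end of buf.
           That is undefined behavior; it may silently clobber a
           neighboring variable (here, perhaps checksum) and only
           surface deployments later. The Python equivalent,
           buf[4] = 0, raises IndexError immediately. */
        for (int i = 0; i <= 4; i++)
            buf[i] = 0;

        printf("checksum = %d\n", checksum);  /* garbage, zero, or a crash; it's UB */
        return 0;
    }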
The question isn’t what language AI should write. It’s what problems we should trust to vibe coding. The answer: problems where Python’s safety net is enough. The moment you need C’s performance or assembly’s precision, you’ve crossed into territory that demands human accountability.
I disagree. I write a lot of one-off numerical simulations where something quick and dirty is useful, performance matters, and the results can be easily verified without analyzing every line of code. Python would be a terrible choice.
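Think of something like a Monte Carlo estimate (a rough hypothetical sketch, not a real workload; the sample count is made up): performance matters at large n, yet the output can be checked against a known closed-form value without reading a line of the implementation.

    #include <stdio.h>
    #include <stdlib.h>

    /* Quick-and-dirty Monte Carlo estimate of pi: sample points
       uniformly in the unit square and count the fraction landing
       inside the quarter circle. The result is trivially verifiable
       against 3.14159... without auditing a single line. */
    int main(void) {
        const long n = 100000000;  /* large enough that speed matters */
        long hits = 0;
        srand(42);
        for (long i = 0; i < n; i++) {
            double x = rand() / (double)RAND_MAX;
            double y = rand() / (double)RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
        printf("pi ~= %f\n", 4.0 * hits / n);
        return 0;
    }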