The targeting of this warning is over-broad, preventing honest new app developers from getting traction. That’s bad for the long-term health of Android’s app ecosystem, and it puts Android at a competitive disadvantage against iOS. There’s probably some other team at Google, responsible for improving the Android development experience, that hates this new warning.
As for the harmful outcomes of this warning, it’s good to spread the news far and wide and try to get it fixed.
As for why it got pushed in the first place, it seems to me a symptom of the challenge of coherently managing a hundred thousand employees.
1) Fight the administration in the legal system.
2) Plan for the case where some of those legal fights are lost.
This pressure didn’t exist in computer science because there were plenty of tech jobs for anyone competent (not sure if that’s still true in 2025). And you didn’t need to be a genius to build something cool.
Is there any hope of ever seeing a detailed analysis from engineers of how exactly these contorted prompts are able to twist the models past their safeguards, or is this simply not usually as interesting as I am imagining? I'd really like to see what an LLM Incident Response looks like!