And capability-based security [1] is the long-known, and sadly rarely implemented, solution.
In the trifecta framing, we can't take away the untrusted user input, so the system must not hold both the "private data" and "public communication" capabilities.
The thing is, if you want a secure system, you must throw out entirely the idea that the system can have both capabilities but be restricted by some kind of smart intent filtering, where "only the reasonable requests get through".
This is a political problem: that kind of filtering, were it possible, would be convenient and desirable. So there will always be a market for it, and a market for those who, through corruption or ignorance, will claim they can make it safe.
Cited in other injection articles, e.g. https://simonwillison.net/2023/Apr/25/dual-llm-pattern/
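The capability-separation idea above can be sketched in code. This is a hypothetical illustration (the `Agent` class and capability names are my own, not taken from the cited article): each component is constructed with an explicit capability set, and any action outside that set fails structurally rather than being caught by "smart filtering".

```python
# Minimal sketch of capability separation, assuming a hypothetical
# Agent abstraction (not a real library). The point: the component
# that touches private data simply lacks the ability to exfiltrate.

class CapabilityError(Exception):
    """Raised when an agent attempts an action it was never granted."""

class Agent:
    def __init__(self, capabilities):
        # Capabilities are fixed at construction; there is no way to
        # "talk the agent into" acquiring new ones at runtime.
        self.capabilities = frozenset(capabilities)

    def _require(self, cap):
        if cap not in self.capabilities:
            raise CapabilityError(f"missing capability: {cap}")

    def read_private_data(self, path):
        self._require("private_data")
        return f"<contents of {path}>"  # stub for illustration

    def send_message(self, url, payload):
        self._require("public_communication")
        return f"sent to {url}"  # stub for illustration

# The trifecta says: never all three. Untrusted input is a given,
# so each agent gets at most one of the remaining two capabilities.
quarantined = Agent({"private_data"})          # reads data, cannot exfiltrate
messenger = Agent({"public_communication"})    # talks out, sees no secrets
```

Note the failure mode: if an injected prompt convinces `quarantined` to attempt `send_message`, the call raises `CapabilityError` regardless of how persuasive the prompt was. That structural guarantee is what intent filtering cannot provide.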
However, tech people who think AI is bad, or not inevitable, are really hard for me to understand. It's almost like Bill Gates saying "we are not interested in the internet". This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only conclude that being against it comes from either self-interest or an inability to grasp it.
So suppose I produce something: art, a product, a game, a book. If it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI, does it matter? How does it matter? Is it because they "stole" all the art in the world? Yet when a person is "influenced" by people, ideas, and art in a less efficient way, we somehow applaud that, because what else is there, reinventing the wheel forever?
This is an extremely crude characterisation of what many people feel. Plenty of artists oppose copyright-ignoring generative AI and "get" it perfectly; some even use it in their art, but in ways that avoid the lazy gold-rush mentality we're seeing now.