A similar approach I used in one application: take a simplified query language as input, such as `name*~"john" emp_id>3000`, and run it through a hand-crafted parser to turn it into a SQL query.
I just shipped a feature exactly like this... Jira has the same thing with JQL, which is what inspired my work. It's safe from SQL injection and can be used directly by power users or managed through form inputs for basic search/filtering. We use Elasticsearch for other data atm, but I'm hopeful this new PostgreSQL-only approach wins out, as it makes authz so much simpler when everything composes into one query.
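A minimal sketch of what such a hand-crafted parser could look like. To be clear, this is my own illustration, not either commenter's actual implementation: the field names, the `*~` operator's mapping to `ILIKE`, and the whitelist are all assumptions. The key point is that values never get concatenated into the SQL string; they go through bind parameters.

```python
import re

# Hypothetical sketch -- field names and operator mappings are assumptions.
# Whitelisting fields guards against injection via identifiers, since
# identifiers can't be bind parameters.
ALLOWED_FIELDS = {"name", "emp_id"}
OPS = {
    "*~": "ILIKE",  # assumed: case-insensitive substring match
    ">=": ">=",
    "<=": "<=",
    ">": ">",
    "<": "<",
    "=": "=",
}

# field, operator, then either a quoted string or a bare token
TOKEN = re.compile(r'(\w+)\s*(\*~|>=|<=|>|<|=)\s*("([^"]*)"|\S+)')

def parse(query: str):
    """Turn e.g. 'name*~"john" emp_id>3000' into (where_clause, params)."""
    clauses, params = [], []
    for field, op, raw, quoted in TOKEN.findall(query):
        if field not in ALLOWED_FIELDS or op not in OPS:
            raise ValueError(f"unsupported term: {field}{op}")
        value = quoted if raw.startswith('"') else raw
        if OPS[op] == "ILIKE":
            value = f"%{value}%"
        clauses.append(f"{field} {OPS[op]} %s")  # value stays a bind parameter
        params.append(value)
    return " AND ".join(clauses), params
```

The returned clause and parameter list would then be handed to the database driver (e.g. `cursor.execute(f"SELECT ... WHERE {where}", params)`), which is what keeps user input out of the SQL text entirely.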
Very quickly he went straight to, "Fuck it, the LLM can execute anything, anywhere, anytime, full YOLO".
Part of that is his risk appetite, but it's also partly because anything else is just really frustrating.
Someone who doesn't themselves code isn't going to understand what they're being asked to allow or deny anyway.
To the pure vibe-coder (who doesn't just not read the code, they couldn't read it if they tried), there's no difference between "Can I execute `grep -e foo */*.ts`?" and "Can I execute `rm -rf /`?".
Both are meaningless to them. How do you communicate real risk? Asking vibe-coders to understand the commands isn't going to cut it.
So people just full allow all and pray.
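To make the problem concrete, here's a naive sketch of what "translating commands into risk" might look like: a pattern-based labeler that shows the user a plain-language risk line instead of the raw command. This is purely my own illustration, not anything real agent tooling does, and its fragility is kind of the point: shell commands compose in too many ways for pattern matching to ever be a sound security boundary.

```python
import re

# Hypothetical illustration: these patterns and labels are my own guesses,
# not any real tool's policy. Matching command strings like this is trivially
# bypassable (aliases, subshells, xargs, etc.), which is why it can't replace
# actual sandboxing.
RISK_RULES = [
    (re.compile(r"^\s*rm\s+-rf?\b"), "DANGEROUS: permanently deletes files"),
    (re.compile(r"\bcurl\b.*\|\s*(ba)?sh"), "DANGEROUS: downloads and runs code"),
    (re.compile(r"^\s*(grep|ls|cat|find)\b"), "read-only: inspects files"),
]

def risk_label(cmd: str) -> str:
    """Return a plain-language risk label for a shell command."""
    for pattern, label in RISK_RULES:
        if pattern.search(cmd):
            return label
    return "unknown: could do anything"
```

Even if a labeler like this were accurate, the user still ends up trusting the labeler instead of the command, which just moves the problem.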
That's a security nightmare: it's back to the kind of default-allow, permissive environment we haven't really seen on mass-market, general-purpose, internet-connected devices since Windows 98.
The wider PC industry has gotten very good at UX: most people never need to think about how their computer works, and the platforms still manage to hide most of the security machinery while keeping things secure.
Meanwhile the AI/LLM side is so rough that it basically forces the layperson to open a huge hole they don't understand just to make it work.