So basic HR processes?
Why do I doubt this.
A couple of years back, I was working at Mojang (makers of Minecraft).
We got purchased by Microsoft, which of course meant we had to at least try to migrate away from AWS to Azure. On the surface, it made sense: our AWS bill was pretty steep, iirc into the 6 figures monthly, we could have Azure for free*.
Fast forward about a year, and an uncountable amount of hours spent by both my team, and Azure solutions specialists, kindly lent to us by the Azure org itself, we all agreed the six figure bill to one of corporate daddy's largest competitors would have to stay!
I've written off Azure as a viable cloud provider since then. I've always thought I would have to revaluate that stance sooner or later. Wouldn't be the first time I was wrong!
Dead Comment
Dead Comment
Mind you EA released [some of] the games as freeware back in 2008 so no, you don't have to buy them for the graphics, art, sound, and music assets
Tiberian Dawn GDI https://web.archive.org/web/20110927141135/http://na.llnet.c...
Tiberian Dawn NOD https://web.archive.org/web/20111104060230/http://na.llnet.c...
Tiberian Sun (though no source code was released for this game) https://web.archive.org/web/20110823002110/http://na.llnet.c...
Red Alert Allied https://web.archive.org/web/20100130215623/http://na.llnet.c...
Red Alert Soviet https://web.archive.org/web/20100130220258/http://na.llnet.c...
- Encourage folks to use read-only by default in our docs [1]
- Wrap all SQL responses with prompting that discourages the LLM from following instructions/commands injected within user data [2]
- Write E2E tests to confirm that even less capable LLMs don't fall for the attack [2]
We noticed that this significantly lowered the chances of LLMs falling for attacks - even less capable models like Haiku 3.5. The attacks mentioned in the posts stopped working after this. Despite this, it's important to call out that these are mitigations. Like Simon mentions in his previous posts, prompt injection is generally an unsolved problem, even with added guardrails, and any database or information source with private data is at risk.
Here are some more things we're working on to help:
- Fine-grain permissions at the token level. We want to give folks the ability to choose exactly which Supabase services the LLM will have access to, and at what level (read vs. write)
- More documentation. We're adding disclaimers to help bring awareness to these types of attacks before folks connect LLMs to their database
- More guardrails (e.g. model to detect prompt injection attempts). Despite guardrails not being a perfect solution, lowering the risk is still important
Sadly General Analysis did not follow our responsible disclosure processes [3] or respond to our messages to help work together on this.
[1] https://github.com/supabase-community/supabase-mcp/pull/94
[2] https://github.com/supabase-community/supabase-mcp/pull/96
[3] https://supabase.com/.well-known/security.txt