Deleted Comment
The controls in the actual proposal are less reasonable: they create finable infractions for any claim in a job ad deemed "misleading" or "inaccurate" (findings of fact that requires a an expensive trial to solve) and prohibit "perpetual postings" or postings made 90 days in advance of hiring dates.
The controls might make it harder to post "ghost jobs" (though: firms posting "ghost jobs" simply to check boxes for outsourcing, offshoring, or visa issuance will have no trouble adhering to the letter of this proposal while evading its spirit), but they will also impact firms that don't do anything resembling "ghost job" hiring.
Firms working at their dead level best to be up front with candidates still produce steady feeds of candidates who feel misled or unfairly rejected. There are structural features of hiring that almost guarantee problems: for instance, the interval between making a selection decision about a candidate and actually onboarding them onto the team, during which any number of things can happen to scotch the deal. There's also a basic distributed systems problem of establishing a consensus state between hiring managers, HR teams, and large pools of candidates.
If you're going to go after "ghost job" posters, you should do something much more targeted to what those abusive firms are actually doing, and raise the stakes past $2500/infraction.
- Encourage folks to use read-only by default in our docs [1]
- Wrap all SQL responses with prompting that discourages the LLM from following instructions/commands injected within user data [2]
- Write E2E tests to confirm that even less capable LLMs don't fall for the attack [2]
We noticed that this significantly lowered the chances of LLMs falling for attacks - even less capable models like Haiku 3.5. The attacks mentioned in the posts stopped working after this. Despite this, it's important to call out that these are mitigations. Like Simon mentions in his previous posts, prompt injection is generally an unsolved problem, even with added guardrails, and any database or information source with private data is at risk.
Here are some more things we're working on to help:
- Fine-grain permissions at the token level. We want to give folks the ability to choose exactly which Supabase services the LLM will have access to, and at what level (read vs. write)
- More documentation. We're adding disclaimers to help bring awareness to these types of attacks before folks connect LLMs to their database
- More guardrails (e.g. model to detect prompt injection attempts). Despite guardrails not being a perfect solution, lowering the risk is still important
Sadly General Analysis did not follow our responsible disclosure processes [3] or respond to our messages to help work together on this.
[1] https://github.com/supabase-community/supabase-mcp/pull/94
[2] https://github.com/supabase-community/supabase-mcp/pull/96
Deleted Comment
I've got a small question. How do you deal with people asking for open sourcing your product/code, claiming they don't want to use a product they don't control?
I think that's how the JavaScript Temporal proposal works. Convert your instant to the timezone, make the comparisons/calculations, hope you didn't jump an hour due to summertime, convert back.