abeisgreat · a year ago
I worked at Firebase for many years, and the concerns with security rules have always plagued the product. We tried a lot of approaches (self-expiring default rules, more education, etc.) but at the end of the day we still see a lot of insecure databases.

I think the reasons for this are complex.

First, security rules as implemented by Firebase are still a novel concept. A new dev joining a team and adding data into an existing location probably won’t go back and fix the rules to reflect that the privacy requirements of that data have changed.

Second, without the security through obscurity created by random in-house implementations of backends, scanning en masse becomes easier.

Finally, security rules are just hard. Especially for Realtime Database, they are hard to write and don’t scale well. This comes up a lot less than you’d think, though: any time automated scanning is used, it’s just looking for open data, and anything beyond “read write true” (as we called it) would have prevented this.

Technically there is nothing wrong with the Firebase approach but because it is one of the only backends which use this model (one based around stored data and security rules), it opens itself up to misunderstanding, improper use, and issues like this.
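For anyone who hasn't seen it, "read write true" is literally this Realtime Database rules file, which grants the whole world read and write access to everything:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

Even the minimal step up of changing those values to the expression `"auth != null"` (any signed-in user) would likely have stopped this kind of unauthenticated mass scanning.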

supriyo-biswas · a year ago
To be honest I've always found the model of a frontend being able to write data into a database highly suspect, even with security rules.

Unlike a backend, where the rules for validation and security are visible and part of the specification, Firebase's security rules are something one can easily forget, as they live in a separate process and have to be reevaluated as part of every new feature developed.

Kiro · a year ago
Yeah, I've never understood how this concept can work for most applications. In everything I build, I always need to do something with the input before writing it to the database. Security rules alone are not enough.

What kind of apps are people building where you don't need backend logic?

piotrkaminski · a year ago
Our experience has been very different. Our Firebase security rules are locked down tight, so any new properties or collections need to be added explicitly for a new feature to work — it can't be "forgotten". Doing so requires editing the security rules file, which immediately invites strict scrutiny of the changed rules during code review.

This is much better than trying to figure out what are the security-critical bits in a potentially large request handler server-side. It also lets you do a full audit much more easily if needed.

winwang · a year ago
Are you suggesting that it's essentially too easy for a dev to just set and forget? That's a pretty interesting viewpoint. Not sure how any BaaS could solve that human factor.
xyzeva · a year ago
We tried to contact Google via support, to get help or to have them disclose the issues to the websites. We got no response other than a reply saying they would create a feature request on our behalf if we wanted, rather than actually helping. Which is fair, as I think we'd have to escalate pretty far up in Firebase to get the attention of someone who could alert project owners.
abeisgreat · a year ago
One of the things we fought for, for years after the acquisition, was to maintain a qualified staff of full-time, highly paid support people capable of identifying and escalating issues like this with common sense.

This is a battle we slowly lost. It started with all of support being handled by the original team, then went to 3-4 full-time staff plus some contractors, then to entirely contractors (as far as I’m aware).

This was a big sticking point for me. I told them I did not believe we should outsource support, but they did not believe we should have support for developer products at all, so I lost to that “compromise.” After that I volunteered to do the training of the support teams myself, which involved traveling to Manila, Japan, and Mexico regularly. This did help, but like support as a whole, it was a losing battle and quality has declined over time.

Your experience is definitely expected and perhaps even by design. Sadly this is true across Google, if you want help you’d best know a Googler.

0xdeadbeefbabe · a year ago
> which is fair as I think we'd have to escalate pretty far up in Firebase to get the attention of someone who could alert project owners.

This raises the question: isn't this a security vulnerability after all?

seanwilson · a year ago
Looking at https://firebase.google.com/docs/rules/basics, would it be practical to have a "simple security mode" where you can only select from preset security rule templates? (like "Content-owner only" access or "Attribute-based and Role-based" access from the article) Do most apps need really custom rules or they tend to follow similar patterns that would be covered by templates?

A big problem with writing security rules is that almost any mistake is going to be a security problem so you really don't want to touch it if you don't have to. It's also really obvious when the security rules are locked down too much because your app won't function, but really non-obvious when the security rules are too open unless you probe for too much access.

Related idea: force the dev to write test case examples for each security rule where the security rule will deny access.
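For reference, the "Content-owner only" template from that page boils down to something like this (paraphrased from the linked docs; the collection name is an assumption):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the authenticated owner of a document can read or write it.
    match /users/{userId} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }
  }
}
```

A small set of templates like this one probably covers the majority of apps, and a guided picker would at least prevent the "read write true" failure mode.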


piotrkaminski · a year ago
One simple trick helped us a lot: we have a rules transpiler (fireplan) that adds a default "$other": {".read": false, ".write": false} rule to _every_ property. This makes it so that any new fields must be added explicitly, making it all but impossible to unknowingly "inherit" an existing rule for new values. (If you do need a more permissive schema in some places you can override this, of course.)

Our use of Firebase dates back 10+ years so maybe the modern rules tools also do this, I don't know.
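Concretely, the transpiled output presumably looks something like this (a sketch, not fireplan's actual output; paths and expressions are made up):

```json
{
  "rules": {
    "users": {
      "$uid": {
        "name": {
          ".read": "auth.uid === $uid",
          ".write": "auth.uid === $uid"
        },
        "$other": {
          ".read": false,
          ".write": false
        }
      }
    }
  }
}
```

The `$other` wildcard catches any child not named by a sibling rule, so adding a new field without touching the rules file fails loudly instead of silently inheriting access.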

What would really help us, though, would be:

1. Built-in support for renaming fields / restructuring data in the face of a range of client versions over which we have little control. As it is, it's really hard to make any non-backwards-compatible changes to the schema.

2. Some way to write lightweight tests for the rules that avoids bringing up a database (emulated or otherwise).

3. Better debugging information when rules fail in production. IMHO every failure should be logged along with _all_ the values accessed by the rule, otherwise it's very hard to debug transient failures caused by changing data.

nness · a year ago
I've been an advocate for Firebase and Firestore for a while — but will agree to all of these points above.

It's a conceptual model that is not sufficiently explained. How we talk about it on our own projects is that each collection should have a conceptual security profile (is it public, user data, public-but-auth-only, admin-only, etc.), and then we use security rule functions to enforce these categories instead of writing a bespoke set of conditions for each collection.

Thinking about security per-collection instead of per-field mitigates mixing security intent on a single document. If the collection is public, it should not contain any fields that are not public, etc. Firestore triggers can help replicate data as needed from sensitive contexts to public contexts (but never back.)

The problem with this approach is that we need to document the intent of the rules outside of the rules themselves, which makes it easy to incorrectly apply the rules. In the past, writing tests was also a pain — but that has improved a lot.
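A sketch of that per-collection profile approach in Firestore rules (collection names and the admin claim are assumptions, not our actual setup):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    function signedIn() { return request.auth != null; }
    function isAdmin() { return signedIn() && request.auth.token.admin == true; }

    // "Public" profile: anyone reads, only admins write.
    match /articles/{id} {
      allow read: if true;
      allow write: if isAdmin();
    }

    // "User data" profile: owner only.
    match /profiles/{uid} {
      allow read, write: if signedIn() && request.auth.uid == uid;
    }
  }
}
```

The win is that each `match` block names a profile rather than restating conditions, so a reviewer only has to check that the collection is assigned the right profile.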

mh8h · a year ago
It's not that difficult to build a scanner into the Firebase dashboard. Ask the developer to provide their website address, do a basic scan to find the common vulnerability cases, and warn them.
abeisgreat · a year ago
Firebase does that; the problem is that "warning them" isn't as simple as it sounds. Developers ignore automated emails and rarely, if ever, open the dashboard. Figuring out how to contact the developers using the platform (and get them to care) has been an issue with every developer tool I've worked on.
andenacitelli · a year ago
It also makes portability a pain. Switching from an app with Firebase calls littered through the frontend and data consistency issues to something like Postgres is a lengthy process.
ben_jones · a year ago
Firebase attracts teams that don’t have the experience to stand up a traditional database - which at this point is a much lower bar thanks to tools like RDS. That is a giant strobing red light of a warning for what security expectations should be for the average setup. No matter what genius features the Firebase team may create, this was always going to be a support and education battle that Google wasn’t going to fully commit to.
markhalonen · a year ago
At Steelhead we use RLS (row-level security) to secure a multi-tenant Postgres DB. The coolest check we do: create a new tenant and run a DB dump with RLS enabled, then ensure the dump is empty. That validates all security policies in one fell swoop.
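A sketch of that setup (table, column, and setting names are made up; Steelhead's actual policies will differ):

```sql
-- Every tenant-owned table carries a tenant_id; the app sets the
-- current tenant per connection via a session setting.
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;  -- apply policies to the table owner too

CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_setting('app.tenant_id')::uuid);
```

The empty-dump check then becomes: create a fresh tenant, set `app.tenant_id` to its id, and run `pg_dump --enable-row-security`; if any rows from other tenants show up in the dump, some policy is leaking.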
kjuulh · a year ago
The security rules were where I fell out of love with Firebase. Not that there is anything wrong with the security itself, but up until the point of having to write those security rules, the product experience felt magical: so easy to use, only one app to maintain, pretty much.

But with the Firebase security rules, I now pretty much have half of a server implemented just to get the rules working properly, especially for more complex lookups. And for those rules, the tooling simply wasn't as great as using TypeScript or the like.

I haven't used Firebase in years though, so I don't know if it has gotten easier.

cryptonector · a year ago
Firebase needs something like RLS (row-level security). It needs to be real easy to write authorization rules in the database, in SQL (or similar), if you're going to have apps that directly access the database instead of accessing it via a proxy that implements authorization rules.
xyzeva · a year ago
I agree! Supabase does it pretty well.
mistrial9 · a year ago
very well-spoken arguments for a fundamental need for structural diversity, not monoculture, on the net
brazzy · a year ago
I don't see the comment arguing for that at all, and I don't think the analogy to crop monocultures being more vulnerable to pests really holds.

There are good reasons we don't accept "security through obscurity" as valid, and just because "structural diversity" makes automated scanning harder doesn't mean it can't be done. See Shodan.

Logykk · a year ago
I view the issue as more of a poor UX choice than anything else. Firebase's interface consists entirely of user-friendly sliders and toggles EXCEPT for the security rules, which is just a flimsy config file. I can understand why newer devs might avoid editing the rules as much as possible and set the bare minimum required to make warnings go away, regardless of whether they're actually secure or not. There should be a more graphical and user-friendly way to set security rules, and devs should be REQUIRED to recheck and confirm them before any other changes can be applied.
begueradj · a year ago
This reminds me of "How I pwned half of America’s fast food chains, simultaneously." https://mrbruh.com/chattr/

HN: https://news.ycombinator.com/item?id=38933999

MrBruh · a year ago
That's because I wrote both of those articles, and this is the sequel to that blog post. :P
begueradj · a year ago
Indeed :) Same authors ... referring to each other :)
hk__2 · a year ago
This is normal, the 6th word of the blog post is a link to it.

> After the initial buzz of [pwning Chattr.ai] had settled down, […]

rjbwork · a year ago
Correct me if I'm wrong, but 75% of sites with these vulns are still just hanging out there ready to be dumped, according to the end of this post?

Insane.

Some days I think one ought to be licensed to touch a computer.

Aurornis · a year ago
Many businesses don’t have full time developers. They contract out to agencies who build the website for them. The agencies have a rotating cast of developers and after the initial encounter with their good devs they try to rotate the least experienced developers into handling the contract (unless the company complains, which many don’t).

The vulnerability emails probably got dismissed as spam, or forwarded on and ignored, or they’re caught in some PM’s queue of things to schedule meetings about with the client so they can bill as much as possible to fix it.

> Some days I think one ought to be licensed to touch a computer.

There are plenty of examples of fields where professional licensing is mandatory but you can still find large numbers of incompetent licensed people anyway. Medical doctors have massive education and licensing requirements, but there is no shortage of quack doctors and licensed alternative medicine practitioners anyway.

xyzeva · a year ago
Sadly, this is true, and there's probably much more. We did our best: we sent customized emails to each of them, telling them what was affected, how to fix it, and how to get in contact.
MrBruh · a year ago
Correct, that's why we couldn't post a list of affected sites or malicious actors would immediately abuse it :/
wavemode · a year ago
It seems reasonable to assume that the exposed information has already fallen into the wrong hands. Might as well post the list at this point (or at some point, at least) so that any users of those sites can become aware, no?
prmoustache · a year ago
Shouldn't encrypting all database records be the only sane, safe, and legal solution, with the decryption key sent to law enforcement local to the website owner when site owners aren't responsive?

Not saying you should do that given the current state of the laws.

tgv · a year ago
It'll now take them 2-3 weeks to get the details.
bsder · a year ago
Until PII compromise puts a company out of business, it's just a cost (to you) of doing business (for them).
xyzeva · a year ago
True, but also better than threat actors getting to it and dumping the DB, causing more problems for the customers.
itqwertz · a year ago
This is the inevitable outcome of picking cheap-fast from the cheap-fast-good PM triangle. Unfortunately for some customers/users, their concerns were left out of the conversation and their PII is the cost.

I’d be wary of any company listed here that made that decision and hasn’t changed leadership, as it has been proven time and time again that many companies simply don’t care about customers enough to protect them. History repeats itself.

xyzeva · a year ago
I agree for the most part, but there were some good apples (even though very few) that were very thankful and fixed it fast.
simonw · a year ago
I have a very basic Firebase question: are most of the apps described in this post implemented entirely as statically hosted client-side JavaScript with no custom server-side code at all - the backend is 100% a hosted-by-Google Firebase configuration?

If so, I hadn't realized how common that architecture had become for sites with millions of users.

evantbyrne · a year ago
Yeah. Either entirely client-side or passing through a server naively. This is the inevitable result of having an "allow by default" security model in an API. Unfortunately, insecure defaults are a common theme with libraries targeted at JavaScript developers. GraphQL is another area I would expect to see these kinds of issues.
hazelnut · a year ago
> with no custom server-side code at all

Could be a mix. Firebase also offers Firebase Functions which are callable functions in the cloud. That code is not public.

However, Firestore and the Firebase Realtime Database both require the user to set up security rules. Otherwise all data can be read by anybody.
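To make that concrete: the Realtime Database exposes a REST API, so "read by anybody" means a plain HTTP GET with no SDK or credentials. A minimal sketch (the URL pattern is the well-known REST convention; the helper names and project id are made up):

```python
# Hypothetical sketch of probing a Realtime Database for public read access.
# Appending ".json" to any database path returns that subtree via REST.

def probe_url(project: str, path: str = "") -> str:
    """Build the public REST read URL for a Realtime Database path."""
    return f"https://{project}.firebaseio.com/{path}.json"

def looks_open(status_code: int, body: str) -> bool:
    """A 200 with data, rather than a 'Permission denied' error,
    suggests the database allows unauthenticated reads."""
    return status_code == 200 and "Permission denied" not in body

# Usage would be e.g. resp = requests.get(probe_url("some-project"))
# and then looks_open(resp.status_code, resp.text).
```

With "read write true" rules, that single GET at the root path dumps the entire database.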

cryptonector · a year ago
That's a pretty crazy set-up, but it can work if appropriate authorization rules are coded into the SQL schema on the backend.

Writing appropriate authz rules on the backend has to be made easy.

johnnyAghands · a year ago
900 Sites, 125 million accounts, 1 vulnerability, 0 Girlfriends.
voidUpdate · a year ago
Apart from the customer service agent that tried to flirt with them :P
HaZeust · a year ago
Great comment, you really showed them!
MrBruh · a year ago
Bro woke up on the wrong side of the bed
throwaway984393 · a year ago
Bro woke up with no gf :'(
rfl890 · a year ago
The customer support gave me a good laugh. Thanks
maipen · a year ago
Stuff like this makes me thankful to have chosen password managers and virtual cards a long time ago...

Still, this makes the internet scarier. Most people don't have a clue how fragile the web is and how vulnerable they are.

user90131313 · a year ago
Somehow my assumption is it will only get worse from here, with AI agents looking for exploits much more efficiently than bots do. A weird future is waiting.
xyzeva · a year ago
Yeah, funny how that works.

As time goes on, services make building websites easier and abstract more away, which makes devs oblivious to what they still have to configure.

tamimio · a year ago
> chosen password managers

It’s not enough; make sure to use a unique email for each service you sign up for. This limits the damage in case of an incident and protects your privacy, as no one can perform OSINT on you to cross-reference other services. Additionally, I’ve found that you can sometimes detect a site breach before the owners do, when you receive a malicious email at that unique address.

maipen · a year ago
> make sure to use a unique email for each service you sign up for.

Unfortunately, that's a big hassle that I am not willing to go through.

Apple's approach to pseudo emails was very nice and, in my experience, works very well, but as a mainly PC user I can't take advantage of it.

Do you know or recommend a service for this that's easy and fast to use?