I understand Reuters taking down the article, but why would Lawfare or other blogs comply? What could an Indian court possibly do to someone entirely outside of their jurisdiction? It seems like the most appropriate response is "no, and we're going to publish the demand letters," which is exactly what techdirt did.
Let Rajat Khare - the guy who is likely behind Appin - file in a US (or EU, or wherever) court. In the US, at least, he'd have to provide some evidence that the article isn't true, which he probably can't.
Fuck Rajat Khare and Appin, the hacking company he almost certainly controls.
> A new joint investigation by The Markup and Wired...
And when I go to the page describing the actual investigation by The Markup [1]:
> Our investigation stopped short of analyzing precisely how effective Geolitica’s software was at predicting crimes because only 2 out of 38 police departments provided data on when officers patrolled the predicted areas. Geolitica claims that sending officers to a prediction location would dissuade crimes through police presence alone. It would be impossible to accurately determine how effective the program is without knowing which predictions officers responded to and which ones they did not respond to.
Also, later in the article
> Plainfield officials said they never used the system to direct patrols.
Given all this, it's somewhat simplistic to say it's "pretty terrible at predicting crimes", even though that makes for a good clickbait headline. It seems that the software was intended to identify high-crime areas to target for patrolling, which doesn't seem like a huge problem to me -- but it also seems like the software was never actually used as intended in the first place.
----------------------------------------
[1] https://themarkup.org/prediction-bias/2023/10/02/predictive-...
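To make the quoted point concrete: measuring effectiveness would require knowing which predictions officers actually responded to, which only 2 of 38 departments provided. A rough sketch of the comparison one could run with that data, using entirely made-up toy numbers (all names and figures here are hypothetical, not from the investigation):

```python
# Hypothetical sketch: how effectiveness *could* be measured if departments
# shared which predicted (area, shift) slots officers actually patrolled.

def hit_rate(predictions, crimes):
    """Fraction of predicted (area, shift) slots where a crime occurred."""
    predicted = set(predictions)
    return sum(1 for slot in predicted if slot in crimes) / len(predicted)

def deterrence_gap(patrolled, unpatrolled, crimes):
    """Crime rate in unpatrolled predicted slots minus patrolled ones.
    A large positive gap would support the claim that police presence
    alone dissuades crime; near zero would undercut it."""
    return hit_rate(unpatrolled, crimes) - hit_rate(patrolled, crimes)

# Toy data: (area, shift) tuples
patrolled = [("A", 1), ("B", 1)]
unpatrolled = [("C", 1), ("D", 1)]
crimes = {("C", 1), ("D", 1)}

print(deterrence_gap(patrolled, unpatrolled, crimes))  # 1.0 with this toy data
```

Without the patrol-response data, neither term of that difference can be computed, which is exactly the gap The Markup describes.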
* Fizz appears to be a client/server application (presumably a web app?)
* The testing the researchers did was of software running on Fizz's servers
* After identifying a vulnerability, the researchers created administrator accounts using the database access they obtained
* The researchers were not given permission to do this testing
If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
At least three things mitigate their legal risk:
1. It's very clear from their disclosure and behavior after disclosing that they were in good faith conducting security research, making them an unattractive target for prosecution.
2. It's not clear that they did any meaningful damage (this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist), meaning there wouldn't have been much to prosecute.
3. Fizz's lawyers fucked up and threatened a criminal prosecution in order to obtain a valuable concession from the researchers, which, as EFF points out, violates a state bar rule.
I think the good guys prevailed here, but I'm wary of taking too many lessons from this; if this hadn't been "Fizz", but rather the social media features of Dunder Mifflin Infinity, the outcome might have been gnarlier.
Last year, the Department of Justice updated its CFAA charging policy to not pursue charges against people engaged in "good-faith security research." [1] The CFAA is famously over-broad, so a DOJ policy is nowhere near as good as amending the law to make the legality of security research even clearer. Also, this policy could change under a new administration, so it's still risky—just less risky than it was before they formalized this policy.
[1] https://www.justice.gov/opa/pr/department-justice-announces-...
> Representatives from Germany—a country that has staunchly opposed the proposal—said the draft law needs to explicitly state that no technologies will be used that disrupt, circumvent, or modify encryption. “This means that the draft text must be revised before Germany can accept it,” the country said.
As far as I can tell, Signal uses Twilio only to send SMS for phone number verification. Verification happens when a user registers a new number or changes the number on their existing account.
The rate at which Signal is adding new users could be calculated by:
1900 * (proportion of new registrants among SMS recipients) / (length of Twilio incident)
You could probably make some common-sense assumptions about the first variable. But I can't find any publicly available info on when Twilio was first compromised. Their press release only mentions that they discovered the intrusion on August 4, which is presumably close to the end date of the incident. Does anyone know what the estimated start of the incident might be?
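As a back-of-the-envelope sketch of the formula above (the 1,900 figure is from Signal's disclosure; the registrant share and incident length are placeholder assumptions, since neither is public):

```python
# Rough estimate of Signal's new-user rate from the Twilio incident numbers.
# Only the 1,900 figure is sourced; the other two inputs are assumptions.

AFFECTED_SMS_RECIPIENTS = 1900   # phone numbers potentially exposed (per Signal)
NEW_REGISTRANT_SHARE = 0.5       # assumption: half were new sign-ups, not re-registrations
INCIDENT_LENGTH_DAYS = 7         # assumption: roughly a week of exposure

new_users_per_day = (
    AFFECTED_SMS_RECIPIENTS * NEW_REGISTRANT_SHARE / INCIDENT_LENGTH_DAYS
)
print(f"~{new_users_per_day:.0f} new registrations/day under these assumptions")
```

Both assumed inputs swing the estimate linearly, so pinning down the actual start date of the intrusion matters a lot more than refining the registrant share.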