As much as I don't like facebook as a company, I think the jury reached the wrong decision here. If you read the complaint[1], "eavesdropped on and/or recorded their conversations by using an electronic device" basically amounted to "flo using facebook's sdk and sending custom events to it" (page 12, point 49). I agree that flo should be raked over the coals for sending this information to facebook in the first place, but ruling that facebook "intentionally eavesdropped" (exact wording from the jury verdict) makes zero sense. So far as I can tell, flo sent facebook menstrual data without facebook soliciting it, and facebook specifically has a policy against sending medical/sensitive information using its SDK[2]. Suing facebook makes as much sense as suing google because it turned out a doctor was using google drive to store patient records.
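For anyone unfamiliar with the mechanics being argued over: "custom events" means the app itself picks an event name and properties for each user action, and the analytics SDK ships them to the vendor's servers along with app and device identifiers. A minimal sketch of the idea, using a made-up endpoint and field names rather than Facebook's real API or wire format:

```python
# Illustrative sketch only: hypothetical endpoint and field names,
# not Facebook's actual SDK or wire format.
import requests

ANALYTICS_URL = "https://analytics.example.com/app-events"  # hypothetical

def send_custom_event(app_id: str, advertiser_id: str, name: str, properties: dict) -> None:
    """Ship one developer-named analytics event to the vendor.

    The event name and its properties are chosen entirely by the app
    developer; that is how a label along the lines of
    "R_PREGNANCY_WEEK_CHOSEN" can land on a third party's servers,
    keyed to a device-level advertising identifier.
    """
    payload = {
        "app_id": app_id,                # identifies the sending app
        "advertiser_id": advertiser_id,  # device ad identifier
        "event_name": name,              # developer-chosen label
        "event_properties": properties,  # developer-chosen payload
    }
    requests.post(ANALYTICS_URL, json=payload, timeout=5)

# The vendor never solicited this specific event; the app chose to send it.
send_custom_event("123456789", "ad-id-abc", "R_PREGNANCY_WEEK_CHOSEN", {"week": 8})
```

The vendor defines the pipe, but the app decides what flows through it, which is exactly what the rest of the thread argues about.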
At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.
The amended complaint, [3], includes the allegations against Facebook, since by that point Facebook had been added as a defendant in the case.
Amongst other things, the amended complaint points out that Facebook's behavior continued for years (into 2021) after the practice was publicly disclosed (2019), and that even after the FTC forced Flo to stop and congressional investigations were launched (2021), Facebook refused to review and destroy the data that had previously been improperly collected.
I'd also be surprised if discovery didn't provide further proof that Facebook was aware of the sort of data they were gathering here...
>At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.
Are you talking about this?
>As one of the largest advertisers in the nation, Facebook knew that the data it received from Flo Health through the Facebook SDK contained intimate health data. Despite knowing this, Facebook continued to receive, analyze, and use this information for its own purposes, including marketing and data analytics.
Maybe something came up in discovery that documents the extent of this, but on its own this doesn't really prove much. The plaintiffs are just assuming that because there's a clause in the ToS allowing it, facebook must have been using the data for advertising.
Facebook isn't guilty because Flo sent medical data through their SDK. If they were just storing it or operating on it for Flo, then the case probably would have ended differently.
Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so. They knew, or should have known, that they needed to check if it was legal to use it, but they didn't, so they were found guilty.
>Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so.
What exactly did this entail? I haven't read all the court documents, but at least in the initial/amended complaint the plaintiffs didn't make this argument, probably because it's totally irrelevant to the charge of whether they "intentionally eavesdropped" or not. Either they were eavesdropping or they weren't. Whether they were using it for advertising purposes might be relevant in armchair discussions about whether meta is evil, but it shouldn't be relevant when it comes to the eavesdropping charge.
>They knew, or should have known, that they needed to check if it was legal to use it
I don't like to defend facebook either but where does this end? Does google need to verify each email it sends in case it contains something illegal? Or AWS before you store something in a publicly accessible S3 bucket?
I wish there were information about who at Facebook received this data and “used” it. I suspect it was mixed in with 9 million other sources of information, and no human at Facebook was even aware it was there.
I would say you have a responsibility to ensure you are getting legal data; you don't buy stolen things. That is, Meta has a responsibility to ensure that they are not partnering with crooks. Flo gets the largest share of the blame, but Meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they made you understand them.)
>Flo gets the largest share of the blame, but Meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they made you understand them.)
Court documents say that they blocked access as soon as they were aware of it. They also "built out its systems to detect and filter out 'potentially health-related terms.'" Are you expecting more, like some sort of KYC/audit regime before you can get an API key? Isn't that exactly the sort of thing people were railing against when indie/OSS developers were being hassled by the Play Store to undergo expensive audits to get access to sensitive permissions?
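For what it's worth, a "potentially health-related terms" filter of that kind is inherently crude, because the vendor is pattern-matching on names the app developer invented. A toy sketch (my own illustration, not Facebook's actual system) of how such a filter both misses things and over-blocks:

```python
# Toy illustration only; not Facebook's actual filtering system.
HEALTH_TERMS = {"pregnancy", "pregnant", "ovulation", "menstrual", "period"}

def looks_health_related(event_name: str) -> bool:
    # Developer-chosen event names like "R_PREGNANCY_WEEK_CHOSEN" get split
    # into tokens and checked against a fixed term list.
    tokens = event_name.lower().replace("-", "_").split("_")
    return any(token in HEALTH_TERMS for token in tokens)

print(looks_health_related("R_PREGNANCY_WEEK_CHOSEN"))  # True  (caught)
print(looks_health_related("R_CYCLE_DAY_SELECTED"))     # False (slips through)
print(looks_health_related("PERIOD_DRAMA_WATCHED"))     # True  (false positive)
```

Abbreviations, in-app jargon, and non-English strings slip straight through a list like this, which is why a term filter alone rarely settles the question of what a vendor "knew" it was receiving.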
I have the type of email address that regularly receives email meant for other people with a similar name: invites, receipts, and at one point someone's Disney+ account.
At one point I was getting a stranger's fertility app updates - I didn't know her name, but I could tell you where she was in her cycle.
I've also had NHS records sent to me, again entirely unsolicited, although those had enough detail that I could find who they were meant for and inform them of the data breach.
I'm no fan of facebook, but I'm not sure you can criminalise receiving data; you can't control what others send you.
That's why in these cases you'd prefer a judgment without a jury. Technical cases like this will always confuse jurors, who can't be expected to understand details about SDKs, data sharing, APIs, etc.
On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
>> Technical cases like this will always confuse jurors.
This has been an issue since the internet was invented. It's always been the duty of the lawyers on both sides to present the information in cases like this in a manner that is understandable to the jurors.
I distinctly remember that during the OJ case, the media reported that many issues were presented in such exhaustive detail that many of the jurors seemed to check out. The prosecution spent days just on the DNA evidence. The defense, in turn, spent days just on how the LAPD collected evidence at the crime scene, with the same effect: the deeper the defense dug in, the more the jury seemed to tune out.
So it's not just technical cases; any kind of court case that requires a detailed understanding of anything complex comes down to how the lawyers present it to the jury.
> Technical cases like this will always confuse jurors... On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
Not to be ageist, but I find this highly counterintuitive.
I've also heard you want a judge trial if you're innocent, jury if you're guilty. A judge will quickly see through prosecutorial BS if you didn't do it, and if you did, it only takes one to hang.
Is it easier for the prosecution to make the jury think Facebook is guilty or for Facebook to make the jury think they are not? I don’t see why one would be easier, except if the jury would be prejudiced against Facebook already. Or is it just luck who the jury sides with?
Suing Facebook instead of Flo makes perfect sense, because Facebook has much more money. Plus juries are more likely to hate FB than a random menstruation company.
Whenever you think of a court versus Facebook, imagine one of these mini mice trying to stick it to a polar bear. Or a goblin versus a dragon, or a fly versus an elephant.
These companies are for the most part effectively outside of the law. The only time they feel pressure is when they can lose market share, and there's risk of their platform being blocked in a jurisdiction. That's it.
>These companies are for the most part effectively outside of the law
You have it wrong in the worst way. They are wholly inside the law because they have enough power to influence the people and systems that get to use discretion to determine what is and isn't inside the law. No amount of screeching about how laws ought to be enforced will affect them because they are tautologically legal, so long as they can afford to be.
The worst part for me personally is that almost everyone I know cares about this stuff and yet they keep all of their Meta accounts. I really don't get it and frankly, find it kind of disturbing.
I know people that don't see anything wrong with Meta so they keep using it. And that's fine! Your actions seem to align with your stated values.
I get human fallibility. I've been human for a while now, and wow, have I made some mistakes and miscalculations.
What really puts a bee in my bonnet though is how dogmatic some of these people are about their own beliefs and their judgement of other people.
I love people, I really do. But what weird, inconsistent creatures we are.
Voting with your feet doesn't work if you don't have a place to go. People are afraid of losing their connections, which are some of the most precious things we have. Doesn't matter if it's an illusion, that's enough. Zuck is holding us hostage on our most basic human instincts. I think that's fucked up.
> The worst part for me personally is that almost everyone I know cares about this stuff and yet they keep all of their Meta accounts.
They care as much as people who claim to care about animals but still eat them, or people who claim to love their wives and still beat them or cheat on them. Your actions are the sole embodiment of your beliefs.
Eh, I care and I don't use it, but my wife does. I do not agree with her choices in that area and have voiced my concerns in a way I hoped would speak to her, but it does not work; it is now a deeply ingrained habit.
I, too, have vices she tolerates so I don't push as hard as I otherwise would have, but I would argue it is not inconsistency. It is a question of what level of compromise is acceptable.
I keep sharing stories like this with them: privacy violations, genocide, mental health, and so on. Whenever I think it might be something someone cares about, I share it with them. I also make an effort to explain to my non-tech folks that Meta is Facebook, Instagram, and WhatsApp, to make sure they recognize the name. Many people do not know what Meta is. Sometimes I suspect the rename was a way to absorb the bad publicity and protect the individual brands.
$1 for the first user, $2 for the second, $4 for the third... By the 30th user it would be painful even for mega corps. By the 40th it would be an absurd number.
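Just to put numbers on that doubling schedule (nothing more than the arithmetic of the proposal above):

```python
# Arithmetic of the proposal above: the per-user fine doubles each time,
# so user n costs 2**(n-1) dollars and the running total is 2**n - 1.
for n in (10, 20, 30, 40):
    per_user = 2 ** (n - 1)
    total = 2 ** n - 1
    print(f"user {n:>2}: ${per_user:,} (running total ${total:,})")

# user 10: $512 (running total $1,023)
# user 20: $524,288 (running total $1,048,575)
# user 30: $536,870,912 (running total $1,073,741,823)
# user 40: $549,755,813,888 (running total $1,099,511,627,775)
```

With the roughly 150 million users mentioned elsewhere in the thread, the schedule stops being a number and becomes "the company no longer exists", which is presumably the point.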
Might also be worth trying to force them to display a banner on every page of the site, "you're on facebook, you have no privacy here", like the warnings on cigarette boxes. These might not work though; people would just see and ignore them, just like smokers ignore the warnings on cigarettes.
Everybody blames facebook; no one blames the legislators and the courts.
Stuff like this could easily be made to carry multi-billion-dollar fines; stuff that affects more users, maybe even fines in the trillion range. If government workers came to pick up the servers, chairs, and projectors from company buildings to sell at auction, because there was not enough liquid value left in the company to pay the fines, they (well, the others) would reconsider quite fast and stop the illegal activities.
Sarah Wynn-Williams testified before the US Congress about Facebook's strategies for handling governments. Based on her book, it seems Brazil has been the most effective of the major democratic governments in confronting Facebook. Of course, you also have China banning Facebook completely.
I think Mark Zuckerberg is acutely aware of the political power he holds and has been using this immense power for at least the last decade. But since Facebook is a US company and the US government is not interested in touching Facebook, I doubt anyone will seriously scrutinize what Zuckerberg and Facebook are up to. The US would have to put Lina Khan back in at the FTC, or put her high up in the Department of Justice, to split Facebook into pieces. I guess the other hope is that state attorneys general win an anti-monopoly lawsuit.
Don't get me wrong, I don't "blame Facebook". I lament the environment that empowers Facebook to exist and do harm. These companies should be gutted by the state, but they won't because they pump the S&P.
I don't think many of you read the article... the Flo app is the one in the wrong here, not meta. The app people were sending user data to meta with no restrictions on its use, whatever the court ruled.
> The app people were sending user data to meta with no restrictions on its use
And then meta accessed it. So unless you put restrictions on the data, meta is going to access it. Don't you think it should be the other way around, with meta asking for permission first? Then we wouldn't have this sort of thing.
5 years ago I was researching the iOS app ecosystem. As part of that exercise I was looking at the potential revenue figures for some free apps.
One developer had a free app to track some child health data. It was a long time ago, so I don't remember exactly what data was being collected, but when asked about the economics of his free app, the developer felt confident about a big payday.
According to him, the app's worth was in the data being collected. I don't know what happened to the app, but it seemed that app developers know what they are doing when they invade the privacy of their users under the guise of a "free" app. After that I became very conscious about disabling as many permissions as possible and especially about not using apps to store any personal data, health data above all.
I don't understand why anyone would let these psychopathic corporations have any of their personal or health data. Why would you use an app that tracks health data, or a wearable device from any of these companies that does the same? You have to assume, based on their past behavior, that they are logging every detail and that it's going to be sold and saved in perpetuity.
I guess because people want to track some things about their health, and people provide good pieces of software with a simple UI to do it, and this is more useful than, say, writing it down in a notebook, or in a text file or notes app.
I guess also people feel that corporations _shouldn't_ be allowed to do bad things with it.
Sadly, we already know from experience over the last 20 years that many people don't care what information they give to large corporations.
However, I do see more and more people increasingly concerned about their data. They are still mainly people involved in tech or related disciplines, but this is how things start.
Well, maybe one reason this is hard to understand is that the plaintiff in this case hasn't been harmed in any way. I suppose you could also argue: why would anyone go outside? There are literally satellites in space, controlled by psychopathic corporations, imaging your every move and logging every detail, which they sell and save in perpetuity.
True. Unfortunately, users are all humans - with miserably predictable response patterns to "Look at this Free New Shiny Thing you could have!" pitches, and the ruthless business models behind them.
You can -- the real problem here is that each app could violate your privacy in different ways. Unless you break TLS and inspect all the traffic coming from an app (and, do this over time since the reality of what data is sent will change over time) then you don't really know what your apps are stealing from you. For sure, many apps are quite egregious in this regard while some are legitimately benign. But, do you as a user have a real way to know this authoritatively, and to keep up with changes in the ecosystem? My argument would be that even security researchers don't have time to really do a thorough job here, and users are forced to err on the side of caution.
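The closest an end user can get is exactly that kind of inspection: route a test device through a man-in-the-middle proxy such as mitmproxy, with its CA certificate installed on the device, and watch what each app sends (apps that pin their certificates will still hide their traffic). A minimal addon sketch along those lines; the hostnames in the comment are just examples:

```python
# Sketch of a mitmproxy addon (run with: mitmdump -s log_app_traffic.py).
# Assumes the test device's proxy points at this machine and mitmproxy's CA
# certificate is installed on the device; apps that pin certificates will
# refuse to connect, so their traffic stays invisible.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    size = len(flow.request.content or b"")
    # A quick overview of who the app talks to and how much it sends;
    # the interesting part is then reading the bodies sent to analytics
    # hosts (e.g. graph.facebook.com, app-measurement.com).
    print(f"{flow.request.method} {host}{flow.request.path} ({size} bytes)")
```

Even then you only get a snapshot in time, which is the point above: what an app sends can change with every update.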
What they do then is create an app where location access is necessary, make that app spin up a localhost server, then add JS to facebook and every site with a Like button that phones that localhost server, and basically deanonymize everyone.
My wife uses Flo, though every time I see her open the app and input information, the tech side of my brain is quite alarmed. An app like that holds very, very personal information, and it really highlights for me the need to educate non-technical folks on information security.
> [...] users, regularly answered highly intimate questions. These ranged from the timing and comfort level of menstrual cycles, through to mood swings and preferred birth control methods, and their level of satisfaction with their sex life and romantic relationships. The app even asked when users had engaged in sexual activity and whether they were trying to get pregnant.
> [...] 150 million people were using the app, according to court documents. Flo had promised them that they could trust it.
> Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. Whenever someone opened the app, it would be logged. Every interaction inside the app was also logged, and this data was shared.
> "[...] the terms of service governing Flo Health’s agreement with these third parties allowed them to use the data for their own purposes, completely unrelated to services provided in connection with the App,”
Bashing on Facebook/Meta might give a quick dopamine hit, but they really aren't special here. The victims' data was routinely sold, en masse, per de facto industry practices. Victims should assume that hundreds of orgs, all over the world, now have copies of it. Ditto any government or criminal group that thought it could be useful. :(
[1] https://www.courtlistener.com/docket/55370837/1/frasco-v-flo...
[2] https://storage.courtlistener.com/recap/gov.uscourts.cand.37... page 6, line 1
[3] https://storage.courtlistener.com/recap/gov.uscourts.cand.37...
>They knew, or should have known, that they needed to check if it was legal to use it
What do you think this should look like?
But FB, having received this info, proceeded to use it and mix it with the other signals it gets, which is what the complaint against FB alleged.
Really the only blame here should be on Flo.
This happens accidentally every single day and we don't punish the victim
Innocent until proven guilty is the right default, but at some point when you've been accused of misconduct enough times? No jury is impartial.
Flo is wrong for using an online database for personal data.
Meta is wrong for facilitating an online database for personal data.
They're both morally and ethically wrong.
https://www.mozillafoundation.org/en/privacynotincluded/cate...