Trusting developers not to sell any data while putting zero safeguards in place to prevent it, and imposing no punitive repercussions, despite being repeatedly told by the public, media, and even high-level employees, tells me Facebook can't plead ignorance here: they not only knew this was happening, but they probably intended for it to happen. They knew it was illegal but put all the incentives for companies not to follow the rules. That's the only hole in his statement.
As for the rest of it, it's progress. It seems like a lot of good changes, but Facebook execs probably calculated that this is the least they had to do to stave off regulation or antitrust action from Congress. Any less and regulations would still be coming, so it's a brilliant strategy to do this and frame it as "Facebook is concerned about all the damage and is doing this for you voluntarily," instead of the truth, which is that they knew about this forever and are only acting because of the threat of regulation.
Overall, an optimal outcome for all parties for now, except for society as a whole down the line.
What, exactly, are you claiming about this that was “illegal”? When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.
The Facebook developer platform became so limited in 2014 that most developers (including me) left. There was no point in developing apps for the social graph that had no ability to use the social graph. But even prior to that, the sharing of this information with apps, even those authorized by a friend, wasn’t “illegal”. You agreed to it when you signed up for Facebook and voluntarily handed them your information. Even the idea that developers were supposed to delete the information they had before was just a civil agreement between the company and themselves - it wasn't illegal. Facebook can certainly sue them over it, but there are no violations of the law occurring here. So what about this whole situation is "illegal"?
How about the ones they agreed to with the FTC in 2011 [1][2]?
[1] https://www.ftc.gov/sites/default/files/documents/cases/2011...
[2] https://www.ftc.gov/news-events/press-releases/2011/11/faceb...
There was no consent because tracking was not opt-in but opt-out.
More recently regarding user personal details and lack of consent, again deemed illegal in Germany: https://amp.theguardian.com/technology/2018/feb/12/facebook-...
The pattern I see is that Europe considers Facebook's methods of commoditizing users' data to be unreasonable, and they will be regulated.
If there are companies in the UK, or people working in the UK, the sharing or retention of data may have been illegal under British law.
https://en.wikipedia.org/wiki/Data_Protection_Act_1998
One day, the whole farce that is "you agreed to the terms and conditions" deserves to just die.
Almost no one reads them, so they should not be enforceable.
I mean, as developers we know when a session is established and which pages are visited, and we can easily see how long someone has been on the page (see the sketch below).
No one can read the typical T&C in 10 seconds .. let alone 1 minute .. especially without even opening the page! So the options should be something like:
[x] I don't care, just whatever dude.
[ ] No. Get me out of here. Because I don't know how to close the browser window myself.
EDIT: Found this as proof: http://www.pcpitstop.com/spycheck/eula.asp
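A minimal sketch of that dwell-time point, in TypeScript (hypothetical element ids, made-up word count and reading speed; real T&C lengths vary):

    // Gate the "Accept" button on a plausible minimum reading time.
    const WORDS_IN_TOS = 5000;   // assumed length of a typical T&C
    const WPM = 250;             // average adult reading speed
    const minReadMs = (WORDS_IN_TOS / WPM) * 60 * 1000; // ~20 minutes

    let openedAt: number | null = null;

    document.getElementById("tos-link")?.addEventListener("click", () => {
      openedAt = Date.now(); // the user at least opened the page
    });

    document.getElementById("accept-btn")?.addEventListener("click", () => {
      const elapsed = openedAt === null ? 0 : Date.now() - openedAt;
      if (elapsed < minReadMs) {
        // Virtually every acceptance lands here.
        console.log(`Accepted after ${(elapsed / 1000).toFixed(1)}s of "reading".`);
      }
    });

(5000 words at 250 wpm is about 20 minutes; most sign-ups accept in seconds without opening the page at all.)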
> Even the idea that developers were supposed to delete the information they had before was just a civil agreement between the company and themselves - it wasn't illegal. Facebook can certainly sue them over it, but there are no violations of the law occurring here
Perhaps not in the US, but I'd like to point out that it's not the same way everywhere: I think the EU is moving towards another direction. There's a whole thing around whether a company has a responsibility to do due diligence around preserving the personal information of its users. Just because you gave them your data doesn't always mean that they can now do whatever they want with it (e.g. give access to detailed information in large amounts to third parties). Even if you sign an agreement, in many jurisdictions there are certain rights a company can't just make you sign away.
There's no consent on behalf of the friends if one shares information about their friends with an app. Consent is given directly.
Doesn't matter what you signed and how it relates to civil law, you cannot sign away your statutory rights.
What matters is how criminal law views this particular data collection in the context of Facebook's working relationship with this particular client, within all of the various jurisdictions that Facebook operates.
The article "What Colour Are Your Bits" is a pretty good look at this - http://ansuz.sooke.bc.ca/entry/23
> When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.
Except you CAN'T sign away your rights in many jurisdictions, including this thing called HIPAA. So if Facebook sold people's mentions of health problems to third-parties... is that a violation?
I mean, this is the exact scenario over which people grilled Windows 10's spyware. But somehow, Facebook doing it isn't an issue? What's the difference? I'm honestly asking.
Yeah, this is one of the worst parts about this whole situation. Everyone screaming ILLEGAL!! and also somehow acting like Facebook is the only one doing this?
Almost EVERY free site is trying their damn best to collect as much info and link everything together to sell it so they can- you know.. make money.
I guess those people are instead willing to pay for virtually every site they use on the internet, right? Right? crickets
So FB opened up their platform to 3rd party devs in 2007, and this CA incident happened in 2013. FB changed their policy to disallow broad data access to these devs in 2014. So my question is: why do they only now say they'll audit the pre-2014 apps, after the NYT and Guardian/Observer broke the news? And what policy is in place now that makes sure these 3rd party devs won't sell whatever info they do collect under the post-2014 policy? I am not satisfied with what Mark said just now. We need more answers.
> We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data.
Sounds like they'll make the developer agreement legally binding so they can take legal action for violating the ToS.
At the very start of the Facebook platform, the API would anonymize the user's email address; the app would get an app-specific hash, e.g. abcdefg-app123456@facebook.net. It was a working email address, with Facebook handling the forwarding.
This proved futile as the very first thing that apps then did was to ask users for their real email address.
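For what it's worth, that forwarding scheme can be reconstructed roughly like this (my sketch, not Facebook's actual algorithm; the HMAC and address layout are assumptions):

    import { createHmac } from "crypto";

    // Derive a stable, app-specific alias so the app never learns the real
    // address; mail sent to it gets forwarded to the user's real inbox.
    function proxyAddress(userId: string, appId: string, secret: string): string {
      const tag = createHmac("sha256", secret)
        .update(`${userId}:${appId}`)
        .digest("hex")
        .slice(0, 12);
      return `${tag}-app${appId}@facebook.net`;
    }

    // e.g. "3f9c2a1b8d4e-app123456@facebook.net"
    console.log(proxyAddress("100004", "123456", "server-side-secret"));

The alias is per-app, so a leaked list from one app can't be joined against another app's list - which is exactly why apps immediately asked for the real address instead.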
To really fix this, Facebook will have to stop allowing 3rd party developers direct access to user data.
Basically, FB should introduce an App-Engine like platform where the backend of any 3rd-party application that uses FB data has to run on FB-owned servers. Developers of these applications would then ship their code to FB (similar to Heroku) and run in a sandboxed environment where they are not allowed to take data out at all.
That way FB can audit how the data is being used at any time and kick out people who are out of compliance with their terms. If a user deletes their FB account, all their data could then be deleted from any 3rd party applications automatically. This is basically similar to the way the government handles classified data.
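As a sketch of what that contract could look like to a developer (hypothetical API, nothing Facebook actually ships):

    // Third-party code gets a query handle, not the data itself.
    interface SandboxedGraph {
      // Aggregates and narrow lookups are allowed...
      countFriends(userId: string): Promise<number>;
      topInterests(userId: string, limit: number): Promise<string[]>;
      // ...but there is deliberately no exportRaw()/dumpProfile(), and the
      // sandboxed runtime blocks outbound network access, so results can
      // only flow into UI rendered inside FB's own infrastructure.
    }

    async function renderWidget(graph: SandboxedGraph, userId: string): Promise<string> {
      const n = await graph.countFriends(userId); // every call audited and logged
      return `You have ${n} friends who might like this.`;
    }

The hard part isn't the interface; it's that any string the app is allowed to render is also a covert channel, so "not allowed to take data out at all" can only ever be approximate.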
They could aggressively pursue enforcement and punitive measures. It's not about preventing everything that could possibly go awry, but making it well-known in the developer community that Facebook/Platform has no problem shutting you down if you egregiously break the rules.
I think the only sane response would be to shut down the developer program. I doubt it contributes much to the FB bottom line, and it's clearly something FB doesn't care much about given the breadth of this scandal.
2011 FTC hearings were about the exact same topic. You can't trust the 3rd party app developers. Back then it was social game developers selling user profile data to Rapleaf and other data brokers.
> Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services.
What about the punishment of Christopher Wylie, by closing/suspending his accounts on Facebook, WhatsApp, etc.?
He was part of Cambridge Analytica at the time. So they suspended his account along with the rest of them I suppose.
> What about the punishment of Christopher Wylie, by closing/suspending his accounts on Facebook, WhatsApp, etc.?
The Christopher Wylie that, by his own account, was a knowing, active, and key participant in the things they punished CA for, and in fact claims to be the one who came up with the concept for it?
It's to Facebook's benefit for advertisers to gather all that data, because the only way they can actually use it to make money is by advertising to the users on Facebook. I find it incredibly hard to believe that this thought never crossed anybody's mind.
Have you ever looked at FB's ad platform? You don't download everyone's data and target the campaign yourself. You target, "18-25 males in these zip codes who like the yankees". I don't see how you go from that platform (hosted and controlled by facebook) to something else.
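Something like this, loosely modeled on FB's self-serve ads flow (field names approximate and from memory; don't treat this as the real Marketing API schema):

    // The advertiser describes an audience; matching happens on FB's servers.
    const campaign = {
      name: "yankees-promo",
      targeting: {
        age_min: 18,
        age_max: 25,
        genders: ["male"],
        geo_locations: { zips: ["10001", "10451"] },
        interests: ["New York Yankees"],
      },
      // What comes back is impression/click counts, never a list of users.
    };

The CA pipeline was the opposite direction: pull the raw profiles out through an app, then build the models yourself.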
> They knew it was illegal but put all the incentives for companies not to follow the rules.
Off-topic, but I can't help it. You get what you measure. Which is why economies that only measure profit optimize for nothing but profit. When a nationstate says "It's illegal to do X" but has mandatory accounting practices that do not measure X but only measure profit, we should not be surprised that companies like Facebook do awful things. The GAAP has all but ensured that this happens. You want companies to have values? Then measure values! (alongside profit, not in spite of it) https://en.wikipedia.org/wiki/Generally_Accepted_Accounting_...
> When a nationstate says "It's illegal to do X" but has mandatory accounting practices that do not measure X but only measure profit
We don't need accountants to measure legality. That's what we have law enforcement and courts for. Investors care about profits; behaving illegally should hurt profits. Deputising a multi-billion dollar company's thousands of shareholders as its moral police is an absurd proposal.
> the GAAP has all but ensured that this happens. You want companies to have values? Then measure values! (alongside profit, not in spite of it)
How do you record company/moral values into Books of Accounts? How does one audit those morals? What category do you assign it under: Asset, Liability or Owners Equity? And moreover, what would happen if morals change and some are deemed obsolete? Also, if you start to measure company values then it becomes obvious that other things that have been kept out of purview of the Books would want to have an equal footing too; like: legal contracts with clients, employee agreements, court cases, so on and so forth.
There is a very good reason why accounting standards (like GAAP or IFRS) decided to only record transactions in the Books and not be concerned with legalities or moralities. It's impossible to assign an amount to a company/moral value, as what is valuable to me as a shareholder may not be considered valuable to, say, the local tax authority or a would-be investor, and vice versa.
Here is one attempt at using environmental, social and governance (ESG) integration factors to create an index of companies that are measured beyond just the basic plain-vanilla profitability/market-cap metrics:
https://www.msci.com/esg-ratings
https://www.msci.com/research/esg-research
Recommended watching the following TED video to introduce the topic: https://www.ted.com/watch/ted-institute/ted-state-street/aud...
https://www.theguardian.com/sustainable-business/2014/oct/01...
Slightly offtopic too, but I would love to hear suggestions for good books on economics and/or philosophy that would discuss non-monetary profit and values.
One of the key points of the Z post is that for an app to be able to request permissions from users, the app creator will need to sign a contract and be subject to an audit.
This appears to solve the issue of having wide permissions, but it does not do so. In reality, this is an attempt at transferring facebook's risk to shady app developers, while the overall lifecycle for the app won't change.
In essence, this is a do-nothing from the standpoint of app developers who have requested additional permissions. Any app developer who is told they need to undergo an audit, due to transcribing the entire social network, can simply say no and get their account banned. It will likely have no effect, as the account will almost certainly have already been suspended in such a spot.
WRT FB's knowledge of this happening: your assuming bad intent is no worse than FB's assuming good intent. Otherwise, one of the more reasonable FB comments on HN.
Why can't they hold the data inside Facebook and have developers come in over a VPN to a VM or Remote Desktop (which is always recorded), then use analytical tools installed on it to work on the data with no internet access, in a DMZ? That way they can record everything the developer is doing with the data, and the worst he can do is take screenshots. No data ever leaves their hands.
Yeah, it’s like “oh gosh, they violated our terms of service by pulling 50 million users’ info! We must send them a sternly worded email with a checkbox to confirm that they won’t do it again!”. It’s inconceivable that there aren’t larger exfiltrations of user data that have taken place - how would they even know?
Facebook did not intend for this to happen. That is such nonsense.
They intended for cool apps to go viral across their social graph so Facebook could be a “social utility” and the operating system of human relationships and other airy fantasies they spouted in 2012, 2013, 2014 when they built the app platform.
Their hopes of a beautiful future of joy and freedom were dashed when they discovered humans are capable of garbage behaviour.
Ironically they believed the walled garden of Facebook would clean up the cesspool of blog comments. Oops.
> Trusting developers not to sell any data while putting zero safeguards in place to prevent it, and imposing no punitive repercussions, despite being repeatedly told by the public, media, and even high-level employees, tells me Facebook can't plead ignorance here: they not only knew this was happening, but they probably intended for it to happen.
Why would they intend for it to happen? They didn't make any money off of this exfiltration, CA paid people via Mechanical Turk to install the application so that they could mine their data. Facebook didn't get a dime. In fact, they have a monetary interest in preventing this, because their data is worth something and these guys just got it from free usage of their API. So the insinuation that Facebook wanted this to happen, or looked away because it benefited them, makes zero sense.
What kind of safeguards are you imagining? How do you have 3rd parties interface with Facebook without letting those applications reason about the information within a Facebook account? Tinder is valued at over a billion dollars and it's not possible to use it without a Facebook account-- should Facebook shut that down and ban the entire concept of 3rd party Facebook interaction?
I do not understand the anger. They did nothing wrong. I can't believe that people are legitimately arguing that users shouldn't have a right to expose their information to apps.
I think it is ridiculous to think that Facebook is sharing data to 3rd party sources out of charity. If you provide more value via data to integrating applications than any of your competitors, you will continue to have the largest share of the market. So yes, there is profit incentive.
They sold the data by proxy by knowingly letting the 3rd parties syphon the data. Why else would they be at CA when the cops showed up? Are you being intentionally obtuse?
If they're serious about this, and I'm leaning toward they aren't really serious about it, this will affect a lot of startups, specifically in advertising, but probably a lot of mobile apps as well:
“First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps.“
Might be a too-little-too-late attempt at self-regulation, since they've apparently known about this Cambridge Analytica situation since before the election, and since then there have been Facebook employees embedded with CA to help them target ads.
MSNBC is already playing this as: 1) starts with a denial 2) admits wrongdoing 3) claims behavior will change 4) changes are not carried out and we're in the same place a year down the road.
They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead.
I worked on an app eight years ago for a woman who went to Harvard with Zuck. I only mention that because she did. Every. Day. Eventually they had me do a Facebook integration, and when I saw how easy it was to escalate the permissions I was horrified. I was an inexperienced developer at the time and wasn't calling the shots. We saved everything. The lead devs laughed about the policy that straight up said it was our responsibility to delete the data after use. That would be inconvenient! The system was designed for this. There is no way for them to do accounting on this issue. This blog is a farce. The company and project don't exist anymore. I'd bet money that more than a few people have that tar ball.
> They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead.
Worth noting that Zuck wants something from Xi and Medvedev, but has everything he needs in terms of support from the US Gov.
Facebook's real existential threat is just that. He thinks that having everything he wants now is correlated to the future. He should call his bud Billy Gates and see how not playing 5 moves ahead with the government worked out.
Facebook has almost ensured that it becomes the whipping post of the FAANG companies as the government looks to get tough on tech. I'm genuinely not sure what Zuck can do, as long as Apple, Google and Amazon don't make mistakes in the same way.
The next 10 years are going to be a lot like the old adage about being chased out of a campsite by a bear: it's unimportant to be the fastest (best) of the group, you just can't be the slowest.
I've said variations on this before: https://news.ycombinator.com/item?id=16438362, but FB will get serious about it when users stop using it.
Until that time, everyone can complain, but the concept of "revealed preferences" is relevant. Do people actually care? If so, they'll change their behavior, FB will likely notice, and changes will happen.
People have been complaining about FB and privacy since practically day 0. Throughout that entire period, FB has only become more popular.
"They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead."
I get the feeling, but you'll agree that a legally binding procedure that is an actual existential threat to your hundred-billion-dollar company that employs 25k people requires a different approach than a seduction meeting with a despot to try to loosen regulations.
> but refuses to appear before the US Congress and sends the company counsel instead.
This strikes me as especially interesting. I mean, I'd personally theoretically have my reservations about this congress over and above the average congress, but FaceMark refusing strikes me as a deeply telling datum about how he's thinking about this.
Basically no CEOs ever want to testify in front of a congressional committee. There is only downside to such a situation for the company. There is zero upside.
This is not a deeply telling datum. It is a boring and standard one.
Testifying in front of Congress is just an opportunity for politicians to win points by kicking you around like a ball. Very few people (let alone CEOs) stand to gain anything from it, and corporate counsel is paid to put up with abuse.
> ...my reservations about this congress over and above the average congress...
Ugh, come on. It's not like the whole congress votes on how you're to be treated, and the questions you'll be asked. The biggest heels on both sides of the aisle are free to harangue you all they want.
The election woke people up.
Anyone else remember Beacon? This was how FB was always designed to work from the beginning. They've just been toying with the PR ways to say it to make people accept it without thinking.
I found this from 2011 when they shut it down as a "mistake." (https://newsroom.fb.com/news/2011/11/our-commitment-to-the-f...).
Unfortunately, it looks like they removed the launch release - but it would be interesting to see how it was presented in light of the recent news.
"They're not a product company, they're a distraction company."
Correction, they are a surveillance company. Need I remind everyone of Google and FB's In-Q-Tel CIA partners in crime? They just figured out a way for everyone to willingly report on themselves, but not just on themselves, on others too! FB is bad and should collapse like every other dotcom boom-bust that uses and abuses its users... but the difference is the level of monopoly on non-technical users that didn't exist in the 90's. Back then the technical community could have dropped a product like hotcakes and watched it bust... but due to the increase of non-technical users who sign any EULA/TOS and don't give a crap about privacy... I expect nothing will happen until something really bad and at a massive scale happens.
For those who are younger, consider this the slashdot/digg/reddit cycle. Reddit will die next, too, the closer it gets to its IPO.
It's the beauty of computing though. Every market is ripe for disruption if someone has a good vision and follow-through. The problem is that so many of them use the exact same model and a few years later are the ones dying due to lack of integrity.
Can you substantiate this claim of tech product companies getting large/successful and then “dying due to lack of integrity”?
I haven’t noticed the pattern but if there is one I’m sure interested in the evidence. (And in what sense do they lack integrity?)
>Beacon formed part of Facebook's advertisement system that sent data from external websites to Facebook, for the purpose of allowing targeted advertisements and allowing users to share their activities with their friends. Beacon would report to Facebook on its members' activities on third-party sites that also participate with Beacon. These activities would be published to users' News Feed. This would occur even when users were not connected to Facebook and would happen without the knowledge of the Facebook user. One of the main concerns was that Beacon did not give the user the option to block the information from being sent to Facebook.
https://web.archive.org/web/20080214193303/http://www.facebo...
https://en.wikipedia.org/wiki/Facebook_Beacon
"This was how FB was always designed to work from the beginning"
Not exactly. In all fairness, as Zuck points out, this is a key part of the story, and in theory, why this is different:
In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent.
This raises the question of what Facebook was doing (if anything) to prevent this sort of action, but the fact that they just took CA at their word that they deleted this ill-gotten data (of course they didn't) makes me think they did very little. I think this is just as concerning as any other part of this story. Even if people are knowingly willing to hand over data to Facebook (or the devs of some app) in exchange to use a service, they wouldn't think that it's a free-for-all and anyone can mine the data for whatever they want.
> It is against our policies for developers to share data without people's consent.
Not to mention that it's arguable that "consent" was really given for facebook to share the data in the first place. I'd be interested to see some polling results asking if facebook users knew what facebook was up to and whether they feel OK with it.
Thanks for playing into the story that Facebook created to set the conversation.
This was openly covered last year by the BBC when they interviewed Trump's digital campaign manager at the time, Theresa Hong [interview linked]. The campaign spent $16M on Facebook. Understandably, Facebook gave them the white glove treatment, and even had their own employees embedded in Project Alamo (the headquarters of the campaign's digital arm).
But today Facebook claims they had no idea who one of their multi-million-dollar clients in 2015-2016 was. That it was just some random quiz-making hacker dude selling data to some other random company.
https://twitter.com/bbcstories/status/896752720522100742?lan...
This piece of work posted today by Facebook is what we call damage control. Don't expect the truth from it-- it will contain truths, but it will not be the truth of the matter. And don't let it set your dialogue, man.
Exactly this; no one is sorry, they're sorry they got caught after the fact. This is simply an attempt at ass covering and shifting blame onto Cambridge Analytica so users / investors / governments don't sue Facebook.
I had a very hard time imagining Zuckerberg actually writing these words, it just felt too carefully crafted and full of standard PR tropes. The CNN interview should be interesting, then we can hear the actual words from his mouth.
I always see people say this after any PR crisis... but how would you say it instead, if not that? Is there a way to be truly creative and not shoot yourself in the foot at the same time?
Number of times the word "advertising" was mentioned: 0.
Facebook continues to pretend its business model is unicorns and kittens, not selling user data for money.
"learn from this experience", "doesn't change what happened in the past", "responsibility", "going forward", "together".
You find this same boilerplate from athletes who beat their wives, do drugs, or kill people.
2018’s 3P data and tracking tech no longer needs to violate FB policy to do what was done in 2013.
FB provides a platform for ads, sure, but it can be used for way more than just ads.
3P tracking can infer who (even specifically) is viewing ads and it knows the social graph by other means. It can also do effective mass psychometric tests. A/B testing infrastructure can be used for more than just optimizing ads ... all of the psychometric dimensions tested by Kogan’s 2013 app can be expressed as embedded imagery and messaging in ads, and the same types of tests can operate at the same scale by integrating 3P data and tracking (some of which certainly originates from policy-compliant FB apps) which already knows the social graph beyond what FB will allow a single app or ad to draw.
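For the mechanics of that claim, a toy sketch (no real psychometrics, just the shape of the loop: each creative is keyed to a trait, and engagement with a variant is itself a weak measurement of that trait):

    type Trait = "openness" | "conscientiousness" | "extraversion"
               | "agreeableness" | "neuroticism";

    // Each A/B arm doubles as a personality probe.
    const variants: Record<Trait, string> = {
      openness: "ad-novel-imagery.png",
      conscientiousness: "ad-order-and-safety.png",
      extraversion: "ad-crowd-party.png",
      agreeableness: "ad-family-warmth.png",
      neuroticism: "ad-fear-and-threat.png",
    };

    // pseudo-ID -> accumulating trait evidence
    const scores = new Map<string, Map<Trait, number>>();

    function recordClick(pseudoId: string, trait: Trait): void {
      const s = scores.get(pseudoId) ?? new Map<Trait, number>();
      s.set(trait, (s.get(trait) ?? 0) + 1); // a click nudges that trait up
      scores.set(pseudoId, s);
    }

Run enough impressions through that loop and the A/B test has quietly become the quiz.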
All I want is for Mark Zuckerberg to say: "this isn't a community, we're a multi-billion-dollar company. We have a pretty cool website where you can do a bunch of stuff, and we mine your personal data (and sell it) to pay for it. Sound like a good deal? We think so."
But no, we've got to pretend Facebook is a touchy-feely community. I get the feeling that Facebook is ashamed of the way they make their billions.
You seem lost in a false narrative that he is out to get you :(
I challenge you to rethink your position assuming he means well...just for the fun of it...and share with us what your conclusion would be.
It's not really selling its user data for money; it's selling access to its users for money. Sure, user data allows some advanced targeting, but the reason they make the profit they do is that people are buying ads for people to see, not data about them. That's an important distinction.
Most people are buying ads for people to see. Some others, however, are using the platform as access to users' extensive 3P data, once they click and are shuffled through multiple shady ad exchanges gathering and selling data, including the social graph.
Some others, like CA and ilk, also get easy access to users' psychometric data, by embedding the psychometrics into A/B-tested ad content.
The adage “you are the product” has only become more true as FB has advanced, whether by their intended or explicit policies or not.
People outside of the tech world, in general, do not really know what the "targeted" in targeted ads means though. They think advertising, they think ads in the NYT or on TV. People do "know", but unless you really think about it, or understand how it works, its invasiveness can be easily overlooked.
I like the part where he says, "We have a responsibility to protect your data, and if we can't then we don't deserve to serve you." Facebook doesn't "serve" consumers, it built an RPG that harvests user data and sells it to the highest bidder.
I just don't get it. I don't get the enormous backlash over this. Users give away their data to a company, the company sells that data to advertisers (or sells ads and targets the ads according to data), and the users can continue using the service they enjoy for free. This was the contract you agreed to when you clicked the sign up button. I don't get why people are mad at Zuckerberg and not themselves! People act like a company is supposed to offer their services for free. Free services are the business model of the internet. You want to use Google Docs for free, fine they'll collect data on what you make. You want to play games for free, fine they'll display ads to you. You want to catch up with old friends, fine they'll collect data on your interests. No one is forcing Facebook users to use Facebook. You know they're collecting data on you. If you don't want them to, then don't use their "platform."
For one, I don't think most people are acutely aware that this is what is going on. The HN reading crowd is the enlightened technical elite. We know that this is the transaction which occurs whenever we do something for "free" on the internet. Regular people may wonder why something as sophisticated as this is free, but are not aware that the cost is privacy. They just think "oh, this is a cool free fun thing. Awesome!" and be on their way.
Even then, the surprising (and technically not against the TOS) open secret of every single social app that leverages it is the comparative ease with which one could scrape a user's peer nodes' entire public history without explicit permission from said nodes. This is the thing that allows for all those "you may know" invite suggestion engines.
And this is by design, obviously, as it is literally the only thing that makes facebook valuable.
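Roughly what that looked like against the v1.0-era Graph API, from memory (endpoint and fields approximate; this stopped working when FB locked the platform down in 2014):

    // One consenting user's token...
    const token = "USER_ACCESS_TOKEN";

    async function fanOut(): Promise<void> {
      const res = await fetch(
        `https://graph.facebook.com/v1.0/me/friends?access_token=${token}`
      );
      const { data: friends } = await res.json();
      // ...fans out to their entire friend list, none of whom consented.
      for (const f of friends) {
        // With the old friends_* permissions, per-friend profile fields
        // (likes, location, etc.) could be pulled the same way and stored
        // forever on the developer's servers.
        console.log(f.id, f.name);
      }
    }

One install, hundreds of harvested profiles - which is reportedly how ~270k quiz takers became ~50M scraped accounts.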
Precisely. Acting astonished at why people are upset just shows a disconnect from the reality of the masses.
Free, valuable products are all over the internet: Google Docs, Trello, WhatsApp, Waze, Spotify...just to barely scratch the surface.
It’s not obvious to most users that the costs of some of these include privacy violation / spying on the user.
> I don't get why people are mad at Zuckerberg and not themselves!
Because Trump won and Zuck is the new scapegoat. No one gave a shit when Obama's campaign did the same thing, and they were even public and boastful about it at the time.
"the enormous backlash" in this case is because it looks like the data was used to hack the US election. If it had been used to sell washing powder no one would give a damn.
"This was the contract you agreed to when you clicked the sign up button. I don't get why people are mad at Zuckerberg and not themselves!"
Zuckerberg is the one with control over the terms in a contract of adhesion. A contract of adhesion that gates access to a popular tool, that people want to use and don't bother reading or understanding the terms.
Whether or not you like contracts of adhesion, Facebook is taking advantage of people. When society finds terms in contracts (especially contracts of adhesion) to be immoral, they're allowed to voice that. And sometimes that causes the law to act to void contracts or prevent parties from certain actions even with contractual consent.
Also the people in this data scandal took a quiz from a shady company and probably clicked through the prompt to share their info without thinking twice. The only problem was it got some of their friends info... but that bug/feature was fixed years ago.
Personally, I quit Facebook upon signing up for the first time in 2009. I friended a real-life friend, then 20 assholes from high school saw that action on my friend's timeline and tried friending me. Right away I closed my Facebook account and never signed up again.
It was obvious back then Facebook shares my actions I don't want public with anyone they feel like. If you don't want them to do that then you shouldn't use the service.
Well, they sort of promised the FTC they wouldn't just do that without giving you a heads up, but then in the API they let your data out to apps that you didn't get a heads up about.
> First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity.
> Second, we will restrict developers' data access even further to prevent other kinds of abuse.
> Third, we want to make sure you understand which apps you've allowed to access your data. In the next month, we will show everyone a tool at the top of your News Feed with the apps you've used and an easy way to revoke those apps' permissions to your data.
None of these points addresses what actually happened. It's not like CA had a feed of FB's user data. It was harvested. And saved. And data, like diamonds, is forever.
The problem is that FB simply has too much information about us. The #deleteFacebook moment is a wake-up call not so much for FB's tone-deaf execs, but more so for the masses that (probably until now) never realized how much data FB has on every single one of us.
"... we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications."
How is it at all possible to certify that data has been deleted and no copies were taken..?
This feels strange to those of us from technical spheres, but much of the world works without formal verification of facts. There are in theory severe penalties, both implicit and explicit, that ostensibly act as deterrent to bad actors.
This is usually only noteworthy when it fails, but by and large it works. The friction involved if we did not generally accept someone's (signed, notarized, appropriately formalized) word as bond would cause our world to grind to a halt.
Notarization is another great example of "the real world is strange to technical folks." There are a thousand reasons that notarization is a terrible process. There are minimal requirements to be a notary. Notaries have predictable seals. Ordering a fake notary stamp is pretty easy. There's no technical deterrent to backdating requests. Notaries are frequently employed by the people they're notarizing for. Notaries are not trained to recognize fake IDs.
Even the flimsiest of technological improvements could radically improve the trustworthiness of the notarizations, but by and large the system works, so there's not much of a reason to fix it yet.
> This feels strange to those of us from technical spheres, but much of the world works without formal verification of facts.
Including many technical spheres, and particularly including the vast majority of the software world; formal verification exists in software, of course, but is most notable in not being used for most of it. Yet, even without that, there's usually a release process that includes the moral equivalent of a certification—without formal verification or certain knowledge—that the software does what it is supposed to.
This predisposition to believe our fellow humans is actually hard-coded into our genes. This is one of the reasons it's so easy to spread fake news. The ability to lie to someone without direct consequences to yourself (because, for example, you are thousands of miles away) was not part of our ancestral environment, but it is part of our contemporary one, and some of us are exploiting this newly created ecological niche.
Yup. Exactly. The tax system in many parts of the world works on an honor system. We all, as a society, rely on the idea that most of us are honest, good people. There is no way to audit everyone.
> How is it at all possible to certify that data has been deleted and no copies were taken..?
By, e.g., signing a paper which says that.
You probably meant how can you know that, which is a different issue than how can you certify it, and which amounts to a question of degree of internal accountability and control of data access.
Certification is about assuming responsibility for the truth of the thing certified, which doesn't actually require knowing it (though obviously knowing it to a reasonable degree of certainty makes it more comfortable to certify it.)
The definition of certify here is to "attest or confirm in a formal statement." So, Cambridge Analytica could be sued if they lied in such a statement.
This may be possible; however, it is going to be interesting to see how Facebook gets standing to sue CA. From what I understood from Zuckerberg's statement, the agreement was between Facebook and Kogan. If CA got the data from Kogan, it seems like Facebook wouldn't have standing to go after CA directly. I haven't seen the specific agreements made by each party in this case, so I can't say for sure, but traditionally I think Facebook could only go after Kogan, who then may be able to go after CA depending on the specifics of their agreement.
Along those lines, how is it possible for Facebook to demand that CA deleted the data, when CA is a third party and never had any kind of agreement with facebook(at least, relating to the information Kogan shared)?
If someone leaks Facebook data to a journalist, can FB demand that they delete it as well?
All our data associated with our FB unique ids is out there. My understanding is this stopped in 2015, but is that for sure?
Hopefully the data that is out there becomes less relevant over time.
[1] https://medium.com/@jamesallworth/what-the-f-was-facebook-th...
I think it's the very fact that they cannot truly verify, which is why Facebook is so willing to go through the motions. In reality, they know many developers will not comply, but they can claim they did everything humanly possible before a grand jury.
It costs Facebook nothing to ask developers to sign a contract written on virtual paper, yet can be used as a defense to cover their butt further down the line.
There's tons of substance to be discussed here without reaching for the global hash table (https://news.ycombinator.com/item?id=9722096). Fortunately the community has mostly been doing that, but let's not spoil it.
https://www.telegraph.co.uk/technology/facebook/6966628/Face...
He honestly isn't wrong. People use Google and Facebook every day for free in exchange for their privacy and they are ok with that. You can ask most people and they'll say they didn't really care about Snowden's leaks. They are happy to use free software and to be under government supervision (I don't care about the inefficacy of it). Privacy really is a social norm of the past.
I understand your intention and don't mean to criticize, but please consider avoiding the term 'free software' when referring to businesses like Facebook and Google.
https://www.fsf.org/about/what-is-free-software