Readit News
nologic01 · 3 years ago
The regulatory architecture of "big tech" is an unmitigated disaster.

You can't have two entities effectively control every touchpoint with the digital domain for a major fraction of the planet.

There is absolutely no reason to trust that they will not abuse this position in opaque and impossible-to-trace ways. These are trillion-dollar powered for-profit entities, with armies of lawyers and lobbyists that can intimidate medium-sized countries. They will exploit every weakness of incompetent, confused, and captured regulatory/political systems. Because that's what they are legally obliged to do for their shareholders. And these shareholders care zilch if this duopolistic, fingers-in-all-honeypots design undermines our entire digital future. They just want some tech "winners" in their portfolios.

The longer nothing serious is done the harder it becomes to do anything.

justinclift · 3 years ago
> they are legally obliged to do for their shareholders

Isn't that an "old wives' tale" that doesn't have any basis in reality?

terminous · 3 years ago
For US companies registered in Delaware (which is most of them), the doctrine of shareholder primacy holds. Shareholders can sue, and have sued, corporate boards/execs for not acting in their best interest. It goes back to the 1910s, when Henry Ford very loudly and explicitly announced he was cutting shareholder dividends to pay workers the famous $5/day wage (double what any other factory workers were paid), and minority shareholders sued and won.

Of course it is debated and different in different jurisdictions, as it is common law and not a specific statute. But ambiguity in law means execs lean towards the safe decision. Shareholders can go a long way with a legal threat that never sees trial.

https://en.m.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.

reaperducer · 3 years ago
> Isn't that an "old wives' tale" that doesn't have any basis in reality?

You are correct. Nobody has ever been able to point to a law requiring companies to maximize profits for shareholders at all costs.

In fact, there are thousands and thousands of companies that exist with a primary purpose of doing things other than making money for their shareholders.

Any "legal" obligation to "maximize shareholder value" started out as a way to excuse the greed of other people, but falls flat upon even cursory examination.

nologic01 · 3 years ago
Alas it does. Think, e.g. about the backlash against asset managers for pursuing sustainability agendas.

Closer to the topic, think also about the "furore" and supposed panic at Google that they are not capitalizing (literally) on their AI leadership.

rewmie · 3 years ago
> These are trillion-dollar powered for-profit entities, with armies of lawyers and lobbyists that can intimidate medium sized countries. They will exploit every weakness of incompetent, confused and captured regulatory/political systems.

I don't think the need to exploit opportunities is exclusive to major corporations. We see this behavior, and these power grabs, even in petty quarrels in small amateur organizations, small businesses, and homeowner associations. The tools change, but the human traits that drive this behavior are always there. There is no toggle that turns this behavior on when assets surpass a certain level.

> Because that's what they are legally obliged to do for their shareholders.

I don't think they are "legally obliged" to anyone, or even "obliged" at all. Their leadership is driven to exploit the tools at their reach to meet their goals, and that's precisely what they do.

nologic01 · 3 years ago
> I don't think the need to exploit opportunities is exclusive to major corporations

We are not talking about small, medium, or large corporations. We are talking about two entities that effectively control the entire fabric of digital society and the economy, and from which there is increasingly no opt-out. There was never so much concentration. Never.

> Their leadership is driven to exploit the tools at their reach to meet their goals

Corporate management's remuneration is approved by shareholders. If they don't deliver, they get the boot.

vouaobrasil · 3 years ago
I blame the tech companies, but I also blame the people who created all this stuff -- programmers, tech startups, and computer scientists. Those people created a system that is too easy to use as an ultra-efficient wealth concentrator, and today's tech companies are the result.
nologic01 · 3 years ago
We are all cogs in the machine. Maybe decades ago the stance of individuals could have made a difference, but nowadays it's clear that it no longer does. There is a collectively created monster that can only be tamed by something of equivalent or greater collective power: radical government regulation. The EU is doing its part. The US isn't.

IMHO nothing was learned from the previous Microsoft monopoly era. Information technology is not just any business sector. It is the canvas on which everything else plays out. It needs to be regulated in a way that lets people go about online life, politics, business, etc. without this sword of Damocles permanently hanging above us.

sumtechguy · 3 years ago
The regulatory environment also played a big role in this. If you read the original Communications Act, Title II is wildly heavy-handed on privacy. The TL;DR: you can pipe this voice data around, but you may not use it, and if a government official wants it, they need a warrant (even if it is a rubber-stamp one). Classifying internet services under Title I instead broke most of that, to let the internet grow. Well, it succeeded wildly. But now the companies are doing what is basically 'natural' for them to do: consume what they can see. This sort of thing has happened before with telegrams, mail, and plain old telephone. Companies have seized upon the Title I classification to make this happen. For example, your cell phone is not a phone. It is an internet device, covered under Title I, not Title II.

Also we as users are somewhat to blame as we wanted 'free stuff'. We traded away our freedoms for tweets and facebook posts.

There is no 'one entity' to blame here. We all share it quite equally.

I have been saying this for many years now. But most of these companies are heading right into a "Title III" being written just for exactly what they are doing. They had better get their act together, or no one is going to like the outcome.

waynesonfire · 3 years ago
I came here to say something similar. I think it's important to add that there is a slew of non-technical staff, sales and product managers, who foam at the mouth at opportunities to exploit these monopolist ideas. They then hire technical staff to execute. Their entire compensation package depends on this virus-like growth.
dagaci · 3 years ago
This is the most Kafkaesque thread i've ever seen on Hacker News.

Microsoft definitely uses analytics, if that counts as personal data - https://www.microsoft.com/insidetrack/blog/microsoft-uses-an...

Is Microsoft reading your Gmail account, Word documents or Porn activities and feeding them to OpenAI? Not according to terms and conditions https://learn.microsoft.com/en-us/legal/cognitive-services/o...

Is Microsoft generally doing unknown unknowns? Yes.

Silhouette · 3 years ago
This has been my objection to Microsoft's maze-like privacy policies for a long time.

I once asked - on another forum and before the recent "AI" coding assistants were widely available - whether Microsoft's privacy policy allowed them to upload and do things with your own code if you used VS Code with telemetry enabled.

At the time I was downvoted to invisibility and told I was being silly. But not one person showed me anywhere in Microsoft's terms or privacy policy wording that limited the scope of the data processing clearly and transparently to exclude that kind of thing.

Today there's a bit of an obsession with training ML models using any large data set available and perhaps my caution from yesteryear wouldn't look so silly to the critics now.

dspillett · 3 years ago
> But not one person showed me anywhere in Microsoft's terms or privacy policy wording that limited

These policies don't exist to tell you what they are limiting themselves to. These documents exist as a defence to use against you (because you agreed to the policy by continuing to use their services) if you try to stop them doing something.

Unless for some reason there is commercial or legal advantage in saying “we will not do X”, a large company will never knowingly impose such limits on itself.

Silhouette · 3 years ago
Fortunately since I'm in the UK the legal position is rather more enlightened. The default is that any processing of personal data is not allowed and they have to justify it and explain their justification.

Of course even if they flagrantly ignore the rules and carry on while treating any fines as a cost of doing business - which is hardly unusual in GDPR world for certain big tech companies - there is only so much you can do as one individual and you still need the regulators to step in and enforce the rules with meaningful sanctions.

ldjb · 3 years ago
For reference, here is the text of Microsoft's upcoming Services Agreement:

https://www.microsoft.com/en-us/servicesagreement/upcoming.a...

And here is their summary of what has changed:

https://www.microsoft.com/en-us/servicesagreement/upcoming-u...

layer8 · 3 years ago
It still contains the sentence “to improve Microsoft products and services, you grant to Microsoft a worldwide and royalty-free intellectual property license to use Your Content.” The only explicit limitation is asserted with respect to targeted advertising: “We do not use what you say in email, chat, video calls or voice mail, or your documents, photos or other personal files, to target advertising to you.”

The Summary of Changes doesn’t mention any changes to the Privacy Statement, which in turn doesn’t seem to exclude training AI models on user data.

tinus_hn · 3 years ago
Training a Microsoft AI could quite literally be considered ‘improving Microsoft products and services’.
boredumb · 3 years ago
Honest question: I've worked in advertising, and the laws around PII are interesting. In California, the CCPA requires companies to provide a form I can submit to have my PII removed within 90(?) days.

What happens if these companies train a model on a California resident's PII, a removal request comes in, and three months later someone asks the model about you and it spits out the PII that was "removed"? I'm assuming it's a matter of this going to court to be decided, but I'd be curious if any Californian legal nerds have some reasons why no one has started trying to target these things for settlements, if nothing else.

j16sdiz · 3 years ago
> What happens if these people train a model on a cali residents PII..

They would just deny it. Or claim it is impossible to recover from the trained model.

I don't know CA law, but lots of similar privacy laws, when you ask to be "removed", allow an exception for aggregated data.

sbszllr · 3 years ago
I don’t know the CCPA that well, but I do know the EU laws. As it stands, “the right to erasure / to be forgotten” is extremely vague on this, and there doesn’t seem to be wide precedent. In general the law is applicable to raw data records, not to aggregate data/metrics, nor to models. However, “models” in this context refers to one particular ruling w.r.t. the insurance or credit scoring industry (don’t remember exactly which one).

I want to point out that the model doesn’t need to “spit out” removed data. It can be a classifier or regression model rather than a generative model, and ideally it would not be trained on your data at all.

Worth noting that it’s difficult from a technical standpoint too. Say a model costs an X-large amount of dollars to train. Normally, I would retrain it e.g. every 6 months. But now I have erasure requests coming in on a regular basis, and retraining often enough to comply with them is too expensive. There’s a line of research on “machine unlearning”, on how to do this efficiently, but it’s quite underwhelming so far.
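To make the retraining-cost point concrete: one common idea in the machine-unlearning literature is to shard the training data and train one sub-model per shard, so that an erasure request only forces retraining of the shard that held the deleted record, not the whole model. A minimal Python sketch of that idea, with a deliberately toy "model" (all names here are illustrative, not any particular paper's API):

```python
# Sketch of sharded training for cheap erasure: each shard gets its own
# sub-model, and deleting a record retrains only that record's shard.
from statistics import mean

class ShardedModel:
    def __init__(self, data, n_shards=4):
        # data: list of (x, y) pairs; assign records to shards round-robin.
        self.shards = [data[i::n_shards] for i in range(n_shards)]
        self.submodels = [self._train(s) for s in self.shards]

    def _train(self, shard):
        # Toy "training": the sub-model is just the mean label of its shard.
        return mean(y for _, y in shard) if shard else 0.0

    def predict(self):
        # Toy "inference": aggregate sub-model outputs by averaging.
        return mean(self.submodels)

    def erase(self, record):
        # Honor an erasure request: drop the record, retrain only its shard.
        for i, shard in enumerate(self.shards):
            if record in shard:
                shard.remove(record)
                self.submodels[i] = self._train(shard)
                return True
        return False
```

The payoff is that each deletion costs one shard's retraining (roughly 1/n_shards of the data) instead of a full retrain, at the price of some accuracy loss from splitting the training set.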

Deleted Comment

pixl97 · 3 years ago
Why are you assuming they have your PII in it in the first place and that the identifying part would not be stripped?

If the personal part is gone, I'm not sure you have any claim.

Liquix · 3 years ago
How does M$'s legal team accomplish such a feat? Are there layers of linguistic abstraction built up, such that only a sufficiently large team (Microsoft, a grand jury) has the bandwidth to extract any meaning? Red herrings, with gotchas hidden in seemingly innocuous places? Or do they just talk in circles and never give an exact answer?
lucasRW · 3 years ago
Past a certain number of millions of dollars, a team of lawyers is effectively a legal red team, tasked with finding bypasses around the restrictions in place (and it will find them).
nottheengineer · 3 years ago
Maybe governments should hire actual red teams to look for those loopholes.
reaperducer · 3 years ago
> How does M$'s legal team accomplish such a feat?

Who's going to stop them?

MikusR · 3 years ago
What is M$?
schube · 3 years ago
Microsoft.
K0balt · 3 years ago
If it does not specifically exclude that use, then you can be assured that the data will be used in training if it's useful. If not now, then eventually. That goes for every company warehousing data on the planet.

This should be everyone’s base assumption, and it will be basically accurate unless laws are put in place. Even then, jurisdictional bypass may make them irrelevant for historical data, and they will only weakly protect new data.

Welcome to the new oil.

Delphiza · 3 years ago
There is a difference between personal data and personally identifiable data. In many ways it is unavoidable to use personal data. Predictive text uses personal data (it 'learns' from everyone) - is the sentence and paragraph personal? Search engines record what you have typed in - do you classify your search query as 'personal'? I can copy-paste the first comment from this thread - is it personal data? I may be flat out wrong, and shouldn't type anything into a web page ever.

Those four lawyers and three privacy experts didn't seem to come to a conclusion on what personal data is. Does big tech feed data created by people into 'AI' tools? Yes. Does little tech? Yes.

I'm okay with joining Mozilla with my pitchfork if I know what it is about. I would like people to be clear about how they are looking after my interests, rather than just getting the mob riled up. Use of data, any data, is subject to an agreement. Go and read the 'legal' that you have agreed to by using Hacker News - they have a whole section on how they use personal information. Do we get our pitchforks out for HN, or are they cool?

la64710 · 3 years ago
It’s Microsoft that for years tried to kill Linux and still sharpens the knife.
Delphiza · 3 years ago
So it's just a grudge against Microsoft then? That's not something I'll sign up for, thanks. You either believe in the concepts of not using personal data - for everyone - or you're selling something.