You can store my data for me, but only encrypted, and it can be decrypted only in a sandbox. And the output of the sandbox can be sent only back to me, the user. Decrypting the personal data for any other use is illegal. If an audit shows a failure here, the company loses 1% of revenue the first time, then 2%, then 4%, and so on.
And companies must offer to let you store all of your own data on your own cloud machine. You just have to open a port to them with some minimum guarantees of uptime, etc. They can read/write a subset of data. The schema must be open to the user.
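To make the mechanics concrete, here is a minimal sketch of those two pieces. The field names, the endpoint, and the uptime number are my own hypothetical illustration; the only things taken from the proposal itself are the open schema, the read/write subsets, and the 1%/2%/4% doubling fine.

    from dataclasses import dataclass

    @dataclass
    class OpenSchema:
        """Hypothetical contract a company would publish to a user hosting their own data."""
        fields_readable: list[str]   # subset of the data the company may read
        fields_writable: list[str]   # subset of the data the company may write
        user_endpoint: str           # the user's own cloud machine, e.g. "https://me.example:8443"
        min_uptime: float            # uptime guarantee the user must provide, e.g. 0.99

    def audit_fine_bps(violation_count: int) -> int:
        """Escalating fine in basis points of revenue: 100 bps (1%) for the first
        audited failure, doubling with each subsequent failure."""
        return 100 * 2 ** (violation_count - 1)

    print(audit_fine_bps(3))  # 400 bps, i.e. 4% of revenue for the third failure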
Any systems that have been developed from personal user data (e.g. recommendation engines, trained models) must be destroyed. The same applies going forward: if you're caught using a system that was trained in the past on aggregated data across multiple users, you face the same escalating percentage fines.
The only folks who maybe get a pass are public healthcare companies for medical studies.
Fixed.
(But yeah it'll never happen because most of the techies are eager to screw over everyone else for their own gain. And they'll of course tell you it's to make the services better for you.)
The only way you change this is with very high social trust, and with all of society condemning anyone who ever defects.
(Yes it really is AI-written / AI-assisted. If your AI detectors don’t go off when you read it you need to be retrained.)
But you got downvoted for pointing out that it was slop. I got similarly downvoted a couple of days ago. Hacker News folk seem uninterested in having it pointed out when AI is being used to generate posts.
I'd guess it's some combination of a) they like using AI themselves, and b) they can't distinguish AI content themselves. And they turn to all manner of excuses like "AI detectors do not work" or "non-native speakers need a way to produce articles, too". It's a crappy time to be a humanist, or really to care about anything, it seems.
I despise AI slop, but this is a great article and a worthy cause. If AI was used and it helped make this article a reality, then the author did a great job of guiding the AI and doing quality checks.
If you read this article and don't observe the tells of AI content, you have a problem (or maybe you don't, because no one cares anymore).
The tells in this article: there are lots of parts that look like AI - the specific pattern of lists, the "not this but that" construction, particular phrases that are relatively unlikely.
For example, the strange parallelism here (including the rhyming endings):

"Sunscreen balms – Licked off immediately
Fabric nose shields – She rubbed them off constantly
Keeping her indoors – Reduced her quality of life drastically
Reapplying medication constantly – Exhausting and ineffective"

The style is cloying and unnatural.
"That solution didn't exist. So we decided to create it."
"For the holidays, I even made her a bright pink version, giving her a fashionable edge." -- wtf is a fashionable edge? A fashionable edge over what?
"I realized this wasn't just Billie's story—it was a problem affecting dogs everywhere."
Sure, these could just be cliché style (and increasingly we will probably see that, as the AI garbage infects the writing style of actual humans), but they look like AI. It's not as bad as some, but it's there.
Everyone should be disclosing the use of AI. And every time someone uses AI, he should say "I don't care enough about you, the reader, to actually put the time into writing this myself."
Bullet points? Must be AI. Em-dash? Obviously slop. Not only this, but that? Holy moly, AI slop.
(we ignore whether or not the writing is actually interesting, engaging, educational, etc. of course)
We can all agree, as a society, "hey, no individual person will graze more than ten cows on the commons," and that's fine. And if we all agree and someone breaks their vow, then that is immoral. "Society just sucks when everyone thinks this way" indeed.
But if nobody ever agreed to it, and you're out there grazing all your cattle, and Ezekiel is out there grazing all his cattle, and Josiah is out there grazing all his cattle, there is no reasonable ethical principle you could propose that would prevent me from grazing all my cattle too.
But let me entertain it for a moment: prior to knowing, e.g., that plastics or CO2 are bad for the environment, how should one know that they are bad for the environment? Fred, the first person to realize this, would run around saying "hey guys, this is bad".
And here is where I think it gets interesting: the folks making all the $ producing the CO2 and plastics are highly motivated to say "sorry Fred, your science is wrong". So when it finally turns out that Fred was right, were the plastics/CO2 companies morally wrong in hindsight?
You are arguing that morality is entirely socially determined. This may be partially true, but IMO only economically. If I must choose between hurting someone else and dying, I do not think there is a categorically moral choice there. (Though Mengzi/Mencius would say that you should prefer death; see the fish and the bear's paw in 告子上, "Gaozi I".) So, to the extent that your life or life-preserving business (i.e. source of food/housing) demands hurting others (producing plastics, CO2), then perhaps it is moral to do so. But to the extent that your desire for fancy cars and first-class plane tickets demands producing CO2...well (ibid.).
The issue is that the people who benefit economically are highly incentivized to object to any new moral reckoning (e.g. tracking people is bad; privacy is good; selling drugs is bad; building casinos is bad). To the extent that we care about morality (and we seem to), the folks benefiting from these actions can effectively lobby against moral change with propaganda. And this is, in fact, exactly what happens politically. Politics is, after all, an attempt to produce a kind of morality. It may depend on whom you follow, but my view would be that politics should be an approach to the utilitarian management of resources, in service of the people. But others might say we need to be concerned for the well-being of animals. And still others would say that we must be concerned with the well-being of capital, or even AIs! In any case, large corporations effectively lobby against any moral reckoning over their activities and thus avoid regulation.
The problem with your "socially determined morality" (though admittedly, I increasingly struggle to see a practical way around it) is that, though it is in some ways true (since society is economics and therefore affects one's capacity to live), you end up in a world in which everyone can exploit everyone else maximally. There is no inherent truth in what the crowd believes (though again, crowd beliefs do affect short-term and even intermediate-term economics, especially in a hyper-connected world). The fact that most white people in the 1700s believed that it was not wrong to enslave black people does not make that right. The fact that many people believed tulips were worth millions of dollars does not make it true in the long run.
Are we running up against truth vs practicality? I think so. It may be impractical to enforce morality, but that doesn't make Google moral.
Overall, your arguments are compatible with a kind of nihilism: there is no universal morality; I can adopt whatever morality is most suitable to my ends.
I make one final point: how should slavery and plastics be handled? It takes a truly unfeeling sort of human to enslave another human being. It is hard to imagine that none of these people felt that something was wrong. Though Google is not enslaving people, nor are its actions tantamount to Nazism, there is plenty of recent writing about the rise of technofascism. The EAs would certainly sacrifice the "few" of today's people for the nebulous "many" of the future over which they will rule. But they have constructed a narrative in which the future's many need protection. There are moral philosophies (e.g. utilitarianism) that would justify this. And this is partially because we have insufficient knowledge of the future, and also because the technologies of today make the possible futures of tomorrow highly variable.
I propose instead that, especially in this era of extreme individual power (i.e. the capacity to be "loud"; see below), a different kind of morality is useful: the wielding of power is bad. As your power grows, so too does the responsibility to consider its impact on others and to judge every action one takes more aggressively under the Veil of Ignorance. Any time we affect the lives of others around us, we are at greater risk of violating this morality. See, e.g., Tools for Conviviality or Silence is a Commons (https://news.ycombinator.com/item?id=44609969). Google and the tech companies are being extremely loud, and you'd have to be an idiot not to see that it's harmful. If your mental contortions allow you to say "harm is moral because the majority don't object," well, that looks like nihilism and certainly doesn't get us anywhere "good". But my "good" cannot be measured, and your good is GDP, so I suppose I will lose.