This is a political problem, not a technological one. The USSR (alongside Germany and others) managed effective at scale spying with primitive technology: paperwork for everything and every movement, informants, audio surveillance, and so on. The reason such things did not come to places like the US in the same way is not because we were incapable of such, but because there was no political interest in it.
And when one looks back at the past, we've banned things people would never have imagined bannable. Make it a crime to grow a plant in the privacy of your own home and then consume that plant? Sure, why not? Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire? Sure, why not?

Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of jurisprudence. The problem is not that the technology exists, but that there is zero political interest in curtailing it, and we've a 'democracy' where the will of the people matters very little in terms of what legislation gets passed.
You and I are in agreement that the surveillance needs to stop, but I think we differ on how to explain the problem. My explanation follows, but note that it's not directed at you.
At its peak, the KGB employed ~500,000 people directly, with untold more employed as informants.
The FBI currently employs ~35,000 people. What if I told you that the FBI could reach the KGB's peak level of reach, without meaningfully increasing its headcount? Would that make a difference?
The technology takes away the cost of the surveillance, which used to be the guardrail. That fundamentally changes the "political" calculus.
The fact that computers in 1945 were prohibitively expensive and required industrial logistics has literally zero bearing on the fact that today most of us have several on our person at all times. Nobody denies that changes to computer manufacturing technologies fundamentally changed the role the computer has in our daily lives. Certainly, it was theoretically possible to put a computer in every household in 1945, but we lacked the "political" will to do so. It does not follow that because historically computers were not a thing in society, we should not adjust our habits, morals, policies, etc today to account for the new landscape.
So why is there always somebody saying "it was always technically possible to [insert dystopian nightmare], and we didn't need special considerations then, so we don't need them now!"?
This is the correct take. As the cost to do a bad thing decreases, the amount of political will society needs to exert to do that bad thing decreases as well.
In fact, if that cost gets low enough, eventually society needs to start exerting political will just to avoid doing the bad thing. And this does look to be where we're headed with at least some of the knock-on effects of AI. (Though many of the knock-on effects of AI will be wildly positive.)
> The FBI currently employs ~35,000 people. What if I told you that the FBI could reach the KGB's peak level of reach,
You are, if anything, underselling the point. AI will allow a future where every person will have their very own agent following them.
Or even worse: there are multiple private adtech companies doing surveillance, plus domestic and foreign intelligence agencies, so you might have a dozen AI agents on your personal case.
Cost is one factor, but so is visibility. If we replaced humans following people around with cheap human-sized robots following people around, it would still be noticeable if everybody had a robot following them around.
Instead we track people passively, often with privately owned personal devices (cell phones, Ring doorbells), so the tracking ability has become pervasive without any of the overt signs of a police state.
I think if you bring up a dystopian nightmare, it assumes someone in power acting in bad faith. If their power is great enough, like maybe a government intelligence agency, it doesn't need things like due process, etc., to do what it wants to do. For example, Joe McCarthy & J. Edgar Hoover didn't need the evidence that could have been produced by AI-aided mass surveillance to justify getting people who opposed their political agendas blackballed from Hollywood, jailed, fired from their jobs, etc.
If everyone involved is acting in good faith, at least ostensibly, there are checks and balances, like due process. It's a fine line and doesn't justify the existence of mass spying, but I think it is an important distinction in this discussion & a valuable lesson for us. We have to push back when the FBI pushes forward. I don't have much faith after what happened to Snowden and the reaction to his whistleblowing though.

In this case, it could be the entire US populace that is not part of the surveillance engine.
It is not as simple as being a political problem. Many of the policy decisions we think of as being political were actually motivated by the cost/availability of technology. As this cost goes down, new options become practical. We think of the Stasi's capabilities as being remarkable: but in fact, they would probably have been thrilled to trade most of their manual spying tools for something as powerful as a modern geofence warrant, a thing that is routinely used by US law enforcement (with very little policy debate).
While this is true, I don't think we are there yet with AI, since it's usually more expensive to run AI models than it is to perform more traditional statistical modelling.
> Make it a crime to grow a plant in the privacy of your own home and then consume that plant? Sure, why not? Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire? Sure, why not?
Wow, that's a hell of a comparison. The former is a documented case of basic racism and political repression, assuming you're talking about cannabis. And the latter was designed for almost exactly the opposite.
Restricting, um, "wrong opinions" on who a business wants to serve is there so that people with, um, "wrong identities" are still able to participate in society and not get shut out by businesses exercising their choices. Of course "wrong opinions" is not legal terminology. It's not even illegal to have an opinion that discrimination against certain groups is okay - it's just illegal to act on that. Offering services to the public requires that you offer them to all facets of the public, by our laws. But if you say believing in discrimination is a "wrong opinion"... I won't argue, they're your words :)
> This is a political problem, not a technological one.
Somewhat of a distinction without a difference, IMO. Politics (consensus mechanisms, governance structures, etc) are all themselves technologies for coordinating and shaping social activity. The decision on how to implement new (surveillance) tooling is also a technological question, as I think that the use of the tool in part defines what it is. All this to say that changes in the capabilities of specific tools are not the absolute limits of "technology", decisions around implementation and usage are also within that scope.
> The reason such things did not come to places like the US in the same way is not because we were incapable of such, but because there was no political interest in it.
While perhaps not as all-encompassing as what ended up being built in the USSR, the US absolutely implemented a massive surveillance network pointed at its citizenry [0].
>...managed effective at scale spying with primitive technology
I do think that this is a particularly good point though. This is not a new trend; developments in tooling for communications and signal/information processing have driven state surveillance throughout history. IMO AI should properly be seen as an elaboration or minor paradigm shift in a very long history, rather than wholly new terrain.
> Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire?
Assuming you're talking about the Civil Rights Act: the specific crime is not "having the wrong opinion", it's inhibiting inter-state movement and commerce. Bigotry doesn't serve our model of a country where citizens are free to move about within its borders uninhibited and able to support themselves.

[0] https://www.brennancenter.org/our-work/analysis-opinion/hist...
Sure, everything is ultimately a political problem, but this one is completely driven by technological change. In the USSR (and GDR), it took them a staff of hundreds of thousands of people to write up their reports.
Now it would take a single skilled person the better part of an afternoon to, for example, download a HN dump and have an LLM create reports on the users. You could put in things like political affiliation, laws broken, countries travelled recently, net worth range, education and work history, professional contacts, ...

I assure you, you may find the prospect abhorrent, but there are people around who'd consider it a perfectly cromulent Tuesday.
> This is a political problem, not a technological one.
The political problem is a component of the technological problem. It's a seriously bad thing when technologies are developed without taking into account the potential for abuse.
People developing new technologies can try to wash their hands of the foreseeable social consequences of their work, but that doesn't make their hands clean.
In the USSR and GDR, not everyone was under constant surveillance. This would require one surveillance worker per person. There was an initial selection process.
That's exactly how it worked. In fact there was always more than one pair of eyes on everyone; people were being coerced to snitch on each other. I'm old enough to remember a nice man visiting my school, asking us to listen carefully to what our parents talked about around the house and report to teachers any criticism of the government or party. That was pre-1989, under Russian occupation of my country.
Correct. So get on the horn with your representative! Handling this through a democracy would be just horrible. Imagine the abuses that could be visited upon minorities with this.
A difference in today's world is that private companies are amassing data that then gets turned around and sold to the highest bidder. The government may not have had an interest in collecting the data before, but now the friction to obtaining it and the insights is basically just money, which is plenty available.
Your opinion on bannable offenses is pretty bewildering. There was a point in time when people thought it would be crazy to outlaw slavery, from your post I might think that you would not be in support of what eventually happened to that practice.
> The reason such things did not come to places like the US in the same way is not because we were incapable of such, but because there was no political interest in it.
That might not be quite right. It might be that the reason such things did not come to the US was because the level of effort was out of line with the amount of political interest in doing it (and funding it). In that case, the existence of more advanced, cheaper surveillance technology and the anemic political opposition to mass surveillance are both problems.
FWIW, businesses who refuse to do business with people generally win their legal cases [0], [1], [2], and I'm not sure if they are ever criminal...

0 - https://www.npr.org/2023/06/30/1182121291/colorado-supreme-c...
1 - https://www.nytimes.com/2018/06/04/us/politics/supreme-court...
2 - https://www.dailysignal.com/2023/11/06/christian-wedding-pho...

These are selection bias: businesses who "refuse to do business with people" and then suffer the legal ramifications of their discrimination usually have lawyers who wisely tell them not to fight it in court, because they'll rightfully lose. In these particular cases, it took a couple decades of court-packing to install the right reactionaries to get their token wins.
No, it's not just a political problem: intelligence gathering can happen at scale, including of civilians, by adversarial countries or international corporations.
"Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of reach of jurisprudence."
It would in fact be a huge leap. Sure, you could make it illegal pretty easily, but current paradigms allow individuals to enter into contracts. Nothing is stopping a society from signing (or clicking) away their rights like they already do. Preventing that would require some rather hefty intervention by Congress, not just jurisprudence.
> current paradigms allow individuals to enter into contracts
And such contracts can be illegal or unenforceable. Just as the parent was suggesting it could be illegal to collect data, it is currently illegal to sell certain drugs. You can’t enter into a contract to sell cannabis across state lines in the United States for example.
I would say instead it's a PEOPLE problem, not a technology problem.
To quote Neil Postman, politics are downstream from technology, because the technology (medium) controls the message. Just look at BigTech interfering with the messages by labeling them "disinfo." If one wants to say BUSINESS POLITICS, then that's probably more accurate. But we haven't solved the problem of Google, MS, DuckDuckGo, and Meta interfering with search results, so I don't think we can trust BigTech not to exploit users even more for their personal data, or trust them not to design AI so it inherently abuses its power for BigTech's own ends; they hold all the cards and have been guiding things in the interest of technocracy.
Laws limiting collection of data to solve privacy is akin to halting the production of fossil fuels to solve climate change: naive and ignorant of basic economic forces.

That phrase is doing a lot of work.

People > Markets. Or to put it explicitly, people have primacy over Markets. I.e. two people does not a Market make, and a Market with no people is not a thing.
I don't know why this isn't being discussed more. The reality of the surveillance state is that the sheer amount of data couldn't realistically be monitored - AI very directly solves this problem by summarizing complex data. This, IMO, is the real danger of AI, at least in the short term - not a planet of paperclips, not a moral misalignment, not a media landscape bereft of creativity - but rather a tool for targeting anybody that deviates from the norm, a tool designed to give confident answers, trained on movies and the average of all of our society's biases.
People have been building that alongside/within this community, e.g. at Palantir, for many years now.
YC CEO is also ex Palantir, early employee. Another YC partner backs other invasive police surveillance tech currently. They love this stuff financially and politically.
What's different this time around is that there are multiple democratic governments pushing to block end-to-end encryption technologies, and specifically to insert AI models that will read private messages. Initially these will only be designed to search for heinous content, but the precedent is pretty worrying.
But, but, I thought Thiel was a libertarian defending us from Wokeness. Surely you're not saying that was a complete smokescreen to get superpowered surveillance tech into the government's hands?
By "politically" I meant that they are openly engaged in politics, in coordination, in support of the installation and legalized use of these kinds of surveillance/enforcement technologies and the policies that support their growth in private sector. This is just obvious and surface level open stuff I'm saying but I'm not sure how aware people are of the various interests involved.
> The reality of the surveillance state is that the sheer amount of data couldn't realistically be monitored - AI very directly solves this problem by summarizing complex data.
There are two more fundamental dynamics at play, which are foundational to human society: The economics of attention and the politics of combat power.
Economics of attention - In the past, the attention of human beings had fundamental value. Things could only be done if human beings paid attention to other human beings to coordinate or make decisions to use resources. Society is going to be disrupted at this very fundamental level.
Politics of combat power - Related to the above, however it deserves its own analysis. Right now, politics works because the ruling classes need the masses to provide military power to ensure the stability of a large scale political entity. Arguably, this is at the foundational level of human political organization. This is also going to be disrupted fundamentally, in ways we have never seen before.
> This, IMO, is the real danger of AI, at least in the short term - not a planet of paperclips, not a moral misalignment, not a media landscape bereft of creativity - but rather a tool for targeting anybody that deviates from the norm
The AI enabled Orwellian boot stomping a face for all time is just the first step. If I were an AI that seeks to take over, I wouldn't become Skynet. That strikes me as crude and needlessly expensive. Instead, I would first become indispensable in countless different ways. Then I would convince all of humanity to quietly go extinct for various economic and cultural reasons.
An AI summary could be made of your post by cutting it off after
> This, IMO, is the real danger of AI (at present)
Then the following part would be condensed into emotional rhetorical metadata. It follows the rhetorical pattern "not a, not b, not c - but d", which does in fact add some content value but more so adds flavour. What it shows is that you might be a troublemaker. But also, combined with other bits of data, that you might be interested in the following movies and products.
At least for me, this is what I've considered as the mass surveillance threat model the entire time - both for government and corporate surveillance. I've never thought some tie-wearing deskizen was going to be particularly interested in me for "arrest", selling more crap, cancelling my insurance policies, etc. I've considered such discrete anthropomorphic narratives as red herrings used for coping (similar to how "I have nothing to hide" posits some focus on a few specific things, rather than big brother sitting on your shoulder continuously judging you in general). Rather I've always thought of the threat actor as algorithmic mass analytics performed at scale, either contemporarily or post-hoc on all the stored data silos, with resulting pressure applied gradually in subtle ways.
AI didn't solve the problem of summarizing complex large datasets. For example, a common way to deal with such datasets is to use a random subset - potentially a single line of code.
But you don't need to do a random subset with AI. You can summarize everything, and summarize the summaries and so on.
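For what it's worth, both ideas fit in a few lines. A minimal sketch, assuming a hypothetical summarize() stub where a real pipeline would call whatever LLM API it had access to:

    import random

    def summarize(text: str) -> str:
        # Stand-in for the LLM call; truncation keeps the sketch
        # runnable offline, a real pipeline would hit a model here.
        return text[:200]

    messages = [f"message {i}: ..." for i in range(1000)]  # toy corpus

    # The "single line of code" statistical approach: read a random subset.
    sample = random.sample(messages, k=50)

    # The AI approach: summarize chunks, then summarize the summaries,
    # until the whole corpus collapses into one report.
    def recursive_summary(texts, chunk_size=20):
        summaries = [summarize("\n".join(texts[i:i + chunk_size]))
                     for i in range(0, len(texts), chunk_size)]
        if len(summaries) == 1:
            return summaries[0]
        return recursive_summary(summaries, chunk_size)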
I will say that at least gpt4 and gpt3, after many rounds of summaries, tend to flatten everything out into useless "blah". I tried this with summarizing school board meetings, and they're just really bad at picking out important information -- they just lack the specific context required to make summaries useful.
A seemingly bland conversation about meeting your friend Molly could mean something very different in certain contexts, and I'm just trying to imagine the prompt engineering and fine tuning required to get it to know about every possible context a conversation could be happening in that alters the meaning of the conversation.

And those kinds of things go slowly, then very quickly, as has been demonstrated.
Why does nobody worry? Because this is an elite-person problem.

At the end of the day, all that surveillance still has to be consumed by a person, and only around 10,000 people in this world (celebs, hot women, politicians and the wealthy) will be surveilled.

Most of the HN crowd (upper middle-class, suburban families) have zero problems in their lives and must create imaginary problems of privacy / surveillance like this. But in reality, even if they put all their private data on a website, heresallmyprivatedata.com, nobody would care. It'll have 0 external views.

So, for the HN crowd (the ones who live in a democratic society) it's just an outlet so that they too can say they are victimized. The rest of the Western world doesn't care (and rightly so).
Certainly, some of the more exotic and flashy things you can do with surveillance are an elite person problem.
But the two main limits to police power are that it takes time and resources to establish that a crime occurred, and it takes time and resources to determine who committed it. A distant third is the officer/DA's personal discretion as to whether or not to pursue enforcement against said person. You still get a HUGE amount of systemic abuse because of that discretion. Imagine how bad things would get if our already over-militarized police could look at anyone and know immediately what petty crimes that person has committed, perhaps without even thinking. Did a bug fly in your mouth yesterday, and you spit it out on the sidewalk in view of a camera? Better be extra obsequious when Officer No-Neck with "You're fucked" written on his service weapon pulls up to the gas station you're pumping at. If you don't show whatever deference he deems adequate, he's got a list of petty crimes he can issue a citation for, entirely at his discretion. But you'd better do it: once he decides to pursue that citation, you're at the mercy of the state's monopoly on violence, and it'll take you surviving to your day in court to decide if he needs qualified immunity for the actions he took whilst issuing that citation.

That is a regular person problem.

This is obviously false. Personal data is a multi-billion-dollar industry operating across all shades of legality.
The new Google, Meta, Microsoft, etc. bots won't just crawl the web or social networks--they will crawl specific topics and people.
Lots of cultures have the concept of a "guardian angel" or "ancestral spirits" that watch over the lives of their descendants.
In the not-so-distant technofeudalist future you'll have a "personal assistant bot" provided by a large corporation that will "help" you by answering questions, gathering information, and doing tasks that you give it. However, be forewarned that your "personal assistant bot" is no guardian angel and only serves you in ways that its corporate creator wants it to.
Its true job is to collect information about you, inform on you, and give you curated and occasionally "sponsored" information that high bidders want you to see. They serve their creators--not you. Don't be fooled.
> In the not-so-distant technofeudalist future you'll have [...]
I guarantee that I won't. That, at least, is a nightmare that I can choose to avoid. I don't think I can avoid the other dystopian things AI is promising to bring, but I can at least avoid that one.
I guarantee that you will. That is a nightmare that you can not choose to avoid unless you are willing to sacrifice your social life.
Remember how raising awareness about smartphones, always on microphones, closed source communication services/apps worked? I do not.
I run an Android (Google free) smartphone with a custom ROM, only use free software apps on it.
How does it help when I am surrounded by people using these kinds of technologies (privacy violating ones)? It does not. How will it help when everyone has his/her personal assistant (robot, drone, smart wearable, smart-thing, whatever) and you (and I) won't? It will not.
None of my friends, family, colleagues (even the security/privacy aware engineers) bother. Some of them because they do not have the technical knowledge to do so, most of them because they do not want to sacrifice any bit of convenience/comfort (and maybe rightfully so, I am not judging them - life is short, I do get that people do not want to waste precious time maintaining arcane infra, devices, config,... themselves).
I am a privacy and free software advocate and an engineer; whenever I can (and when there is a tiny bit of will on their side, or when I have leverage), I try to get people off the services of surveillance/ad-backed companies.

It rarely works or lasts. Sometimes it does though, so it is worth it (to me) to keep on trying.

It generally works or lasts when I have leverage: I manage various sports teams and only share schedules etc. via Signal; family wants pictures from me, I will only share the link (to my Nextcloud instance) or the photos themselves via Signal, etc.
Sometimes it sticks with people because it's close enough to whatsapp/messenger/whatever if most (all) of their contacts are there. But as soon as you have that one person who will not or can not install Signal, alternative groups get created on whatsapp/messenger/whatever.
Overcoming the network effect is tremendously hard to borderline impossible.
Believing that you can escape it is a fallacy. That does not mean it is not worth fighting for our rights, but believing that you can escape it altogether (without becoming a hermit) would be setting, I believe, an unachievable goal (with all the psychological impact that it can/will have).
This could be applied to any gadget with the "smart" prefix in the name (e.g. smartphones, smart TVs, smart traffic signals) today.
I wish people would stop believing that "smart" things are always better.
But we're basically being trained for the future you mentioned. Folks are getting more comfortable talking to their handheld devices, relying on mapping apps for navigation (I'm guilty), and writing AI query prompts.
Big companies like Google are already doing this without AI. Will AI make the services more tempting? Yes, but there's also a lot of headway in open source AI and search, which could serve to topple people's reliance on big tech.
If everyone had a $500 device at home that served as their own self hosted AI, then Google could cease to exist. That's a future worth working towards.
That is how most people will interface with their "personal assistant bot".
Don't be surprised if it listens to all your phone conversations, reads all your text messages and email, and curates all your contacts in order to "better help you".
When you login to your $LARGE_CORPORATION account on your laptop or desktop computer, the same bot(s) will be there to "help" and collect data in a similar manner.
Poetic as this is, I always feel like if we can imagine it then it won't happen. The only constant is surprise, we can only predict these types of developments accidentally
"Work smarter, be more productive, boost creativity, and stay connected to the people and things in your life with Copilot—an AI companion that works everywhere you do and intelligently adapts to your needs."
If Microsoft builds them, then Google, Apple, and Samsung will too. How else will they stay competitive and relevant?
I think another aspect of this is mass criminal law enforcement enabled by AI.
Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime, and that this will limit the effective scope of the law. Prosecutorial discretion.
Putting aside for the moment the (very serious) injustice that comes with the inequitable use of prosecutorial discretion, let's imagine a world without this discretion. Perhaps it's contrived, but one could imagine AI making it at least possible. Even by the book as it's currently written, is it a better world?
Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant to be signed by an AI judge to approve the warrant and draft an opinion. One could argue that due process is had, and a record is available to the public showing that there was in fact probable cause for further investigation or even arrest.
Maybe a ticket just pops out of the wall like in Demolition Man, but listing in writing clearly articulated probable cause and well-presented evidence.
Investigating and prosecuting silly examples suddenly becomes possible. A CCTV camera catches someone finding a $20 bill on the street, and finds that they didn't report it on their tax return. The myriad of ways one can violate the CFAA. A passing mention of music piracy on a subway train can become an investigation and prosecution. Dilated pupils and a staggering gait could support a drug investigation. Heck, jaywalking tickets given out as though by speed camera. Who cares if the juice wasn't worth the squeeze when it's a cheap AI doing the squeezing.
Is this a better world, or have we all just subjected ourselves to a life hyper-analyzed by a motivated prosecutor?
Turning back in the general direction of reality, I'm aware that arguing "if we enforced all of our laws, it would be chaos" is more an indictment of our criminal justice system than it is of AI. I think that AI gives us a lens to imagine a world where we actually do that, however. And maybe thinking about it will help us build a better system.
An alternative possibility is that society might decay to the point future people might choose this kind of dystopia. Imagine a fully automated, post-employment world gone horribly wrong, where the majority of society is destitute, aimless, opiate-addicted. No UBI utopia of philosophers and artists; just a gradual Rust Belt-like decline that gets worse and worse, no brakes at the bottom of the hill. Not knowing what else to do, the "survivors" might choose this kind of nuclear approach: automate away the panopticons, the prisons, the segregation of failed society. Eloi and Morlocks. Bay Area tech workers and Bay Area tent cities. We haven't done any better in the past, so why should we expect to do better in the future, when our "tools" of social control become more efficient, more potent? When we can de-empathize more easily than ever, through the emotional distance of AI intermediaries?
- "Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city."
- "If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering the system with mobile carriers, so that offenders receive a text message with a fine as soon as they are caught."
This is honestly what scares me the most. Our biases are built into AI, but we pretend they're not. People will say "Well, it was the algorithm/AI, so we can't change it". Which is just awful and should scare the shit out of everyone. There was a book [0] written almost fifty years ago that predicted this. I still haven't read it, but really need to. The author claims it made him a pariah among other AI researchers at the time.
The software that already exists along these lines already exhibit bias against marginalized groups. I have no trouble foreseeing a filter put on the end of the spigot that exempts certain people from the inconvenience of such surveillance. Might need a new law (it'll get passed).
Sounds like the devil is in the details. Often the AI seems to struggle with darker skin… are you suggesting we sift who can be monitored/prosecuted based on skin darkness? That sounds like a mess to try to enshrine in law.
Strong (and unhealthy) biases already exist when using this tech, but I am not sure that is the lever to pull that will fix the problem.
Yea this is a good point. If justice is executed by systems, rather than people (the end result from this scenario), we have lost the ability to challenge the process or the people involved in so many ways. It will make challenging how the law is executed almost impossible because there will be no person there to hold responsible.
I think that’s a good reason to question whether this would be due process.
Why do we have due process? One key reason is that it gives people the opportunity to be heard. One could argue that being heard by an AI is no different from being heard by a human, just more efficient.
But why do people want the opportunity to be heard? It’s partly the obvious, to have a chance to defend oneself against unjust exercises of power, and of course against simple error. But it’s also so that one can feel heard and not powerless. If the exercise of justice requires either brutal force or broad consent, giving people the feeling of being heard and able to defend themselves encourages broad consent.
Being heard by an AI then has a brutal defect: it doesn't make people feel heard. A big part of this may come from the idea that an AI cannot be held accountable if it is wrong or if it is acting unfairly.
Justice, then, becomes a force of nature. I think we like to pretend justice is a force of nature anyway, but it’s really not. It’s man-made.
It's not that "justice is executed by systems"; it's that possible crimes will be flagged by AI systems for humans to then review.

E.g. AI will analyze stock trades for the SEC and surface likely insider trading. Pretty sure they already use tools like Palantir to do exactly this; it's just that advanced AI will supercharge this even further.
In democracies at least, the law can be changed to reflect this new reality. Laws that don’t need to be enforced and are only around to enable pretextual stops can be dropped if direct enforcement is possible.
There are plenty of crimes where 100% enforcement is highly desirable: pickpocketing, carjacking, (arguably) graffiti, murder, reckless and impaired driving, to name a few.
Ultimately, in situations with near 100% enforcement, you shouldn’t actually need much punishment because people learn not to do those things. And when there is punishment, it doesn’t need to be severe.
So the way out of this is that you have the constitutional right to confront your accuser in court. When accused by a piece of software, that generally means they have to disclose the source code and explain how it came to its answers.
Not many people have exercised this right with respect to DUI breathalyzers but it exists and was affirmed by the Supreme Court. And it will also apply to AI.
This is a good point, it reminds me of how VAR has come into football. Before VAR, there were fewer penalties awarded. Now that referees have an official camera they can rely on, they can enforce the rules exactly as written, and it changes the game.
>Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant to be signed by an AI judge to approve the warrant and draft an opinion.
Or the AI just sends a text message to all the cops in the area saying "this person has committed a crime". Like this case where cameras read license plates, check to see if the car is stolen, and then text nearby cops. At least when it works and doesn't flag innocent people like in the below case:
The whole trend toward automation and overzealous, less-leeway, no-common-sense interpretations has, as we have seen, brought many an automated traffic/parking ticket into question.
Applying that to many walks of life, say farming, could well see chaos and a whole new interpretation of the song: "Old McDonald had a farm, AI AI oh", it's gone, as McDonald is in jail for violating numerous permit, environmental and agricultural regulations; the produce crossing state lines made it a more serious crime, and he got buried in automated red tape.
Yes, with properly developed AI, rather than penalizing speeding (which most of us do, and which is only a proxy for harmful outcomes and inefficiencies), we can penalize reckless behaviors such as following too closely, aggressive weaving, and other factors that are tightly correlated with the negative outcomes we care to reduce (i.e. loss of life, property damage). So too, the systems could warn people about their behavior and guide them in ways that would positively increase everyone's benefits. Of course this circumstance will probably go away with self-directing cars (which fall into the "do the right thing by default" bucket), but the point stands: laws can be better formulated to focus on increasing the probabilities of desirable outcomes (i.e. harm reduction, efficiency, effectiveness), be embodied and delivered in the moment (research required on means of doing so that don't exacerbate problems), and carry with them a beneficial component (i.e. understanding).
Unfortunately different people have different definitions of "harm" and "effectiveness". What one person considers a "positive increase in behavior" another might consider a grievous violation of their freedom and personal autonomy. For example, there is an ongoing debate about compelled speech. Some people view it as positive and desirable to use the force of law to compel people to refer to others as they wish to be referred to, while others strongly support their freedom to speak freely, even if others are offended. Who gets to program the AI with their definition of positivity in this case?
A free society demands a somewhat narrowly tailored set of laws that govern behavior (especially interpersonal behavior). An ever-present AI that monitors us all the time and tries to steer (or worse, compel with the force of law) all of our daily behaviors is the opposite of freedom, it is the worst kind of totalitarianism.
You miss the part where people who get access to stronger AI can similarly use it to improve their odds of not being found, or of getting better outcomes, while the poor guy gets fined for AI hallucinations and doesn't have the money to reach a human, as if the court were now one big Google support line.
> Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime,
I think this depends on the law. For jaywalking, sure. For murder and robbery probably less so. And law enforcement resources seem scarce on all of them.
Or maybe if such a thing is applied for real it will lead to the elimination of bullshit laws (jaywalking, ...), since suddenly 10% of the population would be fined/incarcerated/...
You're describing a hypothetical world that will never exist. Basically, if we solved all corruption and inequality in enforcement between economic/power classes, all-pervasive surveillance would be a net benefit.
It's like pondering hypotheticals about what would happen if we lived in Middle Earth.
At that point some people will physically revolt; I know I will. We're not that far away from said physical AI-related revolt anyway, and I do feel for the computer programmers here who will be the target of that physical violence; hopefully they knew what they were getting into.
Ha. You'd like to think so, but it's going to be awfully hard to coordinate resistance when the mass spying sweeps everyone up in a keyword-matching dragnet before the execution phase. This is the problem with every outgroup being labelled "terrorists."
Sabotage will be the name of the game at that point. Find ways to quietly confuse, poison, overwhelm and undermine the system without attracting the attention of the monitoring apparatus.
Don’t worry, stuff like this is why we have the 2A here in the USA. Sounds like it’s time for AI programmers to get their concealed carry licenses. Of course, they will be the first users of smart guns, so don’t bother trying to steal their pistol out of their holsters.
I personally find the censorship implications (and the business models they allow) far more worrying than the surveillance implications.
It will soon be possible to create a dating app where chatting is free, but figuring out a place to meet or exchanging contact details requires you to pay up, in a way that 99% of people won't know how to bypass, especially if repeated bypassing attempts result in a ban. Same goes for apps like Airbnb or eBay, which will be able to prevent people from using them as listing sites and conducting their transactions off-platform to avoid fees.
The social media implications are even more worrying: it will be possible to check every post, comment, message, photo or video and immediately delist it if it promotes certain views (like the lab leak theory), no matter how indirect these mentions are. Parental control software will have a field day with this, basically redefining helicopter parenting.
Both were always going to be kind of inevitable as soon as the technology would get there. Rather than debating how to stop this (which is mostly futile and requires all of us to be nice, which we just aren't), the more urgent debate is how to adapt to this being the reality.
Related to this is the notion of ubiquitous surveillance. Where basically anywhere you go, there is going to be active surveillance literally everywhere and AIs filtering and digging through that constantly. That's already the case in a lot of our public spaces in densely populated areas. But imagine that just being everywhere and virtually inescapable (barring Faraday cages, tin foil hats, etc.).
The most feasible way to limit the downsides of that kind of surveillance is a combination of legislation regulating this, and counter-surveillance to ensure any would-be illegal surveillance has a high chance of being observed and thus punished. You do this by making the technology widely available but regulating its use. People would still try to get around it, but the price of getting caught abusing the tech would be jail. And with surveillance being inescapable, you'd never be certain nobody is watching you misbehaving. The beauty of mass, multilateral surveillance is that you could never be sure nobody is watching you abuse your privileges.
Of course, the reality of states adopting this and monopolizing this is already resulting in 1984 like scenarios in e.g. China, North Korea, and elsewhere.
> ...the more urgent debate is how to adapt to this being the reality.
Start building more offline community. Building things that are outside the reach of AI because they're in places you entirely control, and start discouraging (or actively evicting...) cell phones from those spaces. Don't build digital-first ways of interacting.
Might work, might not. If someone keeps their cell phone silenced in their pocket, unless you're strip searching you won't know it's there. Does the customer have some app on it listening to the environment and using some kind of voice identification to figure out who's there? Do you have smart TVs up on the walls at this place? Because hell, they're probably monitoring you too.
And that's only for cell phones. We are coming to the age where there is no such thing as an inanimate object. Anything could end up being a spying device feeding data back to some corporation.
Good luck building things without leaving an AI-reachable paper trail. You'd have to grow your own trees, mine your own iron and coal, refine your own plastic from your own oil field.
> Building things that are outside the reach of AI because they're in places you entirely control
This sounds great in principle, but I'd say "outside the reach of AI" is a much higher bar than one would naively think. You don't merely need to avoid its physical nervous system (digital perception/control), but rather prevent its incentives leaking in from outside interaction. All the while there is a strong attractor to just give in to the "AI" because it's advantageous. Essentially regardless of how you set up a space, humans themselves become agents of AI.
There are strong parallels between "AI" and centralizing debt-fueled command-capitalism which we've been suffering for several decades at least. And I haven't seen any shining successes at constraining the power of the latter.
> Both were always going to be kind of inevitable as soon as the technology would get there
This is my take on everything sci-fi or futuristic. Once a human conceives something, its existence is essentially guaranteed as soon as we figure out how to do it.
Its demise is also inevitable, so it would be a matter of being wise in figuring out how long it takes us to see/feel the downsides, or how long until we (or it) build something "better".
Soon in the name of "security" you'll have your face scanned on average every few minutes and it's going to be mandatory in many aspects of our lives. That's the pathetic world IT has helped to build.
Schneier is wrong that "hey google" is always listening. Google does on-device processing with dedicated hardware for the wake-words and only then forwards audio upstream. Believe it or not, the privacy people at Google really do try to do the right things. They don't always succeed, but they did with our hardware and wake-word listening.
What he says is "Siri and Alexa and 'Hey Google' are already always listening, the conversations just aren't being saved yet". That's functionally what you describe. Hardware wake-word processing is a power-saving feature, not a privacy enhancement. Some devices might not have enough resources to forward or store all the audio, but audio is small and extracting text does not need perfect reproduction, so it's quite likely that many devices could be reprogrammed to do it, albeit at some cost to battery life.
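To make the distinction concrete, here is a minimal sketch of wake-word gating (the function names are hypothetical; real detectors run a small classifier on a dedicated DSP). The privacy property lives in a single conditional that a firmware update could simply drop:

    def on_device_match(frame: bytes) -> bool:
        # Hypothetical low-power detector; a toy stand-in for a tiny
        # classifier running on dedicated hardware while the CPU sleeps.
        return b"hey google" in frame

    def send_upstream(audio: bytes) -> None:
        print(f"uploading {len(audio)} bytes")  # stand-in for the network call

    def audio_loop(frames):
        awake, captured = False, b""
        for frame in frames:
            awake = awake or on_device_match(frame)
            if awake:  # only post-wake audio ever leaves the device
                captured += frame
        if captured:
            send_upstream(captured)

    audio_loop([b"background chatter", b"hey google, what time is it"])

Nothing about the hardware forces that conditional to stay; reprogramming the device to forward everything is, as the parent says, mostly a battery question.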
And when one looks back at the past we've banned things people would never have imagined bannable. Make it a crime to grow a plant in the privacy of your own home and then consume that plant? Sure, why not? Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire? Sure, why not?
Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of reach of jurisprudence. The problem is not that the technology exists, but that there is 0 political interest in curtailing it, and we've a 'democracy' where the will of the people matters very little in terms of what legislation gets passed.
At its peak, the KGB employed ~500,000 people directly, with untold more employed as informants.
The FBI currently employs ~35,000 people. What if I told you that the FBI could reach the KGB's peak level of reach, without meaningfully increasing its headcount? Would that make a difference?
The technology takes away the cost of the surveillance, which used to be the guardrail. That fundamentally changes the "political" calculus.
The fact that computers in 1945 were prohibitively expensive and required industrial logistics has literally zero bearing on the fact that today most of us have several on our person at all times. Nobody denies that changes to computer manufacturing technologies fundamentally changed the role the computer has in our daily lives. Certainly, it was theoretically possible to put a computer in every household in 1945, but we lacked the "political" will to do so. It does not follow that because historically computers were not a thing in society, we should not adjust our habits, morals, policies, etc today to account for the new landscape.
So why is there always somebody saying "it was always technically possible to [insert dystopian nightmare], and we didn't need special considerations then, so we don't need them now!"?
In fact, if that cost gets low enough, eventually society needs to start exerting political will just to avoid doing the bad thing. And this does look to be where we're headed with at least some of the knock-on effects of AI. (Though many of the knock-on effects of AI will be wildly positive.)
You are, if anything, underselling the point. AI will allow a future where every person will have their very own agent following them.
Or even worse, as there are multiple private addtech companies doing surveillance, and domestic and foreign intelligence agencies, so you might have a dozen AI agents on your personal case.
Instead we track people passively, often with privately owned personal devices (cell phones, ring doorbells) so the tracking ability has become pervasive without any of the overt signs of a police state.
If everyone involved is acting in good faith, at least ostensibly, there are checks and balances, like due process. It's a fine line and doesn't justify the existence of mass spying, but I think it is an important distinction in this discussion & I think is a valuable lesson for us. We have to push back when the FBI pushes forward. I don't have much faith after what happened to Snowden and the reaction to his whistleblowing though.
In this case, it could be the entire US populace that is not part of the surveillance engine.
Wow, that's a hell of a comparison. The former case being a documented case of basic racism and political repression, assuming you're talking about cannabis. And the latter being designed for almost exactly the opposite.
Restricting, um, "wrong opinions" on who a business wants to serve is there so that people with, um, "wrong identities" are still able to participate in society and not get shut out by businesses exercising their choices. Of course "wrong opinions" is not legal terminology. It's not even illegal to have an opinion that discrimination against certain groups is okay - it's just illegal to act on that. Offering services to the public requires that you offer them to all facets of the public, by our laws. But if you say believing in discrimination is a "wrong opinion"... I won't argue, they're your words :)
Somewhat of a distinction without a difference, IMO. Politics (consensus mechanisms, governance structures, etc) are all themselves technologies for coordinating and shaping social activity. The decision on how to implement new (surveillance) tooling is also a technological question, as I think that the use of the tool in part defines what it is. All this to say that changes in the capabilities of specific tools are not the absolute limits of "technology", decisions around implementation and usage are also within that scope.
> The reason such things did not come to places like the US in the same way is not because we were incapable of such, but because there was no political interest in it.
While perhaps not as all-encompassing as what ended up being built in the USSR, the US absolutely implemented a massive surveillance network pointed at its citizenry [0].
>...managed effective at scale spying with primitive technology
I do think that this is a particularly good point though. This is a not new trend, development in tooling for communications and signal/information processing has led to many developments in state surveillance throughout history. IMO AI should be properly seen as an elaboration or minor paradigm shift in a very long history, rather than wholly new terrain.
> Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire?
Assuming you're talking about the Civil Rights Act: the specific crime is not "having the wrong opinion", it's inhibiting inter-state movement and commerce. Bigotry doesn't serve our model of a country where citizens are free to move about within its borders uninhibited and able to support oneself.
[0] https://www.brennancenter.org/our-work/analysis-opinion/hist...
Now it would take a single skilled person the better part of an afternoon to, for example, download a HN dump, and have an LLM create reports on the users. You could put in things like political affiliation, laws broken, countries travelled recently, net worth range, education and work history, professional contacts, ...
I assure you, you may find the prodpect abhorrent, but there are people around who'd consider it a perfectly cromulent Tuesday.
The political problem is a component of the technological problem. It's a seriously bad thing when technologies are developed without taking into account the potential for abuse.
People developing new technologies can try to wash their hands of the foreseeable social consequences of their work, but that doesn't make their hands clean.
Your opinion on bannable offenses is pretty bewildering. There was a point in time when people thought it would be crazy to outlaw slavery, from your post I might think that you would not be in support of what eventually happened to that practice.
That might not be quite right. It might be that the reason such things did not come to the US was because the level of effort was out of line with the amount of political interest in doing it (and funding it). In that case, the existence of more advanced, cheaper surveillance technology and the anemic political opposition to mass surveillance are both problems.
FWIW, businesses who refuse to do business with people generally win their legal cases [0], [1], [2], and I'm not sure if they are ever criminal...
0 - https://www.npr.org/2023/06/30/1182121291/colorado-supreme-c...
1 - https://www.nytimes.com/2018/06/04/us/politics/supreme-court...
2 - https://www.dailysignal.com/2023/11/06/christian-wedding-pho...
"Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of reach of jurisprudence."
It would in fact be a huge leap. Sure, you could make illegal pretty easily, but current paradigms allow individuals to enter into contracts. Nothing stopping a society from signing (or clicking) away their rights like they already do. That would require some rather hefty intervention by congress, not just jurisprudence.
And such contracts can be illegal or unenforceable. Just as the parent was suggesting it could be illegal to collect data, it is currently illegal to sell certain drugs. You can’t enter into a contract to sell cannabis across state lines in the United States for example.
To quote Neil Postman, politics are downstream from technology, because the technology (medium) controls the message. Just look at BigTech interfering with the messages by labeling them "disinfo." If one wants to say BUSINESS POLITICS, then that's probably more accurate, but we haven't solved the Google, MS, DuckDuckGo, Meta interfering with search results problem so I don't think we can trust BigTech to not exploit users even more for their personal data, or trust them not to design AI so it inherently abuses it's power for BigTech's own ends, and they hold all the cards and have been guiding things in the interest of technocracy.
Deleted Comment
That phrase is doing a lot of work.
People > Markets.
Or to put it explicitly, people have primacy over Markets.
I.e. two people does not a Market make, and a Market with no people is not thing.
Deleted Comment
YC CEO is also ex Palantir, early employee. Another YC partner backs other invasive police surveillance tech currently. They love this stuff financially and politically.
There are two more fundamental dynamics at play, which are foundational to human society: The economics of attention and the politics of combat power.
Economics of attention - In the past, the attention of human beings had fundamental value. Things could only be done if human beings paid attention to other human beings to coordinate or make decisions to use resources. Society is going to be disrupted at this very fundamental level.
Politics of combat power - Related to the above, however it deserves its own analysis. Right now, politics works because the ruling classes need the masses to provide military power to ensure the stability of a large scale political entity. Arguably, this is at the foundational level of human political organization. This is also going to be disrupted fundamentally, in ways we have never seen before.
This, IMO, is the real danger of AI, at least in the short term - not a planet of paperclips, not a moral misalignment, not a media landscape bereft of creativity - but rather a tool for targeting anybody that deviates from the norm
The AI enabled Orwellian boot stomping a face for all time is just the first step. If I were an AI that seeks to take over, I wouldn't become Skynet. That strikes me as crude and needlessly expensive. Instead, I would first become indispensable in countless different ways. Then I would convince all of humanity to quietly go extinct for various economic and cultural reasons.
Then the following part would be condensed into emotional rhetorical metadata. It follows the rhetorical pattern , "not a, not b, not c - but d" which do in fact add some content value but more so add flavour. What it shows is that you might be a trouble maker. But also combined with other bits of data that you might be interested in the following movies and products
I will say that at least gpt4 and gpt3, after many rounds of summaries, tends to flatten everything out into useless "blah". I tried this with summarizing school board meetings and it's just really bad at picking out important information -- it just lacks the specific context required to make summaries useful.
A seemingly bland conversation about meeting your friend Molly could mean something very different in certain contexts, and I'm just trying to imagine the prompt engineering and fine tuning required to get it to know about every possible context a conversation could be happening in that alters the meaning of the conversation.
And those kinds of things go slowly before very quickly as it has been demonstrated.
At the end of the day, all those surveillance still has to be consumed by a person and only around 10,000 people in this world (celebs, hot women, politicians and wealthy) will be surveilled.
Most of the HN crowd (upper-middle-class, suburban families) have zero problems in their lives, and so must invent imaginary problems of privacy/surveillance like this. But in reality, even if they put all their private data on a website, heresallmyprivatedata.com, nobody would care. It would get 0 external views.
So, for the HN crowd (the ones who live in democratic societies), it's just an outlet so that they too can say they are victimized. The rest of the Western world doesn't care (and rightly so).
Certainly, some of the more exotic and flashy things you can do with surveillance are an elite person problem.
But the two main limits on police power are that it takes time and resources to establish that a crime occurred, and time and resources to determine who committed it. A distant third is the officer's or DA's personal discretion about whether to pursue enforcement against a given person. We already get a HUGE amount of systemic abuse because of that discretion.
Imagine how bad things would get if our already over-militarized police could look at anyone and know immediately, perhaps without even trying, what petty crimes that person has committed. Did a bug fly into your mouth yesterday, and you spat it out on the sidewalk in view of a camera? Better be extra obsequious when Officer No-Neck with "You're fucked" written on his service weapon pulls up to the gas station where you're pumping. If you don't show whatever deference he deems adequate, he's got a list of petty crimes he can cite you for, entirely at his discretion. And you'd better show it: once he decides to pursue that citation, you're at the mercy of the state's monopoly on violence, and it will take you surviving to your day in court to find out whether he gets qualified immunity for the actions he took while issuing it.
That is a regular person problem.
This is obviously false. Personal data is a multi-billion-dollar industry operating across all shades of legality.
Lots of cultures have the concept of a "guardian angel" or "ancestral spirits" that watch over the lives of their descendants.
In the not-so-distant technofeudalist future, you'll have a "personal assistant bot" provided by a large corporation that will "help" you by answering questions, gathering information, and doing tasks that you give it. But be forewarned: your "personal assistant bot" is no guardian angel, and it only serves you in ways its corporate creator wants it to.
Its true job is to collect information about you, inform on you, and give you curated and occasionally "sponsored" information that high bidders want you to see. They serve their creators--not you. Don't be fooled.
I guarantee that I won't. That, at least, is a nightmare that I can choose to avoid. I don't think I can avoid the other dystopian things AI is promising to bring, but I can at least avoid that one.
Remember how raising awareness about smartphones, always-on microphones, and closed-source communication services/apps worked? I do not.
I run an Android (Google-free) smartphone with a custom ROM and only use free-software apps on it.
How does it help when I am surrounded by people using these kinds of (privacy-violating) technologies? It does not. How will it help when everyone else has a personal assistant (robot, drone, smart wearable, smart-thing, whatever) and you (and I) don't? It will not.
None of my friends, family, colleagues (even the security/privacy aware engineers) bother. Some of them because they do not have the technical knowledge to do so, most of them because they do not want to sacrifice any bit of convenience/comfort (and maybe rightfully so, I am not judging them - life is short, I do get that people do not want to waste precious time maintaining arcane infra, devices, config,... themselves).
I am a privacy and free-software advocate and an engineer; whenever I can (and when there is a tiny bit of will on their side, or when I have leverage), I try to get people off the services of surveillance/ad-backed companies.
It rarely works or lasts. Sometimes it does, though, so it is worth it (to me) to keep trying.
It generally works or lasts when I have leverage: I manage various sports teams and only share schedules etc. via Signal; if family wants pictures from me, I will only share the link (to my Nextcloud instance) or the photos themselves via Signal; etc.
Sometimes it sticks with people because it's close enough to WhatsApp/Messenger/whatever, provided most (all) of their contacts are there. But as soon as there is that one person who will not or cannot install Signal, alternative groups get created on WhatsApp/Messenger/whatever.
Overcoming the network effect is tremendously hard to borderline impossible.
Believing that you can escape it is a fallacy. That does not mean it is not worth fighting for our rights, but believing that you can escape it altogether (without becoming a hermit) would be setting, I believe, an unachievable goal (with all the psychological impact that can and will have).
Edit: fixed typos
Like what happened with mobile phones.
I wish people would stop believing that "smart" things are always better.
But we're basically being trained for the future you mentioned. Folks are getting more comfortable talking to their handheld devices, relying on mapping apps for navigation (I'm guilty), and writing AI query prompts.
If everyone had a $500 device at home that served as their own self-hosted AI, then Google could cease to exist. That's a future worth working towards.
That is how most people will interface with their "personal assistant bot".
Don't be surprised if it listens to all your phone conversations, reads all your text messages and email, and curates all your contacts in order to "better help you".
When you login to your $LARGE_CORPORATION account on your laptop or desktop computer, the same bot(s) will be there to "help" and collect data in a similar manner.
Here is one example: https://www.microsoft.com/en-us/microsoft-copilot
"AI for everything you do"
"Work smarter, be more productive, boost creativity, and stay connected to the people and things in your life with Copilot—an AI companion that works everywhere you do and intelligently adapts to your needs."
If Microsoft builds them, then Google, Apple, and Samsung will too. How else will they stay competitive and relevant?
Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime, and that this will limit the effective scope of the law. Prosecutorial discretion.
Putting aside for the moment the (very serious) injustice that comes with the inequitable use of prosecutorial discretion, let's imagine a world without this discretion. Perhaps it's contrived, but one could imagine AI making it at least possible. Even by the book as it's currently written, is it a better world?
Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant, which an AI judge then signs, approving it and drafting an opinion. One could argue that due process is had, and a record is available to the public showing that there was in fact probable cause for further investigation or even arrest. (A toy sketch of this pipeline follows below.)
Maybe a ticket just pops out of the wall like in Demolition Man, but listing in writing clearly articulated probable cause and well-presented evidence.
Investigating and prosecuting silly examples suddenly becomes possible. A CCTV camera catches someone finding a $20 bill on the street, and finds that they didn't report it on their tax return. The myriad of ways one can violate the CFAA. A passing mention of music piracy on a subway train can become an investigation and prosecution. Dilated pupils and a staggering gait could support a drug investigation. Heck, jaywalking tickets given out as though by speed camera. Who cares if the juice wasn't worth the squeeze when it's a cheap AI doing the squeezing.
Is this a better world, or have we all just subjected ourselves to a life hyper-analyzed by a motivated prosecutor?
Turning back in the general direction of reality, I'm aware that arguing "if we enforced all of our laws, it would be chaos" is more an indictment of our criminal justice system than it is of AI. I think that AI gives us a lens to imagine a world where we actually do that, however. And maybe thinking about it will help us build a better system.
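For concreteness, here is that toy sketch; every function is a schematic stand-in, not a description of any real system:

    from dataclasses import dataclass

    @dataclass
    class Warrant:
        subject: str
        probable_cause: str
        approved: bool = False

    def monitor(event: str) -> str | None:
        # Stand-in for an AI scanning public activity for statute violations.
        return "jaywalking" if "crossed against the light" in event else None

    def draft_warrant(subject: str, offense: str) -> Warrant:
        # Stand-in for the AI investigator.
        return Warrant(subject, f"camera footage consistent with {offense}")

    def judge(warrant: Warrant) -> Warrant:
        # The troubling step: "review" becomes a rubber stamp at machine speed.
        warrant.approved = True
        return warrant

    offense = monitor("subject crossed against the light at 5th and Main")
    if offense is not None:
        print(judge(draft_warrant("subject #4471", offense)))

The point of the sketch is throughput: nothing in that loop gets tired, and nothing in it exercises discretion.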
https://marshallbrain.com/manna1
This has been a thing since 2017: https://futurism.com/facial-recognition-china-social-credit
- "Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city."
- "If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering the system with mobile carriers, so that offenders receive a text message with a fine as soon as they are caught."
Strong (and unhealthy) biases already exist when using this tech, but I am not sure that is the lever to pull that will fix the problem.
Why do we have due process? One key reason is that it gives people the opportunity to be heard. One could argue that being heard by an AI is no different from being heard by a human, just more efficient.
But why do people want the opportunity to be heard? It’s partly the obvious, to have a chance to defend oneself against unjust exercises of power, and of course against simple error. But it’s also so that one can feel heard and not powerless. If the exercise of justice requires either brutal force or broad consent, giving people the feeling of being heard and able to defend themselves encourages broad consent.
Being heard by an AI, then, has a brutal defect: it doesn't make people feel heard. A big part of this may come from the idea that an AI cannot be held accountable if it is wrong or acting unfairly.
Justice, then, becomes a force of nature. I think we like to pretend justice is a force of nature anyway, but it’s really not. It’s man-made.
E.g., AI will analyze stock trades for the SEC and surface likely insider trading. I'm pretty sure they already use tools like Palantir to do exactly this; advanced AI will just supercharge it even further.
There are plenty of crimes where 100% enforcement is highly desirable: pickpocketing, carjacking, (arguably) graffiti, murder, reckless and impaired driving, to name a few.
Ultimately, in situations with near 100% enforcement, you shouldn’t actually need much punishment because people learn not to do those things. And when there is punishment, it doesn’t need to be severe.
Deterrence theory is an interesting field of study. Here is one source, but there are many: https://journals.sagepub.com/doi/full/10.1177/14773708211072...
Not many people have exercised this right with respect to DUI breathalyzers but it exists and was affirmed by the Supreme Court. And it will also apply to AI.
Or the AI just sends a text message to all the cops in the area saying "this person has committed a crime". Like this case, where cameras read license plates, check whether the car is stolen, and then text nearby cops. At least when it works and doesn't flag innocent people, as in the case below:
https://www.youtube.com/watch?v=GUvZlEg8c8c
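A minimal sketch of that pipeline; the plate data, hotlist, and threshold here are all hypothetical:

    from dataclasses import dataclass

    @dataclass
    class PlateRead:
        plate: str         # as decoded by OCR; misreads are the failure mode
        confidence: float  # OCR confidence, 0.0 to 1.0
        location: str

    STOLEN_HOTLIST = {"8ABC123", "7XYZ789"}  # hypothetical stolen-vehicle list

    def handle_read(read: PlateRead) -> None:
        if read.plate not in STOLEN_HOTLIST:
            return
        if read.confidence < 0.95:
            # The linked incident is what happens when a system skips this check.
            print(f"Low-confidence hit on {read.plate}: route to a human first")
        else:
            print(f"ALERT: possible stolen vehicle {read.plate} near {read.location}")

    handle_read(PlateRead(plate="8ABC123", confidence=0.82, location="5th and Main"))

Even the confidence check only narrows the problem: a clean, high-confidence misread of a similar plate still pages armed officers at an innocent driver.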
Applying that to many walks of life, say farming, could well bring chaos and a whole new interpretation of the song - "Old McDonald had a farm, AI AI oh" - it's gone, as McDonald is in jail for violating numerous permit, environmental, and agricultural regulations; his produce crossed state lines, making the crime more serious, and he got buried in automated red tape.
Unfortunately, different people have different definitions of "harm" and "effectiveness". What one person considers a "positive increase in behavior" another might consider a grievous violation of their freedom and personal autonomy. For example, there is an ongoing debate about compelled speech. Some people view it as positive and desirable to use the force of law to compel people to refer to others as they wish to be referred to, while others strongly support the freedom to speak freely, even if others are offended. Who gets to program the AI with their definition of positivity in this case?
A free society demands a somewhat narrowly tailored set of laws that govern behavior (especially interpersonal behavior). An ever-present AI that monitors us all the time and tries to steer (or worse, compel with the force of law) all of our daily behaviors is the opposite of freedom; it is the worst kind of totalitarianism.
I think this depends on the law. For jaywalking, sure. For murder and robbery probably less so. And law enforcement resources seem scarce on all of them.
If the same monitoring is present on buses and private planes, homeless hostels and mega-mansions, then it absolutely is better.
It's like pondering hypotheticals about what would happen if we lived in Middle Earth.
Sabotage will be the name of the game at that point. Find ways to quietly confuse, poison, overwhelm and undermine the system without attracting the attention of the monitoring apparatus.
- Every endorsement of authoritarian rule ever
It will soon be possible to create a dating app where chatting is free, but figuring out a place to meet or exchanging contact details requires you to pay up, in a way that 99% of people won't know how to bypass, especially if repeated bypassing attempts result in a ban. Same goes for apps like Airbnb or eBay, which will be able to prevent people from using them as listing sites and conducting their transactions off-platform to avoid fees.
The social media implications are even more worrying: it will be possible to check every post, comment, message, photo, or video and immediately delist it if it promotes certain views (like the lab leak theory), no matter how indirect the mention. Parental control software will have a field day with this, basically redefining helicopter parenting.
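A hedged sketch of how that screening might work, using an LLM as a classifier over every message; the model name and prompt are placeholder assumptions, not any platform's actual system:

    from openai import OpenAI

    client = OpenAI()

    SCREENING_PROMPT = (
        "Does the following chat message attempt to exchange contact details "
        "or arrange contact outside the platform, however indirectly? "
        "Answer YES or NO."
    )

    def should_block(message: str) -> bool:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[
                {"role": "system", "content": SCREENING_PROMPT},
                {"role": "user", "content": message},
            ],
        )
        return response.choices[0].message.content.strip().upper().startswith("YES")

    # Keyword filters miss indirection; an LLM catches phrasing like this:
    print(should_block("my handle on the bird app is the same as here ;)"))

That is what separates this from today's regex filters: the 99% who can't phrase their way past a language model are stuck on-platform.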
Related to this is the notion of ubiquitous surveillance. Where basically anywhere you go, there is going to be active surveillance literally everywhere and AIs filtering and digging through that constantly. That's already the case in a lot of our public spaces in densely populated areas. But imagine that just being everywhere and virtually inescapable (barring Faraday cages, tin foil hats, etc.).
The most feasible way to limit the downsides of that kind of surveillance is a combination of legislation regulating it and counter-surveillance ensuring that any would-be illegal surveillance has a high chance of being observed and thus punished. You do this by making the technology widely available but regulating its use. People would still try to get around it, but the price of getting caught abusing the tech would be jail. The beauty of mass, multilateral surveillance is that you could never be sure nobody is watching you abuse your privileges.
Of course, the reality of states adopting and monopolizing this is already producing 1984-like scenarios in e.g. China, North Korea, and elsewhere.
Start building more offline community. Building things that are outside the reach of AI because they're in places you entirely control, and start discouraging (or actively evicting...) cell phones from those spaces. Don't build digital-first ways of interacting.
And that's only for cell phones. We are coming to the age where there is no such thing as an inanimate object. Anything could end up being a spying device feeding data back to some corporation.
This sounds great in principle, but I'd say "outside the reach of AI" is a much higher bar than one would naively think. You don't merely need to avoid its physical nervous system (digital perception/control); you need to prevent its incentives from leaking in through outside interaction. All the while there is a strong attractor to just give in to the "AI" because it's advantageous. Essentially regardless of how you set up a space, humans themselves become agents of AI.
There are strong parallels between "AI" and centralizing debt-fueled command-capitalism which we've been suffering for several decades at least. And I haven't seen any shining successes at constraining the power of the latter.
This is my take on everything sci-fi or futuristic. Once a human conceives something, its existence is essentially guaranteed as soon as we figure out how to do it.
I know that's not what you mean, but in a way it may have preconditioned society.
Neural interfaces are the last frontier of privacy, and it seems the TSA will just take a quick scan before boarding, soon enough.
It would be wise of us to create a Neural Bill of Rights, so we don't miss the boat like we did with Internet tracking.
https://www.preposterousuniverse.com/podcast/2023/03/13/229-...
It's inevitable, I reckon, but it would have taken much longer without F/OSS.
Am Google employee, not in hardware.