I had a similar thought last time I was in an airport for an international flight and instead of scanning my boarding pass and looking at my passport they just let everyone walk through and as you passed the door it would tell you your seat number.
When I was in Mexico I filed a report with the airport after an employee selling timeshares was overly aggressive, grabbed my arm, and tried to block me from leaving. Quickly they showed me a video of my entire time with all my movements at the airport so they could pinpoint the employee.
Like the article says I think it is just a matter of time until such systems are everywhere. We are already getting normalized to it at public transportation hubs with almost 0 objections. Soon most municipalities or even private businesses will implement it and no one will care because it already happens to them at the airport, so why make a fuss about it at the grocery store or on a public sidewalk.
> and no one will care because it already happens to them at the airport, so why make a fuss about it at the grocery store or on a public sidewalk.
You may be overestimating how many unique/different people travel through airports, especially more than once or twice, enough to notice the tracking. People who travel by air once or twice total in their lives (they are usually easy to spot) are far more concerned with getting through a confusing, hectic situation than with noticing, or even knowing, that the use of facial recognition is new and not simply a special thing (because 9/11). And the majority of Americans have travelled to zero or one country, last time I saw numbers on it. That country is usually Mexico or Canada, where they drive (or walk).
I think once it starts hitting close to home, where people have a routine and are not as stressed by a new situation and have the bandwidth to, at a minimum, take a pause, they will ask questions about what is going on.
I’m thinking it will only be a matter of time (if it’s not already the case) before things like self-checkout systems capture HQ video of faces for facial recognition and identification, akin to any number of dystopian novels/movies where some protagonist cannot move around without a face covering because there are scanners everywhere, or even something like Idiocracy, where the public is so conditioned that they immediately report anyone who does not obey the regime’s requirement to have some barcode.
Was in Canada two years ago to snowboard. They were taking everybody's pictures as they were going through customs. I remember going to the counter with my Mom, and the guy asked us what we would be doing, and before I even answered he waved me through. My Mom looked at me and said, "We obviously didn't fit the profile they're looking for."
You're right about just trying to get through the process. I was the only one in our family who was like, "No idea why they were taking pictures of everybody when they just whisked us through customs." My Dad snapped that it was because of 9/11 and we weren't lawbreakers so just deal with it.
The comment was interesting since my Dad worked for Lockheed Martin for 30 years and used to travel constantly. He mentioned the idea of a "high trust society" is now gone forever and facial recognition and other technologies are now necessary to give that trust back to the general public so they can feel safe with air travel.
It’s pretty much too late by the time that happens. People’s general indifference regarding privacy never ceases to amaze me; we really put up no fight whatsoever.
But, will they even realize when/where they're being surveilled?
Out of sight, out of mind. If there isn't a large video camera tracking them as they move across a shop or down the street, I'm not sure many people will even notice.
> Quickly they showed me a video of my entire time with all my movements at the airport so they could pinpoint the employee.
This is just as interesting as it is creepy, but that's the world we live in, and this is Hacker News. So, how quickly was "quickly"? You made your report, they got the proper people involved, and then they showed you the video. How much time passed before you were viewing the video?
For someone that plays with quickly assembling an edited video from a library of video content using a database full of cuepoints, this is a very interesting problem to solve. What did the final video look like? Was it an assembled video with cuts like in a spy movie with the best angles selected in sequence? Was it each of the cameras in a multi-cam like view just starting from the time they ID'd the flight you arrived on? Did they draw the boxes around you to show the system "knew" you?
I'm really curious how dystopian we actually are with the facial recognition systems like this.
> I'm really curious how dystopian we actually are
No idea how widespread it is, but in Singapore airport the system is tightly integrated. You are "tagged" when you check in, and "tagged out" as you board, with your appearance associated with your intended flight details. If you miss your flight or otherwise spend too much time in the secure zone, you are highlighted in the system and will eventually be approached. Arriving passengers are also given a time limit to take their next action, be it clear immigration or enter transit, and lingering will also trigger a response.
All in the name of safety and security but I can't help but feel a measure of discomfort with it all.
It records the license plates of all cars entering and leaving the parking lots. You can associate names to faces which we do for all employees and the system automatically records when people enter and leave buildings. You can even just tell it to find all people with a blue shirt in a particular camera in a time window. It can automatically detect people shouting.
Those sorts of systems run in realtime. They neither know (nor care) who you are. They work by identifying people, pulling out appearance characteristics (like blue coat/red hair/beard/etc), and hashing them in a database. After that, it's straightforward to track similar-looking people via connected cameras, with a bit of human assistance.
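A minimal sketch of what that attribute-hashing approach might look like (this is an assumed design for illustration, not any vendor's actual system; the attribute names and class are invented):

```python
from collections import defaultdict

def appearance_key(detection):
    # Coarse attributes a classifier might emit for each detection;
    # these field names are hypothetical.
    return (detection["coat"], detection["hair"], detection["beard"])

class AttributeTracker:
    """Groups camera sightings under a hashed appearance key."""

    def __init__(self):
        self.sightings = defaultdict(list)  # key -> [(camera, timestamp), ...]

    def observe(self, camera, timestamp, detection):
        self.sightings[appearance_key(detection)].append((camera, timestamp))

    def track(self, detection):
        # Every sighting of a similar-looking person, in time order.
        # A human operator would confirm which ones are the same person.
        return sorted(self.sightings[appearance_key(detection)],
                      key=lambda s: s[1])

tracker = AttributeTracker()
tracker.observe("cam2", 12, {"coat": "blue", "hair": "red", "beard": True})
tracker.observe("cam1", 10, {"coat": "blue", "hair": "red", "beard": True})
tracker.observe("cam1", 11, {"coat": "green", "hair": "black", "beard": False})

path = tracker.track({"coat": "blue", "hair": "red", "beard": True})
# path == [("cam1", 10), ("cam2", 12)]
```

Note the key property the parent describes: no identity is stored, only appearance buckets, which is why a bit of human assistance is still needed to resolve lookalikes.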
Twenty (!) years ago I got home from a drug store shopping trip and realized I had been charged for some expensive items I didn't buy. I called, they immediately found me on their surveillance recording, saw the items were actually bought by the previous person in line, and quickly refunded me. No face recognition was involved (they just used the timestamp from my receipt), but the experience immediately made me a fan of video monitoring.
I worked in a retail/pc repair place about 10 years ago. Boss phoned me one day to say X (customer) device is missing have I seen it? I immediately knew it had been stolen and who by. I was on my own in the shop, 10 minutes before closing and I had been busy for the previous hour so the device was in the front of the shop instead of stored away securely like they normally would be. I was able to find the video within about 30 seconds of getting in and pinpoint the guy. I actually recognised him and was able to tell the police where I saw him somewhat frequently (as I lived nearby too).
Without it, I think all the fingers would have pointed at me, rather than it being put down to me being tired and making a mistake.
It's a different thing though. In your case they used a timestamp to manually look at footage and confirm an identity. In OP's case, automated recognition is used to identify and track people in aggregate, en masse.
I was talking with an employee at a grocery store, who told me that management one day decided to review the surveillance footage, and fired a bunch of employees who were caught pilfering.
The thing with your example is that there is a "time and location bound context" due to which the false positive rate can be _massively_ reduced.
But for a nationwide public search the false positive rate is just way too high for it to work well.
Once someone manages to leave a local/time context (e.g. a known incident at a known location and time) without leaving too many traces (easy in the US due to the wide use of private cars), the false positive rate often makes such systems practically unhelpful.
It seriously pisses me off that they make the font so small on the opt-out signage, and you get told by a uniform to stare at the camera like you have no choice. Everything you don't fight for ends up getting taken.
I tend to just stop and read the fine print for things that might matter or if I have the time, even if I'm holding up a queue. I've spent several minutes at the entrance gate to a parking building because of the giant poster of T&Cs. I ask librarians to find books for me because the catalogue computer has a multi-screen T&C that I can't be bothered reading. I've turned away a customer from my business because their purchasing conditions included an onerous indemnification clause which they refused to alter. I discovered you don't need ID to travel on local flights because the T&C led me to calling the airline, who gave me a password to use instead. I've also found several mistakes in T&Cs that nobody probably notices because nobody reads them.
I just experienced one of these facial scanners in the UK while boarding a plane for the US. The thought had occurred to me that this could become the norm and that there’s nothing one could actually do about it and that we are already living in the dystopian future we feared, where no one can truly ever be anonymous. But I also wondered about various problem scenarios. If the scanner couldn’t match your face, would they deny you entry? If so, what would happen if someone had plastic surgery or some other condition that altered their face? What if this technology becomes so pervasive that your face is scanned everywhere you go? Where does any of this end?
This reminds me of the early days of applying speech recognition. Some use cases were surprisingly good, like non-pretrained company directory name recognition. Shockingly good and it fails soft because there are a small number of possible alternative matches.
Other cases, like games where the user's voice changes due to excitement/stress, were incredibly bad.
What you describe at the end has already happened in China. Municipalities (at least the large ones) routinely have cameras with facial recognition everywhere in public. The police have the power to pull up this kind of information without warrants (it's China, so what do warrants even mean).
I think the best we can hope for is that government officials are subject to more surveillance than regular people. Everyone is going to have at least some surveillance.
Individual tracking systems were getting secretly installed at a state school local to me about 10 years ago. It's got to be pretty advanced by now.
There's a huge difference between the historical intent of that principle and the way that these days everyone in a given space can be exhaustively recorded and tracked 24/7.
This saying isn't even true. Many countries have cultural expectations and legal structures providing some level of privacy in public. The very first GDPR fine issued stemmed from a business security camera that needlessly recorded people on the sidewalk.
The ease of mass surveillance and analysis/tracking makes it worse. Machine powered automatic analysis and tracking is more than just video recording. I hope that difference is apparent.
"Hum," a new novel by Helen Phillips, addresses this question precisely.
The premise: A woman who's not well off financially after losing her job signs up for a study in which an advanced robot surgically alters her face ever so minimally so as to use her as a test case for the company's state-of-the-art/bleeding edge (sorry) facial recognition software.
She signed up because having become unemployed with no prospect of future employment, her husband's job as a gig-handyman which is mostly pest control and pays terribly, and two young children, she fears being evicted from their apartment.
The study offers a huge payment in advance, enough for their family to live in comfort for 10 months without any other income source.
One problem soon becomes apparent: in altering her appearance ever so slightly, her family and everyone she knows are taken aback: she looks just like she used to, but somehow not quite. The study is intended to see how surveillance video handles faces in the uncanny valley — by creating them.
NO — I have not ruined the book if you're thinking about reading it: my introduction above happens early on, following which the story explodes in unexpected, compelling directions.
This book is beautifully written: it's sci-fi, the sixth book by a highly regarded and awarded novelist.
Their conclusion reminds me of this lady in China, Lao Rongzhi, who was a serial killer along with her lover, Fa Ziying [0]. They both went around the country extorting and killing people, and, while Fa was arrested in 1999 via a police standoff, Lao was on the run for two decades, having had plastic surgery to change her face enough that most humans wouldn't have recognized her.
But in those two decades the state of facial recognition software had advanced rapidly, and she was recognized by a camera at a mall and matched to a national database of known criminals. At first the police thought it was an error, but after taking DNA evidence it was confirmed to be the same person, and she was summarily executed.
In this day and age, I don't think anyone can truly hide from facial recognition.
Nitpick: Summary execution means execution without due process. As per Wikipedia there was a quite thorough legal process all the way to the supreme court.
"On September 9, 2021, Lao was sentenced to death by the Nanchang Intermediate People's Court for intentional homicide, kidnapping, and robbery. She was also stripped of her political rights for life and had all of her personal property confiscated. Lao appealed her conviction in court, and the second trial was held on August 18, 2022 at Jiangxi Provincial Higher People's Court. Although Lao admitted to being an accomplice to Fa, she claimed to have only done so in fear of losing her own life, as Fa had physically and sexually abused her throughout their relationship. On November 30 of the same year, the court upheld the death sentence. On December 18, 2023, the Nanchang Intermediate People's Court carried out the execution of Lao Rongzhi, with the approval of the Supreme People's Court."
Your overall point holds: there was China's version of due process, and plenty of time elapsed between her capture and subsequent execution. Therefore it was not a summary execution, nowhere close. Moreover, to call this out is not a nitpick; it's an important factual correction of the OP.
However, I would nitpick that while summary executions do include those without due process, the defining characteristic is simply speed. If the execution happened uncharacteristically fast compared to typical executions, even if all due process afforded to her was followed, then she was still summarily executed.
Hmm, "cameras reported a 97.3% match". I would assume that for a random person, the match level would be random. 1÷(1 − 0.973) ≈ 37, i.e., 1 in 37 people would be tagged by the cameras. If you're talking China, that means matching millions of people in millions of malls.
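Spelled out, under the (possibly wrong) reading that "97.3% match" implies a 2.7% chance of a random person clearing the threshold; the million-visitor figure is hypothetical:

```python
# If a random face has a 2.7% chance of "matching", dragnet scanning
# buries any real hit in false alarms.
false_match_rate = 1 - 0.973            # 2.7% per comparison
people_per_match = 1 / false_match_rate
print(round(people_per_match))          # ~37: one flagged person in every 37

# Scanning a million mall visitors (hypothetical figure):
expected_false_hits = round(1_000_000 * false_match_rate)
print(expected_false_hits)              # 27000 spurious flags
```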
Possibly the actual match level was higher. But still, the way facial recognition seems to work even now is that it provides a consistent "hash value" for a face, but with a limited number of digits/information. This can be useful if you know other things about the person (i.e., if you know someone is a passenger on plane X, you can very likely guess which one), but it still wouldn't scale unless you tolerate a lot of false positives and are after specific people.
Authorities seem to like to say DNA and facial recognition caught people, since it implies an omniscience to these authorities (I note above someone throwing out the either wrong or meaningless "97.3%" value). Certainly, these technologies do catch people, but they are still limited and expensive.
The "97.3%" match is probably just the confidence value - I don't think a frequentist interpretation makes sense for this. I'm not an expert in face recognition, but these systems are very accurate, typically like >99.5% accuracy with most of the errors coming from recall rather than precision. They're also not _that_ expensive. Real-time detection on embedded devices has been possible for around a decade and costs for high quality detection have come down a lot in recent years.
Still, you're right that at those scales these systems will invariably slip once in a while, and it's scary to think that this might be enough to be considered a criminal, especially because people often treat these systems as infallible.
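The base-rate problem behind that worry can be made concrete. All numbers here are hypothetical, chosen only to show that even an accurate system produces mostly false flags when the people it seeks are rare:

```python
# A 99.5%-accurate system still flags mostly innocent people
# when actual targets are rare in the scanned population.
population = 10_000_000        # faces scanned
wanted = 100                   # actual watchlist members among them
recall = 0.995                 # wanted people correctly flagged
false_positive_rate = 0.005    # innocents incorrectly flagged

true_flags = wanted * recall                          # ~99.5
false_flags = (population - wanted) * false_positive_rate  # ~50,000
precision = true_flags / (true_flags + false_flags)
print(f"{precision:.2%}")      # about 0.2%: nearly every flag is an innocent person
```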
The only way a percentage match means anything here is if the facial recognition software returns a probability distribution representing the likelihood that the person identified is each member of the set. I'm sure that 97.3% is actually low for most matches, since she had extensive plastic surgery.
Another related thing to consider: if she had plastic surgery, what are the odds that among a billion people there isn’t someone whose face looks more like her original face than her altered face does?
The person they executed admitted to being Lao Rongzhi, admitted to participating in the crimes, but claimed she was not responsible because of abuse she suffered at the hands of Fa Ziying. While false and forced confessions are absolutely a thing, hers doesn’t really fit that pattern. She acknowledged being involved, showed remorse for the killings, but distanced herself from them and minimized her involvement in violence, focusing on the robberies. After being presented with DNA evidence, it doesn’t appear that she ever claimed not to be Lao again nor did her defense seem to ever attempt to put that forward, but both of them put forward a rigorous defense to attempt to save her.
Anything is possible, but it seems from her own actions for years up until her execution that it was in fact her and she only denied it to the local police initially, hoping to be let go.
If what you’re trying to do is to publish prepared images of yourself, that won’t be facially recognized as you, then the answer is “not very much at all actually” — see https://sandlab.cs.uchicago.edu/fawkes/. Adversarially prepared images can still look entirely like you, with all the facial-recognition-busting data being encoded at an almost-steganographic level vs our regular human perception.
My understanding is that this (interesting) project has been abandoned, and since then, face recognition models have been trained to defend against it.
Very likely correct in the literal sense (you shouldn’t rely on the published software); but I believe the approach it uses is still relevant / generalizable. I.e. you can take whatever the current state-of-the-art facial recognition model is, and follow the steps in their paper to produce an adversarial image cloaker that will fool that model while being minimally perceptually obvious to a human.
(As the models get better, the produced cloaker retains its ability to fool the model, while the “minimally perceptually obvious to a human” property is what gets sacrificed — even their 2022 version of the software started to do slightly-evident things like visibly increasing the contour of a person’s nose.)
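The general approach can be sketched with a toy model. Here a random linear map stands in for a real face-embedding network, and a single targeted gradient-sign step (an FGSM-style simplification, not what Fawkes literally ships) nudges every pixel by a tiny bounded amount so the embedding drifts toward a decoy identity:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))   # toy "embedding model": 64 pixels -> 8-d vector

def embed(x):
    return W @ x

def cloak(x, target, epsilon=0.01):
    # One targeted gradient-sign step: move each pixel by at most epsilon
    # in the direction that drags the embedding toward `target`,
    # i.e. away from the person's true identity cluster.
    grad = 2 * W.T @ (embed(x) - target)   # d/dx ||W x - target||^2
    return x - epsilon * np.sign(grad)

face = rng.normal(size=64)
decoy = embed(face) + 5.0      # an arbitrary "someone else" embedding
cloaked = cloak(face, decoy)

# The pixels barely change...
assert np.max(np.abs(cloaked - face)) <= 0.01 + 1e-12
# ...but the embedding has moved measurably toward the decoy.
```

This also illustrates the trade-off the parent describes: to keep fooling a better model you must raise epsilon, which is exactly when the change starts becoming visible to humans.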
“Asking our governments to create laws to protect us is much easier than…”
A bit naive, that; it’s too late, since the data is already mostly available and it just takes a different government to make this protection obsolete.
That’s why we Germans/Europeans have tried to fight data collection and fight for protections for so long and quite hard (and probably have some of the most sophisticated policies and regulations in place), but over time it just becomes impossible to keep data collection to a minimum (first small exceptions for in-themselves very valid reasons, then more and more participants and normalization, until there is no protection left…)
It's not too late. Maybe it is for us: but in 100 years, who will really care about a database of uncomfortably-personal details about their dead ancestors? (Sure, DNA will still be an issue, but give that 1000 years and we'll probably have a new MRCA.) If we put a stop to things now (or soon), we can nip this in the bud.
It's probably not too late for us, either. Facial recognition by skull shape is still a concern, but only if the bad guys get up-to-date video of us. Otherwise, all they can do is investigate our historical activity. Other types of data have greater caveats preventing them from being useful long-term, provided we not participate in the illusion that it's "impossible to put the genie back in the bottle".
So what you're suggesting is we do whatever we can to avoid hitting 2 degrees of universal facial recognition precision? Given that the 1.5 degree target is now inevitably impossible.
But the Germans still ask people to register their religion, ostensibly so the government can give tax money to the relevant religion. Sorry, but the German government asking people to provide their religion to the government just reminds me of something unpleasant.
I've often wondered what would happen if I wandered around with a bright IR led flashing on my lapel at about 30 or 60 Hz and sufficiently invisible to human eyes yet low wavelength enough to get into most CMOS chips and dazzle the camera.
I think this on shopping trips routinely. I don't like being surveilled, and even though I have nothing to hide (I've never shoplifted in my life!) I hate the pervasive nature of it all. I don't even mind being followed by a human that much, but I do mind algorithmic analysis that is far more effective, scary, and invasive. Sadly, I think the answer to this experiment would be being asked to leave, or an uncomfortable chat with a policeman. Nevertheless, I silently would like someone braver than me to try it. You're allowed to wear a light on your clothes -- why not make it an IR one?
Not a bad piece, all told, though the general practical advice hasn't really changed in the decade-plus since I last touched the stuff: stop looking up (in general), keep as much of your face obscured as practical, try mixing up patterns to make it difficult for algorithms to match you over time, know where cameras are and how to avoid them, and if you do have to enter a known surveillance area, exit it as quickly and discreetly as possible - and adjust outfits between surveillance areas if you're particularly paranoid.
That said, let me just help dash any hopes of fooling government surveillance right now. Any competent Nation State that has an axe to grind with you specifically, already has you in their dragnet. They already have enough information to match your face in grainy analog B&W surveillance footage from an ancient grocery store camera. You're not beating those short of significant cosmetic surgery or prosthetics of some sort, and even then, if they want you badly enough then they'll just pull partial prints off something you touched and validate that way.
Always remember the first rule of security: if someone really wants something you have badly enough, there's nothing you can do to stop them. With that in mind, plan accordingly. It's why I don't go to protests myself, or otherwise engage with events where I know facial recognition tech is deployed: I'm in that data set, multiple times, with pristine reference materials, simply by virtue of past work (not including the updates via passport photos or Global Entry access). My safest bet is simply not to put myself in that position in the first place, and that's likely yours as well.
You don't have to have your photo taken to enter the US if you're a citizen, but who wants to deal with the hassle? And on and on it goes.
go ahead and decline while the cop is holding your passport.
Some countries have strong privacy laws, such as Switzerland.
rub my jeans sailor. no 3d xrays for me.
Because my business is my business and nobody else's. Full stop.
Read the first 19 pages (of 244) here: https://www.amazon.com/Hum-Novel-Helen-Phillips/dp/166800883...
Hopefully folks understand that this is dystopian rather than a roadmap to their next product proposal
[0] https://www.youtube.com/watch?v=I7D3mOHsVhg
https://en.m.wikipedia.org/wiki/Fa_Ziying_and_Lao_Rongzhi
However I would nitpick that while summary executions do include those without due process, the defining characteristic is simply speed. If the execution happened uncharacteristically fast compared to typical executions, even if all due process afford to her was followed, then she was still summarily executed.
Possibly the actual match level was higher. But still, the way facial recognition seems to work even now is that it provides a consistent "hash value" for a face, but with a limited number of digits/information. This can be useful if you know other things about the person (i.e., if you know someone is a passenger on plane X, you can very likely guess which one), but it still wouldn't scale unless you accept a lot of false positives and are after specific people.
Authorities seem to like to say DNA and facial recognition caught people, since it implies an omniscience on their part (I note someone above throwing out the either wrong or meaningless "97.3%" value). These technologies certainly do catch people, but they are still limited and expensive.
Still, you're right that at those scales these systems will invariably slip once in a while, and it's scary to think that this might be enough to be considered a criminal, especially because people often treat these systems as infallible.
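The base-rate problem described above is easy to see with some arithmetic. This is a rough back-of-the-envelope sketch; the false-match rate, hit rate, and population size below are hypothetical numbers chosen only for illustration:

```python
# Back-of-the-envelope: why an "accurate" face matcher still produces
# many false positives at dragnet scale. All rates are hypothetical.

def expected_false_positives(population, false_match_rate):
    """Expected number of innocent people flagged when scanning
    `population` faces while hunting one target."""
    return population * false_match_rate

def precision(false_match_rate, true_match_rate, prevalence):
    """P(flagged person is actually the target), via Bayes' rule."""
    tp = true_match_rate * prevalence
    fp = false_match_rate * (1 - prevalence)
    return tp / (tp + fp)

# Suppose the matcher wrongly matches 1 in 10,000 strangers, catches the
# real target 99% of the time, and scans 1,000,000 faces for one person.
fmr, tmr, n = 1e-4, 0.99, 1_000_000

print(expected_false_positives(n, fmr))  # ~100 innocent people flagged
print(precision(fmr, tmr, 1 / n))        # ~0.0098: under 1% of flags are real
```

So even with a seemingly tiny error rate, nearly every match the system raises is a false alarm unless investigators already know other things about the person, which is exactly why such systems work for targeted searches but slip at population scale.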
Why would you assume that?
Anything is possible, but it seems from her own actions for years up until her execution that it was in fact her and she only denied it to the local police initially, hoping to be let go.
(As the models get better, the produced cloaker retains its ability to fool the model, while the “minimally perceptually obvious to a human” property is what gets sacrificed — even their 2022 version of the software started to do slightly-evident things like visibly increasing the contour of a person’s nose.)
That's a bit naive: it's too late, since the data is mostly already available, and it only takes a different government to render this protection obsolete.
That’s why we Germans/Europeans have fought data collection, and for protections, so long and so hard (and probably have some of the most sophisticated policies and regulations in place). But over time it becomes impossible to keep data collection to a minimum: first small exceptions for in-themselves valid reasons, then more and more participants and normalization, until there is no protection left.
It's probably not too late for us, either. Facial recognition by skull shape is still a concern, but only if the bad guys get up-to-date video of us. Otherwise, all they can do is investigate our historical activity. Other types of data have greater caveats preventing them from being useful long-term, provided we don't participate in the illusion that it's "impossible to put the genie back in the bottle".
I think about this on shopping trips routinely. I don't like being surveilled, and even though I have nothing to hide (I've never shoplifted in my life!), I hate the pervasive nature of it all. I don't even mind being followed by a human that much, but I do mind algorithmic analysis that is far more effective, scary, and invasive. Sadly, I think the answer to this experiment would be being asked to leave, or an uncomfortable chat with a policeman. Nevertheless, I silently hope someone braver than me will try it. You're allowed to wear a light on your clothes -- why not make it an IR one?
Taboo opinions inspired by W.O.P.R. Avoid playing the game:
- Stay clear of areas with cameras when possible. Revenue impacting.
- Do Zoom or Jitsi calls with businesses and associates when you can.
- Become self-sufficient. Stop spending money when it is not required and have healthy groceries delivered to you. Reduce tax revenue.
- Work from home if your company permits it. Go mostly off grid.
- Hire someone to run errands for you when they can not be avoided. Pay cash to a neighbor's kid to run into town.
I know none of this will be popular with anyone but I am that guy.
That said, let me just help dash any hopes of fooling government surveillance right now. Any competent Nation State that has an axe to grind with you specifically, already has you in their dragnet. They already have enough information to match your face in grainy analog B&W surveillance footage from an ancient grocery store camera. You're not beating those short of significant cosmetic surgery or prosthetics of some sort, and even then, if they want you badly enough then they'll just pull partial prints off something you touched and validate that way.
Always remember the first rule of security: if someone really wants something you have badly enough, there's nothing you can do to stop them. With that in mind, plan accordingly. It's why I don't go to protests myself, or otherwise engage with events where I know facial recognition tech is deployed: I'm in that data set, multiple times, with pristine reference materials, simply by virtue of past work (not including the updates via passport photos or Global Entry access). My safest bet is simply not to put myself in that position in the first place, and that's likely yours as well.