I was aware of most of the data protection and privacy concerns presented, but I wasn't aware facial recognition systems are being used as widely as suggested here.
If these systems ever become widely adopted I might seriously consider obscuring my face in public.
Here in the UK I've noticed over the last year or so that many supermarket self-checkouts have been fitted with cameras and screens. I'm not naive enough to believe I wasn't being recorded previously, but I can't help but find this trend of sticking a camera directly in my face whenever I'm trying to make a purchase extremely insulting and a violation of my sense of privacy. Now, after watching this, I have almost no doubt that facial recognition software is installed on these systems.
I've spoken to other people about the rudeness of this, but most people seem to think it's fine. Perhaps I'm just weird and more bothered about this stuff than most people. If sticking a camera in someone's face while they're trying to buy something in a pharmacy isn't going too far, though, I do wonder whether the average person would really care about anything presented here.
I played around with Amazon's Rekognition software. I took one of those YouTube videos where someone takes a picture of themselves every day for 10 years. The conditions were fairly ideal (consistent lighting, same pose for the most part), but the kid also went from 12 years old to 22, so his face definitely changed a lot. I used the first image (when he was 12) as the reference to compare the rest to, and I was surprised that it matched nearly all of them with a high degree of confidence (80%+). And the 80% ones were terrible lighting, sunglasses, or an image of his girlfriend he slipped in there.
Even sunglasses, a beard, face paint, bad lighting or puberty didn't throw off the model.
The open source dlib model was considerably worse, but AWS Rekognition was incredible.
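For anyone curious to try this kind of experiment, here's a minimal sketch of that comparison using boto3. The file names, the loop structure and the 80% threshold are my own assumptions, not the commenter's actual setup:

```python
import boto3

# Assumes AWS credentials are already configured in the environment.
rekognition = boto3.client("rekognition")

# Hypothetical file names: day_0001.jpg is the 12-year-old reference photo.
with open("day_0001.jpg", "rb") as f:
    reference_bytes = f.read()

def best_similarity(target_path, threshold=80.0):
    """Compare one daily photo against the reference face."""
    with open(target_path, "rb") as f:
        target_bytes = f.read()
    resp = rekognition.compare_faces(
        SourceImage={"Bytes": reference_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,  # only matches above this are returned
    )
    # FaceMatches is empty if no face in the target clears the threshold.
    return max((m["Similarity"] for m in resp["FaceMatches"]), default=None)

print(best_similarity("day_3650.jpg"))  # e.g. 94.2, or None for no match
```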
There was this one incident in China where the facial recognition system mistook the face of a Chinese celebrity, shown in an ad on the side of a bus, for a jaywalker... so the system isn't perfect in unusual conditions/environments yet. However, I do believe that today's results are already outstanding and will only get better.
https://mleverything.substack.com/p/how-facial-recognition-w...
https://www.theverge.com/2018/11/22/18107885/china-facial-re...
They're completely different. This is solely for intimidation purposes.
Cameras previously would show a wide-angle view of the store so you could see when someone put something under their jacket, etc. I can understand and accept this.
In comparison, these new cameras have a very narrow field of view: they're zoomed in on your face and they're in portrait orientation. They would be completely useless if you wanted to see whether someone was, say, putting something under their jacket at checkout.
These are there purposefully to record your face and let you know that they're doing it, in full colour and HD, whether you like it or not. It's vile and extremely rude. I've never signed an agreement accepting such an obtrusive and unreasonable violation of my privacy when I enter a supermarket - at least online I'd have to accept T&Cs before placing my order.
Around 2000, a friend of mine worked in Las Vegas. At that time, the casinos used facial recognition to identify everyone coming in the door and detect card counters and other troublemakers. They shared this information widely among all the casinos.
It is > 20 years later now... I wonder how accommodating they are when a whale walks in?
I was initially put off by the webcam permission requirement, but the terms and conditions page says it's basically an art project and they don't send any data off (unless you explicitly accept it at the end), so I gave it a chance.
I'm glad I allowed webcam permission because it was an interesting, informative, and fun look at biometric tracking.
Apparently I'm "violently average" which is not a way I would previously have described myself. According to this site the most unusual thing about me is that I read the terms and conditions before ticking the "accept" box.
Same, but I think people generally overestimate how different from average they are. (It's from the webcam's point of view anyway.) Usually I don't like interactions with the webcam either, but this was interesting enough.
I didn't read them because I don't believe them anyway. If a website uses my camera, I just assume that everything is recorded and sent to advertisers and shady governments before I even close the tab. So I didn't want to continue with this one; it was only after reading the comments here that I decided to give it a go. The terms and conditions played no role.
> According to this site the most unusual thing about me is that I read the terms and conditions before ticking the "accept" box.
I’d put forward the hypothesis that people who read the terms, and who are therefore concerned about privacy, are also less likely to be willing to agree to submit the data at the end.
I think many people commenting on the model making bad predictions are missing the point.
The speaker argues that even though models are known to be inaccurate, companies like Tinder or insurance companies might still use the model outputs, since they have nothing better.
So at some point in the future (or already today?) you could suffer from bad model predictions because you are "not normal enough" for the model to make good predictions, and might, for example, receive a wrong predicted life expectancy and higher insurance bills.
Insurance companies hire very smart actuaries... and thus currently use actuarial models. Actuarial models aren't perfect either. However, throwing them out to use one of these machine prediction models would almost certainly be a disaster for the insurance company. And there is currently a lot of competition for life insurance.
Using this sort of data to enrich and refine existing data, vs throwing current stuff all out in favor of this newer data... that's what I'd expect (enrichment vs replacement). I'm fairly confident insurance companies have areas in their models where they know there's stuff they don't know. If more data can enrich their models and provide better accuracy, why wouldn't they use it?
If one of these models is on average better, then they would gain an advantage by using it. The problem is for the "not normal enough" folks: it may be _harder_ to remedy an invalid classification, particularly if there are no fallbacks or workarounds. I was cued into this once by an ML book that gave an example of a fraud detection company using an actually worse algorithm, because when it gave false positives it was easier to understand and hence easier to manually override. But if it is less profitable to operate this way, and there is no regulation around it, people getting falsely classified may be out of luck. That's where the discussion around regulation needs to happen, I think.
This is the worst anti-surveillance argument. The last thing I want is to be accurately predicted. As far as I'm concerned, once the models are perfected and they can accurately predict everything you will do or say, things will be far worse.
This isn't an unsolvable problem though, since you can calculate the strength of model fit for a particular data point. You learn how to do this for linear models in Stats 101, so everyone who is paid to be a data scientist will no doubt understand it.
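For the linear case, here's a minimal sketch of what that per-point uncertainty looks like, using statsmodels on toy data (everything here is illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Toy data: y is roughly linear in x over the range [0, 10].
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + rng.normal(0.0, 1.0, 100)

model = sm.OLS(y, sm.add_constant(x)).fit()

# Per-point prediction intervals: a point far outside the training
# range (x = 50) gets a visibly wider interval than one inside (x = 5),
# i.e. the model "knows" its fit is weaker there.
new_points = sm.add_constant(np.array([5.0, 50.0]))
pred = model.get_prediction(new_points)
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])
```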
Unfortunately, the theory for linear models does not translate easily to the deep-learning models this demo is built on.
The "strength of model fit" becomes much more complicated and is an active field of deep learning research.
For the extra paranoid, I can confirm that it does work offline after you press the button and let it load. You can hide your face until it loads, and then cut the connection. Be sure to close the tab before you go online again.
Most people have such weird and illogical views on privacy. A website collecting face pictures, without anything else, is pretty useless. Walking into a retail store owned by a company that runs facial recognition across a huge range of its stores is a serious privacy issue.
It could be used for a scam, by a stalker, for social engineering, or several other evil ploys. There are so many possible bad actions happening today... On average they are unlikely, but not impossible. And who knows about the long run.
> If you open a website in a fresh browser context and let it use your camera, isn’t this about the same as walking down a street with CCTV cameras?
In my country, there are harsh regulations on public cameras. Private individuals aren't even allowed to film you in public beyond incidentally catching you in the background.
A site could use that to impersonate you in a mobile banking application through biometric authentication. That's getting pretty popular in my country (I think it's required by law or something).
I stumbled across this when it launched. Here's the talk by the artist that gives some background information:
https://youtu.be/bp23r-Gtdkk
Rather than talking about webcam permissions, I think we should talk about how much we use and rely on bad ML models. Dating apps might want to rate attractiveness, but we have no checks in place to see how we're being rated. Free, open-access models especially probably don't come with a thorough bias-and-limitations datasheet.
I'm also reasonably sure other factors influence these algorithms: hair style, framing, lighting, the clothes on your shoulders, angle, distortion by the camera lens, and, most importantly, similarity to high-scoring faces in the source data set.
Then you're missing the point the website makes about this and other algorithms like it being extremely unreliable. There are tons of biases in the training data, and not just the ethnic/cultural ones mentioned. For starters, you're comparing a crappy webcam to a model that in all likelihood was trained on people's best selfies.
The model tries to fit you into a very narrow niche. On top of that it will do so poorly.
Interestingly, if you click on the ToS you end up on a page explaining how it works (can't link it).
The beauty scoring model was found on GitHub (this or this one). The models that predict age, gender and facial expression/emotion are part of face-api.js, which forms the backbone of this project. Do note that its developer doesn't fully divulge which photos the models were trained on. Also, face-api.js is bad at detecting "Asian guys".
Apparently some dating apps rate their users with these sorts of algorithms. Maybe I'm living under a rock, but I did not know that was a thing.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 786641.
https://cordis.europa.eu/project/id/786641
Indeed - I was also surprised by this. One possibility, which I hope is true, is that the authors are funded for something else (some actual research). If they spent a few hours of their workday throwing this together, they might be obligated to cite their funding agency.
I know for a fact that I've acknowledged funding agencies on papers about topics that were at best extremely tangential to my grant.
A less flattering possibility is that they want to use the EU affiliation as a badge signalling respect for privacy, or something like that.
I had the pleasure of meeting him a while ago and seeing him give a talk about how companies gather data and the chilling effects this has on society.
If you're extra extra paranoid, use a disposable VM in Qubes OS.
https://www.qubes-os.org/doc/how-to-use-disposables/
If you open a website in a fresh browser context and let it use your camera, isn’t this about the same as walking down a street with CCTV cameras?
The worst-case scenario is far greater in magnitude than the best-case scenario.
It's just not worth any risk here.
It's more akin to those spying doorbells from Amazon and friends, which I personally try to avoid when I can.
The concentration of data, and the lack of any necessity for your face to be recorded in the first place, change the decision-making process significantly.
I get your point, but I am not sure how fresh our browser context is.
No thanks.
That video by the creator of this website is a must-watch for everybody reading this thread; it is not all fun and games.
Thanks for posting this.
I took a shower, shaved, brushed my teeth, put on a nice shirt, fixed my hair and offered a good angle of my best smile at the camera. I was now a 9.