You'd be surprised / scared / outraged if you knew how common this is. Any time you've been in a public place for the past few years, you've likely been watched, analysed and optimised for. Advertising in the physical world is just as scummy as its online equivalent.
I've lost track of how many times I've said this on HN:
We need HIPAA for all personal information, not just medical. We have an expectation of privacy in being "lost in the crowd" when we're out and about. Our physical & online whereabouts, who we're physically with, who we're communicating with, our personal contact information, and obviously payment information are private information that can be harmful if not kept private (false positives in automated legal systems, identity theft, and all the reasons we already have for securing medical information).
Anybody who chooses to hold such information must treat it with a high level of respect and privacy. Since nobody is doing so, there are no penalties for violating privacy, and this gets into fundamental rights and the proper functioning of society, it seems like a matter for federal law.
The cameras retailers use with their surveillance systems are coming with facial recognition built in now. [1]
And lots of retailers, banks, etc, are using systems that track people's visits across multiple locations. [2]
You'll see a lot of these systems sold as fraud/loss prevention solutions. The reason is that it's a relatively easy sell this way - customers can count how many thieves they've caught and easily determine the ROI they're getting on the system. Once the systems are in place, it's relatively easy to start using them for marketing-related purposes.
Not all uses of systems like these are necessarily unethical. Consider a case where you want to set up a rule like 'if the average lineup length at the checkouts exceeds 5 people, call backup cashiers'. The problem is that once you have something like this in place, it's very tempting for company execs to want to use the data for legal but less than ethical purposes.
[1] https://www.axis.com/ca/en/solutions-by-application/facial-r... [2] https://www.facefirst.com/solutions/face-recognition-predict...
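The 'backup cashiers' rule is trivial once the counts exist, which is part of why it's such an easy foot in the door. A minimal sketch in Python, assuming some upstream vision system already reports people-per-lane counts (all names here are hypothetical):

    from statistics import mean

    QUEUE_THRESHOLD = 5  # average people per open lane

    def should_call_backup(lane_counts: list[int]) -> bool:
        """True when the average checkout queue exceeds the threshold."""
        return mean(lane_counts) > QUEUE_THRESHOLD

    if should_call_backup([7, 6, 4]):
        print("Paging backup cashiers")  # stand-in for a real paging hook

Note the rule needs nothing but anonymous counts - no faces, no identities - which is exactly why the same camera feed quietly becoming a marketing tool later is the part worth worrying about.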
That's always going to be tempting, and the only real tractable solution is for society to have a larger conversation on the ethics so the law can catch up with it.
Note that some ethical consensus is key---without it, companies can just price "Well, some customers think image recognition is creepy" into the risk model and do it anyway. Compare privacy concerns---people talk big about their concerns over privacy, but in practice, we're still in a world where a survey-taker can get very personal information from a random individual at a mall by offering a free candy bar. Until and unless people arrive at a common consensus that their personal information---including their face---has value or they have a proprietary right to that information, even in public, there's no real tractable solution to this problem.
... because there's no real agreement that there's a problem to solve.
"The cameras retailers use with their surveillance systems are coming with facial recognition built in now. [1]"
Your source is the marketing material of an IP camera manufacturer.
We research that space and I can guarantee that less than 0.1% of IP cameras have facial recognition built in or running. Manufacturers like Axis, whom you cite, would love to sell such capabilities, but they are still very uncommon.
Just yesterday I was hearing news of how most of the retail giants and lots of smaller retail stores are going out of business due to competition from ecommerce. If that means the end of practices like this, then good riddance.
It also picked up the colors in my aloha shirt perfectly. (Anyone who knows me knows that I am to aloha shirts as Steve Jobs was to black turtlenecks.)
When I want to feel young and go shopping for shirts, now I know what to do!
I tried the demo. A little sad that it sees me as 12 years older than I am, and that I apparently always look angry and disgusted! I'm going to blame it on my glasses and bushy beard, and try to look at the bright side - apparently face scanning systems aren't quite good enough to get a read on me yet. (And try not to be too sad about looking like a grumpy old man.)
(Anyone else with glasses, a beard, or other non-typical facial features want to comment? I'm curious now how well their system handles these.)
Same experience here. It shows my age as 10-15 years more than actual. I tried to smile and it just filled up the "disgust" bar. A neutral expression shows a high amount of "sadness". I wear glasses too, and have a slight beard.
I don't think glasses and beard are non-typical facial features!
With a bald head, beard/mo, and reading glasses it doesn't detect a face. Without the glasses it estimates me as 7 years younger than I am - and reasonably high on anger and sadness...
I did well... it said I was an angry 33 y/o male. Well, I am male... I'm almost 50... and I didn't think smiling at the camera conveyed anger... but who am I to question our AI overlords ;-)
(edit)... on the other hand they probably were just trying to sell product so thought flattery was the right approach...
Says I'm 4 years older than I am (31 / 27) and have high levels of sadness.
Covered up my receding hairline a bit and it said 29. I reckon if I shaved I could get it down to about 22 since that's how old people usually think I am.
The real metric to judge its effectiveness is comparing its accuracy to an average human observer's responses. I doubt a human would do a lot better at estimating someone's age.
I was very unhappy to discover that shoes now often have RFID tags built into the soles. This + the anti-theft RFID readers already deployed by the entry of most stores can make it easy to assign unique IDs to shoppers.
Most anti-theft tags are not RFID and the gates are not full RFID readers. At least in Europe, the vast majority I see are still based on simple resonators that get disabled at checkout. Effectively, the gates only provide a yes/no signal and can't be used for tracking.
Applied Science has a good video on how they work: https://benkrasnow.blogspot.si/2015/11/how-anti-theft-tags-w...
Do you have a source for this? All I can find online is the occasional use of RFID for stock management or the odd marketing campaign. But nothing about customer tracking
Perhaps. On the other hand, it's something that good salespeople are doing internally already; there's an argument to be made that this is just automating yet another part of the customer service process.
(There's an old joke that sometimes shows up on HN about augmenting an automated bugtracker to snap a photo when a crash is detected or a bug is reported, so developers can be reminded that bugs tie down to real people who are actually sad / angry that the software failed them ;) ).
Why is this scummy exactly? If a salesperson was to try to sell to you in a store, they would take into account how you appear and act to tailor the sale. There's nothing wrong with that. Why is it suddenly bad if a machine does it?
Because when you talk to a salesperson you know you're being looked at (and reciprocally you're looking at them), and human memory is limited so it's unlikely they will retain any "data" about you when the contact is finished.
Here, instead, there is no indication that you're being watched, analyzed and kept recorded for indefinite amounts of time.
Well, your behavior and appearance isn't logged in some computer somewhere available for someone to look at whenever they want. Not to mention, face-to-face interaction means you know someone else is watching. This allows someone to do this without your knowledge.
I find it funny that the store doesn't trust their salespersons to make such a judgement on their own. Probably they hope to do analytics on what kind of people are visiting and when. Selling the data would only make sense if they are able to link it to an identity, I am not sure that they can legally do that.
Well, you never know into what dystopia you are heading...
Most humans today are prejudiced against nonorganic life due to not growing up interacting with anyone but other meatbags.
There's a huge double standard in place that makes it somehow wrong for computers to do what humans have been doing without objection for decades or millennia.
Germany is an outlier; their history with the Nazis makes the country unusually conservative about anything that could be abused for mass-surveillance purposes.
Not that this is a bad thing---being able to think differently like this is one of the positives of having countries!---but relative to the rest of the world, what Germany considers "surveillance" is unusual and sometimes surprising.
That demo is amusing. I did a Google image search for "N year old faces" for N = 5, 9, 10 and 14, and eliminated results where the accompanying text did not confirm the age, and then gave some of the remaining ones to it. It was always at least 10 years too old on its guess for these children. It got the gender right maybe 3/4 of the time.
I also tried it on a few internet porn images. It looks like it is definitely only relying on the face for determining gender, or it thinks that there are a lot of women with hairy flat chests and large penises...
No one's going to go to jail for their first offence of putting a camera in an advertising sign.
What's the worst that could happen? The local advertising regulator will order you to meet the regulatory standards or remove the cameras, but give you x number of weeks to act per unit installed.
I will be honest, I have no real problems with this. Then again I enjoyed some of the concepts for advertising shown in Minority Report which did feature ads which could identify you.
The idea of collecting who looked at your display is invaluable. It would be beneficial for both government and ad agencies. The ad agency case is obvious, but government could learn whether displays present information people want, and whether it was presented in a manner that catches their attention. The negative aspects of government use could be limited through privacy laws and such.
Presumably most cashiers would just optimise to pressing the same button on every transaction, since doing it "right" makes no difference observable to them.
In a few stores here in Australia, I am asked for my post code (or country of residence if no post code) by the cashier when ringing up a sale. Predominantly at tourist/tour-related points of sale, but I've also had it at electronics and white goods stores.
No idea if the operator is also recording gender and perceived age group etc., but I do know that on most occasions, you can opt not to answer the post code question.
When you go to those weekend open inspections in Sydney, you are almost guaranteed to be asked for your postcode. They record that as well.
I actually did some experiments - for different properties in roughly the same area/price range, I told different real estate agents different postcodes. It is beyond reasonable doubt that the code you tell them plays a huge role in how they rank you as a potential buyer. When you tell them a random north shore post code, you are guaranteed to receive a nice & friendly follow-up on the coming Monday; however, if you tell them that you live in the west (while mostly inspecting north shore properties), they will smile and immediately end the whole conversation.
The sample size here is ~50, which I believe is big enough to draw some reasonable conclusions.
I'm not sure if it's true, but a friend once told me the reason a store asked for your zip code was to see if they had a large audience coming from a certain area. This let them know other locations to possibly open other stores.
By cross-referencing your name (from your credit/debit card) with your zip/post-code, stores are able to determine specifically who you are with greater probability than without the zip/post-code.
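To make that concrete, here's a toy sketch of the join involved - all records invented; real data brokers work against far larger purchased databases:

    # Toy re-identification: surname (from the card) x zip (asked at the till).
    marketing_db = [
        {"name": "J. Smith", "zip": "02139", "household_id": 481},
        {"name": "J. Smith", "zip": "94107", "household_id": 962},
    ]

    def candidates(card_name: str, zip_code: str) -> list[dict]:
        """Zip narrows many same-name records down to (often) one household."""
        return [r for r in marketing_db
                if r["name"] == card_name and r["zip"] == zip_code]

    print(candidates("J. Smith", "02139"))  # usually a single likely match

A name alone is ambiguous across a whole country; a name plus zip usually isn't, which is the whole value of that one innocent-sounding question.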
I volunteer at the Boston Museum of Science on Sundays and we also track how many people we interact with at the various activities. We log by group, so a log might read "1 man, 1 woman, 2 boys, 1 girl (family)" or "3 women, 10 girls, 12 boys (school group)".
It's really handy to see how many people the activities attract, and who they appeal to most. You're tracked everywhere!
You could distinguish people who are married or are parents (with false negatives) by recording people who are at the till with their spouse or children. (Send out demographic-research cards once, to a few of the same stores you've collected this info from, to derive a normalization factor that will make such collected observations useful from then on.)
You could make a note of a person's seeming affect—positive/negative/neutral emotion.
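The normalization idea in the parenthetical above works out to simple arithmetic. A sketch with invented numbers: the till only flags a parent when the kids happen to be along, and the one-off survey tells you how often that is:

    # Invented numbers for illustration.
    observed_with_kids = 120     # till visits flagged as "accompanied by children"
    total_observed = 1000
    survey_parent_rate = 0.40    # ground truth from the demographic-research cards

    # A parent shows up with kids in ~30% of their visits:
    detection_rate = (observed_with_kids / total_observed) / survey_parent_rate

    def estimated_parent_share(flagged: int, total: int) -> float:
        """Scale raw till counts into an estimate of the true parent share."""
        return (flagged / total) / detection_rate

    print(estimated_parent_share(90, 600))  # -> ~0.5

Run the calibration once, and every later observation of the same kind becomes usable.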
During 2010-2012, I was part of a startup called Clownfish Media. We basically created something very similar to this and got scary accurate results then. Given how accessible computer vision has become, the image in the tweet comes as no surprise to me.
Best part - we got a first gen raspberry pi to crunch all the data locally at 2-5fps. Gender, age group (child, youth, teen, young adult, middle age, senior), and approximate ethnicity were all recorded and logged. Everyone had a unique profile and could track people between cameras and days (underlying facial features do not change).
Next time you look at digital signage, just be aware that it is probably looking back at you.
For me I knew how our data was anonymized. So while our system would be able to say "I have seen person 1234 at locations 4, 7, 9, 11 on dates x, y, z", we had absolutely no way of knowing who 1234 was or anything about them; even the unique identifier was just a hash.
Obviously it depends on how much data you collect/store, personally I don't think the things shown in OP are all that onerous (sex, age group, gender, rage, time spent looking at ad).
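For what it's worth, "the unique identifier was just a hash" can look roughly like this - a sketch, not Clownfish's actual pipeline, and it assumes some embedding model that maps a face to a reasonably stable feature vector:

    import hashlib

    def anon_id(face_embedding: list[float], precision: int = 1) -> str:
        """Hash a quantized face embedding into an opaque ID.

        Quantizing first means small frame-to-frame jitter still maps
        to the same ID; no raw image or reversible data is kept.
        """
        quantized = tuple(round(x, precision) for x in face_embedding)
        return hashlib.sha256(repr(quantized).encode()).hexdigest()[:12]

    # Log "person X seen at location 7" without knowing who X is:
    sighting = {"person": anon_id([0.12, -0.98, 0.33]), "location": 7}
    print(sighting)

(In practice you'd cluster embeddings rather than hash them, since quantization boundaries split identities, but the privacy property is the same: the ID can't be reversed into a face.)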
I work on digital signage. Our product isn't using facial expression recognition yet, but it has been asked for and will eventually be part of the system.
What's the difference between this and an anonymised dataset? No PII is tracked, it's just looking at you and calculating what emotion you're likely feeling to show more targeted advertising.
I mean, I'm personally against it but we've got to prove a higher and higher ROI to justify the cost of digital signage, this leads to just that.
"Hi. I am the original taker of the photo. There is a screen that normally shows peppes pizza advertisements in front of peppes pizza in Oslo S. The advertisements had crashed revealing what was running underneath the ads. As I approached the screen to take a picture, the screen began scrolling with my generic information - That I am young male (sorry my profile picture was misleading, not a woman), wearing glasses, where I was looking, and if I was smiling and how much I was smiling. The intention behind my original post on facebook was merely to point out that people may not know that these sort of demographics are being collected about them merely by approaching and looking at an advertisement. the camera was not, at a glance, evident. It was merely meant as informational, maybe to point out what we all know or suspect anyway, but just to put it out in the open. I believe the only intent behind the data collected is engagement and demographic statistics for better targeted advertisements."
It is still a BIG ethical issue for some people. Myself, I see this as just the natural progression we are headed toward. If we don't have rules about this kind of technology, it will very much be "Minority Report" in a decade.
To be honest, I am fairly surprised at the reaction here on HN. It's not really surprising to see such a system; it would be more surprising if such systems did not exist, because offline advertising is a huge business and the technology is here. This goes together with conversion tracking at physical shops, etc.
I am equally surprised by the comments asking how come engineers implement such systems, how they find it ethical, etc. I'm sorry, but that sounds just a bit out of touch with the real world, or at least outside of the HN bubble. Given the things that money motivates people to do, this is probably one of the least unethical.
I am not judging that this is right or wrong, I am simply stating the fact that nothing about this should be surprising. Yes, this is slightly sad, but that's simply the reality of technological advancement. It's not really possible to expect the rest of the world to use the technology only for things considered 'right', etc.
Well, nothing here is surprising beyond maybe the scale of things (if a random pizza joint now uses facial recognition in ads, who else is using them?). But those things still need to be called out and opposed, because peer pressure is an important part of morality in society. People are social animals, and are less likely to do things that are disliked by their friends.
Looking at things from a little distance, the whole thing is abhorrent, and paints a really sad state of our society. I wrote this many times, and will keep writing it: if you did the same things personally to your friend that people in advertising industry do to everyone, you'd most likely get punched in the face. And yet somehow marketing became a respectable occupation.
There isn't really much consensus---even on HN---that passive demographic data collection is a bad thing alone. People claim it is, and I believe they feel it is---then they turn around and do things that compromise their stated beliefs because it's convenient.
I liken it to the gap between the rhetoric around open source and free software and the reality that Windows and Mac OS make up approximately 90% of OS marketshare. You can believe what you want to believe, but from a business standpoint you'd be putting yourself at a disadvantage if you structure your business requiring FOSS operating systems to climb to even 25% of marketshare; there's a similar situation, probably, for customer data tracking and advertising preference tracking.
Lots of Black Mirror is commentary on where we are now, not where we're going, even if the episode itself is set in future (15 Million Merits is an easy example).
I think such a reaction is just because the article is less "techy" than it could be, and more about the "moral" aspect.
OTOH, what is so interesting about simple face recognition, innit? That future became the past quite fast; meanwhile, the human rights angle never gets old. (smile.jpg)
As someone working on a similar project (specifically, emotion recognition) I'm highly interested to hear what such a product should look like to not be considered unethical. So far from the comments I see that:
- it should be made clear that you are being analyzed e.g. by big yellow sticker near the camera
- no raw data should be stored
- it should be used to collect statistics, not identify individuals (?)
Is that sufficient to consider such software fair use? What else would you add to the list to make it reasonable?
The ethics are simple: If you don't get opt-in consent, many subjects are going to feel violated. Even if you assure them you anonymize the data.
It's not enough to put a warning next to the camera because you've already captured them at that point and it's too late. If anywhere it would need to be at the entrance to the store.
If a store has a warning at the door that this happens inside, it's good because now I can avoid getting inside the store and silently hate and boycott the brand.
If a store has a warning label on the device engaging in this, it's bad because it's too late for not entering the store. I'm gonna complain right now at the store manager, maybe call the cops or sue. I'll be vocal about actively hating the store, the brand, the manager, the employees.
If I went to a store engaging in this without telling and I later learn about it, then I'm calling Keyser Söze and it's pitchforks and beheading time.
I suppose it will take a couple more generations of brainwashing to have the population ready to accept this kind of highly invasive technology. IIRC, about 10 years ago the Big Brother Awards were given to a French industry group for their blue book describing how to condition a population to accept surveillance and control technology over a few generations.
It's actually pretty simple - don't use it on people.
Advertising? No. Sales? Definitely no.
Augmenting that single-player video game so that it adjusts content depending on emotions and gaze of the player? Ok. Better if the player is explicitly told the game will track their reactions though.
EDIT:
Also, another angle. Even for advertisers / "sales optimization", I'd forgive you if that was a local, on-site system. But if it's meant as a SaaS, with deployments connected to vendor's butt, then I am gonna actively try to screw with it if I learn there's one installed anywhere I frequent. Hopefully new EU laws will curb that, though.
The only ethical possibility in my view is for it to not exist. I don't like having my emotions manipulated to make me buy more stuff, regardless of whether I am anonymized or not. But then again, I think similarly of a lot of non-targeted advertising; the recognition just adds a whole new level of disgusting.
What about collecting statistics to make better decisions? Let's say you go to your favorite jeans store but find the current collection disgusting. Does it sound OK to you if some sort of system analyzed your attitude toward the product to improve it in later versions?
I've watched the same hysteria & concerns play out for all kinds of privacy-invading systems. Social Security Numbers, credit cards, computer IDs, camera GPS, search queries, and piles of other tech all start their popularization with "OMG evil people can do evil things with that data to hurt you!" Save for a few holdouts (usually much older folks), society at large has completely accepted all that tech as normal. It just takes about a decade of the convenience overwhelming the fear. I despise SSNs, but cutting my taxes by $1500/yr (child tax credit) is motivating; credit cards suck for a zillion reasons, but swipe-and-done is so damn convenient; no question Google has an impressive model of me, but those search results are enormously useful; etc.
I have a question: do you do trials in a controlled environment, where you actually have proper feedback and a distinct comparison between self-described state and machine analysis? Because in my opinion, systems like these are a modern-day version of astrology (at least when they are based only on vision and not things like fMRI imaging or proper psychological analysis). I know seriously depressed people who always have a smile on their face (maybe a social coping mechanism), as well as "angry"-looking coworkers who are in a very good mood most of the time. It is very easy to misinterpret a person's mood when the only "interaction" is looking at them and analyzing their facial features.
When these things are used outside a controlled environment, things get even more complicated: unusual beards, squinting because of bright sunshine, reflective glasses, etc.
1. Accurate collection of facial features. Illumination, occlusions, head rotation, etc. may seriously affect accuracy, but this is exactly our main focus right now. We are at the very start of the process, yet early experiments and some recent papers show that it should be doable.
2. Correlation between real and detected emotional state. At the moment we concentrate on the 6 basic emotions and don't detect less common expressions like depression behind a smiling face. This topic is definitely interesting and I'm pretty sure it's possible to implement given enough training data, but right now we are concentrating on other things.
The fact that you are already working on this says something about your willingness to do something distasteful to earn a paycheck. A slightly bigger paycheck would probably mean you would relax your morals even further. Even if your product starts out with stickers and no logging, I bet it doesn't stay that way for long. Not if the paycheck can be bigger.
To me the only way this could be ethical is for the project to be limited to private space (a lab, a room in your house). No data is to be recorded, ever; it runs on an airgapped computer; it doesn't try to identify people; and everyone subjected to it has to be fully aware of what this is about and the implications it can have.
The opinion of most people here is that facial recognition technology is for the most part creepy if used in a commercial setting. Mine is slightly different. I think it's fine if you want to show me a different advertisement or sign based on an interpretation of my expression. I also think it's ok if you track my position within a mall and see which shops I visit and when. I would draw the line at attaching personally identifiable information to that data such as a name or a photo of my face. Anyone who decides to do that is probably going to cause harm/inconvenience to me (I don't want junk email from shops I happened to visit but didn't buy in).
I should also state that I think the first use of my data is ultimately unprofitable. Will the extra cents you make by advertising cinnabon to depressed looking people or hairdressers to long haired people really offset the cost of developing such a system? If applied to a broad population any customisation effects will be marginal.
I also believe that the non-anonymous tracking system is much more likely to produce value for companies and it would be very tempting when gathering anonymous data to cross reference with actual individual information. My concern about any tracking system is that by the motivation of profit it could easily shift from an ethical to non ethical space.
I'm kind of surprised that it didn't have some sort of Data Protection warning near it already, but I'm not sure if the EU data protection directive covers Norway as well.
We have pretty strong laws regarding this. It has generated several news articles over the past week, and the Norwegian Data Protection Authority has already commented and said they don't believe this is legal. Stickers were added after the initial discovery.
I'm curious what's the boundary between ethical and unethical. People constantly analyze each other's mood and it's perceived positively. But doing the same thing massively using automated tools is often considered inappropriate. So is it because of using technology, massiveness, purposes? I hope there's a way to make such things both efficient and not unethical.
Uh, I got sidetracked and brain hammered by the devolving discussion on that Twitter thread, thus couldn't find the context for this pizza shop kiosk - Is it a customer service portal that attempts to identify the person in front of it to try and match up with an order, or a plain advertising display that is trying to capture the demographics of the people who happen to stop in front of it and look at it?
To summarize: it's an experimental project, and there is so far only one such screen, at the train station. If someone stands within 5 meters of the screen, it will try to classify their age and gender, show a targeted ad, and record how long they looked at it. The raw images are not saved.
The screen uses software called Kairos to analyze faces. It can estimate age, gender, and whether you are "white, black, hispanic, asian, or other".
According to the marketing manager at Peppe's Pizza, he thought there would be a label on the screen saying what's going on, but in fact there is just a small sticker on the back of it, which is quite hard to see.
The company making the screen, ProtoTV, says that people should be okay with this because ads on the internet are even more targeted. A government representative says that the system might violate laws about surveillance cameras.
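Mechanically, the targeting described is just a lookup once the classifier has produced its guesses. A hypothetical sketch - the article doesn't detail ProtoTV's actual logic, and the campaign names here are invented:

    # Hypothetical (gender, age bucket) -> campaign table; all names invented.
    AD_TABLE = {
        ("male", "young"): "pizza_student_deal.mp4",
        ("female", "young"): "pizza_student_deal.mp4",
        ("male", "adult"): "pizza_family_size.mp4",
        ("female", "adult"): "pizza_family_size.mp4",
    }
    DEFAULT_AD = "pizza_generic.mp4"

    def pick_ad(estimated_age: int, estimated_gender: str) -> str:
        bucket = "young" if estimated_age < 30 else "adult"
        return AD_TABLE.get((estimated_gender, bucket), DEFAULT_AD)

    print(pick_ad(24, "male"))  # -> pizza_student_deal.mp4

The dwell-time logging ("how long they looked at it") is then just a timestamped record per ad impression - exactly the engagement metric online ads already collect.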
So what am I supposed to do if I find this invading my privacy? Not walk within 10m of such a billboard? It's not like there's any other active way to opt out of this.
I suppose a banner saying "you are being tracked" on top of such a billboard could have quite an interesting effect. Come to think of it, I've seen "Smile, you're being recorded!" in some shops.
You'd have a point if this was a general statement, but in the case of automated facial recognition it's almost universally despised across the world.
If you don't share this position, it could be that you are younger, or that you have been subjected to the industry conditioning that has been working to make intrusive surveillance technology acceptable for at least 15+ years, afaik.
I could totally get interested in building this, and have a half-decent working prototype - all before I'd even considered that someone else might use it for evil...
The technology interests me and I would gladly work on a system that implemented such features.
As an ethical programmer, I'd be sure to incorporate security, anonymization, and be able to draw the line so that I can help businesses make more money (since that's what they pay me for), and advance technology at the same time.
This is only unethical when it's used in a system that infers more information beyond general demographics (which, BTW, nearly everywhere collects), and makes that data vulnerable to interception outside of the pizza company.
It saddens me that people are complaining about this when people are doing far worse in our profession, like extorting businesses for money through ransomware, hacking personal information and bank accounts, or creating robots that kill people. But no, people are worried that while walking around in a public place a picture is taken of them and an ad is changed to target them.
Check out the video here http://sightcorp.com/ for an ultra creepy overview. You can even try their live demo: https://face-api.sightcorp.com/demo_basic/.
I feel a little bad about calling out one API provider specifically, so here's a bunch more:
https://www.kairos.com/
https://skybiometry.com/
https://azure.microsoft.com/en-us/services/cognitive-service...
http://www.affectiva.com/
http://www.crowdemotion.co.uk/
http://emovu.com/e/
https://www.faceplusplus.com/
Face tracking, emotional analytics, and vision-based demographic analysis are a pretty huge industry. There's an entire spectrum of uses for this tech, from the altruistic (psychology labs, human factors research) to the, well, not.
I thought Minority Report was still a few years away...
Thanks for the links, this stuff is both fascinating and scary.
https://aws.amazon.com/rekognition/
Sentiment analysis for everyone!
This is a wonderful app. I will use it every day!
But it also scores me high for anger and sadness, despite (what I thought to be!) a rather neutral expression. Perhaps it knows more than we think :)
Thinks he is 45 years old. He is 64! Not calibrated for the superior Russian genetics.
[1] https://pbs.twimg.com/media/CuV5wciUAAA0aBz.jpg
The second time it thought I was 28, which increased my happiness even more.
https://how-old.net/
My partner tried it and it took 10 years off her age, and found an angry 31 year old man hiding in the folds of her clothing!
Pulled a disgusted face and it said 47. Hmm.
http://lifehacker.com/how-retail-stores-track-you-using-your...
I tried variations of the standard expressions and pulled off sad, disgust, anger quite easily.
I knew binge-watching Lie to Me before my psychology midterm would come in handy at some point!
Someone recently thought I was 30. People aren't any better than computers.
I am not 91.
"Understand how your customers feel. Detect and measure facial expressions like happiness, surprise, sadness, disgust, anger and fear."
Creepy, indeed.
I'd rather not give them my facial image so they can optimize for me.
It's just creepy.
The salesperson doesn't know in what shops you have been before.
The salesperson might also not know you talked to his colleague the day before.
This is about trust and privacy. You can't trust what they do with your data.
I don't believe this. This kind of advertisement in public would be illegal in Germany, as it is mass surveillance.
https://gitlab.com/crankylinuxuser/uWho
Runs on CPU only, realtime 1280x720 @ 15 fps.
Is it creepy? Sure. But anyone can run it. I was looking at a rewrite to work in CLI with a web interface instead. But the core loop is the magic part that makes everything work nicely.
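The "core loop" of a tool like this really is small. A minimal sketch - not necessarily uWho's actual code - using OpenCV's bundled Haar cascade for the detection step:

    import cv2

    # Webcam face-detection loop; the cascade file ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

Recognition (matching a detected face against previously seen ones) adds an embedding-and-compare step on top, but detection alone runs in real time on a CPU, which is the point: this is commodity tech.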
Maybe I should try to sell it...
Microsoft's service is much more consistent and accurate (I've tested the same images...): https://azure.microsoft.com/en-us/services/cognitive-service...
"WorkSmart can track workers' keystroke activity and take webcam images to ensure they're doing their jobs."
http://www.oregonlive.com/silicon-forest/index.ssf/2017/05/j...
The cash register had a matrix of buttons:
M: [0-9][10-19][20-29][30-54][55+]
F: [0-9][10-19][20-29][30-54][55+]
They'd just push a button as they punched in your order.
Edit: Here is what the buttons look like. Gender and age. https://image.slidesharecdn.com/hvc-c-android-prototype20141...
Facial recognition is a unique identifier but cashiers have access to almost nothing they can record... [Edit] What was it?
Edit: clarified that I am asking about the history here, what information was manually collected by cashiers as parent stated
The tweeted advertisement system also looks like it's only recording demographics, not individual personal IDs.
"Hi. I am the original taker of the photo. There is a screen that normally shows peppes pizza advertisements in front of peppes pizza in Oslo S. The advertisements had crashed revealing what was running underneath the ads. As I approached the screen to take a picture, the screen began scrolling with my generic information - That I am young male (sorry my profile picture was misleading, not a woman), wearing glasses, where I was looking, and if I was smiling and how much I was smiling. The intention behind my original post on facebook was merely to point out that people may not know that these sort of demographics are being collected about them merely by approaching and looking at an advertisement. the camera was not, at a glance, evident. It was merely meant as informational, maybe to point out what we all know or suspect anyway, but just to put it out in the open. I believe the only intent behind the data collected is engagement and demographic statistics for better targeted advertisements."
Source: https://www.reddit.com/r/norge/comments/67jox4/denne_kr%C3%A...
Deleted Comment
It feels straight out of Black Mirror.
But guess what? Customers HATE this stuff. The backlash and lost business is not worth it. See: http://abc7news.com/business/philz-to-stop-tracking-customer...
Facial recognition is inherently creepy.
For it to not be used without opt-in consent. And to not hold any other benefits/privileges/incentives behind the wall of opting in.
http://www.dinside.no/okonomi/reklameskilt-ser-hvem-du-er/67...
Edit: there is more information in the article. Not going to read it, though.
(If the "Am" part of the name seems familiar it's because it's Alan Sugar's son)
They installed one at a local petrol station so they lost my business. Perhaps I should be more vocal about it.
I'm not saying there isn't an absolute right and wrong, but I certainly don't find this as abhorrent as most people here, apparently.
And from the same guy:
"Don’t ask how you’re going to pay your rent working ethically. Ask why you’re open to behaving unethically in the first place." https://deardesignstudent.com/ethics-and-paying-rent-86e972c...