I do think it's completely unacceptable if Meta makes the glasses unable to be used for routine functions without (a) other humans reviewing your private content and (b) AI training on your content. There needs to be total transparency to people when this is happening - these are absolutes.
But I'm a bit confused by the article because it describes things that seem really unlikely given how the glasses work. They shine a bright light whenever recording. Are people really going into bathrooms, having sex, sharing rooms with people undressed while this light is on? Or is this deliberate tampering, malfunctioning, or Meta capturing footage without activating the light (hard to believe even Meta would do this intentionally).
Agreed. I'm confused trying to map what the article is saying to what's happening at a technical level. For example, obviously it's not doing on-device inference, so it's unsurprising that it won't work without a network connection, but this is totally distinct from your recordings ending up getting labeled. It talks about being able to opt into that, which is one thing. But I guess I don't understand if you don't opt in, if the data still gets sent out for labeling.
I feel like this article is either a bombshell, or totally confused.
My reading was that as soon as you enable the "AI" functionality you are opted into having your recordings labeled.
"But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off."
>> but this is totally distinct from your recordings ending up getting labeled
The distinction here occurs wherever the data is processed, and it sounds as if the difference between using your video for labeling versus privately processing it through an AI is deliberately confusing and obscured to the user by the way the terms of service are written. Once the video is uploaded, which is necessary for the basic function, it's unclear how or whether it can be separated from other streams that do go through labeling. This confusion also seems to be an intentional dark pattern.
I do believe people do all of that with the light on. And then there are also people who tamper with the device to deactivate the light. You can find guides for that online.
The funny thing about the light is that it doesn't even matter when surreptitious recording devices are trivial to make these days. You can never know when you're being recorded, even when no one is wearing glasses.
This is historically what we've had consumer protection regulations for. When they put lead, radium, asbestos, arsenic, or other poisons in consumer products the regulators step in and put a stop to it. It should be pretty clear at this point these consumer tech companies are no different--they're just producing poison. And it's not like there weren't signs, it's been like this for damn near a quarter century.
> (hard to believe even Meta would do this intentionally)
Are you referring to the same company that runs Facebook, WhatsApp and Instagram? Meta has, for well over a decade, been caught multiple times (as recently as two years ago, for the third time that I know of) burrowing into areas of phones that their apps weren't directly given access to. Android phones have been highly susceptible to this kind of snooping.
I'm going to guess that people are intentionally recording themselves having sex, assuming that they are creating a local recording that is not sent to Meta. Does the light mean "camera is recording" or "cloud services are involved"?
The article isn't clear on this point, I believe because Meta isn't clear on this themselves. Other bits of this piece highlight third parties reviewing the responses of the AI assistant; it's possible that people are recording and some sound they make triggers the AI assistant which, in turn, leads to the video being reviewed.
OTOH, Meta could just be desperate for training content and they're just slurping up all recordings by people who've opted into the AI function. It would be great for them to clarify how this works.
I am very much confused. People recorded sex way before the meta-spy-glasses.
I mean, not as if I would ever visit such sites, right ... but video recordings can be made in numerous ways, on small devices too. Smartphones are fairly small, after all.
If you're not paying a subscription for Meta to AI-process your audio and video, then they're going to get value out of it some other way. It's just like any other "free" digital service.
It is entirely possible that all "camera is on" lights are software-controlled, just like the camera itself, and independently of it. They are meant to tell the user that they are using the camera. They are not meant to tell anyone that the owner of the device's back end is using the camera.
It is also completely unacceptable to capture the public space without oversight and consent from third parties. If glasses users are fine with doing that to others, why wouldn't they accept it for themselves?
This is a very important window into how the industry, by and large, views users and the concept of privacy. It's not merely authoritarian and predatory, to them users are subhuman.
The worst part isn't even that quote, it's that nothing has structurally changed one bit since then. The business model still requires users as the product. Glasses that upload video to Meta's servers are the entire point.
This was one of the first hits on Kagi. 404 has a similar article (I think) but it's behind a paywall.
"The demand for this ‘Ray-Ban hack’ has been steadily increasing, with the hobbyist’s waiting list growing longer by the day. This demonstrates a clear desire among Ray-Ban owners to exercise more control over their privacy and mitigate concerns about unknowingly recording others."
If anyone were to record even when the light is not shining, it would be Meta. This would not surprise me at all, they have everything to win and nothing to lose, no country would fine them anything remotely relevant compared to the value of the data they'd be getting.
Presumably the 'drive-by' downvotes are coming from the ad-tech industry who would prefer the population to simply bend over and grab ankles with both hands the moment they request our personal data?
I'll confess that I like my Meta Ray Ban glasses: I love using them to listen to podcasts at the pool/beach, while riding my bike, and it's cool to snap a quick picture of my kids without pulling out my phone.
I wish this article (or Meta) were a bit clearer about the specific connection between the device settings and use and when humans get access to the images.
My settings are:
- [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.
- [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.
I'm not sure whether my settings would prevent my media from being used as described in the article.
Also, it's not clear which data is being used for training:
- random photos / videos taken
- only use of "Meta AI" (e.g., "Hey Meta, can you translate this sign")
As much as I've liked my Meta Ray Bans, I'm going to need clarity here before I continue using them.
TBH, if it were only use of Meta AI, I'd "get it" but probably turn that feature off (I barely use it as-is).
I don't understand how a parent can be OK with nonconsensually uploading pictures of their children's real faces to an ad-driven AI company famous for abusing people's data and manipulating children on its platforms.
It is because they don't understand the scope of the problem. People are inclined to think that other people who have treated them kindly mean well also in the long term.
Probably the majority of the planet share family photos on facebook, messenger, whatsapp or instagram - all meta properties. On the whole nothing much bad happens.
Those settings are IMO likely not doing what you think they are. Or might be doing strictly, precisely what they say they are.
[OFF] "Share data about your Meta devices to help improve Meta products." doesn't preclude sharing data for other purposes.
[OFF] "Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage." doesn't preclude sending them to Meta's cloud for permanent storage.
Last year they pushed out an update stating that if any "Meta AI" feature is left on, they can access image data for training.
I turned the AI off and used them as headphones and taking videos while biking. After a couple rides, I couldn’t bring myself to put them on because people started to recognize them and I realized I didn’t want to be associated with them (people are right to assume Meta has access to what they see).
Meta Ray Bans, if kept simple, could have been a great product. They ruined them.
After all that has been revealed to us over the past 15 years, it is really disheartening to see people still thinking that setting a few toggles will prevent these companies from abusing them.
Just continues to prove that if you solve a bit of inconvenience for them, people will let you exploit them and their families.
I'll confess I look at Meta Glasses the same as Google Glasses: A big sign saying "punch me in the face". If you enter some premises I'm in while wearing those, I'm either leaving or they will have to come off your face somehow.
Wearing these glasses is just as obnoxious as walking around putting your phone in people's faces while recording.
If it says "punch me in the face" then you have bigger problems. And after you get recorded showing what it says to you, those problems might be growing. Tell them what you think, but don't forget "Pretty, I feel pretty, ..." - just in case.
I think the most likely case is: this company is labeling images from meta AI use from people who opted-in to share their data with Meta.
It's certainly possible that it's something much more surprising / sinister, but there is a fairly logical combination of settings that I could see a company could argue lets them use the data for training.
I'm also very certain that few users with these settings would expect the images to be shown to actual people, so I'm not defending Meta.
A simple on/off toggle isn't going to prevent them from using your data. If your data is in their server then it's going to be used one way or another. Whether in an anonymous way or shipped to where there are no privacy laws.
Your cloud media setting is off until the company arbitrarily turns it on for you. Seems crazy now; it won't be ten years from now. They're just boiling the frog all the way.
The core issue here is that "to provide the service" in privacy policies has become a catch-all that can justify almost anything.
I work on web products in the EU and we had to redesign our entire data pipeline for GDPR compliance. The key principle is "data minimization" — you collect only what's strictly necessary and delete it after processing.
Meta's approach seems to be the opposite: collect everything, process in the cloud, and use vague language to keep the door open for secondary uses like labeling and training.
The fact that turning off "Cloud media" might not actually prevent your data from being sent to Meta's servers for inference is a textbook dark pattern. Users see a toggle and assume they have control. In practice, the toggle only controls one specific processing path while others remain active.
Under GDPR, this would likely fail the "informed consent" test — consent must be specific, unambiguous, and freely given. But enforcement is slow and fines are just a cost of doing business at Meta's scale.
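To make "data minimization" concrete, here is a minimal sketch of the principle (all field names are hypothetical, not any real pipeline): whitelist only the fields strictly needed to answer the request, and discard the payload as soon as processing completes.

```python
# Fields strictly required to answer the request (hypothetical example).
ALLOWED_FIELDS = {"query_text", "language"}

def minimize(record: dict) -> dict:
    """Keep only whitelisted fields; everything else never leaves the device."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def process(record: dict) -> str:
    """Run processing on the minimized payload, then delete it."""
    minimized = minimize(record)
    try:
        # ... inference would happen here, on the minimized payload only ...
        return f"handled: {minimized['query_text']}"
    finally:
        minimized.clear()  # delete after processing; nothing is retained

print(process({"query_text": "translate this sign",
               "language": "sv",
               "location": "59.33,18.07",  # stripped: not needed for the task
               "device_id": "abc123"}))    # stripped: not needed for the task
```

The contrast with a collect-everything design is that the decision about what may be used happens before upload, not in a terms-of-service clause after the fact.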
I don't trust the Zuck at all, and I'm not naive about any of this. I'm sure the words used above are watertight in a court of law, but I bet you there are shenanigans in places where the light doesn't reach.
You might enjoy these conveniences now, but this is just the pre-enshittification stage. Soon enough, to take advantage of those features you will have advertisements integrated into your view, and your data will be scraped for whatever it's worth to Meta.
Ghanaian authorities are seeking the arrest of a Russian national who was using the glasses to record himself picking up, and sleeping with, women in Ghana and Kenya. He uploaded the videos to social media and Telegram. It was quite the story on African tech Twitter last month.
Don't you need to obtain consent before filming random people in the street? I already feel uncomfortable when someone takes a photo in public and I happen to be in it, but this type of device takes things to an entirely different level. With smart glasses, there's no visible cue that you're being recorded. No phone held up, no camera in sight. I'm questioning the legality of this in Europe, where privacy laws tend to be stricter. In the meantime, should I just assume that anyone wearing these glasses is always filming? And would I be within my rights to ask them to stop the moment I notice them?
Note that there is a difference between being allowed to take a photograph, and being allowed to share it. Unless you're threatening or harassing, you're mostly free to photograph as you want. But you might not be allowed to publish it.
Pretty much the same in Finland. You are allowed to film/photograph as much as you want in a public place, but publishing the material might be against the law depending on the contents. Particularly the law regarding "dissemination of information that violates privacy". It's fine to publish a photo of people walking on the street, but you'll probably get into trouble for uploading an arrest to YouTube where the suspect is recognizable.
As a general rule you can record. But sending it to Meta AI would be an AVG (GDPR) violation in the Netherlands if no consent is given, as you are sharing it with a third party. There is also the difference between recording a public place with people in the background and clearly recording someone: the first is fine, the second is not (without consent). You also cannot disable the recording light; doing so would open you up to libel and decency lawsuits (and libel and public decency can be criminal offenses, not just misdemeanors).
So if you take a video of specific people looking at flowers at the Keukenhof, you would have to ask their permission if you are recording them primarily and publish it, but recording for yourself is fine as it is clearly a public space. If you take a picture of all the flowers and catch some people in the background, you are fine. If you do it in a place where people do not expect it, they can ask you to remove the video and you have to comply (e.g. in a restaurant while you are eating, as one does not expect to be recorded there).
There are some exceptions for journalism, law enforcement and public good. I doubt strongly any Meta (AI) post would classify for that.
There is also the small caveat that if you can avoid recording innocent bystanders you must. E.g. putting up a doorbell camera and pointing it to the street instead of your door is bad as it's easily avoidable by putting it top down.
An important distinction is that you are allowed to film/photograph when you are actively doing it (so the glasses do belong in that category). You're not allowed to set up a camera to autonomously film/photograph outside of your own private property.
Besides that there is the issue of publishing said footage, as others point out.
US here. Definitely more permissive than any EU nation. Public space typically means free for all in terms of recording[1]. The incident I link is relevant as we are bound to see a whole new bunch of 'content creators' going for various new ways to engage the public.
That would make taking pictures impossible, so no, such a requirement cannot be reasonably(*) codified into law.

(*) By reasonably I mean in a way that can actually be followed. Of course there are lots of impossible laws created by politicians to cater to their fan base.
These glasses have a light when recording. You can buy many hidden-recording glasses that are much more discreet, with no light. Are you also paranoid when someone has their smartphone in their shirt pocket with the camera exposed?
On French trains, you can sit opposite someone else. I feel really uncomfortable when this person scrolls on their phone for hours with the phone's back camera pointing at me.
I sometimes ask this person to hide the camera, and they generally understand my feelings.
In Germany, you don't need permission for recording image material (including moving images) in public places, though usage of the material might be restricted.
However, audio recording of conversations is prohibited.
Filming is legal. In public spaces (streets, parks), there is no "reasonable expectation of privacy." You do not need permission to point a camera. The exceptions are usually for offensive or harassing type of filming.
Publishing is regulated. In the EU, once you share the footage, you are "processing personal data" under GDPR. There are also exceptions where publishing without permission is legal: legitimate interest (security footage or incidental background), public interest/journalism, and artistic expression.
Generally you must ask permission to publish, not to film. Although asking permission to film is good ethical principle too.
Note that there is a difference between Panoramafreiheit (freedom to record a public building / space with people walking around) versus recording the street before your house with an always-on security camera (almost always forbidden).
Even having a fake camera pointing at a public space can be forbidden, as it creates surveillance pressure on people using the space.
Given that the article is from a Swedish publication: in Sweden you often need prior permission to use a security camera that could capture images of the general public. Much of this is regulated alongside GDPR.
Meta aims to introduce facial recognition to its smart glasses while its biggest critics are distracted, according to a report from The New York Times. In an internal document reviewed by The Times, Meta says it will launch the feature “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
I never understand why a company would put something like this in writing.
I worked at a midsize financial company before and whenever there was something even approaching a legal or ethical grey area, we'd pick up the phone and say come to my office to talk, and then you'd close the door.
We weren't doing anything nearly as nefarious as Meta, yet everyone was always aware that email and phone conversations were recorded and archived.
Same reason Project 2025 was put in writing. When you have large organizations, you need to distribute communication. It's really just about cooperation and logistics.
>I never understand why a company would put something like this in writing.
Do you believe these companies and individuals will ever see consequences for putting this in writing? I don't think they will, and I assume they believe the same based on their actions. Why waste time being "moral" when you don't lose anything for being immoral and stand to gain something if your gamble wins?
I mean, there's a whole philosophical outlook about being a good person and some people just want to do without needing enforcement, but those people also dont tend to become one of the largest corporations on the planet.
The long-term goal might indeed be unrecognizable designs, perhaps augmented-reality contact lenses. It will take a long time, but people tend to slowly get used to giving more and more of their privacy away. Mojo Vision made a prototype of this. It's mostly the display, but you can imagine the camera being somewhere else and streaming to the lens in an unobstructed way.
I'm not the kind of person to wear those, but if I was and someone tried to slap them off me I might feel really threatened if you catch my drift. And since I won't be able to see too well, it will take some extra effort... Was that remaining movement the next punch, or death throes? Can't see too well, better safe than sorry!
I really cannot comprehend how someone can work for a company like that and maintain possession of a soul. I feel like the older I’m getting, the further away I am from understanding.
Gen Z doesn’t seem to carry the millennial “making the world a better place” sensibility. They are all hustle culture, all the time. While I appreciate a lot of their culture this is the aspect that makes me nervous about the future.
I'm 37, single, no family or extended family b/c of an...interesting...childhood.
Every day I understand more and more that I have something really priceless and rare, complete luxury of choice, and 99% of people don't. (as with all things, it has its downside: nothing matters!)
I refused to get "stuck" in my hometown, which motivated me from college dropout to FAANG. Once I got there, it was novel to me that even rich people get "stuck", due to an inability to imagine losing status, and also the responsibilities that come with obvious, healthy lifestyle choices (i.e. marriage and kids).
The soulless kids who used to go into finance joined tech and are inspired by the current crop of tech billionaires in the way that their predecessors were inspired by Gordon Gekko.
> how someone can work for a company like that and maintain possession of a soul
I mean, they don’t. There isn’t a single decent person who has ever worked at Meta, and that started long before this nonsense. The entire company is about the social destruction of its users. Everything anyone there works on drives towards that goal.
The individuals making these decisions are 100% aware of what they are doing. Driving for and implementing stuff like this is for profits, bonuses, and internal recognition.
What do you mean? They're fully aware this would be received poorly by "certain groups" and are applying all that highly-praised brain power to getting around that undesirable issue to keep their RSUs growing.
Most people are just trying to get through their day and not worry about ethical questions.
I'd say that's terrible, but I'm not confident I'd be a better person if my livelihood depended on doing that sort of work, though I hope I'd be better.
Why is it always this accusatory “while you were distracted”-style rhetoric?
Who has been distracted from Facebook’s shenanigans? Who are they talking about? Is it me? Because I can tell you I have certainly not been distracted on that front. Am I supposed to feel guilty? Am I supposed to hold somebody accountable who should’ve been paying attention?
I do actually understand why it’s done, but I just find it very grating and if your goal is to actually raise awareness, shaming people is generally not the way to go about it.
Also the classic “we can walk and chew bubblegum at the same time” thing
This is Meta claiming in their internal communications that they plan on doing it while people are distracted with other concerns.
It isn't really "rhetoric", they're talking like they believe this actually happens, this is strategy.
And I tend to agree with them that things like attention and political capital are ultimately finite resources.
I've found that the "we can do two things" and "we can walk and chew bubblegum" line of argument to be simplistic and just wrong (and pretty incredibly patronizing). I think the world works exactly the way Meta thinks that it does here.
It might blow up and turn into a Streisand effect, but more often than not this kind of strategy works.
Much like how people think they can multitask and talk on the phone and drive at the same time and every scientific measure of it shows that they really can't.
> Who has been distracted from Facebook’s shenanigans? Who are they talking about? Is it me? Because I can tell you I have certainly not been distracted on that front.
On September 11th 2001 a UK government department's press chief told their subordinates it was a "good day to bury bad news".
The idea is pretty simple - you might be obligated to announce something that you know will be poorly received, like poor train performance figures, but you can decide the exact day you announce it, like on a day when thousands have died in a terror attack. What would otherwise be front-page news is relegated to a few paragraphs on page 14.
Facebook proposes a similar strategy: Get the feature ready to go, wait until there's some much bigger news story, and deploy it that day.
The facebook execs literally plotted to relaunch their unpopular product while people were distracted by other bad news.
> “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses.
American society has a finite aggregate supply of attention. Politicians and megacorporations often exploit this fact. This Verge article is a leak that verifies that Meta is actively and brazenly continuing to exploit it.
Is that a good enough explanation to reduce your feelings of being personally targeted?
interesting (respectfully!) take that the "while you were distracted" rhetoric is coming from investigative journalists/commenters - i read this more as Meta's admission that they're betting on critics being distracted than an admonition by outside observers. it's probably easier to sneak up on a person to rob them when it's foggy; that's not victim blaming.
I usually hate this kind of click bait, but I think in this case it's warranted, since their explicit policy was to do this "while they are distracted". Verbatim.
I was in engineering school back in ~2012 when Google Glass came out. One of my classmates got hold of a pair when they were still quite uncommon and wore them to an extracurricular club meeting. Within minutes someone made a comment about him wearing the "creeper" glasses and asked if he was filming. He never wore them to the club again.
I just don't see a world where that doesn't happen with Meta glasses.
An entire new generation of people has been born and raised into a world that is more accepting of always recording and being recorded than it was 14 years ago.
Even in an environment where filming (with phones) is common and acceptable, smart glasses can still come off as rude because others find it hard to tell whether you are recording or not.
To record a video on your phone you need to hold your phone up pointed at the other person, usually not in the same way you would normally use a phone. If you see someone holding his phone steady at face level and pointing at something without making finger movements, you know he is filming. If someone is pointing his phone down towards the ground and scrolling around with his thumb, you know he is probably not.
To record from a pair of smart glasses you just need to look at someone, as you would normally look at any other thing. Yes, there will be an LED on, but the person being recorded probably couldn't see it in a bright, busy environment if you are more than a few steps away, plus there will be aftermarket modifications to disable the LED. In short, there is no way you can reliably tell if someone's smart glasses are filming you. You have to assume the worst.
A common fear for younger people has become being recorded and becoming famous in some embarrassing video. I don't see the problem as having gone away.
And they will soon find out that that world is make-believe. No one I know (and I know hundreds and hundreds, if not thousands, of people) would allow themselves to be in a room being recorded surreptitiously.
And yet, the New York Times reports that all the hottest clubs are banning phones on the dance floor. Perhaps in reaction to having lived the downsides of omnipresent social surveillance, the youngest adults in my life are uniformly sober about the perils of oversharing.
Then again, there may be some selection bias at play…
I'm not sure if you have experience with teenagers, but you’ll quickly realize they are even more resistant to this technology than we ever were. For the vast majority of kids today, this is their worst nightmare. They will reject it even more forcefully than we have.
In Sweden, kids have stopped showering after PE class due to this concern.
The world is not deterministic, and we can shape norms of how we interact with each other. We don't have to accept being constantly recorded just because the technology makes it possible.
Unfortunately, the Meta glasses look much more normal, and a person who isn't actively looking for them (and especially one who is unaware of them) isn't likely to notice them.
Not perfect, but better than nothing I guess. I don't think I've noticed the glasses IRL anywhere, but if I start seeing them, I'm definitely installing the app and avoiding any interactions with those people.
A family member has one and I didn't notice until they had to charge their pair. The little circles are subtle giveaways otherwise they look like regular pair of glasses. When everything is always on, I'd like to keep my house "off" and those things are a direct violation of that.
10 years have elapsed, and people's expectations have changed a lot. Back around the time of the first iPhone, it was pretty common to see signs in gym changing rooms akin to "no cameras permitted"... Now you'd have to physically separate people from their phones before they enter the locker room if you wanted to enforce that.
And all of that is to ignore that neither gen 1 nor gen 2 of Google Glass attempted to look like regular glasses. The Meta frames are largely indistinguishable from regular glasses unless you are very up close.
Unfortunately, "The French-Italian eyewear brand [EssilorLuxottica] said it sold over 7 million AI glasses last year, up from the 2 million that the company sold in 2023 and 2024 combined." from https://www.cnbc.com/2026/02/11/ray-ban-maker-essilorluxotti... . That's at least 9 million units in the field, probably 1000x more than Google Glass ever sold, and more than 3x growth in sales in one year.
[EDIT] I really shouldn't need to say this on Hacker News but don't shoot the messenger for messages you don't want to hear. Reporting a fact does not imply approval or disapproval of it.
No, we need to make this as socially radioactive as possible. We don't need to establish a permission structure to allow Facebook to continue doing this without repercussion.
Judging from the examples reported on in the article, Meta's smart glasses are either very easy to trigger accidentally or quite popular with actual creeps.
There are a lot of creeps out there. In summertime I quite often tan on nude beaches. Almost every time, there is a guy somewhere around with a cellphone or one of these spy glasses.
You're already in that world. Phones have ubiquitous cameras, and they are normalized at this point. It's a common scene in a movie where, instead of helping someone who was hurt, people just pull out their phones and film.
Cameras on glasses will be normalized too. A few HNer types will scream. The rest of the "nothing to hide so nothing to fear" group will just wear them. (Not saying I agree with "nothing to hide so nothing to fear". Rather, I'm saying that's a common way of thinking. Common enough that it's likely people will wear these eventually.)
How about this marketing approach: "College women: tired of creepers trying to hit on you? Worried about getting roofied? Wear these glasses and turn the creeps in."
> I just don't see a world where that doesn't happen with Meta glasses.
People widely accept mass surveillance and facial recognition, including by doorbells, phones, cameras on the street, etc. They post images and videos online to corporations that perform facial recognition. They accept government collecting data broadly via facial recognition.
People accept all sorts of horrors and nonsense, unrelated to and many times much worse than privacy violations, because (I think) they are normalized on social media, which is controlled editorially by Zuckerberg, Musk, Ellison, etc.
I'm not saying we're doomed. I'm saying nobody else will save us. We have to make it happen.
I don't know. I clearly remember a time when phones first got cameras and there were debates on whether or not we should prohibit phones in public bathrooms. Perceptions changed. Fast.
In the US, at least, it's pretty much legal to record the public as long as people have no expectation of privacy (IANAL, exclusions apply, non-commercial use, etc)
It's difficult to draw a bright line between these activities:
- I told someone else something I saw the other day
- I painted a picture of the public square or wrote a book about specific activities that I witnessed
- I specifically remembered an individual based on their face, visible tattoos, location, license plate, or some other unique factor and voluntarily testified to that fact in a court of law
- I spent every day at the same corner making note of the various people/vehicles that I saw
- I stuck a camera at that same point (perhaps on my private property directly abutting a public space) and recorded everything, posted it publicly on the internet, and used automated technology to identify people, text, vehicles, etc
- I paid a different person every day to follow someone around and record what they did
- I developed a drone system that could follow specific individuals/vehicles from airspace I'm allowed to occupy
Pretty much everything I described above is legal in most of the United States. Obviously it gets creepier and more uncomfortable going down the list (I don't really like it when I'm the subject of any of these activities) but how do you stop this?
I'll at least throw out some options:
- Implement some form of right to be forgotten
- Forbid individuals or organizations from doing any of these
- Enact actual "civil rights" level privacy protections (extend HIPAA? automatic copyright for human faces? new amendment?) that include protection of individual's DNA, unique facial features, and other "uniquely human" attributes
There is such a world: when the displays are high quality and they're thinner and lighter, they're going to replace phones, and almost everyone will be wearing them.
Nah, I don't see it. They've been trying to make smart glasses a thing for over a decade and it's not working. Nobody wants them. I don't think it's necessarily a privacy thing, it's just that smart glasses don't solve a real problem. Same with VR.
I think that since the input modalities are (seemingly) restricted to eye movement and sound, it is impractical to replace a phone, where someone can engage privately.
It doesn't matter how high quality, convenient, or light they are, as long as wearing glasses isn't inherently cool, normal people aren't going to choose to wear them.
Are they going to be as hard to keep clean as regular glasses? Honestly, it’s the biggest problem I have with sunglasses: as soon as you get a speck of dirt on them they’re annoying. And if it starts raining you can’t see anything (and you look like a tool).
As much as I disagree with the cameras, you should not have been downvoted. If anything, people who are against the cameras need to see your anecdotal experience so that they can see how easy it will be for these cameras to proliferate.
It seems like a more polite way of handling this in private spaces is just to ask that people take them off - just like we do when a pig farmer walks into our house with their boots on.
I get why people are creeped out by them, but we get filmed or photographed hundreds of times a day in a big city when we are in public spaces. Gatekeeping a potentially useful technology for being filmed in public -- well, everyone is _already_ filmed in public. ATM cameras, stoplight cameras, drone cameras, smartphone cameras, security cameras, doorbell cameras. You are on camera every time you step out of your house. You are on camera every time you open your work computer. Singling out cameras in eyeglasses as "creepy" is kind of worrying about a drop in the ocean. Cameras on self-driving cars. Nanny cams. Closed-circuit cameras. The things are everywhere, and they are always invasions of privacy. Why is the line the "creeper" glasses?
I'd be ok with it if we were for banning all non-consensual recordings in all spaces. But we're very much not.
And if we're not, then having a personal heads-up display that is contextual to your current surroundings or has augmented reality capability is too useful to not use (eventually). I'm bad with names, and good with faces. That use-case alone would be worth it for me, if it were available.
"It seems like a more polite way of handling this in private spaces is just to ask that people take them off - just like we do when a pig farmer walks into our house with their boots on."
Just FYI, they do heavily market this towards RX glasses wearers. So, you wouldn't quite be able to just as simply ask someone to take off their glasses and no longer be able to see.
These glasses are doing incredibly well from a sales perspective. Social norms have shifted, user-generated content is huge, being a video influencer is a real job - so seeing people filming is more accepted than 12 years ago.
It doesn’t mean I like it, but these are not going away. I do think they lack a killer app, but there’s a path there with conversational AI that can act on your behalf.
It's strange to me that that's the line society seems to have drawn in the sand. Body cam, no problem. Doorbell cam, practically universal. Body cam worn on the face? No way. I wonder why.
Police body cams are typically only used while on-duty and in public, where there is no expectation of privacy. They also don't automatically send video into the cloud to be analyzed by a human for AI training, as mentioned in this article. Video is usually only retrieved if needed on a case-by-case basis.
Doorbell cameras are also typically pointed toward public streets, where again, there is no expectation of privacy. Even then, many people have been removing Ring cameras after they were shown to automatically upload video without users' knowledge.
Body cam - used to protect the police and people being policed in a potentially hot conflict. Recording is scoped to these specific interactions that rarely occur for most people.
Doorbell cam - highly controversial. See response to dog-finding superbowl ad.
Body cam worn on the face - Mass surveillance in potentially every conceivable social context. Data owned by Meta, a company known for building profiles on people who don't even use their products.
Body cams are directly visible, and are there to add accountability to the actions of law enforcement. These glasses are covert cameras. Someone that doesn't know what they look like isn't going to know someone might be filming. That's a big difference.
Not sure how it is where you live, but doorbell cameras are commonly criticized where I live, with many people claiming they don't feel comfortable walking around anymore knowing that the entire neighborhood is filming them.
Cop body cam footage is more likely to help you against a cop than get you into trouble, because a cop is already there watching what you’re doing. I.e., “thank god the cop’s camera was off when I was buying crack, I might have gotten in trouble otherwise” fails because a cop was already watching you.
Cops also announce their presence in uniforms and are operating as government agents. People already moderate their behavior around cops so being recorded isn’t as big a deal.
Body cameras aren't hidden and are worn by public officials while on duty; doorbell cameras are no more invasive than a CCTV camera a homeowner might have installed on their premises.
I think the difference is that these cameras are relatively concealed, and can be used to record every interaction, even in pretty intimate/private settings. Yes you could do this with a cell phone, but it would be pretty obvious you're recording if you're trying to get more than just the audio of an interaction.
What do you mean bodycam isn't a problem? Do people wear body cams to normal social occasions?
People are more okay with cameras in public areas and less okay if it's in intimate, social, private situations, inside apartments, individual offices etc.
I also don't like having doorbell cams everywhere, at least not the ones that upload all their footage to the ~great mass surveillance network in the sky~ Cloud(TM). I don't think that's an uncommon point of view. And body cams are only worn by cops and at least provide some concrete benefits in terms of increasing police accountability.
Lines were and are always weird, all the time. Americans killing 150 girls yesterday in a school, just a footnote in the news, already gone today. Some rando killing 10 people in a university in my country, endless discussion, politicians, pundits all up in arms spewing their opinions for months, discussing it to no end. Only difference? I don't know. I don't know almost anyone in my country, they're all as foreign to me as some girls in Iran. There's no difference to me.
There's very little sense to me in searching for meaning in any of this. It just is, people are that way. There are no lines and boundaries based on anything but just whims.
Everyone should assume that _anything_ connected to the internet will get uploaded to the internet, and that someone within the company will have permission to review the contents regardless of what the policy says.
1. Debugging for troubleshooting.
2. Analytics for making the product better.
3. Bugs that collect your info when they shouldn't.
4. Bugs from 3rd-party vendors, if the company uses those.
5. Insecure processes. Getting access to private content within the company is trivial due to a coarse permission model.
Source: I worked at two well-known social media companies, on Trust & Safety and data infra teams.
My concern was whether the glasses might record or transmit data while switched off or in standby mode.
From what I can tell, they don’t do this intentionally. So the risk is broadly similar to other modern electronic devices.
The creepiness concern is real, but I think people misplace where the actual surveillance happens. The most consequential stores of personal data aren’t ad networks; they’re things like banks, hospitals, insurers, and telecoms. These institutions hold information about your health, finances, movements, and relationships, indexed and searchable by employees you’ve never met, governed by policies you’ve never read.
Realistically, there’s very little an individual can do to completely opt out.
My take is: if the main outcomes are that I get shown ads for things I don’t need and my facecomputer knows the difference between a fork and a spoon… I… I can live with that.
> Realistically, there’s very little an individual can do to completely opt out.
Yes, but it's possible, at the cost of some minor inconvenience, to greatly limit data collected about you.
Communicate over private channels (Signal, your own XMPP servers, NOT WhatsApp), pay in cash or crypto, run free software on all your devices, and deny Internet access to devices across the board (this includes all TVs/monitors, all "smart" devices, cars, and other appliances).
The real issue is that (as these glasses exemplify), it is difficult to prevent others from intentionally or unintentionally providing data to surveillance companies. This happens when you walk in front of a Ring camera, when someone uploads a selfie to Facebook and you happen to be in the background, and in countless other situations.
> it is difficult to prevent others from intentionally or unintentionally providing data to surveillance companies
One that bothers me a lot is all the apps that ask people to share their contacts to find their friends. This is a quick way for them to get all the contact information, which may also include birthdays and other more sensitive details.
Even if I were to never make a Facebook account, I could almost guarantee they still have my name, address, phone number, DOB, and maybe more.
> So the risk is broadly similar to other modern electronic devices.
No. When you record a video on your phone, it is not being reviewed by annotators. Generally companies only pay to get labeling done on data that is being used to train (or evaluate) ML models.
Are you referring to the same company that runs Facebook, WhatsApp and Instagram? Meta has, for well over a decade, been caught multiple times (as recently as 2 years ago, for the third time I know of) burrowing into areas of phones that their apps weren't directly given access to. Android phones have been highly susceptible to this kind of snooping.
https://www.techradar.com/pro/security/meta-halts-phone-and-...
OTOH, Meta could just be desperate for training content and they're just slurping up all recordings by people who've opted into the AI function. It would be great for them to clarify how this works.
I mean, it's not as if I would visit such sites, right... but video recordings can be made in numerous ways, including on small devices. Smartphones are fairly small, after all.
And regardless of any privacy policy or the like, you still have to worry about Room 641A scenarios [https://en.wikipedia.org/wiki/Room_641A].
Can you imagine a Stasi where a large portion of the population is also wearing pervasive surveillance tech? Amazing!
Hard to believe they would?
This is Mark Zuckerberg we are talking about.
It's hard to believe they wouldn't.
Hahahahahahahaha
ZUCK: yea so if you ever need info about anyone at harvard
ZUCK: just ask
ZUCK: i have over 4000 emails, pictures, addresses, sns
FRIEND: what!? how’d you manage that one?
ZUCK: people just submitted it
ZUCK: i don’t know why
ZUCK: they “trust me”
ZUCK: dumb fucks
Actual quote, BTW [1].
[1] https://www.newyorker.com/magazine/2010/09/20/the-face-of-fa...
I’m sure you’ve never said anything callous or snarky, and were a bastion of morality as a teenager.
"The demand for this ‘Ray-Ban hack’ has been steadily increasing, with the hobbyist’s waiting list growing longer by the day. This demonstrates a clear desire among Ray-Ban owners to exercise more control over their privacy and mitigate concerns about unknowingly recording others."
https://bytetrending.com/2025/10/28/ray-ban-hack-disabling-t...
This is why WE have the GDPR. To outlaw and prevent exploitation such as this.
[0] https://www.bbc.com/news/technology-36596070
I wish this article (or Meta) were a bit clearer about the specific connection between the device settings and use and when humans get access to the images.
My settings are:
- [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.
- [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.
I'm not sure whether my settings would prevent my media from being used as described in the article.
Also, it's not clear which data is being used for training:
- random photos / videos taken
- only use of "Meta AI" (e.g., "Hey Meta, can you translate this sign")
As much as I've liked my Meta Ray-Bans, I'm going to need clarity here before I continue using them.
TBH, if it were only use of Meta AI, I'd "get it" but probably turn that feature off (I barely use it as-is).
Is it because younger people don't care about privacy anymore?
The terminology you chose is tasteless, loaded, and detracts from your point.
[OFF] "Share data about your Meta devices to help improve Meta products." doesn't preclude sharing data for other purposes.
[OFF] "Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage." doesn't preclude sending them to Meta's cloud for permanent storage.
I turned the AI off and used them as headphones and taking videos while biking. After a couple rides, I couldn’t bring myself to put them on because people started to recognize them and I realized I didn’t want to be associated with them (people are right to assume Meta has access to what they see).
Meta Ray Bans, if kept simple, could have been a great product. They ruined them.
Just continues to prove that if you solve a bit of inconvenience for them, people will let you exploit them and their families.
Wearing these glasses is just as obnoxious as walking around putting your phone in people's faces while recording.
It's certainly possible that it's something much more surprising / sinister, but there is a fairly logical combination of settings that I could see a company could argue lets them use the data for training.
I'm also very certain that few users with these settings would expect the images to be shown to actual people, so I'm not defending Meta.
They are creepy as fuck.
I’m embarrassed to wear my non-Meta Raybans now. That logo is a liability.
https://www.bbc.com/news/articles/c9wn5p299eko
There is (in general) no expectation of privacy in public in Europe. How you can use the material though, is a different matter ...
So if you take a video of specific people looking at flowers at the Keukenhof, you would have to ask them for permission if you are recording them primarily and publish it, but recording for yourself is fine, as it is clearly a public space. If you take a picture of all the flowers and catch some people in the background, you are fine. If you do it in a place where people do not expect it, they can ask you to remove the video and you have to comply (e.g. in a restaurant while you are eating, as it is not expected to be recorded there).
There are some exceptions for journalism, law enforcement and public good. I strongly doubt any Meta (AI) post would qualify for that.
There is also the small caveat that if you can avoid recording innocent bystanders, you must. E.g. putting up a doorbell camera and pointing it at the street instead of your door is bad, as it's easily avoidable by pointing it top-down.
Besides that there is the issue of publishing said footage, as others point out.
https://patch.com/illinois/lakezurich/il-student-punches-pro...
Different laws in different countries.
> before filming random people in the street?
That would make taking pictures impossible, so no, such a requirement cannot be reasonably(*) codified into law.
(*) By reasonably I mean in a way that would actually be followed. Of course there are lots of impossible laws created by politicians to cater to their fan base. If you could not take photos of people in public places it would imply banning a lot of things that have been acceptable for a long time.
I sometimes ask the person to hide the camera and they generally understand my feelings.
However, audio recording of conversations is prohibited.
Filming is legal. In public spaces (streets, parks), there is no "reasonable expectation of privacy." You do not need permission to point a camera. The exceptions are usually for offensive or harassing types of filming.
Publishing is regulated. In the EU, once you share the footage, you are "processing personal data" under GDPR. There are also exceptions where publishing without permission is legal: legitimate interest (security footage or incidental background), public interest/journalism, and artistic expression.
Generally you must ask permission to publish, not to film. Although asking permission to film is a good ethical principle too.
Even having a fake camera pointing at a public space can be forbidden, as it creates surveillance pressure on people using the space.
I mean, otherwise countries couldn’t use security cameras
https://www.imy.se/en/individuals/camera-surveillance/
I worked at a midsize financial company before and whenever there was something even approaching a legal or ethical grey area, we'd pick up the phone and say come to my office to talk, and then you'd close the door.
We weren't doing anything nearly as nefarious as Meta, yet everyone was always aware that email and phone conversations were recorded and archived.
Now, one wonders what constitutes "nefarious" or a grey zone worth hiding in their minds.
Do you believe these companies and individuals will ever see consequences for putting this in writing? I don't think they will, and I assume they believe the same based on their actions. Why waste time being "moral" when you don't lose anything for being immoral and stand to gain something if your gamble wins?
I mean, there's a whole philosophical outlook about being a good person, and some people just want to do good without needing enforcement, but those people also don't tend to become one of the largest corporations on the planet.
But still nefarious. That's kinda messed up, to be honest.
Every day I understand more and more that I have something really priceless and rare, complete luxury of choice, and 99% of people don't. (as with all things, it has its downside: nothing matters!)
I refused to get "stuck" in my hometown, which motivated me from college dropout to FAANG. Once I got there, it was novel to me that even rich people get "stuck" due to inability to imagine losing status, and also the responsibilities that come with obvious, healthy lifestyle choices (i.e. marriage and kids).
I mean, they don’t. There isn’t a single decent person who has ever worked at Meta, and that started long before this nonsense. The entire company is about the social destruction of its users. Everything anyone there works on drives towards that goal.
Please don't respond with how you think people justify, I want to hear the actual reasons. I'm tired of speculative responses to questions like these.
Please do share if you've had to deal with similar situations too. And feel free to respond with green accounts.
I legitimately want to understand why this happens. Not why from management, why from engineers.
Most people are just trying to get through their day and not worry about ethical questions.
I'd say that's terrible, but I'm not confident I'd be a better person if my livelihood depended on doing that sort of work, though I hope I'd be better.
Who has been distracted from Facebook’s shenanigans? Who are they talking about? Is it me? Because I can tell you I have certainly not been distracted on that front. Am I supposed to feel guilty? Am I supposed to hold somebody accountable who should’ve been paying attention?
I do actually understand why it’s done, but I just find it very grating and if your goal is to actually raise awareness, shaming people is generally not the way to go about it.
Also the classic “we can walk and chew bubblegum at the same time” thing
It isn't really "rhetoric"; they're talking like they believe this actually happens. This is strategy.
And I tend to agree with them that things like attention and political capital are ultimately finite resources.
I've found that the "we can do two things" and "we can walk and chew bubblegum" line of argument to be simplistic and just wrong (and pretty incredibly patronizing). I think the world works exactly the way Meta thinks that it does here.
It might blow up and turn into a Streisand effect, but more often than not this kind of strategy works.
Much like how people think they can multitask and talk on the phone and drive at the same time and every scientific measure of it shows that they really can't.
On September 11th 2001 a UK government department's press chief told their subordinates it was a "good day to bury bad news".
The idea is pretty simple - you might be obligated to announce something that you know will be poorly received, like poor train performance figures, but you can decide the exact day you announce it, like on a day when thousands have died in a terror attack. What would otherwise be front-page news is relegated to a few paragraphs on page 14.
Facebook proposes a similar strategy: Get the feature ready to go, wait until there's some much bigger news story, and deploy it that day.
> “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses.
Is that a good enough explanation to reduce your feelings of being personally targeted?
I just don't see a world where that doesn't happen with Meta glasses.
To record a video on your phone you need to hold your phone up pointed at the other person, usually not in the same way you would normally use a phone. If you see someone holding his phone steady at face level and pointing at something without making finger movements, you know he is filming. If someone is pointing his phone down towards the ground and scrolling around with his thumb, you know he is probably not.
To record from a pair of smart glasses you just need to look at someone, as you would normally look at any other thing. Yes there will be an LED on, but the person being recorded probably couldn't see it in a bright, busy environment if you are more than a few steps away, plus there will be aftermarket modifications to disable the LED. In short, there is no way you can reliably tell if someone's smart glasses are filming you. You have to assume the worst.
Then again, there may be some selection bias at play…
https://www.nytimes.com/2025/11/21/nyregion/nyc-nightlife-no...
The world is not deterministic, and we can shape norms of how we interact with each other. We don't have to accept being constantly recorded just because the technology makes it possible.
Not perfect, but better than nothing I guess. I don't think I've noticed the glasses IRL anywhere, but if I start seeing them, I'm definitely installing the app and avoiding any interactions with those people.
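For what it's worth, a DIY version of such a detector is conceivable: most wearables advertise over Bluetooth LE, so a rough sketch could just scan for advertised device names. Everything below is an assumption for illustration (the prefixes, the advertised names); I have no idea what these glasses actually broadcast, or whether they broadcast at all while recording:

```python
# Hypothetical sketch of a "glasses detector": flag nearby Bluetooth devices
# whose advertised names look like camera glasses. The name prefixes below
# are assumptions for illustration, NOT a verified list of what Meta's
# glasses actually broadcast.

SUSPECT_PREFIXES = ("Ray-Ban Meta", "Meta Glasses", "Oakley Meta")  # assumed names

def looks_like_camera_glasses(advertised_name):
    """Return True if a BLE advertised name matches one of the suspect prefixes."""
    if not advertised_name:
        return False
    return advertised_name.strip().startswith(SUSPECT_PREFIXES)

if __name__ == "__main__":
    # With the third-party `bleak` library installed, a real scan could be:
    #   import asyncio
    #   from bleak import BleakScanner
    #   devices = asyncio.run(BleakScanner.discover(timeout=5.0))
    #   hits = [d for d in devices if looks_like_camera_glasses(d.name)]
    print(looks_like_camera_glasses("Ray-Ban Meta Smart Glasses"))  # → True
```

Presumably the real apps use more robust signals (MAC ranges, service UUIDs); the name filter is just the easy part to show.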
And all of that is to ignore that neither gen 1 nor gen 2 of Google Glass attempted to look like regular glasses. The Meta frames are largely indistinguishable from regular glasses unless you are very up close.
https://xkcd.com/1807/
I’ve seen stories of people banned from gyms for taking selfies in the locker room as people were walking by.
My friends always take a cheap shot when I wear them, but they're completely fine now and appreciate the fun candid videos I send them.
Amazing for vacations with the kids.
And we probably ought to regulate how all such footage is handled.
> banning all non-consensual recordings in all spaces
It's a false dichotomy. Even if recording is permitted that doesn't mean the systemic invasion of personal privacy needs to be.
Just FYI, they do heavily market these toward RX glasses wearers. So you can't simply ask someone to take their glasses off, because then they'd no longer be able to see.
Apparently they sold 7 million of these. So I think a whole lot of people don't care about this aspect.
I propose we just assume people with meta glasses are recording others in public and we call them creeps. Shaming works, we should use it more.
Doorbell cameras are also typically pointed toward public streets, where again, there is no expectation of privacy. Even then, many people have been removing Ring cameras after they were shown to automatically upload video without users' knowledge.
Body cam - used to protect the police and people being policed in a potentially hot conflict. Recording is scoped to these specific interactions that rarely occur for most people.
Doorbell cam - highly controversial. See the response to the dog-finding Super Bowl ad.
Body cam worn on the face - mass surveillance in potentially every conceivable social context. Data owned by Meta, a company known for building profiles on people who don't even use its products.
Not sure how it is where you live, but doorbell cameras are commonly criticized where I live, with many people saying they don't feel comfortable walking around anymore knowing that the entire neighborhood is filming them.
Cops also announce their presence in uniforms and are operating as government agents. People already moderate their behavior around cops so being recorded isn’t as big a deal.
I think the difference is that these cameras are relatively concealed, and can be used to record every interaction, even in pretty intimate/private settings. Yes, you could do this with a cell phone, but it would be pretty obvious you're recording if you're trying to get more than just the audio of an interaction.
People are more okay with cameras in public areas and less okay if it's in intimate, social, private situations, inside apartments, individual offices etc.
A face camera has no effective light or warning (you can just put tape over the small light), and is operated by a pervert.
[1] https://www.youtube.com/watch?v=X9sVqKFkjiY
There's very little sense to me in searching for meaning in any of this. It just is, people are that way. There are no lines and boundaries based on anything but just whims.
1. Debugging, for troubleshooting.
2. Analytics, for making the product better.
3. Bugs that collect your info when they shouldn't.
4. Bugs from 3rd-party vendors, if the company uses those.
5. Insecure processes: getting access to private content within the company is trivial due to a coarse permission model.
Source: I worked at two well-known social media companies, on Trust & Safety and data infra teams.
The creepiness concern is real, but I think people misplace where the actual surveillance happens. The most consequential stores of personal data aren't ad networks; they're things like banks, hospitals, insurers, and telecoms. These institutions hold information about your health, finances, movements, and relationships, indexed and searchable by employees you've never met, governed by policies you've never read.
Realistically, there’s very little an individual can do to completely opt out.
My take is: if the main outcomes are that I get shown ads for things I don’t need and my facecomputer knows the difference between a fork and a spoon… I… I can live with that.
Yes, but it's possible, at the cost of some minor inconvenience, to greatly limit data collected about you.
Communicate over private channels (Signal, your own XMPP servers, NOT WhatsApp), pay in cash or crypto, run free software on all your devices, and deny Internet access to devices across the board (this includes all TVs/monitors, all "smart" devices, cars, and other appliances).
The real issue is that (as these glasses exemplify) it is difficult to prevent others from intentionally or unintentionally providing data to surveillance companies. This happens when you walk in front of a Ring camera, when someone uploads a selfie to Facebook and you happen to be in the background, and in countless other situations.
One thing that bothers me a lot is all the apps that ask people to share their contacts to find friends. This is a quick way for them to collect everyone's contact information, which may also include birthdays and other more sensitive details.
Even if I were to never make a Facebook account, I could almost guarantee they still have my name, address, phone number, DOB, and maybe more.
No. When you record a video on your phone, it is not being reviewed by annotators. Generally, companies only pay to get labeling done on data that is being used to train (or evaluate) ML models.