Google has been working on this since November last year, going by the Wayback archive of the support page for this feature.
I'm not seeing any indication that Gemini can read your messages, though. You can compose messages and start calls, but I can't get it to read me any of my messages. In fact, I can't even get it to send messages to group chats, only to individual contacts.
The feature makes a lot of sense, of course. WhatsApp is to many countries across the globe what texting and calling is to Americans. If your smart assistant can't even interact with WhatsApp, it's basically useless for many people.
Edit: ah, that explains why I can't make Gemini read my messages to me, Google's own documentation (https://support.google.com/gemini/answer/15574928) says it can't:

What Gemini can’t do with WhatsApp
Read or summarize your messages
Add or read images, gifs, or memes in your messages
Add or play audio or videos in your messages
Read or respond to WhatsApp notifications
If you connected Google Assistant to WhatsApp, it seems like data may flow that direction, but then you've already hooked WhatsApp into Google before so I don't think anyone will be surprised there.
Does anyone know how I can make Gemini read messages? I can't even find the assistant settings necessary for that stuff to function.
Exactly and only what any other random app on the phone could do with WhatsApp, assuming that you have enabled that in exactly the way you would have to enable any other random app to do it.
Google needs to not be abusing its position as the source of the OS to give its software special privilege to reach inside of third-party apps.
The line is blurry. Google is positioning Gemini not just as an app, but as an OS-level feature. The OS can by definition reach into any third-party app to do anything it wants. I'll give some more examples of OS-level features in case it's not clear: copy/paste is an OS-level feature and it is designed to extract arbitrary text or content from third-party apps (copy) and insert it into third-party apps (paste); screenshotting is an OS-level feature and it is designed to capture the visible views of any third-party app, with the only exception being DRM content.
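To make the copy/paste example concrete: below is roughly all the code an ordinary app needs to read whatever you last copied out of another app, via the standard Android ClipboardManager service. This is a minimal Kotlin sketch; the function name is mine, and since Android 10 an app can only read the clipboard while it holds input focus (or is the default IME).

    import android.content.ClipboardManager
    import android.content.Context

    // The clipboard is an OS service whose whole purpose is moving text and
    // content between otherwise-isolated apps.
    fun readTextCopiedFromAnotherApp(ctx: Context): CharSequence? {
        val clipboard = ctx.getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
        return clipboard.primaryClip
            ?.takeIf { it.itemCount > 0 }
            ?.getItemAt(0)
            ?.text
    }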
Apple Intelligence has similar marketing. At last year's WWDC, there was the whole "Siri, when is my mom's flight landing?" segment (see https://developer.apple.com/videos/play/wwdc2024/101/ at 1h22m) that didn't generate any controversy. So for some reason people think Siri should rightfully be an OS-level feature but Gemini should not. Got it. I guess Apple's PR is just that much better than Google's.
Unfortunately the situation on Android is that other apps cannot do anything with WhatsApp, and there's fuck all you can do about it as a user.
I shouldn't need Google special-casing Gemini to allow LLMs to interact with my messages. I should be able to wire up Tasker to WhatsApp on one end, and to OpenAI or Anthropic models of my choice via API calls on the other end. Alas, Android is basically like iPhone now, just with more faux choice of vendors and less quality control.
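For illustration, here is a rough Kotlin sketch of the kind of plumbing described above: Android's notification-listener API on one end and a generic chat-completions-style HTTP endpoint on the other. Everything beyond the WhatsApp package name is a placeholder (endpoint, model, key), the service still has to be declared in the manifest and enabled by the user under notification access, and real code would JSON-escape the message text and handle errors.

    import android.app.Notification
    import android.service.notification.NotificationListenerService
    import android.service.notification.StatusBarNotification
    import java.net.HttpURLConnection
    import java.net.URL

    class WhatsAppToLlmBridge : NotificationListenerService() {
        override fun onNotificationPosted(sbn: StatusBarNotification) {
            if (sbn.packageName != "com.whatsapp") return
            val extras = sbn.notification.extras
            val sender = extras.getCharSequence(Notification.EXTRA_TITLE)?.toString() ?: return
            val text = extras.getCharSequence(Notification.EXTRA_TEXT)?.toString() ?: return

            // Network off the main thread; fire-and-forget for brevity.
            Thread {
                val body = """{"model": "some-model", "messages": [{"role": "user",
                    "content": "Summarize this WhatsApp message from $sender: $text"}]}"""
                val conn = URL("https://llm.example.com/v1/chat/completions")
                    .openConnection() as HttpURLConnection
                conn.requestMethod = "POST"
                conn.doOutput = true
                conn.setRequestProperty("Content-Type", "application/json")
                conn.setRequestProperty("Authorization", "Bearer YOUR_API_KEY")
                conn.outputStream.use { it.write(body.toByteArray()) }
                val reply = conn.inputStream.bufferedReader().readText()
                conn.disconnect()
                // Hand `reply` to Tasker, a notification, text-to-speech, whatever you like.
            }.start()
        }
    }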
> Google needs to not be abusing its position as the source of the OS to give its software special privilege to reach inside of third-party apps.
There are some extremely useful features that you can implement with AI, but currently only at the OS level, not with normal app permissions-- namely live translation of audio streams that belong to another app (calls, video playback, etc.).
But I suppose you're still right; it would still be better if Android had an API for sharing app audio streams like this.
"Google's own documentation"... uh oh, first time?!
The first archived version of this page containing the "can't do" list was published Nov 2024. The email is about a change "making it easier" to be rolled out July 2025, so I would not bet someone else's money on this page being up to date. We'll find out I guess.

https://web.archive.org/web/20241107174006/https://support.g...
> The Gemini mobile app may support some of these actions with help from Google Assistant or the Utilities app, even with WhatsApp disabled in Gemini. Learn more about Google Assistant features in your Gemini mobile app and actions supported by the Utilities app.
People have been clowning on Apple for being behind on the AI stuff, and -- while I'd never defend how they promised a bunch of features in 2024, showed them in ads, sold iPhones based on vaporware, and still haven't shipped most of those features -- I will say, I imagine a lot of the hold-up is because they realized how dangerous it is to start trusting AI with the sensitive data on your phone. It's probably not too hard to make it work most of the time, but even if there's a 0.0001% chance the AI will send a sensitive image meant for your wife to your boss, you should probably reconsider shipping.
I don't believe Google has the tact to care as long as they look like a market competitor in something.
> -- I will say, I imagine a lot of the hold-up is because they realized how dangerous it is to start trusting AI with the sensitive data on your phone.
It was probably Apple being incompetent with their AI approach rather than being careful
Precisely. It's incredible that people think Apple is playing 4D chess with AI, when in reality the simplest answer is the most plausible - Apple has no clue wth to do with AI. Their own assistant - Siri - has been in shambles for close to a decade.
Structurally, Apple is at a disadvantage in the AI race. And no amount of waiting or polish is going to help them - unless they partner with OpenAI, Anthropic, or Google.
> but even if there's a 0.0001% chance the AI will send a sensitive image meant for your wife to your boss, you should probably reconsider shipping.
That's too low of a probability for Apple to care. The probability that YOU would do it yourself by some random series of accidents is probably orders of magnitude higher than that.
Do you really think you're going to send 1,000,000 nudes to your wife without accidentally sending one to the wrong person!?
> Do you really think you're going to send 1,000,000 nudes to your wife without accidentally sending one to the wrong person!?
That seems like the wrong way to spin this hypothetical probability.
A quick search says there are 1.38B iPhone users worldwide. According to [0], 87.8% of 18+ year olds have sexted, so let's estimate that to mean 1.21B users. Even if we assume users only ever send one nude, that means 1,210 gaffes if you assume one in a million.

[0] https://www.womens-health.com/sexting-statistics
It's the other way around. The probability is so low that it is incredibly unlikely to happen to any given individual. You would have to be paranoid to worry about it. On the other hand the probability is so high that, when considering the size of Apple's user base, such incidents would happen regularly.
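To put rough numbers on that intuition, using the figures upthread and taking the one-in-a-million rate as given (a back-of-the-envelope sketch, not Apple's actual error rate):

    P(\text{a given user is ever affected}) = 10^{-6}
    \qquad
    \mathbb{E}[\text{incidents across the user base}] = N \cdot p
        \approx 1.21 \times 10^{9} \times 10^{-6} \approx 1210

Negligible odds for any one person, but a steady trickle of incidents at Apple's scale.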
If true, that's pathetic on Apple's part. The unreliability of LLMs was maybe the biggest topic in the entire tech industry around that time. To be ignorant of that basic fact would be an incredibly bad look.
I have no insider knowledge but to me on the outside it looks like the same old panicky hype-chasing we've all seen in other contexts. Some executives kept reading and hearing about AI AI AI!, and were terrified of being left behind. The many voices of reason within the company pointing out the correct risks and tradeoffs to consider were ignored while the over-confident voices blustered their way onto the roadmap.
The whole situation with Gemini Apps Activity setting is so frustrating. Even if I pay for Gemini Pro, the only way to make sure there will be no human looking at your chats is to set Apps Activity to off, which means you don't have any history for Gemini chats, even for the messages from a minute ago.
Wouldn't a blank homepage be exactly what you expect if you had no tracking enabled? The algorithm that generates the homepage is probably totally stunted with only empty logs to draw on.
Shhh! Don't let Google know about this trick. I've gotten stuck in approximately 0 doom scroll loops since disabling view history and I'd like to keep it that way.
It's not a punishment; it's entirely transactional. You make yourself less valuable to advertisers so you also make yourself less valuable to Google. Therefore Google provides you with fewer features such as a blank homepage.
The era of Google providing costly features to users with no benefit to itself is coming to an end.
Thanks for the suggestion!
I actually see that Business Standard Google Workspace (for 1 user), which includes Gemini access, costs less than a Gemini Pro subscription for an individual. I will give it a go.
These big tech companies are so frustrating. Why does every single aspect of our digital lives need to be monitored? It's like whack-a-mole trying to get the most basic of privacy.
As we have all learned, ad and subscription models aren't mutually exclusive. You can still get ads while paying for a subscription.
In fact, I don't believe the ad model would have gone away if everyone started paying for a subscription. The bottom tier would still be filled with ads.
Ideally, the market would solve this. The companies that push annoying ads would lose customers to the companies that don't. But since we don't live in an ideal world, I honestly think regulations would be the only way. Something like "If a customer pays for a subscription in any way, you can't show ads" - and then let the companies put a realistic price on their subscription tiers, which makes it worthwhile for them.
It's totally possible to have the ad model without all the spying. It's just that marketers don't want that to be an option. They're all in on spying on us.
people didn't vote for shit, if they could vote they'd vote for no ads and no cost. companies like google destroyed this option on purpose. there is no reason why the vast majority of apps and services online can't be both free and ad free. if i look for tetris on the app store it's literally impossible to find a version that's both ad free and free of purchases, despite the fact that i know there's at minimum 100 options that fit these criteria. google/apple just buries them and deliberately doesn't allow filtering to find them
You can pay for Google services. But even if you pay for Google One or YouTube premium, I'm sure that Google will still track your behavior and mine your data. Why would a company not "double-dip"?
We are paying for phones but we are still the product. Google, Facebook, etc. were explicitly created to monetize privacy. What I search for is monetized. Who I know is monetized. Private companies will monetize what we perceive as public goods, to our detriment.
It's become so terrible that I've given up on trying to secure Android anymore because it's become essentially impossible. This is the primary reason why my current smartphone is my last smartphone.
Don't act like your opinion is the only one that matters. You may not, but other people do care about their privacy.
"Here's the thing: Google promises that under normal circumstances, Gemini cannot read or summarize your WhatsApp messages. But, and this is a big but, with the "help" of the Google Assistant or the Utilities app, it may view your messages (including images), read and respond to your WhatsApp notifications, and more."
Doesn't matter what your opinion is on privacy, google doesn't give you the option to opt out.
- "regardless of whether your Gemini Apps Activity is on or off."
One problem with this sort of thing is that—sure, we can call privacy violation an opinion and admit that some people have dumb opinions like “I don’t need any privacy.” But unfortunately only one person needs to let the privacy violation bot into the conversation to violate everybody’s privacy, so it isn’t as if your opinion will really be respected.
Of course, the easy solution is that nobody has conversations that might need privacy anymore; people can just always be in public persona mode. Hopefully we don’t end up with a society made up of inauthentic lonely people as a result.
I think, if I understand the article correctly, it sounds like Google might also be reading the messages so it can respond for you. Regardless, I think the other thing people might not be happy about is that Gemini can still interact with apps regardless of whether you have app activity turned on or off, as quoted from the linked email in the article:
What's changing
Gemini will soon be able to help you use Phone, Messages, WhatsApp, and Utilities on your phone, whether your Gemini Apps Activity is on or off
Because that way they can build profiles of you and use them to manipulate you into buying junk you don't need. That, in turn, makes the line go up and the share holders happy.
I wonder how much of this is actual advertising working (proven by independent A/B testing) and how much of it is big tech bullshitting their shareholders and customers. Even Veritasium had a video ~10 years ago describing Facebook's way of reducing view counts to coerce advertisers into paying more.
Gemini being able to read WhatsApp messages (when explicitly asked) and take actions can be convenient. If it does so without prompting or feeds the data back into their model in any way for training - that's a big no.
It's apparently obvious to you that "hey Gemini, can you message Mike that I love him?" means the text is first sent to Google, then back to your phone, and then by your phone to Mike. That isn't obvious to everyone, perhaps also because it doesn't necessarily have to work that way: https://www.macworld.com/article/678307/how-to-use-siri-offl... I couldn't find whether tasks related to "reading your messages" (like text-to-speech while you're driving or so) are something Siri does, but it obviously talks to you, and if you tell it to send a message, that works offline, so evidently there is some access there without needing to first upload everything to the assistant's vendor.
> With Gemini Apps Activity turned off, their Gemini chats are not being reviewed or used to improve our AI models.
Indeed bizarre as the statement doesn't say much about data collection or retention.
More generally, I'm conflicted here -- I'm big on personal privacy, but the power & convenience that AI will bring will probably be too great to pass up. I'm hoping that powerful, locally-run AI models will become a mainstream alternative.
Personally, I prefer AI to stay in its own corner. Let ChatGPT, Gemini, and the rest be something I open when I need them, like a website or an app. I'm not really into the whole "everything should have AI built into it" idea.
It kind of reminds me of how the internet used to be. Back then, you had to go to a specific room to use the family computer. The internet was something you visited. Now, tech is everywhere, from our pockets to our bathrooms. I’m not sure I want AI following that same path.
Agreed the privacy that keeping AI "in a corner" appeals to me too.
The fundamental catch here is that 80%+ of the future benefit will likely come from the very thing that erodes privacy: deep integration and context. Imagine if Gemini had your entire life in its context (haha, scary, I know!); prompting would be so much more powerful.
That's the core, uncomfortable trade-off we're all facing now.
It's going the opposite direction. AI won't be inside each different thing; instead, everything else will be nested under the AI. Like Gemini here. AI will have user-equivalent access to interact with any app. It will be the default, and people will not mind it because it's convenient and, after all, "if you have nothing to hide..."
My approach has been to lock AI assistants (for me, that's just Apple Intelligence, as far as I can help it) out of integrations with the vast majority of apps, and especially chat and email apps.
At some point, some reverse engineer will publish a writeup either confirming or denying how local these models are, how much data (and maybe even what data) is being sent up to the mothership, and how these integrations appear to be implemented.
It's not perfect, and it only offers a point-in-time view of the situation, but it's the best we can do in an intensely closed-source world. I'd be happier if these companies published the code (regardless of the license) and allowed users to test for build parity.
Maybe at some point Apple was trying to do everything locally, but it appears they have recently decided to move away from that idea and use OpenAI.
I can understand why: you’re only using locally-run AI models every so often (maybe a few times a day), but when you use it, you still want it to be fast.
So it will need to be a pretty heavy AI chip in your phone to be able to deliver that, which spends most of the time idling.
Since compute costs are insane for AI, it only makes sense to optimize this and do the inference in the cloud.
Maybe at some point local AI will be possible, but they’ll always be able to run much more powerful models in the cloud, because it makes much more sense from an economics point of view.
Google also has AI models optimized to run on phones; they're just in a much better position to actually build purpose-built LLMs for phones.
It's not clear to me why certain classes of things still end up farmed out to the cloud (such as this, or is it?). Maybe their LLM hasn't been built in a very pluggable fashion.
> they have recently decided to move away from that idea and use OpenAI.
... although, to be fair, they're negotiating with OpenAI to run the models in "secure enclaves", which should, assuming everything works right which is a huge assumption, keep Apple or anybody else from reaching inside and seeing what the model is "thinking about".
Assistant stuff. Like you bark "order a pepperoni pizza from Joe's Pizza" and it happens. You take a pic of your fridge and say "order stuff to stock it up to my usual levels". Or book a flight, or buy concert tickets or clothes, or get media recommendations, replan a trip while driving if you change your mind and add a stop somewhere. Ask to summarize group chat message floods. Put on some music. Control smart home gadgets.
It's hard to predict exactly though. I remember thinking in 2001 that nobody except the busiest businessmen would need a cell phone. A landline at home is perfectly enough and in special cases there are phone booths. And in 2011 I thought the same about smartphones. Why would I need email while walking in the street? Can't it wait until I'm home at the desktop? If I need computer stuff on the go, I can take a laptop.

Similarly, I'm not quite sure how exactly it will go but probably in 10 years you'll need to have an AI agent to function in society. The legacy infrastructure decays if nobody uses it even if you'd prefer not to jump on the bandwagon. Today you often MUST have an app downloaded to do things, e.g. some museums require it, sometimes government services are much more tedious otherwise. Some restaurants only have a QR code and no physical menu. Often news items (from school, or local municipality) are only shared in social media. Etc. etc.

I can easily imagine that there will be things you can't manually do in 2035, only by asking your AI agent to do it for you. And it will scan all your data to make sure that what you're doing is impeccable in intent and safety and permissibility (like an inverse captcha: you must be Gemini or another approved bot to do the action. As a human you have to jump a million hoops that maybe takes days of providing various details etc. And Gemini will be easy to spook and will be opinionated about whether you should really get to do that action or not.). And it will communicate behind your back with the AI of the other party to decide everything. Or who knows what. But it will be necessary to use.
Does WhatsApp expose these messages via an API? If yes, then it seems like this is not only on Google.
If no: Are they reading data from raw UI widgets? Are they intercepting input controls? Are they intercepting network traffic? That seems unlikely, given it's probably end-to-end encrypted and the decryption happens within the scope of the WhatsApp process.
> If no: Are they reading data from raw UI widgets? Are they intercepting input controls?
Why not... they control the OS, it'd be trivial to add hooks to the "draw widget" command to detect that it's about to draw a text widget for WhatsApp, and then ask it to log the text.
WhatsApp data is encrypted, however, the keys are on the device itself and accessible on Android. There are many third-party apps that support transferring WhatsApp data from one phone to another, and some even claim to do so between Android and iOS devices. As I understand it, the chats are in some common database format. So anyone with access to the device can read the data even without WhatsApp itself being installed (as long as the data is there).
I don't think it's quite as simple as that. The keys are stored in a storage area that Android locks off as WhatsApp's alone; no other app can get to those keys.
At the very least you'd need to root your device, but even that might not be quite enough going by my memory of trying to export my chats once. I remember the only documented working path included something like installing a shady, modified APK of a legacy WhatsApp version with an outdated encryption method to a second device and then somehow getting the new app to write a backup in the legacy format, to then restore to the fake second device and decrypt. I quit there because the risk of actually losing my entire backup seemed too high. And that was about five years ago, so I'd assume if anything, it's even more difficult today.
> When granted, an app with accessibility permission can:
Read screen content (including text and buttons in other apps)
Detect user interactions (like taps, swipes, or gestures)
Navigate between apps and the system UI
Monitor app launches and foreground/background changes
Access and control other apps indirectly
Perform gestures or clicks on behalf of the user
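To make that list concrete, here is a minimal Kotlin sketch of an accessibility service that walks the view tree of whatever app is on screen and collects its visible text. The class name is mine, and it only works after the user explicitly enables the service under Settings > Accessibility (plus the usual manifest and service-config declarations), but once enabled it can read WhatsApp or any other app this way.

    import android.accessibilityservice.AccessibilityService
    import android.view.accessibility.AccessibilityEvent
    import android.view.accessibility.AccessibilityNodeInfo

    class ScreenTextReader : AccessibilityService() {
        override fun onAccessibilityEvent(event: AccessibilityEvent?) {
            if (event?.packageName?.toString() != "com.whatsapp") return
            val root = rootInActiveWindow ?: return
            val visibleText = mutableListOf<CharSequence>()
            collect(root, visibleText)
            // `visibleText` now holds every piece of text currently shown in the
            // WhatsApp window, message bubbles included.
        }

        private fun collect(node: AccessibilityNodeInfo, out: MutableList<CharSequence>) {
            node.text?.let { out.add(it) }
            for (i in 0 until node.childCount) {
                node.getChild(i)?.let { collect(it, out) }
            }
        }

        override fun onInterrupt() {} // required override; nothing to clean up
    }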
No other app can get to that backup data, though, except the original one that made the backup. Not even the owner of the account is allowed access to it (which I'm almost sure is a GDPR violation)!
I'm not saying it's impossible that Google just grants their own app an (IMO indefensible) exception to this. But the potential shitstorm would be massive, so I assume they probably use some other way, such as screen recording or accessibility features.
This really annoys the shit out of me. First people work hard to enable E2E encryption on WhatsApp, then Google goes "lol we'll just upload your chats to Gemini cloud".
Google has Gemma? So they could also blow Apple out of the water by competing directly there.
If that's a slap on the wrist, then we can be sure that Google is doing it.
good luck. why do you think they bothered to create an OS in the first place? like, what did you think a data mining company would create an OS for?
Gemini uses the same APIs and permissions as any other Android app.
Current HN title: Google can now read your WhatsApp messages
Even aside from the false equivalence of Google and Gemini, the current HN title is pure clickbait.
My normal "Google's own documentation" experience is the other way round - to be told something is possible when it certainly isn't.
Crooked billionaires shouldn't enjoy the benefit of the doubt.
Not the best example since Siri has been misunderstanding us for many, many years.
You really meant to send that I love you to Louis coworker, right? Not to "Love"? Too late
Why make up stuff like this? Siri confirms everything that sends data.
They have always been behind. Why would this time be any different?
(Search: Steve Jobs predicted the future of AI)
(Google punishes viewers who make themselves less valuable to advertisers by giving them an entirely blank homepage.)
Hence why I open a new private browser for each session.
How about paying once, owning a specific version and that's it?
You make it sound as if those were the only two options available...
This happens even when people pay for the products. See for instance the enshittification of streaming "ad free" services.
Anyone running into this problem willingly opted in to having surveillance software on their device. Meta’s track record is not secret.
Maybe the problem is what you consider a privacy violation, other users consider a feature.
That's tech capitalism in a nutshell.
What is that power? Honest question...
WhatsApp has dark patterns that "guide" you to "archive" your chats on Google Drive.