Google Now has done the same for me, told me how long it will take to get to a bar I frequent. My reaction was quite exactly "Oh that's neat, thanks!" and I went and had a great burger that night.
Totally OK for me Google. I respect that people have different privacy thresholds, but I think the fact that it's different for everyone is being lost in articles like this.
One tiny caveat: Google (and others as well, to be fair) will be able to indirectly collect data even on the privacy-aware part of the population who don't use Google much. The simplest example: even if you don't use GMail, some of your emails inevitably end up in GMail inboxes. Now also consider this: just being a guest in a Google-stuffed house means you are under surveillance.
So no, it is not just my problem or your problem, it's everyone's.
Sure, and to illustrate your point, I have an email address very similar to someone else's. I very frequently get their emails (invoices, church events, travel itineraries, purchase receipts). Google thinks they're _my_ trips, and updates me about flight times.
I think this example serves both our points. To your point, it's totally leaked this other person's info into my "google world" because I'm on gmail. On the other hand, that person is leaking his information directly to me just because of typos when he fills out online forms. Perfect privacy requires a lot of vigilance in a digital world, with or without google/gmail/hotmail/yahoo/etc.
If you visit a place of business you are potentially under surveillance. I'm sure there is a distinction but I'm failing to come up with it right now.
I was somewhat radical about privacy in the late 90s (only person I knew that read every EULA) and am still a supporter of the EFF but I don't really understand the issue here.
Google has top-notch practices regarding privacy, in my opinion. If anything, I would be more worried about smaller companies with shadier practices and lower security standards holding my personal information. There have been seemingly illegal practices from companies like Sears in how personal data is collected and used. It's easy to throw shade at a big company, write a sarcastic title, and get clicks.
Example: "Intuit’s TurboTax stores highly detailed financial data for millions of users who import their W2s, their banking data, info about their mortgages and more. Right now, all of this data is locked into TurboTax, but the company is now thinking about how it can do more with it by giving its users the option to share this data with reputable third parties." ... https://techcrunch.com/2016/09/22/intuit-wants-to-turn-turbo...
Perhaps we need to start normalizing encrypted email. Just like https everywhere is no longer considered necessarily "tin-foil hat" SOP, moving this direction for email needs to be socially normalized.
Going further, given that an encrypted email to Gmail will simply be decrypted and then available to Gmail, the protocol could include authorized agents of the recipient (via both whitelist and blacklist means). So, if you are hosting your own email but the intended recipient is expected not to be hosting their own, the sender can blacklist "agents" such as Gmail and Yahoo! Mail, or blacklist all except those whitelisted, such as Proton Mail.
I'm not sure you are any less responsible for your own privacy just because companies like Google and others are making it more challenging. The example you mentioned seems easily fixed by using GPG.
Granted, there may be a place for regulations to help us restrict what companies are able to do (perhaps making it easier for you to identify an area that is being recorded), but at some point society can't help the fact that you'd prefer machines to be unaware of your existence. That's just something you have to solve for yourself.
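As a sketch of the GPG suggestion above: here's one way to protect a message body before it ever touches a provider's servers. This assumes GnuPG 2.x is installed, and uses symmetric (shared-passphrase) mode for brevity rather than the usual public-key exchange; the passphrase here is a hypothetical placeholder.

```python
import subprocess

plaintext = b"Flight AA123 departs 9:40am"
passphrase = "hypothetical-shared-passphrase"

# Encrypt the body so it is unreadable to any mail provider it passes through.
enc = subprocess.run(
    ["gpg", "--batch", "--yes", "--pinentry-mode", "loopback",
     "--passphrase", passphrase, "--symmetric", "--cipher-algo", "AES256",
     "--output", "-"],
    input=plaintext, capture_output=True, check=True,
).stdout

# The recipient decrypts with the same shared passphrase.
dec = subprocess.run(
    ["gpg", "--batch", "--yes", "--pinentry-mode", "loopback",
     "--passphrase", passphrase, "--decrypt"],
    input=enc, capture_output=True, check=True,
).stdout

assert dec == plaintext
```

In practice you'd use each recipient's public key (`gpg --encrypt --recipient ...`) instead of a shared passphrase, but the point stands: Gmail would only ever see ciphertext.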
If you don't care for the products or services, you could always opt not to buy/use them. That includes avoiding areas where Google appliances are present. I'm fine with trading privacy for AI-driven convenience, though I already have a phone I like and an Echo, so I have no compelling reason to buy these particular products right now.
A non-insignificant proportion of Hacker News seems to actually work at Google and runs to the defense of their overlord at anything somewhat critical of it.
I fear you will be unable to recognize when that burger was your choice and when it was a reaction. You probably won't notice. And that is harmless.
I also fear you will be unable to notice in which areas of life and information the distinction between choice and reaction is harmless and which it isn't.
Of course, I'm not talking about "You" you, but just people. Me as well. I feel we are widening the field of unconscious decisions and I see that as inherently bad - in my fellow humans as well.
I'm sorry but that just sounds like blind fear mongering. What you're saying is vague and doesn't really mean much.
It's like saying we shouldn't use prescription glasses, or medication, or cars, because it's not "us".
Humans invent all these tools and systems to improve and optimize our lives. Make our vision better. Make our health better. Make us move around faster. In the case of AI, make us perform certain tasks more efficiently.
Imagine it wasn't actually a computer. Imagine it was a personal secretary who gave you the EXACT same information: gave you your flight information, turned on the light when you asked, gave you the weather and your schedule. Would you think that was wrong? That this isn't "you"? No, it's just optimizing your life, now available to a wider population rather than just rich people.
To my mind, leading a simple life is enjoying a burger at a restaurant/bar I frequent already. Simplicity _is_ accepting that Google algorithmically noticed a trend and just helped me do things I already do.
That's the thing though. I reject the notion that you ever actually make a choice. I would posit that 100% of the actions you take are simply the deterministic reactions when the current world state is filtered through your brain. Then, after the fact, your brain gets busy inventing a reason that you took a particular action and calls it a "choice" when really you were just going to do what you were going to do anyway.
"I ordered this burger because I was hungry and it tastes good" vs "I ordered this burger because Google was able to successfully predict that I would be receptive to having burgers, or the idea of burgers, placed in my environment"
Given all the other points in life where, despite my awareness, I don't have much choice, how is an AI just directing me really any different?
My culture, education and skills limit what work I can do.
Our culture places limits on a vast number of experiences. On the road and the only thing is fast food? Welp, eating fast food. Live somewhere that only has one grocery store or cable provider?
I don't really see AI in the form Google is peddling as really all that much different. We're just 'more aware' that the world around us is really guiding us.
I may be somewhere new, and can only see the immediate surroundings without a lot of exploring. And let's be real, in the US, most cities are the same when it comes to restaurants/hotels and such. There are differences in culture but we don't usually see them if we're just visiting. Not in a way that matters.
Google will let me know that the things I prefer back home have equivalents nearby.
Fencing ourselves in is what we do. Who knows, perhaps a digital assistant would help us stick to our personal goals and decisions better. Rather than just having to accept what's there.
What I mainly want from Google is more and easier ways to customize my level of privacy. The article only briefly touches on the EFF's stance against incognito modes, but it's an important one; I don't want lack of monitoring to be something I start a separate session for, with a logo of a creepy dude implying I should use it only for spying and pornography. I'd like to get as close as possible to an assistant that remembers relevant data on where I go and how long it takes, but doesn't mine my browsing history to psychologically manipulate me into buying things--of course, that needs a different revenue model.
I think eventually they add so many capabilities and so many fine-grained controls that it becomes impossible to manage the UI or to find the right options. Even looking at Android's privacy settings, it's pretty hard to find anything.
This is by design, so that the majority of users are confused and leave the defaults as is, enabling Google to do whatever they like.
When it first told me it knew how long my commute would take, I realised it was creepy as all hell that the people providing my phone software (in another country, with few protections on data) knew enough about me to tell where I worked and when I was going there.
And it annoys me that in Maps, when you turn off all the spying capabilities, there's no fallback to local history. You either share it with them or you get none.
Failing to provide local history is essentially one of the dark patterns for getting you to turn on their data collection. Most things Google requires it for could easily be done outside the cloud, but by making things depend on the cloud, and then telling everyone you can only do it with the cloud, you convince people that they need the cloud. In reality, they never did.
GPS navigation devices with much less storage than a phone have been more than capable of what Google Maps offers for a long time. There's essentially no reason for it to do anything with the Internet except getting map updates.
It may be OK for you but there are at least three real concerns here:
1- There is no way to set your privacy level.
2- The things that Google/Siri/Alexa know about you are not limited to the name of the bar you frequent. They know much more about you, and you don't know what they know. The sky is the limit here.
3- The things they know are not limited to you personally. They know about you, your family, your friends, and all their interactions. They know very much about society as a whole.
1 - Sure there is, Google has fairly fine-grained tracking control. Not perfect, but as another commenter noted, this is a double-edged sword, as _too many_ controls can conversely hinder user control (see Facebook's privacy revamp)
2 - My point is that I personally am OK with Google's AI knowing more about me. I respect that others aren't. I'm not naive in my acceptance.
I don't know about Google, but I know that Siri and Alexa only collect and send data when you ask them to.
You can monitor the Alexa's traffic and see that it only sends data when you ask it to do something; furthermore, Amazon gives you a log of everything you've said to it that it recorded.
"Hi Ubercore, Google and your health insurance company here. We are worried that you are frequenting a facility that serves too much alcohol and wings. We care."
I really wish this was up top. I can't believe that the top rated comment on HN is a thinly veiled "if you don't have anything to hide, you don't have anything to worry about".
My impression is that (in the main) younger people have lower privacy thresholds than older people. Not for everyone (of course). Just on average.
My impression also is that most early adopters of this kind of technology are younger people. (again: mostly)
So this brings up an interesting question about the future. As the young early adopters age, what will happen?
a) their privacy thresholds will also increase, and they will have an "oh holy crap" moment in the future, when, as middle-aged or older people who have lived much richer and more problem-laden lives, they realize that Google (and/or other companies) hold what they now consider too much personal information about them,
or
b) they will keep their young-ish privacy thresholds as older people, and in general, across society, people will have lower thresholds than exist nowadays. In other words the world will change.
Conversely, there are a fair number of older folks who are very much okay with government surveillance and a general lack of privacy (even of our inherent rights) because, e.g., they have "nothing to hide".
My impression (in the main) is that younger and older people have different views on privacy. Older folks might be creeped out by Google knowing their schedule, but okay with the NSA or FBI or whomever reading their emails "because terrorists", whereas younger folks are more likely to balk at the latter but be very much okay with the former.
Do you think my opinion is accurate? I'm curious because to some extent I completely agree with you.
I don't think A will necessarily happen for most. We are a product of our experiences. If, as we grow up, we get comfortable using always-online technologies and never suffer any consequence from those experiences, I don't see what would motivate us to suddenly doubt these technologies. I am confident B is the most likely situation; that's how societies move forward so quickly with tech.
My experience has been quite the opposite in terms of convenience and relevance: I commute by car and train (for parts of the same journey), and Google Now, Google Maps (etc.) have been totally useless there: telling me about traffic jams when I'm on the train, not telling me about train delays, and so on. It now somehow thinks my home is at the train station, and tells me the last bus home is leaving soon when I've been home for hours. Also, Google Now's insistence on bombarding the leftmost pane of my phone with the most click-bait articles ever, often about things I had a passing interest in months ago, is just laughable.
I would gladly give Google some of my very precious privacy, implementing countermeasures like multiple and burner identities as needed, if I thought they had any chance of actually providing real value. So far they have failed miserably. I'm not sure the economics of providing a really useful free service, with marketing information as the only source of income, work now, or will for a very long time. You'd need strong AI to actually help my day-to-day life, with solid non-obvious guesses based on many very local and specific factors. I guess as long as people sort of believe that this future is coming, they may tolerate the invasion and forget about the promises.
But as mentioned above, if you don't want Google to potentially track your behaviour and preferences, "don't use their services" encompasses "don't send email to anyone with a gmail address".
The sad part is that the user turned off Google Now because he didn't want google to know about the bar he visited. Google was tracking and recording his location before Google Now, he just didn't know it. It's still tracking it after he disabled Google Now.
Yeah, but that is also the equivalent of 24/7 surveillance of all locations you visit. Google will end up figuring out whom you sleep with, etc. from that information.
Pretty much your only privacy is in your head at that point.
I'm not sure that is a "threshold" of privacy but rather a "I am okay with 24/7 surveillance of all of my activity."
hypothetically: you express radical political ideas to your friend with the expectation of your statements remaining in confidence-- but google was listening. now, your feed recommendations steer you further down the path google thinks you were already on. you are ready to attend a protest and perform civil disobedience, as google now knows based on your interest in what it has been suggesting to you. it suggests (as facebook does now for making events for birthday parties etc.) that you and some other people organize that protest, and, because it said so, you do. except it's a trap. the police's google feed tells them that some undesirables have planned a protest, and you're imprisoned.
is this story unrealistic, or has it already occurred?
I remember the huge smile I got on my face the first time Google Now picked up on the fact that I went to the same bar every Wednesday evening.
One Wednesday afternoon, at work, I got a notification saying "Travel time to the Lion & Crown". The first thing that ran through my head was "oh my god, I'm living in the future".
I am actually quite uncomfortable that my stock Android is making suggestions on how long it should take for me to get home or work (when I have never explicitly mentioned that it is my home or work).
The problem is that I want to use Google Maps so what choice do I really have?
Sure I use a dedicated gmail for my phone but that really does not help much.
I would not want Google knowing I sometimes drink too much, or that I do so and get behind the wheel of a car. Easy inferences it could make, given the time I spent at the bar, and the purchases on my credit card. That could even have consequences for the cost of my auto insurance. Edit: and health insurance.
I actually thought the most interesting point this article raised - for me at least - was the implicit branding associated with the "OK Google" command. All privacy concerns aside, if I'm going to have a "personal Google", I want to be able to thoroughly personalize it.
but I think the fact that it's different for everyone is being lost in articles like this.
I think that it being different for everyone is completely and utterly obvious. It's clear the author doesn't think it's OK, but that's his/her opinion.
This location history is really bad, though. I added a test gmail account to my device a few weeks ago but didn't remember that location tracking is a per-account setting - now Google has nice big logs to hand to whoever wants them, and I can only delete them on a day-by-day basis (from the Android app at least).
Extremely annoying. This sort of thing should not be acceptable, an honest mistake results in every place I've been being logged in such a way that anyone with access to my Google account, access to Google servers or with a subpoena can have my full location history in a matter of seconds.
This needs to be a big red option every time you add an account "we're gonna log everywhere you go and hand it over to whoever we feel like, you cool with that?". It'd be different if the log and analysis were done only on my device, but doing this on Google's servers is completely unacceptable by anyone with even the weakest standards of privacy.
Just having a mobile device means you're being tracked. In Australia, the government now also has access to this data from the network providers. The only way to be free is to not carry these devices.
What I think is interesting is that many of us nerds have probably innocuously fantasized about having a Star Trek-like AI assistant with us, but now that they're taking the first steps towards that, we're starting to realize that in order for it to do everything for us, it has to know everything about us, too.
Nobody was thinking about "the cloud" back in those days. Back then, your data and your programs all lived and ran on your own computer in your home. Most people didn't go online, and if you did, it was mostly to read and download data to use locally on your own computer. Connections were intermittent and slow. The idea that your own data would be stored online was almost unimaginable; even using network-dependent applications like Usenet or email involved downloading everything first before using it. Online applications were hardly even dreamed of.
Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
Anyway, the point being: if the assistant lived entirely in your own computer, it would be entirely different. Most people are not concerned about what their "computer" knows about them; they're concerned about what companies and their employees do with it.
> Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
The trope-namer (Star Trek AI) was a ship-wide AI - considering the ship sizes, it is definitely closer to the "cloud" model, not limited to private instances on officers' bunk/bridge terminals/tricorders. Perhaps a hardcore Trekkie could answer this question: is there any canon that defines the AI's scope? Is it restricted to just one ship, or could it possibly be a Federation-wide presence with instances on ships?
I feel your message is the most important in this thread because it's the crux of the whole concern about privacy and the cloud.
Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks. The following is a utopian notion, but had private networks seen as much R&D as the public clouds, they would be significantly less cumbersome than today's clunky VPNs. Imagine all of your devices collaborate directly with one another and with you on your own secure private network—no central cloud servers needed. Your personal assistant is software running on a computer you own rather than a third-party's centralized server.
I still feel this ideal will eventually be realized, but for the time being, no large technology company is willing to take the necessary risks to buck the trend of centralization.
The biggest fiction propped up by centralization and cloud proponents is that it would be impossible to provide the kind of utility seen in Cortana, Siri, Google Assistant, Alexa, et al. without a big public cloud. A modern desktop computer has ample computational capability to convert voice to text, parse various phrases, manage a calendar, and look up restaurants on Yelp. Absolutely nothing the public clouds provide strikes me as something my own computer would struggle to do (to be clear, I would expect a local agent to reach out to third-party sites such as Yelp or Amazon at your command in order to execute your desires, but it would do so directly, not via an intermediary).
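To make the "your own computer can do this" claim concrete, here's a toy sketch of a fully local assistant. It assumes speech-to-text has already produced the utterance string (offline recognizers exist); the command phrases and the tiny in-memory calendar are hypothetical illustrations, not any real assistant's API.

```python
from datetime import date

# A minimal local "calendar agent": all state lives on your own machine,
# and nothing is sent to a cloud service.
calendar = {date(2016, 10, 12): "Dentist, 3pm"}

def handle(utterance: str) -> str:
    """Map a recognized utterance to an action against local state."""
    text = utterance.lower().strip().rstrip("?")
    if "calendar" in text:
        if not calendar:
            return "Nothing scheduled."
        return "; ".join(f"{d.isoformat()}: {event}"
                         for d, event in sorted(calendar.items()))
    if text.startswith("remind me to "):
        calendar[date.today()] = text[len("remind me to "):]
        return "Okay, I'll remind you."
    return "Sorry, I didn't catch that."

print(handle("What's on my calendar?"))
```

Obviously a real agent needs far more sophisticated language understanding, but none of the pieces inherently require shipping your data to a central server.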
A few years back, when Microsoft was at the beginning of its Nadella renaissance, I had hoped it would be the first technology titan to disintermediate the cloud and make approachable, easily-managed personal private networks a thing. Microsoft's legacy of focusing on desktop computers made it well-situated to reaffirm your home computer as an important fixture in your multi-device life. It could have co-opted Sun's old tagline: "Your network is your computer." But it elected to just follow the now-conventional public cloud model, reducing everyone's quite-powerful home computer to yet another terminal for centralized cloud services. Disappointing, but I think it is ultimately their loss. I suspect a lot of money is on the table for whoever realizes a coherent, easy-to-use multi-device private network model that respects consumer privacy by executing its principal computation within the network.
> The idea that your own data would be stored online was almost unimaginable
Except that's what I did for many years, using my computer only as a terminal for an AIX mainframe. My mail was there, I browsed what there was of the web, used gopher, wrote programs - all stored there.
On top of that, the cloud we have now is commercialized, opaque and constantly under pressure to comply with a government that many distrust, for what I would say very good reasons.
I would like to say that the cloud we have is a privacy concern because we don't know the full scope of data collected, nor what happens to it, nor do we own any of "our" data once it's in the cloud. But not every cloud would have to be that.
There's a perfect world where one wouldn't have to be paranoid about this stuff, but it's not what we have right now.
I think there's an additional nuance, that of Google knowing everything about us. If I hacked together my own home automation AI system it would need to know everything about me too and that worries me far less.
Only if you hacked together your own search, maps, calendar, etc. as well.
Maybe someday that will be a realistic endeavor, but it would take a lot of effort to set up and maintain your own personal versions of all of Google's services, and to integrate them.
Well, the missing part is the dedication to ideals and to the greater good of all life that was supposed to be core to the Federation. I realize Star Trek is fantasy, but the reason people are more at ease with the omnipresence of technology in Star Trek is that you see people living by these humanitarian and noble ideals. People fight and die to defend them. The right to self, privacy, and protection is held very high in the ST universe, even if challenged.
When's the last time Google risked itself or its business, or any tech CEO risked their livelihood, for the sake of the greater good? The problem isn't necessarily the knowing-everything part; it's who does what with it. I can't really think of any company or person with influence in tech that'd be willing to dive onto that bombshell to protect us all.
The former CEO of Qwest, a massive telco, spent years in prison for insider trading. He says this is because he resisted the NSA's demands to tap Qwest's network and hand over customer data.
The big difference is the Star Trek computer wasn't using its data about Kirk to provide him "enhanced advertising experiences", there wasn't a big corporation controlling the computer and no government was accessing the computer's information.
A truly user-aligned AI assistant would be great. Ideally in the future these things will not be tied to indirect business models, but rather will be something you buy and all data/services will be under your control.
Captain Kirk was a government employee. It's implied the government/Starfleet could access the ship's logs.
In the Star Trek world they had no advertising because they were a communist society. Everyone dressed the same or slightly differently based on rank. It's interesting how the new movies play over that.
In Star Trek you couldn't choose your AI. In our world you can. At the start of their development most of them are targeted at selling you stuff - but the industry is young and who knows where it will go.
Yes. Also, running the AI on a local computer made sense because there was no incentive to run it in the cloud; nothing would be gained. Star Trek is set in a post-scarcity economy. We will get there once basic income becomes the norm in all countries.
Does Star Trek even have a currency? It always feels like a socialist/communist thing to me.
You cannot compare the world's biggest seller of advertisement space with the ST universe. The motivations aren't aligned: Google/Alphabet wants to make sales based on my information.
I agree that I found these oh so clever AI fantasies interesting in my youth, still do to a degree. But I always pictured the data being held inaccessible to humans in general ("Where's my wife right now?") and not in the hands of a golden few with no oversight.
Star Trek's world seems to be a 'utopia' of scientific-military governance. Most of the key players have a military rank, wear color-coded uniforms, and appear to be under 24/7 surveillance (which is OK, since this is a very nice and progressive scientific-military government, and you know, War On <scary-alien-species> and all that. :)
In the original show there wasn't a currency but in order to have aliens who exhibit avarice and the worst part of capitalism they had to include a currency (latinum, I think it was called).
It doesn't though, does it? The only reason this is a problem is that Google's business is still advertising, and they act like our problems can be solved by tools made to sell more ads. The moment I could buy an AI service for, say, $10 a month (it would have to be good), I'd trust them to use my data responsibly.
The Star Trek fantasy is, "Computer, what were the principal historical events on the planet Earth in the year 1987?", and it could totally answer that without sending your entire fucking message history to google for deep AI inspection.
> The Star Trek fantasy is, "Computer, what were the principal historical events on the planet Earth in the year 1987?", and it could totally answer that without sending your entire fucking message history to google for deep AI inspection.
That's part of the Star Trek fantasy. But so is, "Computer, locate Commander Riker" and "Computer, use personal logs and personality profiles from compiled databases to create a personality simulation of Dr. Leah Brahms."
I think people also forget that the Star Trek AI was in a semi-militarized scenario where efficiency and information greatly outweighed individual privacy needs.
I think most fantasies are okay with the anthropomorphic AI assistant knowing everything about us, but they don't involve the AI transmitting all of its data back to "the cloud", where advertisers can mine it or the NSA can listen in with a secret, gag-ordered wiretap. Probably wishful thinking, but maybe one day a privacy-first company will dip its toes into this arena.
Google doesn't "spill their beans" to third parties-- what Google actually sells is the opportunity for third parties to be included in the advertisements Google is targeting to their users.
Google has a strong incentive to not allow their aggregated user data to leave Google-- the behavioral data Google collects is the reason why Google is valuable; if they start shipping that data off to third parties, suddenly the third parties don't need Google anymore.
(Same with Facebook-- they're not "selling" your data; they're selling the opportunity to target you based on your data, but the data itself is too valuable to Facebook to sell.)
It really is the big problem at the moment with the cutting edge of AI.
ML relies on large data sets, and if anyone tried to release a purely personal device it simply wouldn't work, let alone compete with the mass surveillance Google/MS/Amazon are bringing to bear.
Unless the state-of-the-art in AI suddenly morphs, we seem to be stuck between giving up our privacy or having vaguely intelligent AI.
I personally fall heavily on the privacy side of stuff, but I can see the intellectual and commercial appeal of pretending it doesn't matter in order to get there.
I think we just envisioned a highly anthropomorphized AI: essentially, a very smart and entirely obedient person to serve as the perfect aide. The Star Trek dream emerged before computing technology was very far advanced, and well before the idea of constantly mobile wireless communication. We thought our AIs would be small and physical, easy for a single person to entirely own and unable to remember more details than a human; instead we got unfamiliar algorithms run on machines far away.
The fundamental issue comes down to one of trust and the real question is, do we trust Google to do the right ethical and moral things with our data that they are collecting en masse?
Until it started happening I always assumed it'd be powered by a central computer I had in a clean area of the attic or something. Not some DC somewhere.
"Knowing" in some sense everything about us is not the same as owning that information or trading in it. There are many other possible approaches to applying ML to our personal needs and data, so it is worth being careful about not conflating issues with a particular implementation and issues with the area as a whole.
> in order for it to do everything for us, it has to know everything about us, too
But does it? Does it have to know your birthday? (Never mind that birthdays are somehow part of a superkey for your identity.)
Why should it know my residence, my spouse, or my CC# (with Apple's TouchID maybe it won't need to)?
Google's concept of AI is too creepy for me. It can be useful without being creepy. They're not even trying to make it less creepy.
Overlay on this the subtext that the NSA and other TLAs are monitoring all this (let alone other countries). While I may trust Google, I don't trust them not to be forced to collude with the government.
Not to sound glib, but the idea of persistent data acquisition and aggregation has been pretty well known to be in the path for anyone seriously researching AGI or other human level AI systems.
I have to admit this is true -- I used to dream of a Star Trek-like computer you could just speak to, but I never imagined that such a system would be rife with privacy and security issues.
You could have realized that from watching Star Trek, as the computer in the Enterprise can always tell the captain where every crew member is, whether there are strangers on board, etc.
Bingo. The movie Her was awesome. But rewatch it. In every scene the AI does something cool, think what permissions it would need and what data it would have to have access to about you to accomplish the task. It gets scary pretty quickly.
So, at the risk of making myself ridiculous and branded a Luddite:
I've totally passed on the 'mobile revolution', I do have a cell phone but I use it to make calls and to be reachable.
This already leaks more data about me and my activities than I'm strictly speaking comfortable with.
So far this has not hindered me much, I know how to use a map, have a 'regular' navigation device for my car, read my email when I'm behind my computer and in general get through life just fine without having access 24x7 to email and the web. Maybe I spend a few more seconds planning my evening or a trip but on the whole I don't feel like I'm missing out on anything.
To have the 'snitch in my pocket' blab to google (or any other provider) about my every move feels like it just isn't worth it to me. Oh and my 'crappy dumb phone' gets 5 days of battery life to boot. I'll definitely miss it when it finally dies, I should probably stock up on a couple for the long term.
I'm not sure how much longer I'll continue with the mobile revolution. Pretty much everything I've seen so far that's being branded as AI and the future of mobile computing is just something that saves you from opening up an app. Instead of opening up Google Maps and searching for directions home, the directions now sometimes appear automatically. Instead of searching for a nearby restaurant, one is displayed for you. I don't need to enter my flights in my calendar anymore. This isn't nearly as drastic a change as the original innovations allowed by smartphones. I'm not sure it's worth the trade-off anymore.
I'm even more Luddite than you are. I don't have a phone at all (landline or otherwise). You wanna reach me, you email me. The people in my life that care about me have come to accept this. For other things, I read paper maps, plan appointments ahead of time[1], memorise routes, and look up stuff online from my laptop when I find a place to sit down and wifi.
Reading stories like this makes me want to carry a personal tracking device even less.
---
[1] People tend to have fewer emergency reasons to cancel when they can't reach you 5 minutes before the appointment.
Hate to be the bearer of bad news, but your "crappy dumb phone" is already telling someone about your every move with a degree of accuracy ranging from an area around the closest cell tower to within a few feet.
The courts are still deciding when/whether that information requires a warrant.
Yes, I'm well aware of that. That's how Dudayev got himself killed by a missile.
But short of anybody wanting to aim a missile at me I figure that I'm better off with the courts in my country where such information does require a warrant at present (and without any indication that this will change), and without the company controlling those assets trying to 'mine' my profile in order to advertise to me more efficiently.
Not really a Luddite, I'd think you're more of a Pragmatist when it comes to the Personal vs. Espoused benefits of certain devices or whatnot.
A close friend is a longtime professional software developer, always interested in mobile. We used to have extensive discussions about why I preferred carrying a small flip-top notepad and a pen vs. a phone or tablet or whatever with a stylus (many have come and gone over the years). In the use-case scenarios I put forward (small lists, secure disposal, privacy, 'battery life'), my little notebook frequently was the best approach for me. He disagreed, but that was the point of chatting about our views.
It is a question of how much new technology can add to your life, rather than how much old technology hinders you. Your old phone will function as before. It has the same benefits as before. The new stuff is not going to change that.
The big change is that the new stuff offers the ability to do things in a more efficient way. While it seems to offer very little benefit for individual tasks, some people will see a dramatic benefit while using it for the multitude of tasks that clutter their life. Other people will benefit simply because it enables them to do things that they would not have done before.
None of this is meant to dismiss your points. Personally, I find all of this data mining creepy even when I am confident that they are collecting the data for my benefit and that they won't use the data to my detriment when they are using it for their own benefit. Yet many people don't share that world view. Those people will benefit from Google's services, while nothing is being introduced to hinder the lives of those who don't use those services.
I feel like an old fool fighting against the times, but to me all those new appliances are scary not because of privacy (have my data, I couldn't care less), but because of how they shape our world.
Most of the coolest memories I have were the product of something spontaneous, or mistakes, that become close to impossible with a computer and internet in your pocket 24/7.
Assessing what's around you, talking to strangers, actively looking for something without it instantly popping in suggestions after you've typed 4 characters, all those things have been a great source of circumstance-based, little everyday life adventures.
This is the difference between risking buying a random book, or browsing reviews and picking a 5 star one to download.
This is the difference between discovering a place you'd never thought existed while waiting for someone and poking your nose around, instead of standing there, frantically watching their dot on the map get closer to you.
This is the difference between the mesmerizing feeling of playing the first expansions of World of Warcraft, versus the tiring experience of the super-streamlined versions that followed. Yes, they are less frustrating, but they don't bring a tear to your eyes when you think about them; they just feel averagely satisfying.
A few minutes ago I got up to open the door for my cat, and in a few minutes she'll be back and I'll be interrupted again. I feel like those interruptions are precious. They keep you connected to reality. I could install an RFID cat door, hell I could make a voice activated one in a couple weekends, and I would not be annoyed anymore. I would also never have seen all the things I witness every time I get to that damn door.
If the twilight zone taught me anything, it's that humanity will always have a rebel. If you make life so safe and easy that free will is no longer necessary someone will demand free will.
For consumers this will be a choice between keeping their data private and having intelligent systems that perform better.
So far I haven't seen much, but based on my limited experience I believe customers are going to continue handing over their data to Google and Facebook in exchange for personalised services.
The truth is, the only times my smartphone has actually felt smart is when Google has been mining my information from various services (mainly Gmail and Calendar) and presented it to me at correct time, enhanced with other information they have gathered from web.
I don't think there will be any major backlash from consumers. The old comparison about the boiling frog applies here.
There are 50 million domestic workers in the world: living, breathing, naturally intelligent, autonomous human beings whom people welcome into their homes. Not surprising, given that during almost the entirety of our evolutionary history everyone one knew knew everything about one. The notion that people would object to a company knowing some things about them in order to place slightly more relevant ads is silly. It just doesn't seem that way in a message-board echo chamber because there are rewards for self-righteous indignation, and none for making the commonplace observation that people willingly make tradeoffs without being victims of "false consciousness".
Almost everybody screens their cleaners and babysitters in some way. Either they are connected through well known friends and family, or for the rich they get checked out for criminality/dangerousness by professional services.
Except afaik, those workers aren't coordinated and reporting back to some central agency. If a housekeeper in New York happens to be snooping on their client, it doesn't mean one in Los Angeles is acting the same way. The damage is limited.
Sadly, I think you are right. I'm not sure what it would take to get more people to really care about privacy when they can have convenience instead. The Snowden leaks didn't do it in the US, even though it showed a government that was willing to break the rule of law to collect data on its citizens.
It's simple: regular people are going to get picked off by hackers, lawyers and other predators until, much like with the steadily rising smog around cities in the industrial revolution, we realize 'oh shit, this is an actual problem, because the wind doesn't always blow this shit away'.
Meanwhile actual geeks and hackers will be fine, because we'll have used our intuitions about these things to choose privacy conscious alternatives to mainstream technology.
In addition to which it is increasingly the case that 'privacy' is regarded as an elite thing, and thus will ultimately be sought after by less educated classes. Like how green lawns used to be for the rich to show off that they didn't need to grow crops to survive and now everybody has them and doesn't know why.
Remember Hillary Clinton and the emails. Remember Colin Powell and 'why can't I use my pda in this highly secure area'. These people are the dinosaurs, and in the business world if you're not hack-resistant you're going to go bust.
tldr;
> I'm not sure what it would take to get more people to really care
Their interests get attacked or violated. That is what.
I have an open-ended question -- mostly born out of ignorance: why is this a bad thing? Isn't an artificial assistant that not only knows and understands us but anticipates our needs incredibly useful? In the process, sure, they'll collect your info for better advertising, but short of Totalitarian Surveillance or Data Breach Concerns (the former is a bit of a reach if you live in the West, and they can surveil you anyway if they really want to; the latter also seems somewhat unlikely) -- what's the issue here? Genuinely asking because I'm trying to understand.
Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
Totalitarian Surveillance is here. In the west. Secure document releases aside, it's too easy to do to imagine a state actor not doing it.
Data breaches of differing severities occur every day, at nearly every company. I would have thought Yahoo was big enough and smart enough to avoid it; but no. Not Yahoo, not Sony, not security contractors, not credit bureaus, not Apple (a'la celebrity photo leaks), not Google (stories abound of individual GMail accounts being hacked).
>Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
(Have worked at Google in the past, may in the future, am not currently.) You say this as though anyone at Google (or Microsoft or whatever) can go in and search for 'falcolas' and look through your GPS history.
I'm honestly not sure if there is a single individual at the company who has that power. I honestly think the best thing Google could do is publicize their internal training and documents on personal information, because the regulations and such made me a lot more comfortable with giving my data to Google-the-amorphous-entity, because no person is going to be looking at that data.
>, not Google (stories abound of individual GMail accounts being hacked).
One of these is not like the others, unless you're talking about something I'm not aware of. Hacking an individual GMail account requires guessing/taking someone's password, which is not an attack on Google's infrastructure (unlike the Yahoo, Sony, Apple, etc. examples); it's an attack on a bad password.
I agree with you. To help convince people -- I realize that we often imagine benevolent leadership, so it helps to give an example such as, "Imagine if you were a Muslim or an illegal immigrant and Donald Trump were elected president. What could he do with your data?" E.g. find you, search your residence based on your purchasing and travel habits, and send you home.
E.g. Wakes up at 5:30 am, travels to a construction site, lives in a house with a large number of people -> signals possible immigrant. Or this:
Detecting Islamic Calendar Effects on U.S. Meat Consumption: Is the Muslim Population Larger than Widely Assumed?
We have to think about data not just in terms of our relative safety, but in terms of what could happen in adverse circumstances. And not even just in terms of our own government, but foreign governments.
Sure, there are some trust issues, but just regarding your two first points:
A very limited number of Google employees have access to private user data (only when it's vital to their work) and they have strict policies in place (data does not leave the data centers etc.).
Which third parties are you referring to? As far as I know, Google does not give their users' private data to a third party.
> Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you?
You forgot: every single state which Google is subject to.
There are three levels of discomfort some people feel with this situation:
1. Concern that a single, third-party entity (Google, in this case) might peer into every aspect of our lives, and/or reverse-engineer an exhaustive catalog of our entire lives, by virtue of data collection.
2. Concern that many consumers will unwittingly opt into such control, unaware of the privacy they're relinquishing, and unable to make informed decisions about the possible applications and consequences of the tradeoff.
3. Concern that the custodian of all this personal data (Google) might use, sell, transmit, or turn over the data in ways we had not anticipated or believed we'd consented to.
Personally speaking, I understand these concerns but also understand the potential upside. I'm not 100% sure where I stand just yet. The aforementioned bullet points are presented without editorial comment; just trying my best to articulate what I believe to be the crux of people's concerns here.
The way you phrased the question sounds to me like "what are the shortcomings, leaving aside these terrible shortcomings?"
Having said that: Google is not in the business of making your life easier, but in the business of selling you ads. The data that Google collects about you is incredibly powerful, allowing them to go from a "simple" manipulation to sell you stuff you wouldn't otherwise buy, to full-scale blackmailing you if they see it fit (not saying that this is happening, but if they wanted to, who would stop them?).
It's putting too much power in the hands of a single, amoral entity (like all corporations). That's not good.
Not an ignorant question, but a reasonable one. I think the skeptics believe that society overall does not yet understand the tradeoff. And I agree with them there. I think on a deeper level, people are creeped out that data paints such a deterministic picture of our lives. Sure, I leave home the same time every day, and I leave work about the same time every day. That Google knows where I live and work without me ever explicitly saying shouldn't be a surprise, but not everyone enjoys seeing how their daily lives are so easily circumscribed, even with just passively-collected data.
I don't want a company that employs tens of thousands of people, along with a government in a foreign country, along with all the governments on the data route in between, and their employees, civil servants and assorted snoopers of all shades, to have access to the artificial assistant's communications and thoughts relating to me.
All these organisations are made out of people. People with power are inherently untrustworthy; they need enforcement mechanisms to be kept in line, and enforcement mechanisms need to be activated every now and then to stay in working order. That is, occasional abuses are required to keep abuse in line. The thin blue line wavers like a pendulum: it's how we know it's working.
Part of what I fear is that Totalitarian Surveillance is only a "bit of a reach in the west" because we put such a high value on privacy (and personal liberty) that we're willing to defend it. When that goes away then the "reach" will be far easier.
Sure it would be useful. Sell the assistant as a locally-installed app that guarantees personal data never leaves the LAN, and it will sell.
> sure they'll collect your info
Only if you let them. Demand better behavior from their software and business practices.
> Totalitarian Surveillance or Data Breach Concerns
What you seem to be missing is that the concern isn't about today's level of surveillance or today's data breach risk. Data generally persists indefinitely once it makes its way into a database or logfile.
To make the claim that these are low risk requires that at no time in the future will surveillance increase or data breaches become more common, ... or that the company will never run into financial trouble and need to sell your data, ... or that a breach will never be forced by a government (not necessarily yours or Google's), ... or that your data will never be aggregated into other databases, increasing the "predictive" power and attack surface, ... or any of the other unknown ways your data could be used in the future.
Humans are already known to be terrible at assessing risk, especially when there is a very large separation between the cause and effect. Smoking today giving you cancer many years later is a traditional example. We already know data breaches happen, well-meaning employees make mistakes or succumb to corruption, and external powers such as governments or organized crime occasionally take away your agency. Do you really want to claim that none of these risks will ever materialize? Because that's the actual wager you're making when you use Google's products.
For most of history, most humans lived under tyranny or domination if not outright slavery. It's only been the past few hundred years that this mostly stopped in some places.
Maybe we've turned a corner and will never go back to that. But I don't have confidence yet.
>Isn't an artificial assistant that not only knows and understands us but anticipates our needs incredibly useful?
No, not really. Restaurant recommendations and traffic reports are simply not that hard for me to find on Yelp or Waze myself. The "anticipation" here doesn't really help me in any material way.
Here's why I am afraid of Google. Google could have the best intentions, but the NSA, the wife Google occasionally sleeps with, doesn't. Everything you say to Google Home could possibly be recorded. Storage and computing power are cheap for Google. They can record everything you say in your home. Their algorithms can connect all sorts of information about you. If Trump wants to create the next Muslim holocaust, Google and FB have the perfect information.
This is what Elon means when he says AI is like summoning the demon. We have this algorithm in our mushy brain. It takes about 20 years to train and lives for about 80 years. Its communication bitrate is pretty low (mostly blabbering through the mouth) and it doesn't retain much information. Only patterns.
Now imagine this algorithm from the mushy brain is run on a silicon chip, with gigabit bitrate, retains almost everything indefinitely and can learn from entire history of humanity.
That algorithm would just need to deceive us until it was powerful enough to wipe us in one sweep.
Google already manipulates humans psychologically to click on their ads en-masse. Giving them more of your personal data is just feeding the devil.
The story of how Target discovers and targets recently pregnant women is a good example [1].
An interesting quote: “we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”
There's no reason to believe Google isn't doing the same thing. And I strongly suggest reading the original article[2], if only for the first two or three paragraphs.
"AI" is incredibly overhyped. Most of the features and applications I've seen can be relegated into the "that's neat" category, before they are turned off and never used again.
Google recently started telling me how heavy the traffic is on my commute because they've figured out I do it every day, and when I'm doing it. That's nice, but I don't care. I could already get that information from my car's GPS and seeing how red the roads were.
I wonder how much infrastructure, fancy-pants machine learning and effort went into just creating those useless alerts?
Google, as a company, has already solved the problem it was created to solve: search the Internet. Now they need to find something for all those twiddling thumbs to do, so we get braindead features that tell me what I already know.
> That's nice, but I don't care. I could already get that information from my car's GPS and seeing how red the roads were.
I guess people have different experiences. Personally, I know how to get home from work, so I don't feel the need to turn on my GPS every time I drive home. So I appreciate getting notified when there's notable variances in drive times, without me having to look for it every day.
How often is it correct? I use Waze infrequently (mostly it's turned off, for privacy), but it often gets things wrong (though it's better than Apple Maps or Google Maps).
I need to get home on time to pick up the kids, but mostly I just leave a bit early...
I agree with you both. "Turning on GPS" could just mean keeping the view of the roads on while driving to see which ones have traffic, not necessarily getting turn-by-turn directions to and from work every day.
Example: "Intuit’s TurboTax stores highly detailed financial data for millions of users who import their W2s, their banking data, info about their mortgages and more. Right now, all of this data is locked into TurboTax, but the company is now thinking about how it can do more with it by giving its users the option to share this data with reputable third parties." ... https://techcrunch.com/2016/09/22/intuit-wants-to-turn-turbo...
Going further, given that an encrypted email to Gmail will simply be decrypted and then available to Gmail, include in the protocol authorized agents of the recipient (via both whitelist and blacklist means). So, if you are hosting your own email but the intended recipient is probably not hosting theirs, the sender can blacklist agents such as Gmail and Yahoo! Mail, or blacklist all except those white-listed, such as ProtonMail.
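Something like that sender-side policy could be sketched as follows. This is purely illustrative: every name here (the lists, the domain-to-agent map, `may_send`) is hypothetical, and a real implementation would resolve the recipient's hosting agent via an MX lookup rather than a static table.

```python
# Illustrative sketch of a sender-side agent white/blacklist policy.
# A real implementation would do an MX lookup on the recipient's
# domain instead of consulting this static map.

AGENT_BLACKLIST = {"gmail.com", "yahoo.com"}   # agents the sender refuses
AGENT_WHITELIST = {"protonmail.com"}           # agents the sender accepts

# Hypothetical map from recipient domain -> the agent actually hosting its mail.
HOSTING_AGENT = {
    "example.org": "protonmail.com",   # hosted by a privacy-focused provider
    "friend.net": "gmail.com",         # mail outsourced to Gmail
}

def may_send(recipient: str, use_whitelist: bool = False) -> bool:
    """Return True if the recipient's mail agent passes the sender's policy."""
    domain = recipient.rsplit("@", 1)[-1]
    agent = HOSTING_AGENT.get(domain, domain)  # assume self-hosted if unlisted
    if use_whitelist:
        return agent in AGENT_WHITELIST or agent == domain
    return agent not in AGENT_BLACKLIST

print(may_send("alice@friend.net"))                     # → False (Gmail-hosted)
print(may_send("bob@example.org", use_whitelist=True))  # → True
```

The blacklist mode lets everything through except known large agents; the whitelist mode is the stricter variant, allowing only chosen agents or genuinely self-hosted recipients.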
Granted, there may be a place for regulations to help us restrict what companies are able to do (perhaps making it easier for you to identify a region that is being recorded, right?), but at some point society can't help the fact that you'd prefer machines were unaware of your existence. That's just something you have to solve for yourself.
I also fear you will be unable to notice in which areas of life and information the distinction between choice and reaction is harmless and which it isn't.
Of course, I'm not talking about "You" you, but just people. Me as well. I feel we are widening the field of unconscious decisions and I see that as inherently bad - in my fellow humans as well.
You could say that Plato wanted us to make easy things simple (link for distinction: https://www.infoq.com/presentations/Simple-Made-Easy).
I believe this to be a move in the opposite direction. We should have a care.
It's like saying we shouldn't use prescription glasses, or medication, or cars, because it's not "us".
Humans invent all these tools and systems to improve and optimize our lives. Make our vision better. Make our health better. Make us move around faster. In the case of AI, make us perform certain tasks more efficiently.
Imagine it wasn't actually a computer. Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule. Would you think that was wrong? That this isn't "you"? No, it's just optimizing your life, but now available to a wider population rather than just rich people.
"I ordered this burger because I was hungry and it tastes good" vs "I ordered this burger because Google was able to successfully predict that I would be receptive to having burgers, or the idea of burgers, placed in my environment"
My culture, education and skills limit what work I can do.
Our culture places limits on a vast number of experiences. On the road and the only thing is fast food? Welp, eating fast food. Live somewhere that only has one grocery store or cable provider?
I don't really see AI in the form Google is peddling as really all that much different. We're just 'more aware' that the world around us is really guiding us.
I may be somewhere new, and can only see the immediate surroundings without a lot of exploring. And let's be real, in the US, most cities are the same when it comes to restaurants/hotels and such. There are differences in culture but we don't usually see them if we're just visiting. Not in a way that matters.
Google will let me know that the things I prefer back home have equivalents nearby.
Fencing ourselves in is what we do. Who knows, perhaps a digital assistant would help us stick to our personal goals and decisions better. Rather than just having to accept what's there.
I'm curious why you think this is bad. I don't necessarily think it is good but I also don't necessarily think it is actually happening
Maybe the illusion is that it was a choice . . .
This is by design, so that the majority of users are confused and leave the defaults as is, enabling Google to do whatever they like.
I never thought of that before. But, what a subtle way for Google to dissuade people from using a tool that could impact their revenue.
And it annoys me that on maps, when you turn off all the spying capabilities there's no fallback to local history. You either share it with us or you get none.
GPS navigation devices with much less storage than a phone have been more than capable of what Google Maps offers for a long time. There's essentially no reason for it to do anything with the Internet except getting map updates.
1- There is no way to set your privacy level.
2- Things that Google/Siri/Alexa know about you are not limited to the name of the bar you frequent. They know much more about you. And you don't know what they know. The sky is the limit here.
3- Things that they know are not limited to you personally. They know about you, your family, your friends and all their interactions. They know very much about the whole society.
2 - My point is that I personally am OK with Google's AI knowing more about me. I respect that others aren't. I'm not naive in my acceptance.
3 - I don't really have a response here.
You can monitor the Alexa's traffic and see that it only sends data when you ask it to do something, and furthermore, Amazon gives you a log of everything you've said to it that it recorded.
My impression also is that most early adopters of this kind of technology are younger people. (again: mostly)
So this brings up an interesting question about the future. As the young early adopters age, what will happen?
a) their privacy thresholds will also increase and they will have a "oh holy crap" moment in the future, where as a middle-aged or older person, who has lived a now much richer and problem-laden life, they will realize that google (and/or other co's) have what they consider now, as too much personal information about them,
or
b) they will keep their young-ish privacy thresholds as older people, and in general, across society, people will have lower thresholds than exist nowadays. In other words the world will change.
My money is on a)
My impression (in the main) is that younger and older people have different views on privacy. Older folk might be creeped out by Google knowing their schedule, but okay with the NSA or FBI or whomever reading their emails "because terrorists", whereas younger folk are more likely to balk at the latter, but be very much okay with the former.
Do you think my opinion is accurate? I'm curious because to some extent I completely agree with you.
Of course it also once told me how long it would take to get to an ex's house from my current girlfriend's place.
How about an 'OK, funny once' command?
(eugenics through large scale suggestions anyone? ;>)
I hope that there will be a day when Google and Facebook will combine forces and work on a sequel to "The Lives of Others" [0].
[0] http://www.imdb.com/title/tt0405094/
Pretty much your only privacy is in your head at that point.
I'm not sure that is a "threshold" of privacy but rather a "I am okay with 24/7 surveillance of all of my activity."
is this story unrealistic, or has it already occurred?
One Wednesday afternoon, at work, I got a notification saying "Travel time to the Lion & Crown". The first thing that ran through my head was "oh my god, I'm living in the future".
The problem is that I want to use Google Maps so what choice do I really have?
Sure I use a dedicated gmail for my phone but that really does not help much.
"Hey, it's a been a while. Why don't you go to ... today? Traffic conditions are favorable too."
Given that, and the existence of pervasive surveillance and data mining, the above is inevitable.
I think the fact that it is different for everyone is completely and utterly obvious. It's clear the author doesn't think it's OK, but that it's his/her opinion.
Extremely annoying. This sort of thing should not be acceptable, an honest mistake results in every place I've been being logged in such a way that anyone with access to my Google account, access to Google servers or with a subpoena can have my full location history in a matter of seconds.
This needs to be a big red option every time you add an account "we're gonna log everywhere you go and hand it over to whoever we feel like, you cool with that?". It'd be different if the log and analysis were done only on my device, but doing this on Google's servers is completely unacceptable by anyone with even the weakest standards of privacy.
In your Settings app, scroll down and tap Google (or open the separate Google Settings app).
Tap Location and then Location History.
At the bottom of the screen, tap Delete Location History.
Also https://maps.google.com/locationhistory allows for delete all history.
Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
Anyway, the point being, if the assistant lived entirely in your own computer, it would be entirely different. Most people are not concerned about what their "computer" knows about them; they're concerned about what companies and their employees do.
The trope-namer (Star Trek AI) was a ship-wide AI - considering the ship sizes, it is definitely closer to the "cloud" model and not limited to private instances on officers' bunk/bridge terminals/tricorders. Perhaps a hardcore Trekkie could answer this: is there any canon that defines the AI's scope? Is it restricted to just one ship, or could it possibly be a Federation-wide presence with instances on ships?
Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks. The following is a utopian notion, but had private networks seen as much R&D as the public clouds, they would be significantly less cumbersome than today's clunky VPNs. Imagine all of your devices collaborate directly with one another and with you on your own secure private network—no central cloud servers needed. Your personal assistant is software running on a computer you own rather than a third-party's centralized server.
I still feel this ideal will eventually be realized, but for the time being, no large technology company is willing to take the necessary risks to buck the trend of centralization.
The biggest fiction propped up by centralization and cloud proponents is that it would be impossible to provide the kind of utility seen in Cortana, Siri, Google Assistant, Alexa, et al without a big public cloud. A modern desktop computer has ample computational capability to convert voice to text, parse various phrases, manage a calendar, and look up restaurants on Yelp. Absolutely nothing the public clouds provide strikes me as something my own computer would struggle to do (to be clear, I would expect a local agent would be able to reach out to third-party sites such as Yelp or Amazon at your command in order to execute your desires, but it would do so directly, not via an intermediary).
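As a toy illustration of that claim (entirely hypothetical code, not any real assistant's API): once speech has been converted to text, the core dispatch of a local agent is just phrase parsing plus lookups against data the machine already holds, with no network round-trip required.

```python
import re
from datetime import date

# Hypothetical local "assistant": a few phrases answered entirely from
# data stored on the user's own machine. No cloud, no intermediary.
TODAY = date(2016, 10, 5)
CALENDAR = {TODAY: ["Dentist at 14:00"]}

def handle(utterance, today=TODAY):
    text = utterance.lower().strip()
    if re.match(r"what('s| is) on my calendar", text):
        events = CALENDAR.get(today, [])
        return "; ".join(events) if events else "Nothing scheduled."
    if text.startswith("add event "):
        # Store the event locally; nothing leaves the device.
        CALENDAR.setdefault(today, []).append(utterance[len("add event "):])
        return "Added."
    return "Sorry, I don't understand."

print(handle("What's on my calendar?"))
```

A real agent would swap the regexes for an offline speech/intent model, but the point stands: the dispatch and storage layers have no inherent need for a central server.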
A few years back, when Microsoft was at the beginning of its Nadella renaissance, I had hoped it would be the first technology titan to disintermediate the cloud and make approachable and easily-managed personal private networks a thing. Microsoft's legacy of focusing on desktop computers would have made it well-situated to reaffirm your home computer as an important fixture in your multi-device life. They could have co-opted Sun's old bad tagline: "Your network is your computer." But they elected to just follow the now-conventional public cloud model, reducing everyone's quite-powerful home computer to yet another terminal of centralized cloud services. Disappointing, but I think it is ultimately their loss. I suspect a lot of money is on the table for someone to realize a coherent easy-to-use multi-device private network model that respects consumer privacy by executing its principal computation within the network.
Except that's what I did for many years using a computer only as a terminal for an AIX mainframe. My mail was there, I browsed what was the web, used gopher, wrote programs, all stored there.
I would like to say that the cloud we have is a privacy concern because we don't know the full scope of data collected, nor what happens to it, nor do we own any of "our" data once it's in the cloud. But not every cloud would have to be that.
There's a perfect world where one wouldn't have to be paranoid about this stuff, but it's not what we have right now.
I suspect that chances of would-be burglars or identity thieves breaking into Google data are pretty slim, in comparison to a home-installed system.
OTOH both Google and a private person can be strong-armed by a court order, or even a three-letter agency, to open up their AI knowledge vaults.
Maybe someday that will be a realistic endeavor, but it would take a lot of effort to set up and maintain your own personal versions of all of google's services, and integrate them
When's the last time Google risked itself or business or any tech ceo risked their livelihood for the sake of the greater good? The problem isn't necessarily the knowing everything part, it's who does what with it that's the problem. I can't really think of any company or person with influence in tech that'd be willing to dive onto that bombshell to protect us all.
February of 2001.
https://en.wikipedia.org/wiki/Joseph_Nacchio
The former CEO of Qwest, a massive telco, spent years in prison for insider trading. He says this is because he resisted the NSA's demands to tap Qwest's network and hand over customer data.
A truly user-aligned AI assistant would be great. Ideally in the future these things will not be tied to indirect business models, but rather will be something you buy and all data/services will be under your control.
In the Star Trek world they had no advertising because they were a communist society. Everyone dressed the same or slightly differently based on rank. It's interesting how the new movies play over that.
In Star Trek you couldn't choose your AI. In our world you can. At the start of their development most of them are targeted at selling you stuff - but the industry is young and who knows where it will go.
You cannot compare the world's biggest seller of advertisement space with the ST universe. The motivations aren't aligned: Google/Alphabet want to make sales based on my information.
I agree that I found these oh so clever AI fantasies interesting in my youth, still do to a degree. But I always pictured the data being held inaccessible to humans in general ("Where's my wife right now?") and not in the hands of a golden few with no oversight.
The Star Trek fantasy is, "Computer, what were the principal historical events on the planet Earth in the year 1987?", and it could totally answer that without sending your entire fucking message history to google for deep AI inspection.
That's part of the Star Trek fantasy. But so is, "Computer, locate Commander Riker" and "Computer, use personal logs and personality profiles from compiled databases to create a personality simulation of Dr. Leah Brahms."
Google has a strong incentive to not allow their aggregated user data to leave Google-- the behavioral data Google collects is the reason why Google is valuable; if they start shipping that data off to third parties, suddenly the third parties don't need Google anymore.
(Same with Facebook-- they're not "selling" your data; they're selling the opportunity to target you based on your data, but the data itself is too valuable to Facebook to sell.)
ML relies on large data-sets and if anyone tried to release a personal device it simply wouldn't even work, let alone compete with the mass surveillance google/ms/amazon are bringing to bear.
Unless the state-of-the-art in AI suddenly morphs, we seem to be stuck between giving up our privacy or having vaguely intelligent AI.
I personally fall heavily on the privacy side of stuff, but I can see the intellectual and commercial appeal of pretending it doesn't matter in order to get there.
1. The information were secret (between you and the service/device/implant), not shared with a whole company and its third-party interests.
2. It wasn't making money for a third party after the initial purchase price of the device, service, etc.
But does it? Does it have to know your birthday? (Leave aside the fact that birthdays are somehow part of a superkey for your identity.)
Why should it know my residence, my spouse, or my CC# (with Apple's TouchID maybe it won't need to)?
Google's concept of AI is too creepy for me. It can be useful without being creepy. They're not even trying to make it less creepy.
Overlay on this the subtext that the NSA and other TLAs are monitoring all this (let alone other countries). While I may trust Google, I don't trust them not to be forced to collude with the government.
How could it be otherwise?
I've totally passed on the 'mobile revolution', I do have a cell phone but I use it to make calls and to be reachable.
This already leaks more data about me and my activities than I'm strictly speaking comfortable with.
So far this has not hindered me much, I know how to use a map, have a 'regular' navigation device for my car, read my email when I'm behind my computer and in general get through life just fine without having access 24x7 to email and the web. Maybe I spend a few more seconds planning my evening or a trip but on the whole I don't feel like I'm missing out on anything.
To have the 'snitch in my pocket' blab to google (or any other provider) about my every move feels like it just isn't worth it to me. Oh and my 'crappy dumb phone' gets 5 days of battery life to boot. I'll definitely miss it when it finally dies, I should probably stock up on a couple for the long term.
Reading stories like this makes me want to carry a personal tracking device even less.
---
[1] People tend to have fewer emergency reasons to cancel when they can't reach you 5 minutes before the appointment.
The courts are still deciding when/whether that information requires a warrant.
But short of anybody wanting to aim a missile at me I figure that I'm better off with the courts in my country where such information does require a warrant at present (and without any indication that this will change), and without the company controlling those assets trying to 'mine' my profile in order to advertise to me more efficiently.
A close friend is a longtime professional software developer, always interested in mobile. We used to have extensive discussions about why I preferred carrying a small flip-top notepad and a pen vs a phone or tablet or whatever with a stylus (many have come and gone over the years). In the use-case scenarios I put forward (small lists, secure disposal, privacy, 'battery life') my little notebook frequently was the best approach for me. He disagreed, but that was the point of chatting about our views.
The big change is that the new stuff offers the ability to do things in a more efficient way. While it seems to offer very little benefit for individual tasks, some people will see a dramatic benefit while using it for the multitude of tasks that clutter their life. Other people will benefit simply because it enables them to do things that they would not have done before.
None of this is meant to dismiss your points. Personally, I find all of this data mining creepy even when I am confident that they are collecting the data for my benefit and that they won't use the data to my detriment when they are using it for their own benefit. Yet many people don't share that world view. Those people will benefit from Google's services, while nothing is being introduced to hinder the lives of those who don't use those services.
Most of the coolest memories I have were the product of something spontaneous, or mistakes, that become close to impossible with a computer and internet in your pocket 24/7.
Assessing what's around you, talking to strangers, actively looking for something without it instantly popping in suggestions after you've typed 4 characters, all those things have been a great source of circumstance-based, little everyday life adventures.
This is the difference between risking buying a random book, or browsing reviews and picking a 5 star one to download.
This is the difference between discovering a place you'd never thought existed while waiting for someone and poking your nose around, instead of standing there, frantically watching their dot on the map get closer to you.
This is the difference between the mesmerizing feeling of playing the first expansions of World of Warcraft, versus the tiring experience of the super-streamlined versions that followed. Yes, they are less frustrating, but they don't bring a tear to your eye when you think about them; they just feel blandly satisfying.
A few minutes ago I got up to open the door for my cat, and in a few minutes she'll be back and I'll be interrupted again. I feel like those interruptions are precious. They keep you connected to reality. I could install an RFID cat door, hell I could make a voice activated one in a couple weekends, and I would not be annoyed anymore. I would also never have seen all the things I witness every time I get to that damn door.
So far I haven't seen much, but based on my limited experience I believe customers are going to continue handing over their data to Google and Facebook in exchange for personalised services.
The truth is, the only times my smartphone has actually felt smart is when Google has been mining my information from various services (mainly Gmail and Calendar) and presented it to me at correct time, enhanced with other information they have gathered from web.
I don't think there will be any major backlash from consumers. The old boiling-frog comparison applies here.
Even then, we have 'nanny-cam'.
Meanwhile actual geeks and hackers will be fine, because we'll have used our intuitions about these things to choose privacy conscious alternatives to mainstream technology.
In addition to which it is increasingly the case that 'privacy' is regarded as an elite thing, and thus will ultimately be sought after by less educated classes. Like how green lawns used to be for the rich to show off that they didn't need to grow crops to survive and now everybody has them and doesn't know why.
Remember Hillary Clinton and the emails. Remember Colin Powell and 'why can't I use my pda in this highly secure area'. These people are the dinosaurs, and in the business world if you're not hack-resistant you're going to go bust.
tldr;
> I'm not sure what it would take to get more people to really care
Their interests get attacked or violated. That is what.
Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
Totalitarian Surveillance is here. In the west. Secure document releases aside, it's too easy to do to imagine a state actor not doing it.
Data breaches of differing severities occur every day, at nearly every company. I would have thought Yahoo was big enough and smart enough to avoid it; but no. Not Yahoo, not Sony, not security contractors, not credit bureaus, not Apple (a'la celebrity photo leaks), not Google (stories abound of individual GMail accounts being hacked).
(Have worked at google in the past, may in the future, am not currently). You say this as though anyone at Google (or Microsoft or whatever) can go in and search for 'falcolas' and look through your GPS history.
I'm honestly not sure there is a single individual at the company who has that power. I think the best thing Google could do is publicize their internal training and documents on personal information; the regulations and such made me a lot more comfortable giving Google, the amorphous entity, my data, because no person is going to be looking at it.
>, not Google (stories abound of individual GMail accounts being hacked).
One of these is not like the others, unless you're talking about something I'm not aware of. Hacking an individual GMail account requires guessing/taking someone's password, which is not an attack on Google's infrastructure (unlike the Yahoo, Sony, Apple, etc. examples); it's an attack on a bad password.
E.g. Wakes up at 5:30 am, travels to a construction site, lives in a house with a large number of people -> signals possible immigrant. Or this:
Detecting Islamic Calendar Effects on U.S. Meat Consumption: Is the Muslim Population Larger than Widely Assumed?
https://mpra.ub.uni-muenchen.de/41554/
We have to think about data not just in terms of our relative safety, but in terms of what could happen in adverse circumstances. And not even just in terms of our own government, but foreign governments.
A very limited number of Google employees have access to private user data (only when it's vital to their work) and they have strict policies in place (data does not leave the data centers etc.). Which third parties are you referring to? As far as I know, Google does not give their users' private data to a third party.
You forgot: every single state which Google is subject to.
1. Concern that a single, third-party entity (Google, in this case) might peer into every aspect of our lives, and/or reverse-engineer an exhaustive catalog of our entire lives, by virtue of data collection.
2. Concern that many consumers will unwittingly opt into such control, unaware of the privacy they're relinquishing, and unable to make informed decisions about the possible applications and consequences of the tradeoff.
3. Concern that the custodian of all this personal data (Google) might use, sell, transmit, or turn over the data in ways we had not anticipated or believed we'd consented to.
Personally speaking, I understand these concerns but also understand the potential upside. I'm not 100% sure where I stand just yet. The aforementioned bullet points are presented without editorial comment; just trying my best to articulate what I believe to be the crux of people's concerns here.
Having said that: Google is not in the business of making your life easier, but in the business of selling you ads. The data that Google collects about you is incredibly powerful, allowing them to go from "simple" manipulation to sell you stuff you wouldn't otherwise buy, to full-scale blackmail if they see fit (not saying that this is happening, but if they wanted to, who would stop them?).
It's putting too much power in the hands of a single, amoral entity (like all corporations). That's not good.
The law, and the economic interest of all the rich shareholders that care about the company's reputation.
I don't want a company that employs tens of thousands of people, along with a government in a foreign country, along with all the governments on the data route in between, and their employees, civil servants and assorted snoopers of all shades, to have access to the artificial assistant's communications and thoughts relating to me.
All these organisations are made out of people. People with power are inherently untrustworthy; they need enforcement mechanisms to be kept in line, and enforcement mechanisms need to be activated every now and then to stay in working order. That is, occasional abuses are required to keep abuse in line. The thin blue line wavers like a pendulum: it's how we know it's working.
Edit: Interesting comment in another thread: https://news.ycombinator.com/item?id=12639530
Sure it would be useful. Sell the assistant as a locally-installed app that guarantees personal data never leaves the LAN and will sell.
> sure they'll collect your info
Only if you let them. Demand better behavior from their software and business practices.
> Totalitarian Surveillance or Data Breach Concerns
What you seem to be missing is that the concern isn't about today's level of surveillance or today's data breach risk. Data generally persists indefinitely once it makes its way into a database or logfile.
To make a claim that these are low risk requires that at no time in the future will surveillance risk increase or data breaches become more common, ... or that the company will run into financial trouble and need to sell your data, ... or that a breach will be forced by a government (not necessarily your's or Google's), ... or that your data will be aggregated into other databases, increasing the "predictive" power and attack surface, ... or any of the other unknown ways your data could be used in the future.
Humans are already known to be terrible at assessing risk, especially when there is a very large separation between the cause and effect. Smoking today giving you cancer many years later is a traditional example. We already know data breaches happen, well-meaning employees make mistakes or succumb to corruption, and external powers such as governments or organized crime occasionally take away your agency. Do you really want to claim that none of these risks will ever happen? Because that's the actual wager you're making when you use Google's products.
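To put rough numbers on that wager (illustrative figures only, not measured breach rates): even a small annual risk compounds over the indefinite lifetime of stored data.

```python
# Hypothetical example: assume a 1% chance per year that a given data store
# is breached, leaked, or compelled. The chance it happens at least once
# over N years is 1 - (1 - p)^N, which grows well beyond intuition.
def cumulative_risk(annual_risk, years):
    return 1 - (1 - annual_risk) ** years

for years in (1, 10, 30):
    print(f"{years:2d} years: {cumulative_risk(0.01, years):.1%}")
```

Under that assumed 1%/year figure, thirty years of retention pushes the lifetime exposure above one in four, which is exactly the cause-and-effect gap people are bad at reasoning about.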
Maybe we've turned a corner and will never go back to that. But I don't have confidence yet.
No, not really. Restaurant recommendations and traffic reports are simply not that hard for me to find on Yelp or Waze myself. The "anticipation" here doesn't really help me in any material way.
This is what Elon means when he says AI is like summoning the devil. We have this algorithm in our mushy brain. It takes about 20 years to train and lives for about 80 years. Its communication bitrate is pretty low (mostly blabbering through the mouth) and it doesn't retain much information. Only patterns.
Now imagine this algorithm from the mushy brain is run on a silicon chip, with gigabit bitrate, retains almost everything indefinitely and can learn from entire history of humanity.
That algorithm would just need to deceive us until it was powerful enough to wipe us in one sweep.
Google already manipulates humans psychologically to click on their ads en-masse. Giving them more of your personal data is just feeding the devil.
An interesting quote: “we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”
There's no reason to believe Google isn't doing the same thing. And I strongly suggest reading the original article[2], if only for the first two or three paragraphs.
[1] http://www.forbes.com/sites/kashmirhill/2012/02/16/how-targe...
[2] http://mobile.nytimes.com/2012/02/19/magazine/shopping-habit...
Google recently started telling me how heavy the traffic is on my commute because they've figured out I do it every day, and when I'm doing it. That's nice, but I don't care. I could already get that information from my car's GPS and seeing how red the roads were.
I wonder how much infrastructure, fancy-pants machine learning and effort went into just creating those useless alerts?
Google, as a company, has already solved the problem it was created to solve: search the Internet. Now they need to find something for all those twiddling thumbs to do, so we get braindead features that tell me what I already know.
I guess people have different experiences. Personally, I know how to get home from work, so I don't feel the need to turn on my GPS every time I drive home. So I appreciate getting notified when there's notable variances in drive times, without me having to look for it every day.
I need to get home on time to pick up the kids, but mostly I just leave a bit early...