canadian_voter · 9 years ago
I think the megapolitical perspective on violence in The Sovereign Individual[0] by James Dale Davidson and Lord William Rees-Mogg can be helpful here.

Unions were powerful because they could limit access to labour, which was required for production. Now they are increasingly powerless. Mass manpower was once required to wage war. Now it is increasingly less useful in a world of high-tech warfare.

What prevents the powerful from going straight to the source of production and value? I'm not talking about off-shoring manufacturing to China; I'm talking about something more extreme, along the lines of a small cabal of wizards in a tower conjuring spells to extract energy directly from the wind and sun, and materials directly from the ground. The tech is far off, but the direction is clear. Maybe y'all think you're going to be one of those wizards. We'll see.

But what's going to happen to the rest of us? Are we all going to wake up some day as basic-income supported artists, happily chewing on organic granola and self-actualizing (or not) as we please? I think a study of history suggests it's not going to be that easy. Those union rights were hard fought. People died.

What happens when those at the top decide it's not worth keeping 8 billion people around just for kicks, when they can make do with ... 5 billion? 500 million? How many programmers, painters and yoga instructors do we need? On a planet with dwindling resources, tough decisions are going to get made.

So yeah, watch out for those robots. Especially the ones with the lasers on their heads. (That's a joke. But the rest?)

[0] https://www.amazon.com/Sovereign-Individual-Mastering-Transi...

eli_gottlieb · 9 years ago
>Unions were powerful because they could limit access to labour, which was required for production. Now they are increasingly powerless.

Given how little is actually automated right now, I'm always a little skeptical that automation drove the labor union extinct. It seems like politics, corruption, and mismanagement drove American unions near-extinct. A lot of countries still have very active trade-unions that take a strong hand in production and economics (Germany and Denmark, for instance).

AndrewKemendo · 9 years ago
> Given how little is actually automated right now

Your time horizon is likely too short. Compare today to 1900. 117 years is basically nothing in history. We even still have someone who was alive then [1].

Compared to then, everything has some proportion of its labor automated. In fact, I struggle to name one profession that has not been impacted by some form of automation.

[1] https://en.wikipedia.org/wiki/Emma_Morano

e40 · 9 years ago
If you think about how many people, 300 years ago, were involved in farming... it was a lot. How many today? In 1st world countries, very few. Farming has massive automation.

rm_-rf_slash · 9 years ago
One could argue that the shared prosperity created by automation/mechanization and distributed in part by unions raised people's living standards to the degree that they no longer believed that unions were necessary to maintain their lifestyle.

As corporations sent jobs to other countries and Reagan fired the air traffic controllers, people were complacent enough to let it happen while they succumbed to their own pleasures.

darpa_escapee · 9 years ago
Unions can and have negotiated for sharing the benefits of automation with workers and for compensation to workers whose jobs were automated away.

L_226 · 9 years ago
There's probably not much preventing a society similar to what you describe from arising. I've had some thoughts regarding this, in no particular order:

- Surplus non-elite human stock will be used for mass colonisation of solar system/extrasolar bodies where mortality is high (aside: and possibly beneficial, in terms of accelerated evolution due to reduced reproduction cycles).

- The elite/AI cabal at the top will subject the unknowing masses to physiological and psychological experimentation for their own ends. (this one is probably already happening :)

- A mass uprising of the disaffected non-elites will enforce some kind of Butlerian Jihad (it's just a union to keep the AIs out!).

- Elites will be benevolent overlords who take their stewardship of the species seriously, and we all live blissfully in massive space-elevator-anchored orbiting ring habitats with UBI and free soylent, while the mysteries of the universe are probed until the end of time.

smaddali · 9 years ago
> Elites will be benevolent overlords who take their stewardship of the species seriously, and we all live blissfully in massive space-elevator-anchored orbiting ring habitats with UBI and free soylent, while the mysteries of the universe are probed until the end of time.

Regarding the above: that has been the way of life in the past. Feudal lords took "care" of the folks under their dominion. I think the key is social/economic mobility. If the people at the apex can perpetually stay on top by using AI and stomping out any challengers, then it would be a major problem.

jwatte · 9 years ago
Lifting bodies into space is the expensive part, unfortunately, so a mass approach won't work.

However, once the elites get too greedy, and the masses too marginalized, there will be riots and civil war, so the smart ones will go for the "grow the pie just enough" approach.

adrianN · 9 years ago
Until we have a space elevator there will be no mass exodus to space. Probably not even after we have one. Lifting stuff out of the gravity well with rockets is just too expensive. It's much cheaper to breed on the target planet.

PavlovsCat · 9 years ago
> What prevents the powerful from going straight to the source of production and value?

Nothing. Certainly not this crowd. Which is why I expect the long term future to hold something akin to a Blitzkrieg that will actually be over in a flash. After the delete function has been prepared and perfected, the delete button gets pushed. Not by the naive fools who built it, mind you. Those will a.) hardly know what they're working on and b.) be gone first.

willholloway · 9 years ago
I have an extremely strong feeling that this is the most important comment ever posted to Hacker News.

We won't understand that it was until it's over, but then, looking back forensically over the waste and ash heap before us, we will see that PavlovsCat predicted all of this, and we should have listened.

rm_-rf_slash · 9 years ago
I only fear for the future of humanity in the presence of AI when I see how poorly humans treat each other. We design machines in our own image to solve the problems familiar to us; a manufactured brain is the fullest extent of this process.

If we as a society develop tools to remove people from production and leave them at the mercy of the modern jungle then the machines designed to do this will follow this path as well.

If we design machines to accommodate the needs of a large global population without regard to age, race, religion, productivity, or other factors that have historically been used to separate the "us" from the "them," then the machines will continue to solve the problems we designed them for.

We could, as a species, unanimously abandon all AI tomorrow, Dune-style, and the problems and paths of history would largely be the same. The strong will do as they please and the weak will suffer as they must.

We just need to be better to each other.

canadian_voter · 9 years ago
> We just need to be better to each other.

I believe that the fundamental problems of our time are ethical, not technological. If we can figure that part out, the technology should take care of itself.

I would love to live in a post-scarcity utopia where we all run around self-actualizing. I don't think we need to give up AI to get there -- in fact, I think technology will be the key that unlocks the gate.

But we have to have the wisdom to pass through it with style and bring as many people as we can on the way. Otherwise we might find ourselves fighting for our place in line and possibly even annihilate ourselves in the process.

geomark · 9 years ago
"We just need to be better to each other."

Yes, that is the answer. Sadly it seems unlikely, since throughout history a large portion of the population has remained cruel and uncaring. Many thousands of technologists in the US are involved in making the weapons and targeting systems that are killing people across the globe. It seems unlikely they are going to wake up and decide to treat others better.

ericjang · 9 years ago
agreed. my worry is that even if people decide to ethically develop AI, there will be those who disagree with those ethics and/or willingly ignore their conscience. I don't see a way out short of a political revolution / bloodshed.

blacksmith_tb · 9 years ago
The problem with a vision of robots manufacturing things for 'wizards' seems to me that it replaces the production side of our current system, but not the consumption side. If we are all made redundant, how will we buy their wares? Or, if you take the more sanguine view, why would they need to make them at all if no one would buy them? Either way, it would be a dramatically unbalanced arrangement...

canadian_voter · 9 years ago
They don't need anyone to buy their wares if they are self-reliant and can produce everything they need. We're not talking about making more and faster cell-phones with less labour; we're talking about a small group of people making everything they need without any labour at all.

That might not be a "handful" of people, but it's certainly a smaller number than our current population.

Who needs coal miners if we have all the cheap solar we can use? Who needs farmers if we automate farming? Who needs drivers, maids, schoolteachers, cooks, etc. if we can have the machines do it for us?

Who needs lawyers if our disputes are simply settled by some AI judge? Who needs cops if our streets are patrolled by security robots and we have a Minority Report style crime-prediction system?

In a way, it's paradise -- if you're at the top.

Oh, and I'm pretty sure they suggest democracy as we know it is dead. One person, one vote? Only in an age of massed human warfare. If you can take out a million or a billion people with high-tech biological warfare (and those people aren't producing anything anyway) what good are they and why do they deserve the vote?

Not saying it's right, but the logic of megapolitical violence is cruel and unyielding. It's quite the lens.

shams93 · 9 years ago
Exactly, we are a consumer-based economy. Robots don't wear blue jeans; the most important job we do is consuming.

nickthemagicman · 9 years ago
You're right. It's terrifying but holy shit haha! When a single drone can kill thousands of people you won't be able to revolt either.

ENGNR · 9 years ago
Centralised control... means single point of failure

The drone could also be hacked by a terrorist or enemy state

The real damage is rolling back education and destroying the media, destroying the will to fight if ever needed and basically guaranteeing that someone will set those drones up and take the whole pie

graycat · 9 years ago
AI for self-driving cars? I keep thinking that occasionally in driving we encounter:

(1) A situation where we need to stop the car and converse in our natural language, that is, for just our half of the conversation, do speech recognition and natural language understanding. So, the AI approach would be to get the list of the 100,000 most common conversations, tune some speech recognition to those, f'get about the actual language understanding, and, instead, for each of the 100,000 cases implement the most common resulting action or response in the data? Sorry -- in that case I'd rather not be in that car!

(2) A situation where the driver needs actually to have real human understanding of a situation that, really, has never occurred before and, thus, is not in any AI training set. E.g., the vehicle ahead is a pickup truck and has some liquid dripping out of the truck bed out back. Somehow the liquid doesn't look like water. Taking a whiff, it smells like gasoline. Hmm. That stuff could catch fire, move to ignite the stuff in the bed of the truck, and maybe something could go "Boom". So, what the heck to do? Sure, slow down, get back, well back of the truck, change lanes and move ahead of the truck, pull off the right side of the road and stop, etc. IIRC, so far such general deductive reasoning is beyond AI. IMHO, such reasoning requires real AI, or whatever we are calling that now, and we don't know how to program computers to do that now.

IMHO, first cut, for self-driving cars, the best chance would be to do some extensive re-engineering of the roads.

Here is a general point: We don't yet understand how general human intelligence works and, thus, don't know how to program it. So, we are having trouble evaluating the automation we now have and, thus, are vulnerable to overestimating how far to real AI the current work really is.

Besides, AI hype, if this is a case of it, and overestimating how much progress has been made toward real AI is a very old story: as I've heard, way back in the days of vacuum-tube computers, IBM was pushing publicity about their "giant electronic human brains". Looks like IBM is still doing this.

xapata · 9 years ago
1. How about a vehicle AI that allows you to ask it to stop and start?

2. I'm not confident that I would be able to notice a truck leaking gasoline, let alone smell it while driving. I'm comfortable riding in a self-driving car that can't perform that recognition. And again, the rider can simply tell the vehicle to pull over, right?

pmalynin · 9 years ago
You just described the plot of The Time Machine by H. G. Wells.

I also recommend the movie (the original).

bbctol · 9 years ago
I laugh and/or cry every time someone on HN responds to fears of the rich entirely abandoning the lower classes with "but then they'll have no one to buy their products and make them money!" or "but you still need people to maintain this technology!" Complete misunderstanding of what's going on. One wealthy person would be happy to spend a small amount of time administering the vast automated factories that support their high-class lifestyle. To put it simply:

If: technology/automation reaches the stage where a group of only ~10,000 humans can fully support a luxurious and peaceful existence for ~10,000 humans, and this technology is owned and controlled by the richest ~10,000 humans.

Then: Earth's population will plummet to those necessary, and no charity will be given to those who do not control this technology. Not through genocide, simply through starvation. Those who own the land will keep it.

mark_l_watson · 9 years ago
Unfortunately, you make great points. Most workers will not be needed, and without real push-back, corporations and economic elites will not want to support them.

A little off topic, but the book "The Sovereign Individual" changed my outlook on the world when I read it almost 20 years ago.

filoeleven · 9 years ago
Two contrasting outcomes of automation are outlined in Manna, a 2003 essay by Marshall Brain. From the first half:

> Ultimately, you would expect that there would be riots across America. But the people could not riot. The terrorist scares at the beginning of the century had caused a number of important changes. Eventually, there were video security cameras and microphones covering and recording nearly every square inch of public space in America. There were taps on all phone conversations and Internet messages sniffing for terrorist clues. If anyone thought about starting a protest rally or a riot, or discussed any form of civil disobedience with anyone else, he was branded a terrorist and preemptively put in jail. Combine that with robotic security forces, and riots are impossible. The only solution for most people, as they became unemployed, was government handouts. Terrafoam housing was what the government handed out.

And from the second:

> Inventors would work on their inventions, using materials and equipment provided by the robots. Scholars would do their scholarly research, finally free to study whatever they like, using the infinite intellectual resources available on the network. Scientists would start pursuing their scientific goals using research facilities provided by the robots. [...] There are people who are experts in their various fields -- engine design, scrap booking, fusion reactors, needlepoint -- and they would love to pass their knowledge on to other people. They would write books, make videos or have live lectures and workshops for people to attend. People interested in the martial arts would practice them every day. People interested in video games would play them every day. People interested in gardening would garden every day. The majority of people have a talent and, if they had the time, they would cultivate that talent and use it.

The contrasting principles that drive the two societies are clear, and the second quote I chose doesn't convey how advanced their society has become by enabling every human to follow and develop their particular interests. Both visions are on the extreme ends of a spectrum, and while the most likely outcome in reality is closer to the middle, I'd like to try to push it up towards the utopian end.

In any case, the things Watson is doing are far more complicated than what Manna started out doing, which was replacing management in a fast food restaurant.

http://marshallbrain.com/manna1.htm

Edit: Here is a recent report from the White House on AI, too: https://www.whitehouse.gov/blog/2016/10/12/administrations-r...

maverick_iceman · 9 years ago
Queen Elizabeth I refused to grant William Lee a patent for his knitting machine because she was afraid that it would be devastating for the livelihoods of her poor subjects. So ya, in spite of having a stellar pedigree, this sort of fear-mongering doesn't exactly have a reputation for successful forecasting.

ericjang · 9 years ago
I like your insight. Is that the main message conveyed in [0]? (I haven't read it)

What do you think is going to happen? Can society do something to prevent mass concentration of wealth into the 0.00001% of people via the march of technology and innovation, or is this an inevitable outcome?

canadian_voter · 9 years ago
It's a great book. Their analysis of violence as the organizing force of society was what stuck with me the most.

They also talk about how the core values people will need are trustworthiness, self-reliance, etc. That's where the title comes from. But it's anything but a self-help book.

> Can society do something to prevent mass concentration of wealth into the 0.00001% of people via the march of technology and innovation, or is this an inevitable outcome?

That's the ultimate question. Very much looking forward to finding out the answer! I'll let you know if I figure it out. ;)

stale2002 · 9 years ago
The top percent would probably just let everyone else have the scraps.

If it costs them little to nothing to keep the masses under control, then why wouldn't they?

dingbat · 9 years ago
> What happens when those at the top decide it's not worth keeping 8 billion people

those "people at the top" will (or should) be the first to go. these "wizards" will be unnecessary and redundant, offering little of value that could not be provided more efficiently by the true machine wizards.

ben_w · 9 years ago
I suspect the "wizards" are likely to be those who own the means of production.

pryelluw · 9 years ago
To save you time:

Author praises advances in AI by big tech, complains about how he was served the wrong medication and how a robot would not have made the error, and closes by saying that robots will be better at doing things than humans.

Its a shitty post that does not even really take into account the current state of AI, how robots are prone to errors as well as humans due to faulty hardware, and well, the fact that some jobs are only trusted to humans. Even if the margin of error may be higher.

caconym_ · 9 years ago
> Its a shitty post that does not even really take into account the current state of AI, how robots are prone to errors as well as humans due to faulty hardware, and well, the fact that some jobs are only trusted to humans. Even if the margin of error may be higher.

Agreed. The author seems to be doing fairly irresponsible things with statistics, too: maybe "the statistical likelihood of dying from a self-driving car is like falling off a building and being struck by lightning on the way down" because there are so very few self-driving cars on the road, and they're all currently monitored by human operators? I think they'll eventually be safer than humans, but implying that we're already there is just wrong.

Jill_the_Pill · 9 years ago
Agreed. It got me thinking about the balance of workers doing a good job and those doing a poor job in my experience. Perhaps my standards are lower than the author's, but I would say I've encountered maybe 50:1 good workers to bad.

michael_h · 9 years ago
You have 50 seamless interactions, one bad one, then you go home and say, "You won't believe what happened to me at the pharmacy. I can't wait for robots to take over."

darpa_escapee · 9 years ago
A human can explain why it made an error. Good luck sussing out the "why?" when an AI makes an error.

andrewflnr · 9 years ago
A human can claim to tell you why it made an error. Good luck sussing out whether they're wrong, or lying.

dukky · 9 years ago
Counterexample: an AI can give you a full stacktrace and memory dump or allow you to attach a debugger when it makes an error.
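
That counterexample can be sketched in a few lines of Python (a toy illustration; the `dose_for` function and the zero-weight input are invented, not from the thread):

```python
import traceback

def dose_for(weight_kg):
    # A toy "robot pharmacist" step that fails loudly on bad input.
    return 10 / weight_kg

try:
    dose_for(0)
except ZeroDivisionError:
    # format_exc() returns the exact chain of calls that led to the
    # failure -- introspection a human explanation can't guarantee.
    print(traceback.format_exc())
```

The printed traceback names the file, line, and call stack of the error, which is the kind of post-mortem a human "explanation" only approximates.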

joe_the_user · 9 years ago
Indeed,

The post is more or less the worst way to introduce the topic of automation.

What's needed or not for automation of various jobs is a very different topic from the increased abilities of AI in particular. In 1902, the first clerkless stores opened in NYC, known as automats.

Modern technology naturally may make clerkless stores and other approaches more appealing but the potential has existed for a long time. And that means that the degree of automation and job loss one sees is a complex question, hinging on both social and technical questions.

https://en.wikipedia.org/wiki/Automat

pryelluw · 9 years ago
Yes, great point. Automation != AI. It's a common mistake.

zeroer · 9 years ago
The author kinda glosses over the fact that he himself didn't immediately recognize the wrong medication.

imron · 9 years ago
That wouldn't necessarily change the premise, and if anything would bolster it because the author isn't a robot either, and therefore is prone to the same human error he's complaining of.

eridius · 9 years ago
I don't think it's unreasonable to assume that he didn't actually look at the medication until it was time to take it. Pharmacies tend to give you your medication in a bag, so if you don't unpack the bag immediately, you're not even going to lay eyes on the medication.
AndrewKemendo · 9 years ago
> Its a shitty post that does not even really take into account the current state of AI

Actually it does, if briefly:

"Put simply, AI can instantly identify all the troublesome gene sequences of a beagle’s genome to determine the likelihood of certain diseases but has struggled to identify a beagle in a picture."

But the author was making a broader point about how the "promise" or "eventuality" of AGI obviates most work. You can certainly disagree, which many do, but I personally am betting it's right.

pryelluw · 9 years ago
I did consider that point and agree to an extent. However, it seems to be written by someone who does not have enough knowledge or experience in the field. Someone who is merely repeating what could be considered a popular opinion. Which, actually, is my biggest issue with the post. It's unoriginal and brings nothing to the table. Typical venturebeat content. Words written with the only goal of generating traffic. Blogspam aimed at a smarter audience (you). It could have been so much better. Why not explore how AI could have prevented his medicine being served incorrectly? Why not investigate how a smart agent could use current-state CV to make decisions and serve medications? It would have been valuable even if technically wrong. It's useful to understand how AI is understood by outsiders. But no, the author simply talks about the promise of AI. Which is fine, but brings nothing to the table. We've been talking about that promise for decades.

Will AI end up obviating most work? I don't know. But I'd love to talk about that with people who are informed.

rootlocus · 9 years ago
> Put simply, AI can instantly identify all the troublesome gene sequences of a beagle’s genome to determine the likelihood of certain diseases

That sounds as much of an AI as finding the negative numbers in an array.
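
To make that comparison concrete, here's a minimal sketch (the "genome" string and the "troublesome" motifs are invented for illustration): flagging known sequences is plain substring search, with no learning involved.

```python
# Invented toy data: a short "genome" and two known troublesome motifs.
genome = "ATTGCGCATAGCTTAGGCATTGA"
known_bad_motifs = ["GCGC", "TTAGG"]

# Scanning for known sequences is ordinary pattern matching -- much
# closer to "find the negative numbers in an array" than to learning.
hits = [m for m in known_bad_motifs if m in genome]
print(hits)  # ['GCGC', 'TTAGG']
```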

dexwiz · 9 years ago
I really think it's the insurance industry that will push AIs into the mainstream. The author makes a good point that humans would rather trust another human with a 1% error rate than a robot with a .01% error rate, because the robot error is somehow scarier. But actuarial science doesn't care about which is scarier; it cares about the 1% vs the .01%.
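
The actuarial view can be sketched with back-of-the-envelope numbers (all figures below are made up for illustration, not from the article):

```python
# Hypothetical error counts per million interactions and an assumed
# average claim cost; only the ratio between the two rates matters.
errors_per_million_human = 10_000   # a 1% error rate
errors_per_million_robot = 100      # a 0.01% error rate
cost_per_error = 250_000            # assumed average claim, in dollars

human_expected_loss = errors_per_million_human * cost_per_error
robot_expected_loss = errors_per_million_robot * cost_per_error

# Priced on expected loss, the human risk costs 100x more to insure,
# regardless of which failure mode feels scarier.
print(human_expected_loss // robot_expected_loss)  # 100
```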

Malpractice insurance will force doctors to consult AIs. Auto insurance will force us to install automated driving systems. Home insurance will force us to install sensor systems and Echo-like assistants. Insurance costs will rise for those that refuse AIs, and make ignoring them financially irresponsible.

Insurance requirements put all sorts of pressures on industries. Anecdata here, but I know a very successful gynecologist who was forced to sell HIS practice, because the malpractice insurance ate all his profits. The possibility of a sexual misconduct case basically made it impossible for him to be a male gynecologist and carry the requisite insurance, even though he never had a suit filed against him.

meddlepal · 9 years ago
It seems tho that insurance would shift from individual/user to producer. Why would individuals need car insurance with a fully automated vehicle, for example?

dexwiz · 9 years ago
Insurance is all about covering liability. Car manufacturers won't willingly take on all liability after automation. They will still try to shift some onto the consumer.

On the other hand, maybe individual auto insurance will eventually go away as automation takes hold. But automation will not happen overnight. The liability in an accident between a human and automated driver will likely be assigned to the human, meaning human based insurance premiums will rise, forcing more people to automation.

swiley · 9 years ago
There's still that 0.01% error caused by something you ultimately own and are therefore responsible for (unless they're used like taxi services).

slyall · 9 years ago
I'm not sure. If an insurance company can make a profit by charging me $500/year (random number) for car insurance now, why would I be forced by them to switch to a driverless car?

Sure the people with driverless cars only pay $100/year but my risk hasn't increased.

scarmig · 9 years ago
They'll accurately evaluate risk, as you point out, and many people will have the option of much, much cheaper insurance by using a driverless car. Most people hate driving anyway, so that's a nice cherry on top as they transition to driverless cars.

What happens when a majority of people have moved to driverless cars, though? The population of people who want cars that are human driven won't be representative of the general population: they'll be the joy drivers and the risk takers (there's a sexy marketing campaign in there somewhere...). Those people would represent a greater risk than the contemporary human driver, so they would pull up the costs of insurance. But that just drives the marginal human-driver toward automation, leading to a risk spiral until human cars become effectively luxury items.
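
That spiral can be sketched as a toy model (all numbers invented): each round, drivers whose own risk is below the pooled human premium defect to the cheaper driverless option, which raises the premium for whoever stays.

```python
# Invented per-driver expected annual losses for ten human drivers.
risks = [100 + 20 * i for i in range(10)]

premiums = []
while risks:
    premium = sum(risks) / len(risks)   # pooled human premium
    premiums.append(round(premium))
    # Drivers cheaper than the pool are subsidizing the rest; assume
    # they switch to a driverless car and leave the pool.
    stayers = [r for r in risks if r > premium]
    if len(stayers) == len(risks):
        break
    risks = stayers

print(premiums)  # rises every round: [190, 240, 270, 280]
```

Each defection raises the average risk of the remaining pool, so the premium ratchets upward until human driving is a luxury good.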

After some disastrous wrecks, the government comes in, rightfully blames the deaths on the small minority of adrenaline junkies who want to drive cars without the guidance and protection of Google, and bans human driven cars from public roads. The next Larry Ellison has a fleet of human-driven cars to drive on her private automobile course, but day-to-day a human driven car is seen as often as a Bugatti.

DougN7 · 9 years ago
I would like to get a non-ACA blessed health insurance policy, but it will literally cost me $8000 more (in fines). $500 is no big deal, but make the cost difference significant and behavior is changed (with some grumbling).

qq66 · 9 years ago
The insurance industry doesn't necessarily want that world of no accidents. No accidents means nothing to insure.

neeleshs · 9 years ago
The insurance industry wants a world of no accidents, as long as the world does not know it's accident-free.

echelon · 9 years ago
I can't be alone in thinking this recent "robots are taking all the unskilled jobs" meme is a bit overblown. It feels premature, at best.

I'm skeptical about the robotic lawyer and pharmacist use cases the article calls out. These seem like really distant applications--nothing I gather we'll see in the next decade, anyway. I have a friend going to pharmacy school, and I wouldn't think to warn her about her career choice just yet. Mistakes in these fields simply cost too much.

What other careers are at immediate risk? How big of a working population will be put out of work?

I imagine trucking and freight is the industry most immediately at risk.

Autonomous vehicles are impressive, but there are so many social problems arising from failure modes we've yet to answer. I'm imagining the first fatal accident arising from an autonomous truck--the press will jump on automation like vultures. The lawsuits will be huge. I can't imagine the court of public opinion being kind to the shipping company whose machine kills an unsuspecting family on vacation.

I realize that doesn't prevent automation from happening, but I think it could bring any rollout to an immediate halt.

And though it's getting a bit off topic, I feel like the "no more car ownership" meme is utter hyperbole. I would be willing to bet money that private car ownership will continue to be a thing for a long time. How much would an hour and a half long Uber commute cost to make every day for workers living in a city without sufficient public transit, such as Atlanta? Are people that live or travel to rural areas going to participate in ride sharing? Increasing remote work and better housing options seem like they will be more pragmatic solutions to the plague of the commute.

As much as I'd like to see increased efficiency through AI and automation, it still seems much too early to count humans out. I guess I'll eat my words when I see it.

onion2k · 9 years ago
> I imagine trucking and freight is the industry most immediately at risk.

I think it's jobs that are essentially a manual version of a computer program that will disappear first - the day-to-day work of lawyers and accountants is already being eaten by code, and that trend will continue until there are very few people in those industries. I'd go so far as to suggest there won't be any accountants in 20 years time, just accounting software and the people who run it.

Automating something mechanical is far harder than automating data processing.

cm2187 · 9 years ago
On the media jumping on it like vultures, I think plane accidents can give us hope.

When there is a plane crash, the media will be all over it for weeks. But at the same time they will repeat again and again that aircraft are incredibly safe, and that you are more likely to have a crash on your way to the airport, etc.

The opposite is done for terrorism. Every time someone is killed in a terror attack, the message is "it could happen to you".

So I'm not sure what makes them behave one way or the other, but it could still turn out to be OK.

On car ownership I agree. I don't really buy that all cars will be shared when fully automated.

For sentimental reasons, people like to own their car, they like to invest in it, and they don't like to find a car that looks more like a cross between a dumpster and a toilet after 20 party-goers have used it before them.

For a very practical reason, the same reason farmers own their heavy equipment when they could just rent it: because they all need it at the same time. The primary purpose of cars is commuting from home to work, and everyone needs to do that at about the same time. So the only guarantee that you will have a car available when you need it at peak period will be to own it.

What I think could happen is that when self driving cars have become mandatory in large cities, you won't really need to park the car near where you live or work. It could go park itself in some large underground car park 5-10 min away. Without parked cars on both sides of the street you increase the capacity of most cities (at least in Europe) massively.

That, plus smoother driving, could go a long way toward eliminating traffic jams.

onion2k · 9 years ago
> The primary purpose for cars is commuting from home to work and everyone needs to do that at about the same time. So the only guarantee you will have to have a car available when you need it at peak period will be to own it.

I think a more likely outcome would be that employers have to become more flexible. If you have to be at work by 9am but you can't get there at that time then you can't do the job. If everyone is in that situation the employer won't be able to fill the position. Ergo, change will happen.

gohrt · 9 years ago
A robot pharmacist is a trivial application -- it's just a secure vending machine, essentially a fancy coffee machine.
throwaway729 · 9 years ago
In principle, pharmacists exercise a lot of professional judgement -- everything from preventing/reporting drug abuse to noticing when a set of prescribed medications shouldn't be taken together. They act as the safe join operation of the medical system, which forks between specialists, various offices, etc., who usually don't communicate well with one another.

In practice, you may be right -- I don't know enough about pharmacies to know.

visarga · 9 years ago
> I'm skeptical about the robotic lawyer and pharmacist use cases the article calls out. These seem like really distant applications

Machine learning is being applied in DNA research and chemistry right now, not over decades.

piotrkaminski · 9 years ago
You may well be right, but it's pretty common to initially overestimate the pace of progress ("it'll be ready in 15 years" for 100 years running) only to severely underestimate it later on ("it feels premature"). Given the recent evidence of unexpectedly early breakthroughs in AI ("it'll take another 10 years to beat humans at Go") I think it's not crazy to ask whether the inflection point has been reached.
eva1984 · 9 years ago
I think what AI really challenges is the value of man. Our current world relies on the assumption that more people mean more productivity/consumption, so more growth and a stronger economy. With the potential emergence of more capable machines, this assumption may no longer hold.

How do we deal with the society left behind? Try to find them new purposes in life? How can we make sure those new purposes won't be automated in the future? And if a large portion of the population faces the problem of being jobless, is population growth/immigration still a positive thing? What about education? If we know that most people won't be needed in the workforce anyway, is education still worth having, beyond very basic common sense?

This all leads me to think that the future might be a more static and less energetic one, yet more affluent than ever. It could also be more equal than ever: most people might spend their whole lives under the care of some really intelligent system, without being required to achieve anything, but it will be OK, and it will become the new norm. The population might decline, but people will gradually be replaced by robots until a new equilibrium has been reached.

Fricken · 9 years ago
I think we're overestimating AI. The hype has been around in tech circles for years, but now it's breached its walls and is spilling out into mainstream discussions in economics and politics, and it's getting even more detached from reality.

And yet, I don't know of an instance where all this newfangled machine learning has been disruptive. The expert systems that will supposedly displace accountants and paralegals don't exist. Dexterous robots can barely screw the lid on a bottle, and only under controlled conditions. Autonomous vehicles reliable enough to be applied commercially don't exist. None of it exists. There's no guarantee it ever will.

The AI race is most definitely on, but nobody knows where the finish line is. And there's a long history of inventors grossly misunderguesstimating the location of finish lines, particularly in the field of AI, where every time we find a new piece of the puzzle we think 'This is it. This is the holy grail!'

Machines are better at pattern recognition, that's the big breakthrough, but there's so much more to taking humans out of the loop than just that.

stale2002 · 9 years ago
What? Autonomous vehicles are already deployed in the tens of thousands on the road.

You can buy one right now. Tesla sells them.

Sure, you can say that these are not "full" self driving cars. But they are part of the way there and they are already being disruptive.

Imagine if the Tesla self driving cars got a bit better, and how many accidents they could prevent (they are already stopping accidents).

You don't need perfect AI to disrupt industries.

Fricken · 9 years ago
Partial autonomy isn't disruptive; you may want to look up what that word means in the economic sense. It isn't going to turn the logistics of transportation on its head, or put anybody out of work, or cripple the market share of big carmakers. It's just a feature.
fourthark · 9 years ago
Yep, maybe not the fun kind, but lawyering and accounting take creativity. We have seen zero evidence of machine creativity.
falcor84 · 9 years ago
There are tons of examples of machine creativity. The highest profile one from the past year is probably AlphaGo's famous move 37.
mdpopescu · 9 years ago
NicoJuicy · 9 years ago
Everyone always says that we are overestimating current changes in the economy: we already had industrialisation and it went fine. I think they are wrong; we are on the verge of something totally new, and all the optimists aren't realists.

Farming was automated, so we transitioned to other jobs. No problem there (thank god it was possible).

Industrialisation made it possible for a company to export to totally new markets and made worldwide selling possible ==> new markets. No problem there.

Computers appeared. Until now, the effect hasn't been big. The lines are all dedicated and require huge investments. Humans are still required for putting chocolates in a box and getting them out of the container (although this could be automated with a huge investment).

What's going to happen when a robot with an investment cost of $10,000 can replace an employee, and can learn to put chocolates in a box with supervised learning and no other investment?

Who's going to get work, and where is the money coming from, if all jobs are slowly being replaced/automated? I think there is a lot of trouble on the road ahead. What useful jobs will all the drivers, truckers and taxi drivers move to in order to remain useful? What jobs are next, accounting perhaps? And where will it stop?

And don't say they just have to learn something new. The people who are going to be replaced by total automation (robots) are mostly manual workers in the first place, and I don't think a lot of them are smart enough to do something new that won't itself be replaced by automation within 3 years of when it begins.

I'm not trying to offend anyone, but I'm pretty hopeless because I don't see a way out for a lot of people.

mamadrood · 9 years ago
I had this discussion yesterday with a colleague of mine a bit older than me. He doesn't think that technology will kill jobs; he thinks that the government will probably prevent self-driving cars and the like.

I think the opposite. I live in a country where transport is the main employer, and I think that in the next two decades all these jobs will be lost, and transport companies will have enough lobbying power to pass any laws they want.

We were talking about retirement age, and I was wondering how we could push it later when there will be fewer and fewer jobs. Eventually we will have to admit that a big portion of the population won't be able to work, or won't be needed to work, and that I will probably be part of it.

cm2187 · 9 years ago
I think these fears are grossly exaggerated. Our society has a remarkable capacity to preserve manual work long after it should have been automated.

I work a lot with the finance dept of a large bank where 90% of the work should have been automated years ago. No need for AI, just a little bit of coding and good communication between IT and the business. But the business is not in a rush to destroy its own jobs, and IT consistently misunderstands what the business actually does. (Also, we often have to deal with "bottom of the basket" IT, but it will be the same with AI.)

And with the people currently in charge, it won't change anytime soon.

So I think the time when AI will have automated every single manual job is nowhere near. Software has not automated every single manual task yet, far from it. The only jobs I can see AI taking over are those that are standardised enough, accessible enough for programmers to understand, and on a large enough scale that it makes sense to dedicate a lot of resources to training and tuning algos for someone to bother deploying AI.

pmyjavec · 9 years ago
This is true, the processes and technology for far greater efficiency in business, manufacturing and politics have been available for a long, long time. But why would a manager in a corporation leverage these things if it means losing an empire?

This is the same for so many areas of society, especially politics. It would be easy to build a system that replaced representative politics; think micro-voting on particular issues by all citizens. This hasn't happened en masse because politicians would have to remove themselves from their jobs first.

Robot domination might actually never happen. I mean, even if we ever manage to create AGI, there is little reason to believe that it will:

* Want to stick around and serve humanity, because why would it?

* Care about us at all; it might ignore humans, just do what it likes, and forget about us entirely.

Edit: I forgot to add that if we built more welcoming, welfare-based societies where people are looked after when they're out of a job, then the adoption of automation and "robots" would probably be more accepted and feel less imposed upon us.