johnohara · 4 years ago
FTA: "Since the technology cannot be un-invented, the book calls on America to develop and shape the military applications of AI, rather than surrendering the field to countries that do not share its values."

The past 60 years have made me skeptical about what those values actually are.

alpineidyll3 · 4 years ago
...The recent history of the country bears witness to H. Kissinger's values perhaps more than any other person's. Unfortunately I don't think 'AGI shouldn't be used against humanity' is in there alongside Realpolitik. If we are safe from AGI, it will be because the power majority believes anti-human AGI is amoral, not because we get it first.
tuatoru · 4 years ago
They're expressed in the S&P 500, DJIA, NASDAQ, etc.

Edit: Going by the rule "your values are demonstrated by what you do, even when it costs you to do that", it's fairly easy to figure out what they are.

dr_dshiv · 4 years ago
Maybe one of the biggest risks of AI is reifying it, or treating it like a real thing. We may romantically desire AI as an agent with independence, but in my view, this is always a serious mistake. “AI thinking” causes executives to treat classes of technology like a magical black box add-on and causes engineers to remove human involvement. So long as human input is required, the intelligence isn’t artificial— and that leads to disasters like Zillow’s AI algorithm.

This is all a matter of design. We shouldn’t try to design artificial Intelligence, we should design intelligent systems (ones that support systemic success and wellbeing). ML, GANs, transformers, bots and other technologies are cool tools. Let’s not fool others or ourselves by calling them AI.

The clear and present danger is the creation of powerful intelligent systems that we can’t control and that decrease overall system success/wellbeing. Like corporations that aren’t beholden to broader societal interests. That’s the real “AI“ challenge we need to address. And, how do we enable the governing systems in our society to act intelligently? We desperately need more functional governance at scale.

mdp2021 · 4 years ago
> treating it like a real thing

Oracular devices are a real thing - you have dimes for a coin toss. Decision-making algorithms of scarce transparency do exist. It (the contextual object) is a real thing: AI as an oracular device is a real thing.

> desire AI as an agent with independence

The point is that we do not desire that. It is an agent when entrusted, and it is independent when unchecked, which the black-box nature of some AI technologies can facilitate.

The issue is not with a misunderstanding of the term AI - which we have used for 65 years effortlessly, exactly «that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. [...] how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves» (McCarthy 1955) -, but with its real instances and very real potential adopters.

dr_dshiv · 4 years ago
> the term AI - which we have used for 65 years effortlessly, exactly «that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it…

Except that AI research rarely tries to replicate how people think; instead, it creates systems that mimic the typical outcomes of human effort. The term AI was invented for grant applications. What’s the difference between AI, automation and cybernetics?

dougabug · 4 years ago
What you say seems undeniably true, however reifying fundamental concepts lies at or near the core of human natural intelligence. Perhaps more so than logic itself.

Work on AI is basically as old as computing itself (i.e., the pioneers of computing explored connectionist models of computation, neural networks, cellular automata, reinforcement learning, LISP, etc. from early on). AI has been evolving for eight decades, vast resources have been channeled into it, and frankly, the successes are striking.

It would be almost impossible for us not to conceive of AI as a “thing.”

dr_dshiv · 4 years ago
It’s not so hard. Just a few years ago, say 2012, it was really unusual for academics in the field of AI to refer to any system as “AI” or “having AI” or “being an AI state.” Why? Because it was disingenuous. But then when IBM Watson was released, the dump trucks of money started showing up, and the self-policing stopped.

Look, I’m not critiquing the field of AI or its funding. Referring to AI research is no problem. My problem comes from naming the outputs of the field “AI.” Because we all know it is a moving target, it doesn’t really mean anything, and it confuses non-experts to the point that they make poor decisions.

mjburgess · 4 years ago
I think it's just a matter of the penny dropping. When self-driving cars aren't here over the next few years; and when more big biz tries and fails with 'AI' -- we will enter another winter.

I have to say it will be schadenfreude for me -- given the absolutely reckless behaviour of the army of grant-chasing academics happy to inflate the bubble. When that funding suddenly disappears, it'll be their own fault.

varelse · 4 years ago
As skeptical as I am of AGI and even full self-driving, successes like AlphaGo, AlphaFold 2, and deepfakery from video to voice all demonstrate the technology can do things we never really could do before.

But in all of those cases humans are in the loop to either build the system at scale or to keep it from going off the rails in production. I see much more of that in the next decade. And I don't think there's going to be an AI winter but I do hope there's a reckoning for all the snake oilers of AI out there. I mean they're the same people who snake oil for crypto too, and they're just getting their game on for snake oiling the metaverse.

Maybe that will happen for AI VC, but as someone else said, VC is where good ideas go to die.

zackham · 4 years ago
I heard Schmidt on a couple podcasts[1][2] recently promoting this book, and I found them to be useful for understanding how AI is being discussed among the political and leadership class. I was surprised to see all the negativity here - I thought he made some interesting points, and I appreciated getting some insight into how decision-makers are thinking about this in terms of regulation and geopolitical risks.

[1] https://tim.blog/2021/10/25/eric-schmidt-ai/

[2] https://hiddenforces.io/podcasts/eric-schmidt-ai-human-futur...

jeffrallen · 4 years ago
Maybe the negativity is due to the fact that Eric chose an unindicted war criminal as his co-author?
zackham · 4 years ago
I understand the points being made, it's just not what I choose to focus on in the very limited time I'm going to spend engaging with this material. The purpose of my comment was to let any other tech-focused visitors to this site know that I did find some value on the periphery of this book, in case they are equally uninterested in hearing everyone's hot takes on a 98-year-old who's already had books written about his life's negative impacts.
kranke155 · 4 years ago
You could say that about a lot of US officials. This obsession with Kissinger is amazing. Do we plan to indict Gorbachev for the Afghanistan invasion?
tata71 · 4 years ago
Schmidt is painted as the same by Assange in "Google is Not What It Seems", no?
mensetmanusman · 4 years ago
This seems like an opinion.
xhkkffbf · 4 years ago
Huttenlocher is guilty of waging many faculty battles but I didn't realize they were indictable. Well, I guess that makes him unindicted.
nefitty · 4 years ago
I refuse to read any books written by people that eat meat. Killing living things is a crime, and supporting the industries involved in that is immoral.
tuatoru · 4 years ago
Any written summaries/reviews? Ain't nobody got time to listen to podcasts.

Edit: Transcript link in the first reference you gave.

>Eric Schmidt: About 12 years ago, I met him [Kissinger] at a conference called Bilderberg.

The Bilderberg Group[1] is the closest thing to a "secret cabal running the world" that we actually have. Not very secret, though.

1. https://en.wikipedia.org/wiki/Bilderberg_meeting

tata71 · 4 years ago
Try and attend.
MichaelMoser123 · 4 years ago
does that all mean that Eric Schmidt is running for office, or is he trying to get into a position of political influence with the Biden administration? (i mean, is it possible that he is using his book as a platform in this effort?)
AnimalMuppet · 4 years ago
If he's co-writing with Kissinger, I doubt he's doing it as an effort to suck up to the Biden administration.
boomboomsubban · 4 years ago
Really hard to believe that a CEO of a company with the motto "don't be evil" could write a book with Kissinger. It's unsurprising that the book's main push seems to be for more AI war research as a method of saving lives.
1cvmask · 4 years ago
Eric Schmidt and Sergey Brin are all part of the surveillance state and war machine. They will soon be awarded Nobel Prizes like the war criminal Henry Kissinger.

https://en.wikipedia.org/wiki/The_Trial_of_Henry_Kissinger

https://www.cnet.com/tech/services-and-software/emails-give-...

http://america.aljazeera.com/articles/2014/5/6/nsa-chief-goo...

darksaints · 4 years ago
The book was pretty thorough, but not thorough enough. It failed to cover how much he intervened in US politics to cover for, enable, and protect a military dictatorship that was carrying out a massive political cleansing in Argentina, in which over 30,000 people were "disappeared". He's about as evil as they come.
freeflight · 4 years ago
Not really that hard to believe, considering where the "don't be evil" company got part of its initial funding from [0] and what these funding outfits have been up to themselves since then [1].

[0] https://qz.com/1145669/googles-true-origin-partly-lies-in-ci...

[1] https://arstechnica.com/information-technology/2016/02/the-n...

dougabug · 4 years ago
Weaponizing AI on the face of it seems completely insane. The problem is that if we don’t, then we would presumably be at a tremendous strategic disadvantage to an adversary which did (more or less the logic that led to the development and proliferation of nuclear weapons).

I can’t see how militarized forms of AI don’t emerge as a consequence of significant progress in non-military AI, so perhaps all roads do eventually lead to SkyNet.

randcraw · 4 years ago
Great. Another reference to Skynet as the inevitable outcome of advancing automation. Despite all the facts to the contrary.

Will technology, including AI, inevitably be used to improve weaponry? Yes. Will AI inevitably lead to Skynet and Terminator robots? Hardly.

Today we can't build a self-driving car with more than level 2 autonomy, nor do honest experts believe one will happen soon. Today's autonomous mobile robots are incapable of even the most rudimentary human motions, and likewise, human-level robots are invisible on any 50 year time horizon, commercially or militarily. No AI-based tech has shown even the faintest sign of the level of AGI capabilities needed to control a robot army. Nor has any AI shown the potential for an emergent executive function or a desire to KILL ALL HUMANS.

To assume that present-day AI will likely self-assemble into a rebel robot army intent on destroying humanity… Why does anybody take this crap seriously? Or soberly reference it while hoping to be taken seriously?

It's time for all adults everywhere to stop imagining that pop scifi movies are a sensible foundation toward discussing how new tech can best serve its intended purpose. Scifi is meant to entertain, not inform. Given what we know today about AI, Skynet doesn't have a hope in hell in happening — not in terms of platform mobility nor in terms of cognition nor in terms of self-assembly. So PLEASE give all references to Skynet a rest.

boomboomsubban · 4 years ago
>more or less the logic that led to the development and proliferation of nuclear weapons

Sure. One country threw a ton of resources into a program, and as a result the technology spread to numerous other countries leading to something like seven countries capable of destroying the world.

Maybe the end result is inevitable. Racing towards it so we can kill others before they can kill us isn't smart.

almeria · 4 years ago
Did we ever have any reason to believe that slogan?
bob331 · 4 years ago
Kissinger is evil and Schmidt has committed much evil. They are well suited.

knorker · 4 years ago
What is it about? How to overthrow democracies, and prolong the Vietnam War by years in order to undermine your opponent's POTUS bid... But this time using AI?

Maybe how to select which terrorist group to give weapons to?

Kissinger can supply the training data.

mdp2021 · 4 years ago
Exactly that - also considering that humans can make genuinely questionable decisions, automating those decisions outside human consideration is probably not a good idea.
Paul_S · 4 years ago
Ah, Henry Kissinger. Wouldn't trust someone who backed a fraudster (Theranos) and enabled her to scam even more people. Maybe his judgment isn't what it used to be.
ch4s3 · 4 years ago
When was Kissinger's judgement ever good?
almeria · 4 years ago
Depends what you mean by "good".

For the ruthlessly cynical and basically destructive ends toward which this man has devoted his life - his judgement is arguably quite effective.

That is, after all, how you get to be "America's preeminent living statesman".

peter303 · 4 years ago
The ongoing Holmes trial revealed all kinds of celebrity and private equity investors were snookered. Retired politicians are eager to jump onto the Silicon Valley gravy train as directors. George Bush Sr made a killing participating in the Global Crossing fiber cable venture.
MichaelMoser123 · 4 years ago
The wikipedia article on Kissinger mentions a Dr. Strangelove aspect, not mentioned in the linked review: https://en.wikipedia.org/wiki/Henry_Kissinger#Views_on_U.S._...

"""
Computers and nuclear weapons

In 2019, Kissinger wrote about the increasing tendency to give control of nuclear weapons to computers operating with Artificial Intelligence (AI): "Adversaries' ignorance of AI-developed configurations will become a strategic advantage".[195] Kissinger argued that giving the power to launch nuclear weapons to computers using algorithms to make decisions would eliminate the human factor and give the advantage to the state with the most effective AI system, as a computer can make decisions about war and peace far faster than any human ever could.[195] Just as an AI-enhanced computer can win chess games by anticipating human decision-making, an AI-enhanced computer could be useful in a crisis such as a nuclear war, where the side that strikes first would have the advantage by destroying the opponent's nuclear capacity. Kissinger also noted there was always the danger that a computer would decide to start a nuclear war before diplomacy had been exhausted, or that the algorithm controlling the AI might decide to start a nuclear war in a way not understandable to its operators.[196] Kissinger also warned that using AI to control nuclear weapons would impose "opacity" on the decision-making process, as the algorithms that control the AI system are not readily understandable, destabilizing the decision-making process:

... grand strategy requires an understanding of the capabilities and military deployments of potential adversaries. But if more and more intelligence becomes opaque, how will policy makers understand the views and abilities of their adversaries and perhaps even allies? Will many different internets emerge or, in the end, only one? What will be the implications for cooperation? For confrontation? As AI becomes ubiquitous, new concepts for its security need to emerge.[196]

"""