bitL · 7 years ago
I was recently "playing" with some radiology data. I had no chance to identify diagnoses myself with untrained eyes, something that probably takes years for a decent radiologist to master. Just by using DenseNet-BC-100-12 I ended up with 83% ROC AUC after a few hours of training. In 4 out of 12 categories this classifier beat best human performing radiologists. Now the very same model with no other change than adjusting number of categories could be used in any image classification task, likely with state-of-art results. I was surprised when I applied it to another, completely unrelated dataset and got >92% accuracy right away.

If you think this is a symptom of an AI winter, then you are probably wasting time on outdated/dysfunctional models, or on models that aren't suited to what you want to accomplish. Look e.g. at Google Duplex (better voice synthesis than the Vocaloid I use for making music): it pushed the state of the art to unbelievable levels in hard-to-address domains. I believe the whole SW industry will spend the next 10 years living off the gradual addition of these concepts into production.

If you think Deep (Reinforcement) Learning is going to solve AGI, you are out of luck. If you think it's useless and won't get us anywhere, however, you are guaranteed to be wrong. Frankly, if you are working with Deep Learning daily, you are probably not seeing the big picture (i.e. how horrible the methods used in real life are, and how easily you can get a very economical 5% benefit just by plugging Deep Learning in somewhere in the pipeline; this might seem like little, but managers would kill for 5% of extra profit).

khawkins · 7 years ago
AI winters are a result of a massive disparity between the expectations of the general public and the reality of where the technology currently sits. Just like an asset bubble, the value of the industry as a whole pops as people collectively realize that AI, while not being worthless, is worth significantly less than they thought.

Understand that over the past several years, the general public has been exposed in pop-sci circles to stories warning about the singularity from well-respected people like Stephen Hawking and Elon Musk (http://time.com/3614349/artificial-intelligence-singularity-...). Autonomous vehicles are on the roads and Boston Dynamics is showing very real robot demonstrations. Deep learning is breaking records in what we thought was possible with machine learning. All of this progress has excited an irrational exuberance in the general public.

But people don't have a good concept of what these technologies can't do, mainly because researchers, business people, and journalists don't want to tell them; they want the money and attention. But eventually the general public wises up to the unfulfilled expectations and turns its attention elsewhere. Here we have the AI winter.

varelse · 7 years ago
I'd clarify that there is a specific delusion that any data scientist straight out of some sort of online degree program can go toe to toe with the likes of Andrej Karpathy or David Silver with the power of "teh durp lurnins'." And the craptastic shovelware they predictably create is what's finally bringing on the long-overdue disappointment.

Further, I have repeatedly heard people who should know better, with very fancy advanced degrees, chant variants of "Deep Learning gets better with more data" and/or "Deep Learning makes feature engineering obsolete" as if they are trying to convince everyone around them as well as themselves that these two fallacious assumptions are the revealed truth handed down to mere mortals by the 4 horsemen of the field.

That said, if you put your ~10,000 hours into this, and keep up with the field, it's pretty impressive what high-dimensional classification and regression can do. Judea Pearl concurs: https://www.theatlantic.com/technology/archive/2018/05/machi...

My personal (and admittedly biased) belief is that if you combine DL with GOFAI and/or simulation, you can indeed work magic. AlphaZero is strong evidence of that, no? And the author of the article in this thread is apparently attempting to do the same sort of thing for self-driving cars. I wouldn't call this part of the field irrational exuberance, I'd call it amazing.

kinsomo · 7 years ago
> But eventually the general public wises up to the unfulfilled expectations and turns its attention elsewhere. Here we have the AI winter.

And more importantly, business and government leaders wise up and turn off the money tap.

Florin_Andrei · 7 years ago
> AI winters are a result of a massive disparity between the expectations of the general public and the reality of where the technology currently sits.

I think they also happen when the best ideas in the field run into the brick wall of insufficiently developed computer technology. I remember writing code for a perceptron in the '90s on an 8-bit system with 64 KB of RAM; it's laughable.

But right now compute power and data storage seem plentiful, so rumors of the current wave's demise appear exaggerated.

cyberpunk0 · 7 years ago
> AI winters are a result of a massive disparity between the expectations of the general public and the reality of where the technology currently sits.

A symptom of capitalism and marketing trying to push shit they don't understand

jeffreyrogers · 7 years ago
I don't think the claim is that AI isn't useful. It's that it's oversold. In any case, I don't think you can tell much about how well your classifier works for something like cancer diagnosis unless you know how many false negatives you have (and how that compares to how many false negatives a radiologist makes).
bitL · 7 years ago
There are two sides to this:

- how good humans are at detecting cancer (hint: not very good), and whether an automated system might be useful even just as a "second opinion" next to an expert

- there are metrics capturing true/false positives/negatives that one can focus on during training and optimization

From studies you might have noticed that expert radiologists have, e.g., an F1 score of 0.45 at best, while on average they score 0.39, which sounds really bad. Your system manages to push the average to 0.44, which might be worse than the best radiologist out there but better than an average one [1]. Is this really being oversold? (I am not addressing possible problems with overly optimistic datasets etc., which are real concerns.)

[1] https://stanfordmlgroup.github.io/projects/chexnet/
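
For reference, the metrics in question all come from the same confusion-matrix counts; a quick illustrative sketch with scikit-learn (toy labels, not the CheXNet data):

  # Illustrative only: the threshold-based metrics discussed above,
  # computed from toy ground-truth labels and thresholded predictions.
  from sklearn.metrics import precision_score, recall_score, f1_score

  y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]  # ground-truth labels (toy data)
  y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]  # model predictions after thresholding

  print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
  print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
  print("F1:       ", f1_score(y_true, y_pred))         # harmonic mean of the two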

braindongle · 7 years ago
Is there a prevailing approach to thinking about (accounting for?) false negatives in ground truth data? I'm new to this area, and the question is relevant for my current work. By definition, you simply don't know anything about false negatives unless you have some estimate of specificity in addition to your labeled data, but can anything be done?
ujal · 7 years ago
I don't get the sentiment of the article either. I can't speak for researchers but software engineers are living through very exciting times.

  State of the art in numbers:
  Image Classification - ~$55, 9hrs (ImageNet)
  Object Detection - ~$40, 6hrs (COCO)
  Machine Translation - ~$40, 6hrs (WMT '14 EN-DE)
  Question Answering - ~$5, 0.8hrs (SQuAD)
  Speech recognition - ~$90, 13hrs (LibriSpeech)
  Language Modeling - ~$490, 74hrs (LM1B)
"If you think Deep (Reinforcement) Learning is going to solve AGI, you are out of luck" --

I don't know. Duplex equipped with a way to minimize its own uncertainties sounds quite scary.

varelse · 7 years ago
Duplex was impressive but cheap street magic: https://medium.com/@Michael_Spencer/google-duplex-demo-witch...

Microsoft OTOH quietly shipped the equivalent in China last month: https://www.theverge.com/2018/5/22/17379508/microsoft-xiaoic...

Google has lost a lot of steam lately IMO. Facebook is releasing better tools and Microsoft, the company they nearly vanquished a decade ago, is releasing better products. Google does remain the master of its own hype though.

placebo · 7 years ago
My thoughts on AGI (at least in the sense of being indistinguishable from interaction with a human) are the same as my thoughts on extraterrestrial life: I'll believe it only when I see it (or at least when provided with proof that the mechanism is understood). This extrapolation from a sample size of one is something I don't understand. How is the fact that machine learning can do specific stuff better than humans different in principle from the fact that a hand calculator can do some specific stuff better than humans? On what evidence can we extrapolate from this to AGI?

We haven't found life outside this planet, and we haven't created life in a lab, therefore n=1 for assessing the probability of life outside Earth (which means we can't calculate a probability for this yet). Likewise, we haven't created anything remotely like animal intelligence (let alone human intelligence), and we have no good theory of how it works, so n=1 for existing forms of general intelligence.

Note that I'm not saying there can be no extraterrestrial life or that we will never develop AGI, just that I haven't seen any evidence at this point in time that any opinions for or against their possibility are anything more than baseless speculation.

sqrt17 · 7 years ago
If the dollar amounts refer to the training cost for the cheapest DL model, do you have references for them? A group of people at fast.ai trained an ImageNet model for $26, presumably after spending a couple hundred on getting everything just right: http://www.fast.ai/2018/04/30/dawnbench-fastai/
timr · 7 years ago
"Just by using DenseNet-BC-100-12 I ended up with 83% ROC AUC after a few hours of training."

OK, but 83% ROC AUC is nothing to brag about. ROC AUC routinely overstates the performance of a classifier anyway, and even so, ~80% values aren't that great in any domain. I wouldn't trust my life to that level of performance unless I had no other choice.

You're basically making the author's case: deep learning clearly outperforms on certain classes of problems, and easily "generalizes" to modest performance on lots of others. But leaping from that to "radiology robots are almost here!" is folly.

bitL · 7 years ago
Yeah, but the point here was that radiologists on average fared even worse. 83% is not impressive, but it's better than what we have right now in the real world with real people, as sad as that is. Obviously, the best radiologists would outperform it right now, but average ones, likely stressed under heavy workloads, might not be able to beat it. And of course, this classifier probably handles certain visual structures better than humans, while others that humans detect more easily would slip through.

There is also a good chance that the next state-of-the-art model will push it significantly past 83%, or past the best human radiologist at some point in the future, so it might not be very economical to train humans to become even better (i.e. to dedicate your life to focusing on radiology diagnostics only).

ASalazarMX · 7 years ago
> Just by using DenseNet-BC-100-12 I ended up with 83% ROC AUC after a few hours of training

Of course! Using DenseNet-BC-100-12 to increase ROC AUC, it was so obvious!

imant · 7 years ago
Would you mind sharing which other, unrelated dataset you have used the model on?
bitL · 7 years ago
I can't, unfortunately; it's proprietary stuff being plugged into an existing business right now.

flamedoge · 7 years ago
The next winter will probably be about getting over that 92% across all domains.
bitL · 7 years ago
Possibly, but will it be called an AI winter if, e.g., the average human has 88% accuracy and the best human 97%?
machinelearning · 7 years ago
Yea, this sounds extremely unlikely unless the other dataset has a fairly easy decision boundary. The kind of cross-domain transfer learning you seem to think deep neural networks have is nothing I've observed before in my formal studies of neural networks.
mathattack · 7 years ago
How much of this can we pin on IBM's overhype of Watson?
eanzenberg · 7 years ago
ROC AUC is fairly useless when you have disparate costs in the errors. Try precision-recall.
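
A quick way to see that point, sketched on synthetic imbalanced data with scikit-learn (numbers purely illustrative):

  # Illustrative sketch: with heavy class imbalance, ROC AUC can look healthy
  # while average precision (area under the precision-recall curve) stays poor.
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics import average_precision_score, roc_auc_score
  from sklearn.model_selection import train_test_split

  X, y = make_classification(n_samples=20000, weights=[0.99, 0.01],
                             n_informative=3, random_state=0)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

  scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
  print("ROC AUC:          ", roc_auc_score(y_te, scores))
  print("Average precision:", average_precision_score(y_te, scores))
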
bitL · 7 years ago
I mentioned F1 in some later comment.
joe_the_user · 7 years ago
This is a deep, significant post (pardon pun etc).

The author is clearly informed and takes a strong, historical view of the situation. Looking at what the really smart people who brought us this innovation have said and done lately is a good start imo (just one datum of course, but there are others in this interesting survey).

Deepmind hasn't shown anything breathtaking since their Alpha Go zero.

Another thing to consider about Alpha Go and Alpha Go Zero is the vast, vast amount of computing firepower that this application mobilized. While it was often repeated that ordinary Go programs weren't making progress, this wasn't true - the best amateur programs had gotten to about 2 Dan amateur using Makov Tree Search. Alpha Go added CNNs for its weighting function and petabytes of power for its process, and got effectiveness up to best in the world, 9 Dan professional (maybe 11 Dan amateur for pure comparison). [1]
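
For readers who haven't seen how the pieces fit together: AlphaGo-style programs keep the tree search but use network outputs as priors and value estimates, picking moves with a PUCT-style rule. A stripped-down sketch of just that selection step (illustrative, not DeepMind's code; constants and structures are made up):

  # Stripped-down sketch of a PUCT-style selection rule: combine the search
  # statistics (Q values, visit counts) with a policy-network prior P(s, a).
  import math

  class Edge:
      def __init__(self, prior):
          self.prior = prior      # P(s, a) from the policy network
          self.visits = 0         # N(s, a)
          self.value_sum = 0.0    # accumulated value estimates for this move

      def q(self):
          return self.value_sum / self.visits if self.visits else 0.0

  def select_action(edges, c_puct=1.5):
      """Pick the move maximizing Q + c_puct * P * sqrt(N_total) / (1 + N)."""
      n_total = sum(e.visits for e in edges.values())
      def score(item):
          _, e = item
          return e.q() + c_puct * e.prior * math.sqrt(n_total + 1) / (1 + e.visits)
      return max(edges.items(), key=score)[0]

  # toy usage: three candidate moves with priors from a (hypothetical) policy net
  edges = {"a": Edge(0.6), "b": Edge(0.3), "c": Edge(0.1)}
  print(select_action(edges))  # before any visits, the highest-prior move wins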

Alpha Go Zero was supposedly even more powerful and learned without human intervention. BUT it cost petabytes and petabytes of flops, expensive enough that they released a total of ten or twenty Alpha Go Zero games to the world, labeled "A great gift".

The author conveniently reproduces the chart of power versus results. Look at it, consider it. Consider the chart in the context of Moore's Law retreating. The problems of Alpha Zero generalize, as described in the article.

The author could also have dived into the troubling question of "AI as an ordinary computer application" (what do testing, debugging, interface design, etc. mean when the app is automatically generated in an ad-hoc fashion?) or "explainability". But when you can paint a troubling picture without these gnawing problems even appearing, you've done well.

[1] https://en.wikipedia.org/wiki/Go_ranks_and_ratings

tim333 · 7 years ago
>Deepmind hasn't shown anything breathtaking since their Alpha Go zero

They went on to make AlphaZero, a generalised version that could learn chess, shogi or any similar game. The chess version beat a leading conventional chess program 28 wins, 0 losses, and 72 draws.

That seemed impressive to me.

Also, they used loads of compute during training but not so much during play (5,000 TPUs vs 4 TPUs).

Also it got better than humans in those games from scratch in about 4 hours whereas humans have had 2000 years to study them so you can forgive it some resource usage.

felippee · 7 years ago
It's not like humanity really needs another chess-playing program 20 years after IBM solved that problem (but now utilizing 1000x more compute power). I just find all these game-playing contraptions really uninteresting. There are plenty of real-world problems of much higher practicality to be solved. Moravec's paradox in full glow.
carlmr · 7 years ago
>Also it got better than humans in those games from scratch in about 4 hours whereas humans have had 2000 years to study them so you can forgive it some resource usage.

Most humans don't live 2000 years. And realistically don't spend that much of their time or computing power on studying chess. Surely a computer can be more focused at this and the 4h are impressive. But this comparison seems flawed to me.

pleasecalllater · 7 years ago
> The chess version beat a leading conventional chess program 28 wins, 0 losses, and 72 draws.

In an unequal fight, and the results are still not published. I'm not claiming that AlphaZero wouldn't win, but that test was pure garbage.

kenjackson · 7 years ago
And didn’t they just do all of this? It’s not like 5 years have passed. Does he expect results like this every month?
pX0r · 7 years ago
> Also it got better than humans in those games from scratch in about 4 hours whereas humans have had 2000 years to study them so you can forgive it some resource usage.

Few would care. Your examiner doesn't give you extra marks on a given problem for finishing your homework quickly.

adynatos · 7 years ago
oh wow, it can play chess. can it efficiently stack shelves in warehouses yet?
bryanrasmussen · 7 years ago
there is no human that has studied any of those games for 2000 years. So I think you mean 4 hours versus an average human's 40 years of study.
tensor · 7 years ago
I'm sure the same could have been said for early computer graphics before the GPU race. You don't need Moore's Law to make machine learning fast; you can also do it with hardware tailored to the task. Look at Google's TPUs for an example of this.

If you want an idea of where machine learning is in the scheme of things, the best thing to do is listen to the experts. _None_ of them have promised wild general intelligence any time soon. All of them have said "this is just the beginning, it's a long process." Science is incremental and machine learning is no different in that regard.

You'll continue to see incremental progress in the field, with occasional demonstrations and applications that make you go "wow". But most of the advances will be of interest to academics, not the general public. That in no way makes them less valuable.

The field of ML/AI produces useful technologies with many real applications. Funding for this basic science isn't going away. The media will eventually tire of the AI hype once the "wow" factor of these new technologies wears off. Maybe the goal posts will move again and suddenly all the current technology won't be called "AI" anymore, but it will still be funded and the science will still advance.

It's not the exciting prediction you were looking for I'm sure, but a boring realistic one.

digitalzombie · 7 years ago
> Funding for this basic science isn't going away.

What makes this 3rd/4th AI boom different?

In the previous AI winters, funding for this science went from plentiful to scarce.

I'm skeptical of your statement, with respect of course, because it doesn't have anything to back it up other than that the field produces useful technologies. Wouldn't that imply that the previous waves of AI that experienced winters (expert systems, and whatever else) didn't produce technologies useful enough to keep their funding?

I'm currently in the camp that believes an AI Winter III is coming.

> _None_ of them have promised wild general intelligence any time soon.

The post talks about Andrew Ng's wild expectations about other things, such as the radiologist tweet. While that's not wild general intelligence, what the main article points at, and what I am also thinking, is the outrageous speculation. Another one is Tesla's self-driving: it doesn't seem to be there yet, and perhaps we're hitting the point of over-promising like we did in the past, and then an AI winter happens because we've found the limit.

lithander · 7 years ago
> BUT it cost petabytes and petabytes of flops, expensive enough that they released a total of ten or twenty Alpha Go Zero games to the world

Training is expensive, but inference is cheap enough for AlphaZero-inspired bots to beat human professionals while running on consumer hardware. DeepMind could have released thousands of pro-level games if they wanted to, and others have: http://zero.sjeng.org/

norswap · 7 years ago
Bleh, no it isn't.

I am 100% in agreement with the author on the thesis: deep learning is overhyped and people project too much.

But the content of the post is in itself not enough to advocate for this position. It is guilty of the same sins: projection and following social noise.

The point about increasing compute power, however, I found rather strong. New advances came at a high compute cost. Although it could be said that research often advances like that: new methods are found and then made efficient and (more) economical.

A much stronger rebuttal of the hype would have been based on the technical limitations of deep learning.

dpwm · 7 years ago
> A much stronger rebuttal of the hype would have been based on the technical limitations of deep learning.

I'm not even sure how you'd go about doing that. You could use information theory to debunk some of the more ludicrous claims, especially ones that involve creating "missing" information.

One of the things that disappoints me somewhat about the field, which I've arguably only scratched the surface of, is just how much of it is driven by headline results that fail to develop understanding. A lot of the theory seems to be retrofitted to explain a relatively narrow improvement in results, and seems only to develop the art of technical bullshitting.

There are obvious exceptions to this and they tend to be the papers that do advance the field. With a relatively shallow resnet it's possible to achieve 99.7% on MNIST and 93% on CIFAR10 on a last-gen mid-range GPU with almost no understanding of what is actually happening.

There's also low-hanging fruit that seems to have been left on the tree. Take OpenAI's paper on the reparametrization of weights so that you have a normalized direction vector and a scalar. This makes intuitive sense to anybody familiar with high-dimensional spaces, since nearly all of the volume of a hypersphere lies near the surface. That this works in practice is great news, but it leaves many questions unanswered.
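
(Presumably this refers to weight normalization, Salimans & Kingma 2016, which factors each weight vector as w = g * v / ||v||, i.e. a learned magnitude times a unit direction. A minimal sketch using PyTorch's built-in helper:)

  # Minimal sketch of the reparametrization mentioned above:
  # weight = g * v / ||v||, a learned scalar magnitude times a unit direction.
  import torch
  import torch.nn as nn

  layer = nn.utils.weight_norm(nn.Linear(128, 64))  # adds weight_g and weight_v

  print(layer.weight_g.shape)  # torch.Size([64, 1])   one magnitude per output unit
  print(layer.weight_v.shape)  # torch.Size([64, 128]) the direction parameters

  x = torch.randn(8, 128)
  y = layer(x)  # the forward pass reconstructs weight = g * v / ||v|| on the fly
  print(y.shape)  # torch.Size([8, 64])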

I'm not even sure how many practitioners are thinking in high-dimensional spaces or are aware of their properties. It feels like we get to the universal approximation theorem, accept it as evidence that networks will work well anywhere, and then just follow whatever the currently recognised state-of-the-art model is and adapt that to our purposes.

alexandercrohde · 7 years ago
> A much stronger rebuttal of the hype would have been based on the technical limitations of deep learning.

Who's to say we won't improve this though? Right now, nets add a bunch of numbers and apply arbitrarily-picked limiting functions and arbitrarily-picked structures. Is it impossible that we find a way to train that is orders of magnitude more effective?

jacksmith21006 · 7 years ago
Overhyped? There are cars driving around Arizona without safety drivers as I type this.

The end result of this advancement will be earth-shattering for our world.

On the high compute cost: there is an aspect of that being true, but we have also seen advances in silicon to support it. Look at WaveNet using 16k cycles through a DNN, yet being offered at scale at a competitive price; that kind of proves the point.

nopinsight · 7 years ago
The brain most likely has much more than a petaflop of computing power, and it takes at least a decade to train a human brain to achieve grandmaster level at an advanced board game. In addition, as the other comment says, humans learn from hundreds or thousands of years of knowledge that other humans have accumulated, and they still lose to AlphaZero with its mere hours of training.

Current AIs have limitations but, at the tasks they are suited for, they can equal or exceed humans with years of experience. Computing power is not the key limit, since it will be made cheaper over time. More importantly, new advances are still being made regularly by DeepMind, OpenAI, and other teams.

https://www.quora.com/Roughly-what-processing-power-does-the...

Unsupervised Predictive Memory in a Goal-Directed Agent

https://arxiv.org/abs/1803.10760

felippee · 7 years ago
Sure, but have you heard about Moravec's paradox? And if so, don't you find it curious that over the 30 years of Moore's-law exponential progress in computing, almost nothing has improved on that side of things, and we've kept playing fancier games?
empath75 · 7 years ago
What, no progress six months after achieving a goal thought impossible even just a few years ago? Pack it up boys, it's all over but the crying.
raducu · 7 years ago
I was thinking just that when reading the paragraphs about the uber accident. There's absolutely nothing indicating that future progress is not possible, precisely because of how absurd it seems right now.
wslh · 7 years ago
Retrospectively it might sound like the Japanese were partially right in pursuing "high performance" computing with their fifth generation project [1], but the Alpha Zero results are impressive beyond the computing performance achieved. It was a necessary element, but not the only one.

[1] https://mobile.nytimes.com/1992/06/05/business/fifth-generat...

visarga · 7 years ago
> petabytes and petabytes of flops

Why not petaflops of bytes then?

YeGoblynQueenne · 7 years ago
>> Makov Tree Search

You mean Monte Carlo Tree Search, which is not at all like Ma(r)kov chains. You're probably mixing it up with Markov decision processes though.

Before criticising something it's a good idea to have a solid understanding of it.

gremlinsinc · 7 years ago
We very well might be in a deep-learning 'bubble' and at the end of a cycle... but I don't think this time around it's really the end for a long while; it's more likely a pivot point.

The biggest minds everywhere are working on AI solutions, and there's also a lot going on in medicine/science to map brains; if we can merge neuroscience with computer science, we might have more luck with AI in the future...

So we could have a drought for a year or two, but there will be more research and more breakthroughs. This won't be like the AI winters of the past, where the field lay dormant for 10+ years, I don't think.

nmca · 7 years ago
Moore's law (or at least its diminishing remnant) is not relevant here, because these are not single-threaded programs. Google got an 8x jump out of their TPUv2 -> v3 upgrade; parallel matrix multiplies at reduced precision are a long way away from any theoretical limits, as I understand it.
jacksmith21006 · 7 years ago
Totally agree, but why on earth was this downvoted?

The first-generation TPUs used 65,536 very simple cores.

In the end you have only so many transistors you can fit, and there are options for how to arrange and use them.

You might support very complex instructions and data types and then only four cores. Or you might support only 8-bit ints and very, very simple instructions, and use 65,536 cores.

In the end what matters is the joules needed to get something done.

We can clearly see that we get big improvements by using new processor architectures.

nopinsight · 7 years ago
A different take by Google’s cofounder, Sergey Brin, in his most recent Founders’ Letter to investors:

“The new spring in artificial intelligence is the most significant development in computing in my lifetime.”

He listed many examples below the quote.

“understand images in Google Photos;

enable Waymo cars to recognize and distinguish objects safely;

significantly improve sound and camera quality in our hardware;

understand and produce speech for Google Home;

translate over 100 languages in Google Translate;

caption over a billion videos in 10 languages on YouTube;

improve the efficiency of our data centers;

help doctors diagnose diseases, such as diabetic retinopathy;

discover new planetary systems; ...”

https://abc.xyz/investor/founders-letters/2017/index.html

An example from another continent:

“To build the database, the hospital said it spent nearly two years to study more than 100,000 of its digital medical records spanning 12 years. The hospital also trained the AI tool using data from over 300 million medical records (link in Chinese) dating back to the 1990s from other hospitals in China. The tool has an accuracy rate of over 90% for diagnoses for more than 200 diseases, it said.“

https://qz.com/1244410/faced-with-a-doctor-shortage-a-chines...

felippee · 7 years ago
Hi, author here:

Well first off: letters to investors are among the most biased pieces of writing in existence.

Second: I'm not saying connectionism did not succeed in many areas! I'm a connectionist at heart! I love connectionism! But that being said, there is a disconnect between the expectations and the reality. And it is huge. And it is particularly visible in autonomous driving. And it is not limited to the media or CEOs; it has made its way to top researchers. And that is a dangerous sign, which historically preceded a winter event...

nopinsight · 7 years ago
I agree that self-driving has been overhyped over the previous few years. The problem is harder than many people realize.

The difference between the current AI renaissance and the past pre-winter AI ecosystems is the level of economic gain realized by the technology.

The late-80s to early-90s AI winter, for example, resulted from the limitations of expert systems, which were useful but only in niche markets, and their development and maintenance costs were quite high relative to alternatives.

The current AI systems do something that alternatives, like Mechanical Turk workers, can only accomplish at much greater cost, and the alternatives may not even have the scale necessary for massive global services like Google Photos or YouTube autocaptioning.

The spread of computing infrastructure and connectivity into the hands of billions of global population is a key contributing factor.

tigershark · 7 years ago
Hi, why does your analysis only discuss the companies that are not doing so well in self-driving, leaving out Waymo's success story? They have already been hauling passengers without a safety pilot since last October. I guess without the slightest problem, otherwise we would have heard plenty about it in the news, as happened with the Tesla and Uber accidents. Isn't it a bit too convenient to leave out the facts that contradict your hypothesis?
ptero · 7 years ago
Making cars that drive safely on current, busy roads is a very difficult task. It is not surprising that the current systems do not do that (yet). It is surprising to me how well they still do. The fact that my phone understands my voice and my handwriting and does on-the-fly translation of menus and simple requests is a sign of major progress, too.

AI is overhyped and overfunded at the moment, which is not unusual for a hot technology (synthetic biology; dotcoms). Those things go in cycles, but the down cycles are seldom all-out winters. During the slowdowns the best technologies still get funding (less lavish, but enough to work on) and one-hit wonders die, both of which are good in the long run. My friends working in biology are doing mostly fine, even though there are no longer "this is the century of synthetic biology" posters at every airport and in every toilet.

ehsankia · 7 years ago
How can something be biased when it's listing facts?

Those are actual features that are available today to anyone, and that were made possible by AI. Do you think it would be possible to type "pictures of me at the beach with my dog" without AI in such a short time frame? Or to have cars that drive themselves without a driver? These are concrete benefits of machine learning; I don't understand how that's biased.

jacksmith21006 · 7 years ago
" letters to investors are among the most biased pieces of writing in existence. "

Maybe true but they are words that are about things which are either true or not true. Has nothing to do where the words were shared. Saying they are on an investment letter so not relevant seems very short sighted.

But just looking at the last 12 months it is folly to say we are moving to a AI winter. Things are just flying.

Look at self driving cars without safety drivers or look at something like Google Duplex but there are so many other examples.

wrycoder · 7 years ago
When I saw the Google demo of a CNN using video to split a single audio stream of two guys talking over each other, I became a believer.
felippee · 7 years ago
Hey, a small piece of advice for the future: never build your beliefs entirely on a YouTube video of a demo. In fact, never build your beliefs based on a demo, period.

This is notorious with current technology: you can demonstrate anything. A few years ago Tesla demonstrated a driverless car. And what? Nothing. Absolutely nothing.

I'm willing to believe stuff I can test myself at home. If it works there, it likely actually works (though it possibly needs more testing). But demo booths and YouTube - never.

sgt101 · 7 years ago
A rigorous evaluation with particular focus on where it doesn't work would be better.
fratlas · 7 years ago
Do you have a video of this?
acdha · 7 years ago
> understand images in Google Photos

This is one of the areas I’m most enthusiastic about but … it’s still nowhere near the performance of untrained humans. Google has poured tons of resources into Photos and yet if I type “cat” into the search box I have to scroll past multiple pages of results to find the first picture which isn’t of my dog.

That raises an interesting question: Google has no way to report failures. Does anyone know why they aren’t collecting that training data?

chillydawg · 7 years ago
They collect virtually everything you do on your phone. They probably notice that you scroll a long way after typing cat and so perhaps surmise the quality of search results was low.
lispm · 7 years ago
> understand

what is this 'understand'?

baxtr · 7 years ago
Well, the way I see it: mostly, these are "improvements", huge ones, but still improvements. They ride the current AI tech wave, take it and optimize apps with it.

Most of the things that people dream of and do marketing about need another leap forward, which we haven't seen yet (it'll come for sure).

ehsankia · 7 years ago
Almost anything that has to do with image understanding is entirely AI. Good luck writing an algorithm to detect a bicycle in an image. This also includes disease diagnostics, as most of those have to do with analyzing images for tumors and so on.

Also, while a lot of these can be seen as "improvements", in many cases that improvement pushed them past the threshold of actually being usable or useful. Self-driving cars, for example, need to reach a certain level before they can be deployed, and we would never have reached that without machine learning.

buvanshak · 7 years ago
>caption over a billion videos in 10 languages on YouTube;

Utterly useless. And I don't think it is improving.

kettlecorn · 7 years ago
This is less useless than you think. Captioning video could allow for video to become searchable as easily as text is now searchable. This could lead to far better search results for video and a leap forward in the way people produce and consume video content.
ericd · 7 years ago
I disagree, even with the high error rate, it provides a lot of context. Also, a lot of comedy.
etaioinshrdlu · 7 years ago
I find the auto captions pretty useful.

dekhn · 7 years ago
I'm a scientist from a field outside ML who knows that ML can contribute to science. But I'm also really sad to see false claims in papers. For example, a good scientist can read an ML paper, see claims of 99% accuracy, and then probe further to figure out what the claims really mean. I do that a lot, and I find that accuracy inflation and careless mismanagement of data mar most "sexy" ML papers. To me, that's what's going to lead to a new AI winter.
fmap · 7 years ago
I'm in the same situation and it's really worrying.

Deep learning is the method of choice for a number of concrete problems in vision, nlp, and some related disciplines. This is a great success story and worthy of attention. Another AI winter will just make it harder to secure funding for something that may well be a good solution to some problems.

mtgx · 7 years ago
You hear Facebook saying all the time, to the public and to governments, that it "automatically blocks 99% of the terrorist content" with AI.

Nobody thought to ask: "How do you know all of that content is terrorist content? Does anyone check every video afterwards to ensure that all the blocked content was indeed terrorist content?" (assuming they even have an exact definition for it).

shmageggy · 7 years ago
Also, how do they know how much terrorist content they aren't blocking (the 1%), since they by definition haven't found it yet?
varelse · 7 years ago
I'm 100% convinced it can block 99% of all terrorist content that hasn't been effectively SEOed to get around their filters, because that's just memorizing attributes of the training-set data. Unfortunately, the world isn't a stationary system like these ML models (usually) require. I still get spam in my Gmail account, nowhere near as much as I do elsewhere, but I still get it.
fantasticsid · 7 years ago
> Does anyone check every video afterwards to ensure that all the blocked content was indeed terrorist content?

They might not, but they could sample them to be statistically confident?

jaggednad · 7 years ago
Yes, they have a test set. They don’t check every video, but they do check every video in a sample that is of statistically significant size.
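
As a rough illustration of what reviewing a random sample buys you (all numbers made up): the reviewed sample gives an estimate of the precision of the blocking, with a confidence interval that shrinks as the sample grows.

  # Rough illustration (made-up numbers): estimate the precision of a blocking
  # system from a manually reviewed random sample of blocked items, with a
  # normal-approximation 95% confidence interval around the sample proportion.
  import math

  sample_size = 1000      # blocked items drawn at random for human review
  true_positives = 985    # of those, how many reviewers agreed were violations

  p_hat = true_positives / sample_size
  stderr = math.sqrt(p_hat * (1 - p_hat) / sample_size)
  low, high = p_hat - 1.96 * stderr, p_hat + 1.96 * stderr

  print(f"estimated precision: {p_hat:.3f} (95% CI roughly {low:.3f}-{high:.3f})")
  # Note: this says nothing about recall, i.e. the content that was never
  # flagged in the first place, which is the question raised upthread.
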
imh · 7 years ago
FYI This post is about deep learning. It could be the case that neural networks stop getting so much hype soon, but the biggest driver of the current "AI" (ugh I hate the term) boom is the fact that everything happens on computers now, and that isn't changing any time soon.

We log everything and are even starting to automate decisions. Statistics, machine learning, and econometrics are booming fields. To talk about two topics dear to my heart, we're getting way better at modeling uncertainty (bayesianism is cool now, and resampling-esque procedures aged really well with a few decades of cheaper compute) and we're better at not only talking about what causes what (causal inference), but what causes what when (heterogeneous treatment effect estimation, e.g. giving you aspirin right now does something different from giving me aspirin now). We're learning to learn those things super efficiently (contextual bandits and active learning). The current data science boom goes far far far far beyond deep learning, and most of the field is doing great. Maybe those bits will even get better faster if deep learning stops hogging the glory. More likely, we'll learn to combine these things in cool ways (as is happening now).
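
As one tiny example of those resampling-esque procedures: a bootstrap confidence interval needs nothing but cheap repeated computation (illustrative data only):

  # Tiny illustration of the resampling point above: a bootstrap confidence
  # interval for a mean, paid for with nothing but cheap repeated computation.
  import random

  random.seed(0)
  data = [random.gauss(10, 3) for _ in range(200)]  # stand-in for observed data

  boot_means = []
  for _ in range(10_000):
      resample = [random.choice(data) for _ in data]  # sample with replacement
      boot_means.append(sum(resample) / len(resample))

  boot_means.sort()
  low, high = boot_means[249], boot_means[9749]  # middle ~95% of bootstrap means
  print(f"bootstrap 95% CI for the mean: ({low:.2f}, {high:.2f})")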

Jach · 7 years ago
Honestly as much as it is slightly irritating to see deep learning hogging all the glory, there's a lot of money being sloshed around and quite a bit of it is spilling over to non-deep learning too. Which is great. An AI winter may be coming, though I think it's at minimum several years off, since big enterprises are just getting started with the most hyped things. If the hype doesn't return on its promises enough for sustained investment (that's a rather big if since the low hanging fruit aren't yet all picked) then the companies and funding will eventually recede, maybe even trigger another winter, but just as it takes a while to ramp up, it will also take a while to course correct. In the meantime all the related areas get better funding and attention (and chance to positively contribute to secure further investment) that they'd otherwise not have since we'd still be stuck in the low funding model from the last winter.
dx034 · 7 years ago
I think the problem is the definition of AI. It appears most in the field define it as a superset of ML, encompassing all kinds of statistical methods and data analysis. For the general public, AI is a synonym for deep learning. When large companies speak about AI they always mean deep learning, never just a regression (probably also because many don't see a regression as intelligent). So AI in the public's perception could face a winter but much of the domain of machine learning would be unaffected.
xamuel · 7 years ago
>For the general public, AI is a synonym for deep learning

I'd contend for the general public, AI is a synonym for machines like: HAL; The Terminator; Star Trek's "Data"; the robots in the film "AI"; and so on.

We're nowhere remotely in the vicinity of that, and no-one even has any plausible ideas about how to start.

A random person outside of tech probably doesn't even know what deep learning is. They might have heard of it somewhere in passing.

digitalzombie · 7 years ago
Bayesian can be seen as a subset of deep learning or hell a superset.

AI is the superset, machine learning is a subset of AI, and most funding is in deep learning. Once deep learning hits its limit, I believe there will be an AI winter.

Maybe there will be hype around statistics (cross fingers), which will lead to Bayesian methods and such.

eli_gottlieb · 7 years ago
>Bayesian can be seen as a subset of deep learning or hell a superset.

eh-hem

DIE, HERETIC!

eh-hem

Ok, with that out of my system, no, Bayesian methods are definitely not a subset of deep learning, in any way. Hierarchical Bayes could be labeled "deep Bayesian methods" if we're marketing jerks, but Bayesian methods mostly do not involve neural networks with >3 hidden layers. It's just a different paradigm of statistics.

johnmoberg · 7 years ago
How can Bayesian stuff be seen as a subset or superset of deep learning?
MichaelMoser123 · 7 years ago
Forget about self-driving cars - the real killer application of deep learning is mass surveillance. There are big customers for that (advertising, policing, political technology - we'd better get used to that term) and it's the only technique that can get the job done.

I sometimes think that there really was no AI winter, as we got other technologies that implemented the ideas: SQL databases can be seen as an application of many ideas from classical AI - for example, a declarative language for defining relations among tables; you can have rules in the form of SQL stored procedures. Actually it was a big break (paradigm shift is the term) in how you deal with data - the database engine has to do some real behind-the-scenes optimization work in order to get a workable representation of the data definition (that is certainly bordering on classical AI in complexity).

These boring CRUD applications are light years ahead of how data was handled back in the beginning.

dx034 · 7 years ago
The BBC recently requested information about the use of facial recognition from UK police forces. Those that use facial recognition reported false positive rates of >95%. That led some to abandon the systems; others just use it as one form of pre-screening. Mass surveillance with facial recognition is nowhere near levels where it can be used unsupervised. And that's even before people actively try to deceive it.

For advertising, I'm also not sure there's been a lot of progress. Maybe it's because I opted out too much, but I have the feeling that ad targeting hasn't become more intelligent, rather the opposite. It's been a long time since I was last surprised by the accuracy of a model tracking me. Sure, targeted ads for political purposes can work very well, but they are nothing new and don't need deep learning or any other "new" technology.

Where I really see progress is data visualisation. Often dismissed, it can be surprisingly hard to get right, and the tools around it (esp. for enterprise use) have developed a lot over recent years. And that's what companies need. No one's looking for a black-box algorithm to replace marketing; they just want to make some sense of their data and understand what's going on.

nmca · 7 years ago
Aha, yeah, I saw this in the news - a pretty classic case of people horribly misunderstanding statistics and/or misrepresenting the facts. Let's say one person out of 60 million has Sauron's ring. I use my DeepMagicNet to classify everyone and get 100 positive results. Only one is the ringbearer, so I have a 99% error rate. Best abandon ship.
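
The same back-of-the-envelope arithmetic, spelled out with hypothetical numbers for a very rare target class:

  # Back-of-the-envelope base-rate arithmetic (all numbers hypothetical):
  # even a highly specific classifier yields mostly false positives when the
  # thing being searched for is extremely rare.
  population = 60_000_000
  true_targets = 1                         # the lone "ringbearer"
  sensitivity = 1.0                        # assume every true target is flagged
  false_positive_rate = 100 / population   # roughly 100 spurious flags overall

  flagged_true = true_targets * sensitivity
  flagged_false = (population - true_targets) * false_positive_rate
  precision = flagged_true / (flagged_true + flagged_false)

  print(f"flagged: ~{flagged_true + flagged_false:.0f}, "
        f"precision: {precision:.2%}")  # ~1%, i.e. a '99% false positive rate'
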
jopsen · 7 years ago
> Those that use facial recognition reported false positive rates of >95%.... Mass surveillance with facial recognition is nowhere near levels where it can be used unsupervised.

Is this deep neural networks with the latest technologies?

While yes, deep learning isn't going to solve everything, we'll probably see significant changes in the products available as the technology discovered over the past few years makes it into the real world.

Most scanners that do OCR and most forms of facial recognition aren't using deep neural networks with transfer learning, YET.

This is not to say that discoveries will continue; winter is probably coming :)

wqnt · 7 years ago
A 95% false positive rate is extremely good for surveillance, as the cost of a false positive is low (wasted effort by the police). It means that for every 20 people the police investigate, one is a target.
dqpb · 7 years ago
I disagree. Even if there are a number of big customers for mass surveillance, self-driving cars fundamentally change the platform of our economy for everyone.
_Tev · 7 years ago
That sounds like a US-centric PoV. For a large portion of (most?) Europeans, self-driving won't change much in their lives, definitely not anything major.
rch · 7 years ago
> Deepmind hasn't shown anything breathtaking since their Alpha Go zero.

Didn't this just happen? Maybe my timescales are off, but I've been thinking about AI and Go since the late 90s, and plenty of real work was happening before then.

Outside a handful of specialists, I'd expect another 8-10 years before the current state of the art is generally understood, much less effectively applied elsewhere.

twtw · 7 years ago
I had the same response. AlphaZero was published like 5 months ago. Saying they've reached the end of the line because they haven't matched AlphaZero in six months is lame.
Maybestring · 7 years ago
It also marked the end of a major multi-year project. With Deepmind moving that team to focus on other problems I wouldn't expect immediate results.
ehsankia · 7 years ago
Also, why does every single result have to be breathtaking? Here's a quick example: at I/O they announced that their work on Android improved battery life by up to 30%. That's pretty damn impressive.
felippee · 7 years ago
> Also, why does every single result have to be breathtaking?

If you build up the hype like, say, Andrew Ng, it had better be. Also, if you consume more money per month than all the CS departments of a mid-sized country, it had better be.

dx034 · 7 years ago
Because it's the only time we see it in action. The speech recognition of my Amazon Echo is still subpar (and it feels like it's getting worse each week), and ad targeting also hasn't really improved. Of all the claims that came with deep learning, Go was the only one where you really saw a result. I'm not sure which version of Android will bring the improved battery life (or which manufacturers), but I wouldn't be surprised if the 30% were a bit optimistic.

I get that a lot of services we use on a daily basis make use of deep learning to accomplish tasks. But I don't really see what has fundamentally changed over the past 5 years in the way I use services. Siri was introduced 7 years ago and while we have clearly made progress in voice recognition, it's nowhere close to what many had hoped.

mastrsushi · 7 years ago
Warning: 23-year-old CS grad angst-ridden post:

I'm very sick of the AI hype train. I took a PR class in my last year of college, and they couldn't help but mention it. LG smart TV ads mention it, Microsoft commercials, my 60-year-old tech-illiterate dad. Do any end users really know what it's about? Probably not, nor should that matter, but it's very triggering to see something that was once a big part of CS turned into a marketable buzzword.

I get triggered when I can't even skim the news without hearing Elon Musk and Stephen Hawking ignorantly claim AI could potentially take over humanity. People believe them because of their credentials, when professors who actually teach AI will say otherwise. I'll admit I've never taken a course in the subject myself. An instructor I've had who teaches the course argues it doesn't even exist; it's merely a sequence of given instructions, much like any other computer program. But hey, people love conspiracies, so let their imagination run wild.

AI is today what Big Data was about 4 years ago. I do not look highly on any programmer that jumps on bandwagons, especially for marketability. Not only is it impure in intention, it's foolish when there are 1000 idiots just like them over-saturating the market. Stick with what you love, even if it's esoteric. Then you won't have to worry about your career value.

Jach · 7 years ago
You can finish a CS undergrad without taking any AI course? Or just haven't taken one yet? It's very helpful to go through even a tiny bit of AI: A Modern Approach to cut through a lot of the hype. What annoys me is that when people say "Machine Learning" these days they almost invariably mean deep learning, ignoring all the rest of AI.

> I can't even skim the news without hearing Elon Musk and Stephen Hawking ignorantly claim AI could potentially take over humanity.

Have you considered that their claims may not in fact be ignorant, just the reporting around them? For some details perhaps you would start with this primer from a decade ago, section 4 & 5 if you're in a hurry: http://intelligence.org/files/AIPosNegFactor.pdf

Or if you want a professor's opinion, from one of the co-authors to the previously mentioned AI:AMA check out some of the linked pointers on his home page: http://people.eecs.berkeley.edu/~russell/

mastrsushi · 7 years ago
I know right, the school I went to wasn't exactly the best.

I skimmed through sections 4 & 5, optimization processing was difficult to understand.

When I was in elementary school, I remember pitying the mentally disabled children, knowing their financial success was destined, so I connected with the g-factor definition. I really think general intelligence is more a sense of awareness across all clusters, whether social or cognitive. I've met tons of great students in math courses who simply cannot converse with the general public. I've also met tons of people on the streets of my city who would have a difficult time understanding high school algebra.

As for Section 5, I do think the rise of AI over humanity is completely in our grasp. I really should take a course on the subject before I sound like the people I'm criticizing for ignorance, but from a general perspective, I cannot see it getting outside of our control. As Eliezer said, we can make predictions, but only time will clear the fog.

dx034 · 7 years ago
> What annoys me is that when people say "Machine Learning" these days they almost invariably mean deep learning, ignoring all the rest of AI.

But that's not people's fault. Companies only say AI if they mean deep learning. I've yet to hear a company advertising AI if they accomplished it with a linear regression. Maybe experts should stop talking about AI and use specific terms instead (Deep Learning in the case of this article).

danielbarla · 7 years ago
Your post has two different points, and I think they should be separated. Yes, there's an AI hype train, and it's pretty tiring. I'm starting to wonder when the first AI-enhanced shoes are going to come out. Or an AI enhanced blockchain, right?

That said, the rest of the rant about AI is less solid. Sure, AI today is fairly boring, run-of-the-mill data processing / optimisation stuff (I sort of know, though I only have an MSc in the topic). Much of the promise of near-future AI is that we can go from human-designed, or shall we say human-bootstrapped, AI to the self-bootstrapping kind. The fact that AlphaGo pretty much does this (in a very limited capacity), and accomplished something which classical programming and game "AI" couldn't, should show us that we're pretty close to this type of AI being highly effective. How exactly the future unfolds from there is anyone's guess, but outright calling it ignorant is... pretty ignorant, IMHO.

kgwgk · 7 years ago
> I'm starting to wonder when the first AI-enhanced shoes are going to come out.

Wait no more:

The first connected cycling insole with artificial intelligence: https://www.digitsole.com/store/connected-insoles-cycling-ru...

The World's First Intelligent Sneaker: https://www.kickstarter.com/projects/141658446/digitsole-sma...

mastrsushi · 7 years ago
I never claimed AI was boring or run of the mill, just that it's not of current interest to me. It's when I hear Hawking make claims like this that I call ignorance:

"Computers can, in theory, emulate human intelligence, and exceed it,' he said. 'Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

I feel like I'm in a high school stoner circle, yeeesh!

Naomarik · 7 years ago
I'm in the same boat, except now I have to deal with both this and blockchain.

It's pretty clear that everyone pushing this is being dishonest. Sometimes I wonder if they are intentionally being dishonest or if they just don't know what they don't know. Very few people are doing useful, true machine learning, and the applications are very specific, with their own sets of quirks. It's just to make some quick money and get out.

What somehow never gets said amid all this hype is that you need to hire good software devs to make a great solution to a problem. All these buzzword-driven ideas tend to confuse a bunch of people and die out after wasting everyone's money. In my region I keep seeing a bunch of business suits pushing "solve X with Y" ideas with no justification and no engineers behind them.

Yesterday I found out I have to have yet another meeting to explain why "blockchain" does not translate into a full solution to a very superficial, poorly thought-out problem.

The only reason I've had a voice in my region amid all this noise is that I've made things people can see actually work.

empath75 · 7 years ago
I work for a consultancy and I was brought in to talk about 'our blockchain strategy', because I was one of the few people at the company who had actually read the white papers and been invested in it. This was towards the end of the last bull run. I think they expected me to tell them to go all in on it, but I essentially said it was a bunch of bullshit hype and a solution in search of a problem, and then I didn't get invited to any more meetings. A few weeks later they announced we were going to start selling blockchain solutions, right as the crypto market crashed.

Meanwhile, in the real world, they can't even figure out the basics like containers and CI/CD, which is what we're actually dealing with in our actual contract.

sanxiyn · 7 years ago
> Very few people are doing useful true machine learning, and the applications are very specific with its own set of quirks.

I worked on nudity detector in 2017. Deep learning works, and is useful. Although you are right it's very specific and quirky.

I found "How HBO's Silicon Valley built Not Hotdog" article very interesting, because it's basically the same problem. They found MobileNet better than SqueezeNet and ELU better than ReLU. You know what? We found SqueezeNet better than MobileNet and ReLU better than ELU for our problem and data. Who know why.

https://medium.com/@timanglade/how-hbos-silicon-valley-built...

raducu · 7 years ago
But this is normal with any new technology -- there used to be drive-in movie theaters, and people thought of putting miniature nuclear reactors in everyday appliances; it doesn't mean there's a car winter coming or a nuclear winter coming (pun intended).

It's absolutely human nature to think of crazy ways of using things in new ways; probably most of those ways don't work out in the end.

partycoder · 7 years ago
The history of humanity is full of examples of people using technology to prevail over other groups of people. Applications of AI and ML will be no exception: computer vision, game theory, autonomous systems, material design, espionage, cryptography... You name it.

Supraintelligent AI is not required to cause severe problems.

mastrsushi · 7 years ago
I'm sure they have, but none of those fields have been exploited for marketability, at least nowhere near this degree.
akvadrako · 7 years ago
AI taking over is the biggest threat facing humanity, but I don't think Hawking ever claimed it was imminent; it's likely 1000+ years away.

It should be obvious why a superior intelligence is something dangerous.

pfisch · 7 years ago
"it's likely 1000+ years away."

That seems like a pretty high number when you consider the exponential rate of technological advancement.

1000 years in the future is probably going to be completely unrecognizable to us given the current rate of change in society/tech.

isoprophlex · 7 years ago
The day we create a superior intelligence will be the greatest day of humanity. It will be fantastic if there's something beyond humanity. A logical, evolutionary conclusion to us.

None of our Darwinian ancestors ever had the capacity for intellectual fear of their superior successors...

Hopefully we as a species can get beyond our fear.

Scea91 · 7 years ago
It would be great if people feared climate change the same way they fear AI takeover.
tome · 7 years ago
The biggest threat facing humanity is 1000+ years away?
edejong · 7 years ago
AI is already taking over humanity. The news you read, the products you buy, the advice you take, the friends you meet are all partially or fully supported by machine-learned algorithms.

Personally, I do not doubt AI-based methods are changing our language, our communication patterns and our transport infrastructure.

The problem with the statement "AI will take over humanity" is actually in:

- What exactly is AI? There are many definitions. Most researchers adopt the weakest forms, whereas the general public adopts the strongest form.

- What exactly is 'take-over'? Does this mean: in control? Like a dictator is in control over a country? Or: adopting us as slaves? As a gradual change, when does it 'take-over'? At 50%? Does this need to be a conscious action by an AI actor, or would an evolutionary transition suffice?

- What exactly is humanity? I would go for the definition: "the quality or state of being human", but most people probably read in it: "the human race". In the former case, technology is a part of the quality of being human. In Heideggerian fashion, we become the technology and the technology becomes us. Technology, and AI as part of it, has been taking over humanity since we started permanently adjusting our environments.

richardbatty · 7 years ago
> People believe them because of their credentials, when professors who actually teach AI will say otherwise

While some experts like Andrew Ng are sceptical of AI risk, there are lots of others like Stuart Russell who are concerned.

Here is a big list of quotes from AI experts concerned about AI risk: http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...

foobarbecue · 7 years ago
I mean, at least you don't have to worry about Stephen Hawking bothering you any more...

to_bpr · 7 years ago
>I get triggered

You should be more considerate than to be throwing around this term.