After Stable Diffusion/Midjourney, the community is a little leery of deep learning, I've noticed. But I'm trying to carve out a space using neural networks in a different way anyway.
From that point of view, what's happening in physics today is no surprise, but it is a bit depressing: we've probably passed the level of complexity where models are useful and are now adding detail that makes them less so. I guess you can see it as a form of overfitting, like when less scrupulous AI researchers use the test set for validation.
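To make that test-set analogy concrete, here's a toy sketch; the dataset and model choices are arbitrary illustrations, not anything from the discussion above:

    # Toy sketch of "using the test set for validation": picking a
    # hyperparameter by peeking at the test split, then reporting the
    # score on that same split. The number comes out optimistically
    # biased -- a mild form of overfitting to the evaluation data.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Leaky protocol: choose C by test accuracy instead of using a
    # separate held-out validation split.
    best_c = max([0.01, 0.1, 1, 10],
                 key=lambda c: SVC(C=c).fit(X_train, y_train).score(X_test, y_test))
    print(SVC(C=best_c).fit(X_train, y_train).score(X_test, y_test))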
So in the sense that people get excited about science (see AI lately), I think these models are useful, even if they're pretty misspecified in the grand scheme of things. I can't speak to Sabine's specific frustrations, but it sounds a little like Gary Marcus's concerns about neural networks. In my day-to-day I definitely value a useful model over an exact one.
Most of us are observers when it comes to this tech. Sure, you can follow some quick tutorial to build a "learning model" somewhere, or read an article to grasp what is being discussed...
But when it comes to contributing something remotely significant, we've got nothing, whether due to a lack of resources (giant datasets and computing power) or a lack of the knowledge and experience to even theorize something plausible.
This is even more frustrating knowing how important a role this tech will play in the future.
Being a passive observer is not enough.
I'm looking forward to the 2018 AI summary.
For me, this year in DL was a lot narrower: it mostly revolved around systems becoming more human in capability. I think most of this is in the article already:
We can now hopefully say that ImageNet has been solved.
AlphaZero tackled structured games with a single algorithm [1].
Tacotron2 produced completely passable TTS [2].
GANs improved dramatically and saw some good theory to back it up [3] [4].
We also started to care more about how well our models do in general, which to me shows maturity:
Adversarial examples showed their teeth, and hopefully convinced everyone to care about robust models [5] [6]. Reinforcement learning algorithms were shown to have poor transferability [7].
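For anyone who hasn't seen an adversarial example constructed, here's a minimal sketch of the classic fast gradient sign method (FGSM); the cited papers use more elaborate attacks, and `model` here stands in for any trained PyTorch classifier:

    # FGSM sketch: nudge every pixel a tiny step (eps) in whatever
    # direction increases the classifier's loss, which is often enough
    # to flip the prediction. Names are illustrative, not from the
    # cited papers.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, eps=0.03):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Perturb along the sign of the input gradient and keep the
        # result a valid (0..1 normalized) image.
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()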
In 2018 I hope to see a new kind of CNN. Residual-style networks are the norm now, since they mostly solve problems with gradient flow. But take away all the skip connections and we're mostly left with a VGG-style linear net with box filters. I'd be really excited to see a network with image-sized conv filters that could adapt their shape (and therefore their representational power) to a given feature or signal.
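To be concrete about the skip connections I mean, here's a minimal residual block sketch (channel-preserving 3x3 convs are an assumption for simplicity, not any particular paper's exact block):

    # Minimal residual block: the identity shortcut gives gradients a
    # direct path around the conv stack, which is the gradient-flow fix
    # mentioned above. Remove the "+ x" and you're back to a plain
    # VGG-style stack of small box filters.
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.relu = nn.ReLU()

        def forward(self, x):
            out = self.relu(self.conv1(x))
            out = self.conv2(out)
            return self.relu(out + x)  # skip connection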
Hopefully 2018 is the year where people stop calling AI a one-trick pony. I don't hear it as often these days, but I think it's time to put that phrase to rest.
[1] https://arxiv.org/abs/1712.01815
[2] https://research.googleblog.com/2017/12/tacotron-2-generatin...
[3] https://arxiv.org/abs/1710.10196
[4] https://arxiv.org/abs/1701.04862
[5] https://arxiv.org/abs/1707.07397
At one time there apparently was this: https://mono.frm.fm/en/shop/, but the price was crazy. There's not much out there in terms of appropriately priced digital picture frames.