simlevesque · 8 years ago
The live demo is super fun (I really hope this does not kill it) : https://affinelayer.com/pixsrv/
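For context, the demo is trained with the pix2pix objective: a generator G maps an edge map x to an image G(x), and training combines an adversarial term with an L1 reconstruction term. Here is a minimal numpy sketch of that objective — the function names, the fixed discriminator score, and the toy arrays are all illustrative stand-ins, not the authors' code:

```python
import numpy as np

# Sketch of the pix2pix-style objective:
#   L = L_GAN(G, D) + lambda * E[ ||y - G(x)||_1 ]
# where y is the real target image and G(x) is the generated one.

rng = np.random.default_rng(0)

def l1_loss(target, output):
    # Mean absolute error between the real image and the generated one.
    return np.mean(np.abs(target - output))

def gan_loss(d_score_on_fake):
    # Non-saturating generator loss: -log D(G(x)), with D's score in (0, 1).
    return -np.log(d_score_on_fake)

y = rng.random((8, 8, 3))    # "real" target image (toy data)
g_x = rng.random((8, 8, 3))  # generator output for the edge map (toy data)
d_fake = 0.5                 # discriminator score on g_x (made-up value)
lam = 100.0                  # L1 weight used in the pix2pix paper

total = gan_loss(d_fake) + lam * l1_loss(y, g_x)
```

The heavy L1 weighting is what keeps outputs structurally aligned with the input sketch — which is why the demo's mutant cats still look cat-shaped.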
logicallee · 8 years ago
Thanks for the link - you're right, it was fun!
Agathe · 8 years ago
Indeed! I made a few mutant cats and weird shoes...
tonmoy · 8 years ago
Serious question: can graphics rendering in games be replaced with a technology like this? The developer would train a model to convert a simple low-poly-count 3D render into a very beautiful-looking image, and the gamer's PC would use those weights on the fly to make the game look beautiful. No need for expensive GPUs (as long as NN predictions are cheap).
Houshalter · 8 years ago
>No need for expensive GPUs

These techniques require expensive GPUs. Even with top-of-the-line hardware it takes a few seconds per frame. Perhaps something like this could be used to assist animators and modelers, though.

jacquesm · 8 years ago
But you'd only have to render the level once; after that it's all textures to be slapped on polygons. And those 'expensive GPUs' are pretty much a given in that context, since they were designed for games to begin with.

In fact, let's stimulate this to make sure GPUs get even faster and have more memory.

GP is wrong that GPUs won't be needed, but that doesn't make it a bad idea as such.

bitL · 8 years ago
Yes, indeed, it's an animator/modeler's dream ;-)
Iv · 8 years ago
Yes, I expect some things like that to happen. It would probably start as an improved cel-shading or anime-style rendering mode but could end up with more realistic rendering.

However in terms of realism, we are already at a quality that will be hard to beat.

rasz · 8 years ago
this is how dreaming works
billy2201 · 8 years ago
There is an implementation in TensorFlow too: https://github.com/vanhuyz/CycleGAN-TensorFlow
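The core idea the linked repo implements is CycleGAN's cycle-consistency loss: a generator G maps domain X to Y, F maps Y back to X, and training penalizes ||F(G(x)) - x||_1 and ||G(F(y)) - y||_1 so that unpaired translation stays faithful to the input. A minimal numpy sketch, where the "generators" are placeholder linear maps rather than real networks:

```python
import numpy as np

rng = np.random.default_rng(1)

def G(x):
    # Stand-in for the X -> Y generator network.
    return x * 2.0

def F(y):
    # Stand-in for the Y -> X generator network.
    return y / 2.0

def cycle_consistency_loss(x, y):
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> Y -> back to X
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> X -> back to Y
    return forward + backward

x = rng.random((4, 4))  # toy sample from domain X
y = rng.random((4, 4))  # toy sample from domain Y
loss = cycle_consistency_loss(x, y)
```

With these exact-inverse placeholders the loss is zero; with real networks it is minimized jointly with the adversarial losses for G and F.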
alphapapa · 8 years ago
This is almost scary if you think about it, because this is in its infancy. In 5, 10, 15 years, will any photo be able to be authenticated? Will this be doable in real-time at high resolution, on live video feeds? Will we be able to believe anything but our own eyes?
jorgemf · 8 years ago
Infancy? Convolutional neural networks are about 20 years old. It's only recently, once we were able to use bigger models and more data on GPUs, that it became mainstream and a lot of people started doing things with it. So it's more like in its maturity. It's now a field growing slowly but steadily. Don't expect big things, but small increments every year (which, by the way, are awesome stuff).
taneq · 8 years ago
It will always be a race between recording technology and image generation technology. Low-res, low-dimensional images (small black-and-white photos, say) will be easy to fake rapidly. High-res, higher-dimensional images (full 3D movies of an area at sub-millimeter precision, say) will be much harder to fake within a reasonable time frame. You'll never be certain, but the higher the quality of the recording, the more confident you can be that it's legitimate.

(Iain M. Banks touches on this in his book The Player of Games, where a drone blackmails a character by making a recording of that character misbehaving at high enough resolution that it couldn't have been faked in the time available.)

rcarmo · 8 years ago
The cat demo is hilarious. But kind of spooky if you consider that cats are still recognisable as cats :)

https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/raw/...

WhitneyLand · 8 years ago
The only thing not exciting about watching the AI renaissance unfold, one that may eventually break the Turing barrier, is that the research demos are shit for quality.

I admit it's exciting to zoom in on that little zebra video, but the RealVideo sizing makes it hard to characterize artifacts.

Some papers include dozens of images at 32x32 and I wonder if people really stop to look at them.

A small price to pay I guess. Most first worldish problem ever?

r0muald · 8 years ago
As always, I would prefer more focus on training your own models rather than running prebaked ones.
tombert · 8 years ago
This is crazy...I wonder how long it'll take for this to be a feature in something like After Effects.