Serious question: can graphics rendering in games be replaced with a technology like this? The developer would train a model to convert a simple low-poly-count 3D image into a very beautiful-looking image, and the gamer's PC would use those weights on the fly to make the game look beautiful. No need for expensive GPUs (as long as NN inference is cheap).
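Roughly, the loop I'm imagining is an image-to-image generator run as a post-process over every rendered frame. Here is a minimal PyTorch sketch of that idea; the generator below is a toy stand-in and the weight file name is made up, a real pix2pix-style network would be far bigger:

    import time
    import torch
    import torch.nn as nn

    # Stand-in for a trained image-to-image generator (pix2pix-style).
    # In practice you would load real trained weights; this tiny conv
    # stack only exists so the sketch runs end to end.
    generator = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh(),
    ).eval()
    # generator.load_state_dict(torch.load("lowpoly2photo.pt"))  # hypothetical weights

    device = "cuda" if torch.cuda.is_available() else "cpu"
    generator = generator.to(device)

    # One 1080p frame from the cheap low-poly renderer, normalized to
    # [-1, 1] as image-to-image models usually expect.
    frame = torch.rand(1, 3, 1080, 1920, device=device) * 2 - 1

    with torch.no_grad():
        start = time.perf_counter()
        pretty = generator(frame)        # the "beautified" frame
        if device == "cuda":
            torch.cuda.synchronize()     # wait for the GPU before timing
        print(f"per-frame latency: {time.perf_counter() - start:.3f}s")

Whether that forward pass can ever be cheaper than just rendering the detail directly is the open question.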
These techniques require expensive GPUs. Even with top-of-the-line hardware it takes a few seconds per frame. Perhaps something like this could be used to assist animators and modelers, though.
But you'd only have to render the level once; after that it's all textures to be slapped on polygons. And those 'expensive GPUs' are pretty much already there in that context, since they were designed for games to begin with.
In fact, let's stimulate this to make sure GPUs get even faster and have more memory.
The GP is wrong that GPUs won't be needed, but that doesn't make it a bad idea as such.
Yes, I expect some things like that to happen. It would probably start as an improved cel-shading or anime-mode rendering (rough sketch below) but could end up with more realistic rendering.
However, in terms of realism, we are already at a quality that will be hard to beat.
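An "anime mode" like that wouldn't even need paired data: CycleGAN-style cycle-consistency (the same technique behind the zebra demo linked further down) lets two generators train on unpaired engine screenshots and anime stills. A rough sketch with toy networks, everything here illustrative:

    import torch
    import torch.nn as nn

    def tiny_generator():
        # Toy stand-in; the real networks are ResNet/U-Net style.
        return nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    G = tiny_generator()   # game render -> anime look
    F = tiny_generator()   # anime look  -> game render
    l1 = nn.L1Loss()

    # Unpaired batches: frames from the engine and unrelated anime stills.
    renders = torch.rand(4, 3, 256, 256) * 2 - 1
    anime   = torch.rand(4, 3, 256, 256) * 2 - 1

    fake_anime   = G(renders)            # stylize a game frame
    recon_render = F(fake_anime)         # and translate it back
    fake_render  = F(anime)              # the symmetric direction
    recon_anime  = G(fake_render)
    cycle_loss   = l1(recon_render, renders) + l1(recon_anime, anime)
    print(cycle_loss.item())

    # A full setup would add adversarial losses from two discriminators,
    # one judging each domain, so the stylized frames actually look anime.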
This is almost scary if you think about it, because this is in its infancy. In 5, 10, 15 years, will it still be possible to authenticate any photo? Will this be doable in real time at high resolution, on live video feeds? Will we be able to believe anything but our own eyes?
Infancy? Convolutional neural networks have been around for about 20 years. It's only recently, once we could train bigger models on more data using GPUs, that this became mainstream and a lot of people started doing things with it. So it's more like its maturity. It is now a field growing slowly but steadily. Don't expect big things, but small increments every year (which, by the way, are awesome stuff).
It will always be a race between recording technology and image-generation technology. Low-res, low-dimensional images (small black-and-white photos, say) will be easy to fake rapidly. High-res, higher-dimensional images (full 3D movies of an area at sub-millimeter precision, say) will be much harder to fake within a reasonable time frame. You'll never be certain, but the higher the quality of the recording, the more confident you can be that it's legitimate.
(Iain M. Banks touches on this in his book The Player of Games, where a drone blackmails a character by making a recording of that character misbehaving that is high enough resolution that it couldn't have been faked in the time available.)
The only thing not exciting about watching the AI renaissance unfold, one that may eventually break the Turing barrier, is that the research demos are shit for quality.
I admit it's exciting to zoom in on that little zebra video, but the RealVideo sizing makes it hard to characterize artifacts.
Some papers include dozens of images at 32x32 and I wonder if people really stop to look at them.
A small price to pay I guess. Most first worldish problem ever?
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/raw/...