This could enable higher-quality instant render previews for 3D designers in web or native apps using on-device transformer models.
Note the timings above were on an A100 with an unoptimized PyTorch version of the model. The average user's GPU is of course much less powerful, but a 3D designer's workstation might still be powerful enough to see significant speedups over traditional rendering. A web-based system could even connect to A100s on the backend and stream the images to the browser.
Limitations are that it's not fully accurate, especially as scene complexity scales, e.g. with shadows of complex shapes (plus, I imagine, particles or strands), so the final renders will probably still be done traditionally to avoid the nasty visual artifacts common in many AI-generated images and videos today. But who knows, it might be "good enough" and bring enough of a speed increase to justify use by big animation studios, which need to render full movie-length previews for music, story review, and so on.
In raytracing, error scales with the inverse square root of the sample count. While it is typical to use a very high sample count for the reference, real-world sample counts for offline renderers are about 1-2 orders of magnitude lower than in this paper.
I call it disingenuous because it is very common for a graphics paper to include a very high sample count reference image for quality comparison, but nobody ever does a timing comparison against it.
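The 1/sqrt(N) convergence is easy to demonstrate with a toy Monte Carlo estimator (this is a sketch, not the paper's renderer: the "pixel" here is just the integral of x^2 over [0, 1], whose true value is 1/3, and the function names are mine):

```python
import random
import math

def mc_estimate(n_samples, rng):
    # Toy "pixel": Monte Carlo estimate of the integral of x^2
    # over [0, 1]; the true value is 1/3.
    return sum(rng.random() ** 2 for _ in range(n_samples)) / n_samples

def rmse(n_samples, n_trials=2000, seed=0):
    # Root-mean-square error of the estimator over many trials.
    rng = random.Random(seed)
    true_value = 1.0 / 3.0
    sq_err = [(mc_estimate(n_samples, rng) - true_value) ** 2
              for _ in range(n_trials)]
    return math.sqrt(sum(sq_err) / n_trials)

# Quadrupling the sample count roughly halves the error:
for n in (64, 256, 1024):
    print(n, rmse(n))
```

This is why a 4000-spp reference takes so absurdly long: each halving of the remaining noise costs 4x the samples, so production renders stop 1-2 orders of magnitude earlier and denoise instead.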
Since the result is approximate, a fair comparison would be with other approximate rendering algorithms. A modern realtime path tracer plus denoiser can render much more complex scenes on a consumer GPU in less than 16 ms.
That "much more complex scenes" part is the crucial one. Using a transformer means quadratic scaling in both the number of triangles and the number of output pixels. I'm not up to date with the latest ML research, so maybe this has improved? But I don't think it will ever beat the O(log n_triangles) and O(n_pixels) theoretical scaling of a typical path tracer. (Practical scaling with respect to pixel count is sublinear due to the high coherency of adjacent pixels.)
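The asymptotic gap can be made concrete with a back-of-the-envelope cost model (all names and constants here are illustrative assumptions, not measurements of the paper's model):

```python
import math

def attention_ops(n_tokens, d_model=512):
    # Self-attention builds an n_tokens x n_tokens score matrix,
    # so its cost grows quadratically in the token count: O(n^2 * d).
    return n_tokens ** 2 * d_model

def bvh_ops(n_triangles, n_pixels, cost_per_node=10):
    # With a BVH, each ray resolves its hit in O(log n_triangles)
    # node visits, and total work is linear in the ray (pixel) count.
    return int(n_pixels * cost_per_node * math.log2(n_triangles))

# Doubling the input: attention cost quadruples, while BVH traversal
# only adds one extra tree level per ray (a few percent).
print(attention_ops(2000) / attention_ops(1000))            # -> 4.0
print(bvh_ops(2_000_000, 10**6) / bvh_ops(1_000_000, 10**6))
```

So even with much better constants, the transformer's curve crosses the path tracer's as triangle and pixel counts grow, which is exactly the "complex scenes" objection.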
Bypassing is not mentioned at all in the article. Is this because it's trivial to bypass for experienced people, or because they want to hide their method from the devs?
I remember when you could just change your DNS provider to bypass censorship. Nowadays, browsers and OSes provide secure DNS by default, and so censors have mostly switched to DPI-based methods. As this cat-and-mouse game continues, these governments will inevitably mandate spyware on every machine.
These privacy enhancements invented by Westerners only work for the threat model of Western citizens.
This, alongside people smuggling in Starlink, is making censorship useless.
That said, it will not come to that. They'll just mandate spyware installation.
“Hi all,
I have something to share with you. After much reflection, I have made the difficult decision to leave OpenAI.
My six-and-a-half years with the OpenAI team have been an extraordinary privilege. While I'll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years.
There's never an ideal time to step away from a place one cherishes, yet this moment feels right. Our recent releases of speech-to-speech and OpenAI o1 mark the beginning of a new era in interaction and intelligence – achievements made possible by your ingenuity and craftsmanship. We didn't merely build smarter models, we fundamentally changed how AI systems learn and reason through complex problems. We brought safety research from the theoretical realm into practical applications, creating models that are more robust, aligned, and steerable than ever before. Our work has made cutting-edge AI research intuitive and accessible, developing technology that adapts and evolves based on everyone's input. This success is a testament to our outstanding teamwork, and it is because of your brilliance, your dedication, and your commitment that OpenAI stands at the pinnacle of AI innovation.
I'm stepping away because I want to create the time and space to do my own exploration. For now, my primary focus is doing everything in my power to ensure a smooth transition, maintaining the momentum we've built.
I will forever be grateful for the opportunity to build and work alongside this remarkable team. Together, we've pushed the boundaries of scientific understanding in our quest to improve human well-being.
While I may no longer be in the trenches with you, I will still be rooting for you all. With deep gratitude for the friendships forged, the triumphs achieved, and most importantly, the challenges overcome together.
Mira
”