Extracting sufficient coherency from path tracing in order to be able to get good SIMD utilization is a surprisingly difficult problem that much research effort has been poured into, and Moonray has a really interesting solution!
This paper is the first place I've found a production use of Knights Landing / Xeon Phi, the Intel massively-multicore Atom-with-AVX512 accelerator system, outside of HPC / science use cases.
This is exactly why I jumped into the comments. I was hoping someone had some relevant implementation details that aren't just a massive GitHub repo (which is still awesome, but hard to digest in one sitting).
> Extracting sufficient coherency from path tracing in order to be able to get good SIMD utilization is a surprisingly difficult problem
Huh, I'd have assumed SIMD would just be exploited to improve quality without a perf hit, by turning individual paths into ever so slightly dispersed path-packets likely to still intersect the same objects. More samples per path traced...
If you only ever-so-slightly perturb paths, you generally don't get anywhere near as much of a benefit from Monte Carlo integration, especially for things like light transport at a global, non-local scale (it might plausibly be useful for splitting on scattering bounces or something in some cases).
So it's often worth paying the penalty of having to sort rays/hitpoints into batches to intersect/process them more homogeneously, at least in terms of noise variance reduction per progression.
But very much depends on overall architecture and what you're trying to achieve (i.e. interactive rendering, or batch rendering might also lead to different solutions, like time to first useful pixel or time to final pixel).
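To make the batching idea concrete, here is a minimal sketch (my own illustration, not MoonRay's actual implementation) of the common "queue, sort, then shade" pattern: intersection results are buffered, sorted by material/shader ID, and then shaded in homogeneous runs so that SIMD lanes executing the same shader stay coherent.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical hit record; field names are illustrative only.
    struct HitPoint {
        uint32_t materialId;  // which shader will run for this hit
        float    u, v;        // surface parameters at the hit
        float    rayT;        // hit distance along the ray
    };

    // Stub: a real renderer would invoke a vectorized (e.g. ISPC) shading
    // kernel over the whole run here, one SIMD lane per hit.
    void shadeBatch(const HitPoint* /*hits*/, std::size_t /*count*/,
                    uint32_t /*materialId*/) {}

    // Instead of shading each hit as soon as it is found (divergent),
    // buffer hits, sort by material, and shade contiguous runs.
    void shadeQueued(std::vector<HitPoint>& queue) {
        std::sort(queue.begin(), queue.end(),
                  [](const HitPoint& a, const HitPoint& b) {
                      return a.materialId < b.materialId;
                  });

        std::size_t runStart = 0;
        for (std::size_t i = 1; i <= queue.size(); ++i) {
            if (i == queue.size() ||
                queue[i].materialId != queue[runStart].materialId) {
                shadeBatch(&queue[runStart], i - runStart,
                           queue[runStart].materialId);
                runStart = i;
            }
        }
        queue.clear();
    }

The penalty mentioned above is exactly the buffering and sorting overhead here, paid in exchange for homogeneous shadeBatch calls.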
Are there any comparisons to GPU-accelerated rendering? It seems most people are going that direction rather than trying to optimize for CPUs these days, especially via AVX instructions.
CPUs are still king at the scale Dreamworks/Pixar/etc. operate at. GPUs are faster up to a point, but they hit a wall in extremely large and complex scenes: they just don't have enough VRAM, or the work is too divergent and batches too small to keep all the threads busy. In recent years the high-end renderers (including MoonRay) have started supporting GPU rendering alongside their traditional CPU modes, but the GPU mode is meant for smaller-scale work like an artist iterating on a single asset, and then for larger tasks and final-frame rendering it's still off to the CPU farm.
I talked to some folks who worked there, years ago, and was surprised they didn't use GPUs. I got the impression that the software was largely based on code that dated back to the 1990s.
Quality 3D animation software is available to anyone with Blender. If someone gets this renderer working as an addon (which will obviously happen), artists will get a side-by-side comparison of what their work looks like with both Cycles and a professional studio product, for free.
This is win, win, win for Blender, OSS and the community.
This. Pixar's RenderMan has been an "option" for a while. It was out of band though. The Cycles team will look at the theory behind what's going on in renderers like this and will make the tech work inside Cycles. Maybe someone will port this as another render option, but really the sauce is their lighting models and parallel vectorization, which could improve Cycles' already abysmally slow render times.
Surprised nobody has mentioned this, but it looks like it implements the render kernels in ISPC^, which is a tool that exposes a CUDA-like SPMD model that runs over the vector lanes in the CPU.
Vectorization is the best part of writing Fortran. This looks like it makes it possible to write Fortran-like code in C. I wonder how it compares to ifort / OpenMP?
OpenMP starts with a serial model, the programmer tags the loop they want to run in parallel with a directive, and the compiler tries to vectorize the loop. This can always fail, since it relies on a computer program reasoning about an arbitrary block of code. So you have to really dig into the SIMD directives and understand their limitations in order to write performant code that actually does get vectorized.
ISPC starts with an SPMD programming model, and exposes only operations that conform to the model. "Vectorization" is performed by the developer - it's up to them to figure out how to map their algorithm or problem to the vector lanes (just like in CUDA or OpenCL). So there's no unexpectedly falling off the vectorization path - you start on it, and you can only do stuff that stays on it, by design.
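To make the contrast concrete, here's a toy example of my own (not from the MoonRay code): the OpenMP version is a serial C++ loop plus a directive asking the compiler to vectorize, while the equivalent ISPC kernel (sketched in the trailing comment, since ISPC is its own language and compiler) is written directly in terms of program instances, so there is no scalar path to fall back onto.

    #include <cmath>
    #include <cstddef>

    // OpenMP: a serial loop plus a hint. If the body did something the
    // compiler can't reason about (an opaque function call, possible
    // aliasing, irregular control flow), vectorization can silently fail
    // and the loop runs scalar.
    void scaleAndBias(float* out, const float* in, std::size_t n,
                      float s, float b) {
        #pragma omp simd
        for (std::size_t i = 0; i < n; ++i)
            out[i] = std::fma(in[i], s, b);
    }

    // Roughly the same kernel written as ISPC (shown here as a comment):
    // the body is expressed per program instance and maps onto the vector
    // lanes by construction.
    //
    //   export void scaleAndBias(uniform float out[], uniform float in[],
    //                            uniform int n, uniform float s,
    //                            uniform float b) {
    //       foreach (i = 0 ... n) {
    //           out[i] = in[i] * s + b;
    //       }
    //   }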
It's an offline 3D rendering software that turns a scene description into a photorealistic image. Usually such a description is for a single frame of animation.
Offline being the opposite of realtime. I.e. a frame taking possibly hours to render whereas in a realtime renderer it must take fractions of a second.
Maybe think of it like a physical camera in a movie. And a very professional one for that. But then a camera doesn't get you very far if you consider the list of people you see when credits roll by. :]
Similarly, at the very least, you need something to feed the renderer a 3D scene, frame by frame. Usually this is a DCC app like Maya, Houdini etc. or something created in-house. That's where you do your animation. After you created the stuff you want to animate and the sets where that lives ... etc., etc.
Moonray has a Hydra USD delegate. That is an API to send such 3D scenes to a renderer. There is one for Blender too[1]. That would be one way to get data in there, I'd reckon.
In the most casual sense, a renderer is what "takes a picture" of the scene.
A scene is made of objects, light sources, and a camera. The renderer calculates the reflection of light on the objects' surfaces from the perspective of the camera, so that it can decide what color each pixel is in the resulting image.
Objects are made up of a few different data structures: one for physical shape (usually a "mesh" of triangles); one for "texture" (color mapped across the surface); and one for "material" (alters the interaction of light, like adding reflections or transparency).
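In data terms, a bare-bones sketch of those three pieces might look something like this (the names and fields are mine, for illustration, not any particular renderer's):

    #include <array>
    #include <cstdint>
    #include <vector>

    // Shape: a triangle mesh, i.e. vertex data plus indices into it.
    struct Mesh {
        std::vector<std::array<float, 3>> positions;  // xyz per vertex
        std::vector<std::array<float, 2>> uvs;        // texture coordinates
        std::vector<uint32_t>             indices;    // 3 per triangle
    };

    // Texture: color looked up via the (u, v) coordinates above.
    struct Texture {
        uint32_t width = 0, height = 0;
        std::vector<std::array<uint8_t, 3>> rgb;      // width * height texels
    };

    // Material: how the surface responds to light.
    struct Material {
        std::array<float, 3> baseColor{1.0f, 1.0f, 1.0f};
        float roughness    = 0.5f;   // mirror-like vs. diffuse
        float transparency = 0.0f;
        const Texture* baseColorMap = nullptr;        // optional texture
    };

    // An object ties the three together; a scene adds lights and a camera.
    struct SceneObject {
        Mesh mesh;
        Material material;
    };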
People don't write the scene data by hand: they use tools to construct each object, often multiple tools for each data structure. Some tools focus on one feature: like ZBrush for "sculpting" a mesh object shape. Other tools can handle every step in the pipeline. For example, Blender can do modeling, rigging, animation, texturing and material definition, rendering, post-processing, and even video editing; and that's leaving out probably 95% of its entire feature set.
If you are interested at all in exploring 3D animation, I recommend downloading Blender. It's free software licensed under GPLv3, and runs well on every major platform. It's incredibly full-featured, and the UI is excellent. Blender is competitive with nearly every 3D digital art tool in existence; particularly for animation and rendering.
It's by and large mathematical software, like all renderers. So it isn't interactive in the way that software which lets you move a character model and sequence frames to make an animation is. It's kind of a 'kernel', in some sense, for animation and 3D modelling software.
The source files contain the algorithms/computations needed to solve the various equations that people involved in computer graphics research have come up with to simulate various physical/optical phenomena (lighting, shadows, water reflections, smoke, waves) as efficiently (fast) and usually as photorealistically as possible, for a single image (static scene) whose content (character/landscape models, textures) was already created in another program.
Since there are various different techniques for the simulation of one specific phenomenon, it's interesting to peek into the tricks used by a very large animation studio.
I have no experience with MoonRay, but it being a renderer, the answer would be... no.
The renderer is only one piece of the entire animated movie production pipeline.
Modeling -> Texturing ~ Rigging / Animation -> Post-processing effects -> Rendering -> Video editing
That's a simplified view of the visual part of producing a short or feature-length CGI film.
It is a lot of knowledge to acquire, so a production team is likely made of specialists and sub-specialists (lighting?) working to a degree together.
The best-performing software, especially given its affordability, is likely Blender. Other tools like Cinema 4D, Maya and of course 3ds Max are also pretty good all-in-one products that cover the whole pipeline, although pricey.
Start with modeling, then texturing, then animation, etc. Then dive into the slice that attracts you the most. Realistically you aren't going to ship a professional-grade film, so you may as well just learn what you love, and, who knows, perhaps one day become a professional and appear in the long credits list at the end of a Disney/Pixar or DreamWorks hit.
> Modeling -> Texturing ~ Rigging / Animation -> Post-processing effects -> Rendering -> Video editing
In animation (and VFX), editing comes at the beginning. Throwing away frames (and all the work done to create them) is simply too expensive. Handles (the extra frames at the beginning and end of a shot) are usually very small. I'd say <5 frames.
Also modeling & texturing and animation usually happen in parallel. Later, animation and lighting & rendering usually happen in parallel as well.
MoonRay is a renderer that creates photorealistic images of computer-generated 3D scenes, using a technique called Monte Carlo ray tracing. MoonRay can be used as part of an animation project, but it is not an animation tool itself. Instead, it is a rendering engine that produces the final images that make up the animation.
To create an animated movie using MoonRay, you would need to use other tools to create the 3D models, textures, and animations that make up the scenes in your movie. Some examples of these tools include Autodesk Maya, Blender, and Cinema 4D. These tools allow you to create and manipulate 3D models, animate them, and add textures and lighting to create the final look of your scenes.
In addition to these 3D modeling and animation tools, you would also need to have a basic understanding of computer graphics and animation principles. This includes concepts such as keyframe animation, camera movement, lighting, and composition.
Once you have created your 3D scenes, you can use MoonRay to render them into high-quality images that can be used in your final animated movie. MoonRay can render images on a single computer, or it can be used with cloud rendering services to speed up the rendering process.
In summary, MoonRay is a rendering engine that produces photorealistic images of 3D scenes created using other 3D modeling and animation tools. To create an animated movie using MoonRay, you would need to use additional tools to create the scenes and have a basic understanding of computer graphics and animation principles.
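For anyone unfamiliar with the term, "Monte Carlo ray tracing" just means the color of each pixel is estimated by averaging many randomly jittered ray samples, which is why renders start out noisy and clean up the longer they run. A toy sketch of that outer loop (my own illustration, not MoonRay's code):

    #include <random>

    struct Color { float r = 0, g = 0, b = 0; };

    // Stand-in for the actual path tracer: the real function would
    // intersect the scene, sample lights and materials, and recurse along
    // bounces. Here it just returns a constant so the sketch compiles.
    Color tracePath(float /*px*/, float /*py*/, std::mt19937& /*rng*/) {
        return {0.5f, 0.5f, 0.5f};
    }

    // Monte Carlo estimate of one pixel: average many jittered samples.
    Color renderPixel(int x, int y, int samplesPerPixel, std::mt19937& rng) {
        std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
        Color sum;
        for (int s = 0; s < samplesPerPixel; ++s) {
            Color c = tracePath(x + jitter(rng), y + jitter(rng), rng);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        float inv = 1.0f / samplesPerPixel;
        return {sum.r * inv, sum.g * inv, sum.b * inv};
    }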
i'm curious: what is the incentive for dreamworks to open-source this? surely having exclusive access to a parallel renderer of this quality is a competitive advantage to other studios?
I can imagine a few reasons why they'd do this, but some of it may just be 'why not'. Studio Ghibli has done the same thing with their animation software and it hasn't turned into a disaster for them. Making movies, especially movies that people will pay to watch is hard, and any serious competitors already have their own solutions. If people use moonray and that becomes a popular approach, competitors who don't use it are at a disadvantage from a hiring perspective. Also, DreamWorks controls the main repo of what may become a popular piece of tooling. There's soft power to be had there.
The competitive advantage is in storytelling, not necessarily visual fidelity. People will watch a somewhat worse looking movie with a better story than a better looking movie with a worse story. And honestly, can anyone really tell slightly worse graphical quality these days when so many animated movies already look good?
The exception, of course, is James Cameron and his Avatar series. People will absolutely watch something that looks 10x better because the visual fidelity itself is the draw, it's the main attraction over the story. This is usually not the case in most movies however.
The rendering in the Avatar movies is at the cutting edge. But quite apart from the very uninteresting storytelling, there's something there that just doesn't work for me visually - I don't know if it's the uncanny valley effect of the giant skinny blue people with giant eyes or what, but I'd definitely rather watch something creative and painterly like the Puss in Boots movie, or even something like The Last of Us, with CG visuals and VFX that aren't necessarily top of the line, but are well integrated and support a good story.
At this point every studio has their own renderer, Pixar has RenderMan, Illumination has one from MacGuff, Disney has their Hyperion, and Animal Logic has Glimpse.
> surely having exclusive access to a parallel renderer of this quality is a competitive advantage to other studios?
The renderer is an important part of the VFX toolkit, but there are more than a few production-quality renderers out there; some of them are even FOSS. A studio or film's competitive advantage is more around storytelling and art design.
Unreal is eating everyone's lunch. If they cannot get anyone else to contribute to their renderer, it will wind up getting shelved in favor of Unreal, with a lot of smaller animation studios already using Unreal instead of more traditional 3D rendering solutions like Maya.
I'm not really sure if they are competing with Unreal. Large studios will probably never use real time rendering for the final render unless it achieves the same quality. Dreamworks have built a renderer specifically for render farms (little use of GPUs, for example) which means they are not targeting small studios at all, rather something like Illumination Entertainment or Sony (think Angry Birds movie).
It has a Hydra render delegate so that is nice. Does Blender support being a Hydra client yet? It would be nice to have it supported natively in Blender itself. If it did, one could easily switch renderers between this and others.
I understand Autodesk is going this way with its tooling.
> It would be nice to have it supported natively in Blender itself. If it did, one could easily switch renderers between this and others.
Blender in general is set up to work with different renderers, especially since the work on Eevee, which is the latest renderer to be added. Some of the work on integrating Eevee also laid groundwork for making it easier to add more of them in the future.
Most probably this renderer would be added as an addon (if someone in the community does it), rather than in the core of Blender.
I had the same question. There exists a USD addon for Blender that supports Hydra, so you could probably get that to work with a bit of trial and error!
Is anybody else intrigued by the mention of multi-machine and cloud rendering via the Arras distributed computation framework?
Is this something new? The code seems to be included as sub-modules of OMR itself, and all the repos[1][2][3] show recent "Initial Commit" messages, so I'm operating on the assumption that it is. If so, I wonder if this is something that might prove useful in other contexts...
I can maybe add a bit of context to this. I worked on Moonray/Arras at DWA about 8-9 years ago.
Arras was designed to let multiple machines work on a single frame in parallel. Film renderers still very much leverage the CPU for a lot of reasons, and letting a render run to completion on a single workstation could take hours. Normally this isn’t a problem for batch rendering, which typically happens overnight, for shots that will get reviewed the next day.
But sometimes it’s really nice to have a very immediate, interactive workflow at your desk. Typically you need to use a different renderer designed with a more real-time architecture in mind, and many times that means using shaders that don’t match, so it’s not an ideal workflow.
Arras was designed to be able to give you the best of both worlds. Moonray is perfectly happy to render frames in batch mode, but it can also use Arras to connect dozens of workstations together and have them all work on the same frame in parallel. This basically gives you a film-quality interactive lighting session at your desk, where the final render will match what you see pixel for pixel because ultimately you’re using the same renderer and the same shaders.
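A nice property of Monte Carlo rendering that makes this kind of distribution possible is that independent machines can each accumulate their own samples for the same frame, and the results can be merged with a sample-count-weighted average. A rough sketch of that merge step under that assumption (my illustration, not the actual Arras protocol):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Partial result from one machine: summed (not yet averaged) radiance
    // per pixel, plus how many samples went into each pixel.
    struct PartialFrame {
        std::vector<float>    rgbSum;       // 3 floats per pixel
        std::vector<uint32_t> sampleCount;  // samples per pixel
    };

    // Merge partials from many machines by dividing total summed radiance
    // by total sample count, pixel by pixel.
    std::vector<float> mergeFrames(const std::vector<PartialFrame>& parts,
                                   std::size_t pixelCount) {
        std::vector<float>    rgbSum(pixelCount * 3, 0.0f);
        std::vector<uint64_t> samples(pixelCount, 0);

        for (const PartialFrame& p : parts) {
            for (std::size_t i = 0; i < pixelCount; ++i) {
                rgbSum[i * 3 + 0] += p.rgbSum[i * 3 + 0];
                rgbSum[i * 3 + 1] += p.rgbSum[i * 3 + 1];
                rgbSum[i * 3 + 2] += p.rgbSum[i * 3 + 2];
                samples[i]        += p.sampleCount[i];
            }
        }

        std::vector<float> image(pixelCount * 3, 0.0f);
        for (std::size_t i = 0; i < pixelCount; ++i) {
            if (samples[i] == 0) continue;
            const float inv = 1.0f / static_cast<float>(samples[i]);
            image[i * 3 + 0] = rgbSum[i * 3 + 0] * inv;
            image[i * 3 + 1] = rgbSum[i * 3 + 1] * inv;
            image[i * 3 + 2] = rgbSum[i * 3 + 2] * inv;
        }
        return image;
    }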
Neat! Parallelizing a single frame across multiple machines was something I'd wanted to try back when I was working on RenderMan. It used to be able to do it back in the REYES days via netrender, but that was something we lost with the move to path tracing on the RIS architecture.
Could you go into a bit more detail on how the work is distributed? Is it per tile (or some other screen-space division like macro-tiles or scan-lines)? Per sample pass? (Surely it's not scene distribution like the old Kilauea renderer from Square!) Dynamic or static scheduling? Sorry, so many questions. :-)
http://www.tabellion.org/et/paper17/MoonRay.pdf
And for this use case, it makes perfect sense!
They probably have a mountain of intrinsics written, but it doesn't seem like AMD or Intel are going to replicate the Xeon Phi again.
It feels like Intel kinda hates AVX512 on the CPU side (or wants to upsell you for it), so I'm wondering if they turned those cards into their GPUs.
Thank you!
Pixar did a presentation on bringing GPU rendering to Renderman, which goes over some of the challenges: https://www.youtube.com/watch?v=tiWr5aqDeck
Renderman for Blender:
https://rmanwiki.pixar.com/pages/viewpage.action?mobileBypas...
ISPC is also used by Disney's Hyperion renderer, see https://www.researchgate.net/publication/326662420_The_Desig...
Neat!
^ https://ispc.github.io/
E.g.: can I make an animated movie using only MoonRay? What other tools are needed? And what knowledge do I (we) need to do that?
[1] https://github.com/GPUOpen-LibrariesAndSDKs/BlenderUSDHydraA...
https://techcrunch.com/2016/04/28/comcast-to-acquire-dreamwo...
Perhaps they've moved on to a new renderer.
Why do you think this? Nobody in film or VFX is using Unreal for final rendering; Unreal is built for games, not offline path tracing.
> Does Blender support being a Hydra client yet?
https://projects.blender.org/BogdanNagirniak/blender/src/bra...
[1]: https://github.com/dreamworksanimation/arras4_core
[2]: https://github.com/dreamworksanimation/arras4_node
[3]: https://github.com/dreamworksanimation/arras_render