Readit News
mbf1 · 5 years ago
There's a really important point to grok: AAA titles are more about asset management and art than they are about coding. There is no silver bullet to simplifying the creation of games and artwork. Roblox is as close as you get today because it takes a ton of the work out of making many kinds of games and starts children with enough templates and community created free art to get rapidly started. Teens and young adults who started at Roblox have made some very impressive games: World // Zero, Arsenal, and Bee Swarm Simulator all come to mind.

If your game is close to a starting template, it's fast to create something fun and focus on iterating with players. The further you are from a template, the more effort is involved. At some point the effort equals or exceeds that of the other platforms; however, most kids learning on Roblox don't have the skills to start with Unity or Unreal Engine.

Taking a step back and shifting to GPU hardware, NVIDIA CUDA is the Roblox of the GPGPU world - a one-stop shop for really great templates and tools that get you 90% of the way to your scientific goal. That last 10% can actually be more like 90% if you're in an area where the platform is missing something (this is universally true for all platforms).

The computing industry is full of tradeoffs and people re-learning and re-creating patterns to solve similar problems to those that were solved 5, 10, 15, and 20 years prior.

reader_mode · 5 years ago
> There's a really important point to grok: AAA titles are more about asset management and art than they are about coding

If that were true, there wouldn't be so many logic and code bugs in AAA titles.

From what I've seen, the game industry has terrible software engineering practices - why have automated testing when your model is to crunch to release and then leave a skeleton crew fixing the bugs after you've shipped?

Also, being stuck in C++ doesn't help: an ecosystem with bizarrely the most complicated frameworks I've ever seen (e.g. Boost) and yet the worst tooling of anything I've used with a comparable adoption rate.

jaaron · 5 years ago
You're misunderstanding what the parent poster was saying: when working on a large AAA game, content management (art, game design, etc.) is as much a bottleneck as engineering efforts. And a number of those bugs you're concerned about are rooted in content too, not lower level engine bugs.

I've worked about half of my career in the game industry. I've practiced TDD and written automated tests (and frameworks) for desktop, web and mobile apps. Some of those have been in the medical industry where the testing is crucial. I say this to make it clear that I'm familiar with solid software engineering practices.

With that in mind, games are the hardest software I've encountered for writing automated tests. It's just notoriously difficult to do in an effective manner. It's not impossible, but it's incredibly difficult.

Kapura · 5 years ago
Extremely bad take, from somebody who has clearly not worked in AAA games, but believes all the things that gamers post on reddit. A few points:

1) You can only test-driven-develop so much in games, and that line usually stops at the engine level because the game itself is in flux so much. Automated testing is confined to making sure that check-ins build on every platform. Game dev engineering is fundamentally different from programming in other fields because the goalposts move constantly.

2) If an engineer writes code expecting there to be no more than 1024 physics objects in the system and tells the designers and artists this, but then they turn around and put 2000 colliding pieces of silverware on a dinner table "because it needs to feel like a big feast", is that an engineering problem, or an art and asset management problem? Because something like 80% of my bugs are shit like this.
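One hypothetical way to push that constraint into tooling: a content-validation pass that reports budget overruns as content problems before they surface as "engine bugs". The 1024 limit and scene format below are invented for the sketch.

```python
# Hypothetical content-validation pass: flag scenes that exceed an
# engine-side physics budget at check-in time. The limit and the
# scene representation are made up for illustration.

MAX_PHYSICS_OBJECTS = 1024

def validate_scene(scene_objects):
    """Return a list of human-readable budget violations for a scene."""
    physics_objects = [o for o in scene_objects if o.get("physics", False)]
    violations = []
    if len(physics_objects) > MAX_PHYSICS_OBJECTS:
        violations.append(
            f"{len(physics_objects)} physics objects exceeds the "
            f"engine budget of {MAX_PHYSICS_OBJECTS}"
        )
    return violations

# A dinner table with 2000 colliding pieces of silverware fails the check:
feast = [{"name": f"fork_{i}", "physics": True} for i in range(2000)]
print(validate_scene(feast))
```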

3) Professional game codebases use their own styles (I hesitate to say dialects) of C++ that do the things we need them to do, the way we need to do them. We don't use anybody else's framework; all of the bonus stuff we're doing lives in macros that can be inspected if an issue arises. But please don't push your language purism on anybody else. What a tired argument to have.

flohofwoe · 5 years ago
You'll have to search very long to find a C++ game code-base that uses boost, game devs are not that stupid ;)

Also, Unity games are usually written in C# (I think it's quite safe to say that - overall - most games are written not in C++ but in C#), yet I've seen no data so far indicating that Unity games have any fewer problems during production and after release than games written in C++ (if anything, the opposite seems true - not for technological reasons, but because Unity is so much more beginner-friendly).

I'm no fan of C++ either, but blaming a programming language for bugs and quality problems without any counterexamples at hand is a bit ridiculous.

senko · 5 years ago
> From what I've seen game industry has terrible software engineering practices.

You'd be surprised. Here's a talk from Croteam on how they test their games and engine: https://m.youtube.com/watch?v=YGIvWT-NBHk

I'd wager all major engines are exhaustively tested.

Trouble is, there's a combinatorial explosion of game state, user input, assets, scripted behaviour and engine configuration, so there's a huge area to cover.

meheleventyone · 5 years ago
> If that was true there wouldn't be so much logic and code bugs in AAA titles.

Those are pretty different concerns tackled by different groups of people on AAA projects. The art requirements can be an order of magnitude larger than the gameplay side of things. That doesn’t make the gameplay side of things easy.

thom · 5 years ago
Feels like the inevitable rise of AI powered content generation will at least free up _some_ resources at some point, right?
somehnrdr14726 · 5 years ago
During the indie dev renaissance of the 2010s, procedurally generated content received a lot of investment and attention. (Today's marketers would call this AI-generated content.)

The idea was that your small indie team could keep up with the big content demand since you had algorithms generating seemingly evergreen content for your players.

What happens in practice is that the players begin playing your game at a meta level, learning how the generative process itself works; effectively removing the benefit of procgen content. As an example, consider Spelunky which claims it can generate millions of unique caverns to explore. If you assess that claim visually, it's true. But watch the streamers play and you will see that they 'speak' the algorithm. By observing the shape of a cavern in one area of a map, they know the algorithm had to make a specific concession elsewhere in the map. So this content isn't really procedural for them anymore.
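A minimal sketch of the kind of constraint players learn to read (this is not Spelunky's actual generator): a random walk that guarantees a solvable path through a grid of rooms. Once you know the walk only ever moves left, right, or down, spotting part of the path predicts the rest.

```python
import random

# Minimal Spelunky-style layout constraint (NOT Spelunky's actual
# algorithm): a random walk guarantees a solvable path from the top
# row to the bottom row of a grid of rooms. The constraint itself is
# what streamers learn to "speak".

def generate_layout(width=4, height=4, seed=None):
    rng = random.Random(seed)
    x, y = rng.randrange(width), 0
    path = [(x, y)]
    while y < height - 1:
        # Only legal moves are offered, so the walk always terminates.
        moves = ["down"]
        if x > 0:
            moves.append("left")
        if x < width - 1:
            moves.append("right")
        move = rng.choice(moves)
        if move == "left":
            x -= 1
        elif move == "right":
            x += 1
        else:
            y += 1
        path.append((x, y))
    return path  # rooms on the guaranteed solution path

layout = generate_layout(seed=42)
```

Each seed yields a "unique" layout, but every layout obeys the same predictable invariants - which is exactly the meta-knowledge the comment describes.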

Even if the "AI" content generators get more intelligent, it won't free up resources. Mechanically interesting content, as demonstrated by the indie games of the last decade, is just a different form of handmade content. A designer hand made the procgen algorithms. Players ultimately bond with the designer(s), no matter what meta level they build the content at. In a hand-built game you might start to get a sense for where the designers hide treasure chests. In a procgen game you learn the algorithms themselves and how to predict and abuse them.

There's another form of game content - story content - which remains an edge for humans. Algorithms, or "AI" if you must, can't compete here. It would be the same as waiting for an AI that can write award-winning movie scripts.

The marriage of story content and mechanical content is a superior game-making formula to the procgen approach. So the _some point_ you refer to is probably far away still.

belugacat · 5 years ago
Jevons paradox [0] suggests that it won't; the freed-up resources will just be used for other things.

Tooling for 3D modeling/texturing/rigging/etc. is significantly more complex and powerful than it was 20 years ago, yet Pixar doesn't need fewer artists for a movie today than for Toy Story - in fact, quite the opposite.

AI techniques useful to artists will get folded into the tooling and enable artists to make even more detailed/complex games & movies, but that doesn't mean the AAA games of 2030 will require fewer artists.

However, talented small teams will likely be able to leverage them to create things that would have been inconceivable from a small team a decade ago.

0. https://en.m.wikipedia.org/wiki/Jevons_paradox

mediaman · 5 years ago
Tools to make art content have gotten dramatically better in the last 10 years. Systems such as Substance, which create algorithmic textures to replace manual texture creation in Photoshop, have resulted in at least a 10x increase in productivity.
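A toy illustration of what "algorithmic textures" means at the bottom of the stack (Substance's real node graphs are far richer): 2D value noise, where every pixel is computed rather than painted. The lattice-hashing scheme below is an arbitrary choice for the sketch.

```python
import math
import random

def value_noise(x, y, seed=0):
    """2D value noise: random values at integer lattice points,
    smoothly interpolated in between. Hashing lattice coordinates
    into an RNG seed is an arbitrary choice for this sketch."""
    def lattice(ix, iy):
        # hash() of an int tuple is deterministic within a process
        return random.Random(hash((ix, iy, seed))).random()
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # smoothstep weights avoid visible grid-aligned creases
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) * (1 - sx) + lattice(ix + 1, iy) * sx
    bottom = lattice(ix, iy + 1) * (1 - sx) + lattice(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bottom * sy

# An 8x8 tile sampled at a low frequency: every value is computed,
# no hand-painted pixels anywhere.
tile = [[value_noise(u / 4, v / 4) for u in range(8)] for v in range(8)]
```

Real tools chain many such generators and filters; the productivity win is that changing one parameter regenerates the whole texture.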

Naturally, one may wonder if this has made the labor market for artists disappear. But it hasn't. The demand for high-quality art skyrocketed because the tools made it economical. (In some ways the market for artists has gotten smaller, because there is less appetite for low-skill Photoshop monkeys, but highly skilled technical artists are much more in demand, because we are starting to see dramatic differences in productivity between artists, just as we have in engineering.)

So, yes, we will see AI in content generation, but it isn't going to take the form of removing art bottlenecks. It will manifest as new tooling for artists, who will become more productive; there will be even fewer artists with good mastery of these tools, and the demand for quality will increase further. That will lead to bottlenecks similar to today's, though I believe artists will be paid better, and it will become even tougher to break in.

HellDunkel · 5 years ago
It probably already is. Texture de-lighting is already on the horizon; combined with infinite texture scaling, this will be a great help for artists.
ReactiveJelly · 5 years ago
So that demand can expand to fill all available space? Yes.
Andrex · 5 years ago
I wonder if that's a little "be careful what you wish for"...


lmm · 5 years ago
> In fact, it's easier for AAA to become relatively irrelevant (compared to the overall market size - that expands faster in other directions than in the established AAA one) - than for it to radically embrace change.

This has already happened. AAA is now a small and ever-shrinking fraction of the overall games market.

Will raytracing make the best engine that a 30-person team can build in 2 years any simpler? No, because "the best engine that a 30-person team can build in 2 years" is at a particular level of complexity by definition. Will it make the gap between the best engine that a 30-person team can build in 2 years and the best engine that one guy in his bedroom can build much smaller? Yes, yes it will.

Custom game engines are already dinosaurs; most of the money in games is elsewhere. I'm sure they'll continue to be produced, just like you can still pay a lot of money for a mainframe today. Mainframes were never defeated, not exactly - they can still do things that commodity hardware can't do. But they just became irrelevant.

smt88 · 5 years ago
> AAA is now a small and ever-shrinking fraction of the overall games market.

I don't know how you can know this. No one has:

- a widely agreed-upon definition of AAA (budget? team size? total work-hours that went into it?)

- an estimate of how much revenue AAA games generate from subscriptions, DLC, and any other add-ons

Without both of those, how can you even guess at AAA games' collective market share? Citation definitely needed.

> Custom game engines are already dinosaurs; most of the money in games is elsewhere. I'm sure they'll continue to be produced, just like you can still pay a lot of money for a mainframe today.

What does raytracing have to do with the popularity of custom engines?

Whether I'm planning to build a custom engine or license an existing engine, aren't we discussing whether my job is easier with raytracing than without it?

Put another way: it seems like the article is comparing reusable engines with raytracing to reusable engines without it, as well as comparing one-off engines with raytracing to one-off engines without it.

I don't think being able to use raytracing is going to move any dev team from one option to the other. If they had the desire and resources to build a custom engine, they'll probably still do that (against all logic).

> Mainframes were never defeated, not exactly - they can still do things that commodity hardware can't do. But they just became irrelevant.

Off topic, but this is not a good analogy. The "cloud" is just a network of mainframes, which means mainframes are arguably the dominant form of computing (and will become more dominant as thin clients, like Stadia, rise in popularity).

Mainframes were not necessarily purpose-built, unique machines. They were instead defined by their sharing model, which is where the term "personal computing" comes from -- as a contrast to the mainframe/server model.

lmm · 5 years ago
> What does raytracing have to do with the popularity of custom engines?

Raytracing is being used as a synecdoche for technological advances in rendering. The point is that as off-the-shelf rendering improves, the advantages of a vertically integrated engine diminish.

> Off topic, but this is not a good analogy. The "cloud" is just a network of mainframes, which means mainframes are arguably the dominant form of computing (and will become more dominant as thin clients, like Stadia, rise in popularity).

> Mainframes were not necessarily purpose-built, unique machines. They were instead defined by their sharing model, which is where the term "personal computing" comes from -- as a contrast to the mainframe/server model.

That's a very essentialist perspective; there are a lot of ways mainframe computing differs from typical personal computing and no single one of them is definitive. As a programmer, a mainframe offered you a reliable computing environment, because they're built to be highly available from the hardware up. That's very much not the case with the cloud, which follows a worse-is-better approach where your jobs can be terminated whenever and you're expected to handle it. While shared computing resources may be making a comeback, I don't think that kind of ground-up high availability will, so while the cloud has some aspects in common with mainframes, it's very much not the same thing.

asdfasgasdgasdg · 5 years ago
> Custom game engines are already dinosaurs; most of the money in games is elsewhere.

It's interesting you say that, because a lot of the biggest games that get released are on custom engines. I'm most familiar with FPSes, but as I understand it, the CoD games, the Battlefield games, Destiny, the Halo games, CS:GO, Valorant, Cyberpunk, etc. are all on "custom" engines -- i.e. engines that are not provided commercially by third parties. I'm actually having trouble thinking of a big-name FPS that is on Unreal or Unity.

(I don't know how big these games are compared to mobile games, but I can say for sure that they are big enough that studios seem to be investing more and more on them, rather than less as you would expect if they were economically insignificant.)

EugeneOZ · 5 years ago
I can give one hint: Epic :)

p.s.: I share your opinion

ekianjo · 5 years ago
> AAA is now a small and ever-shrinking fraction of the overall games market.

This entirely depends on how you define AAA.

Rapzid · 5 years ago
Here is a list of the top 10 selling games from 2018. Nearly all of them used in-house engines:

* Red Dead Redemption 2

* Call of Duty: Black Ops 4

* NBA 2K19

* Madden NFL 19

* Super Smash Bros. Ultimate

* Marvel’s Spider-Man

* Far Cry 5

* God of War 2018

* Monster Hunter: World

* Assassin’s Creed: Odyssey

Further, what I would call the poster triplets for ray tracing - Control, Metro Exodus, and CP 2077 - all use in-house engines.

lmm · 5 years ago
> Here is a list of the top 10 selling games from 2018. Nearly all of them used in-house engines:

Sure - those are the AAA games we're talking about (and even then, a lot of those are long-running franchises that reuse a significant chunk of engine code between multiple installments). The biggest sales numbers will keep coming from that segment for a long time. But it's a shrinking segment in terms of overall revenue (much of which is no longer up-front sales), and even more so if you look at profit; each year the line where it makes sense to use a custom engine gets higher, and a bigger share of money coming in is from games built on commodity engines.

EugeneOZ · 5 years ago
> Custom game engines are already dinosaurs; most of the money in games is elsewhere

Not sure if it's true. Look at the profits of GTA, Cyberpunk, Witcher.

charrondev · 5 years ago
Profits of Cyberpunk? It's been delisted from one platform, with an open refund policy on two platforms, because of how buggy it is.

If anything that tells me they may have been better off investing that developer time into the game instead of a custom engine.

Dirlewanger · 5 years ago
They may be ever-shrinking, but their share of the revenue pie is growing.
hyperpallium2 · 5 years ago
I think the key benefit of raytracing is that of dynamic lighting in general: artists don't need to pre-bake lighting, they can just do it like live-action lighting, with faster iteration times. Game studios hire cinematographers etc. A revolution in AAA art workflow.

We're definitely in the hybrid zone for this console generation, though PS5 lead Mark Cerny said he'd been surprised to see some full raytracing (surely tech demos). Maybe PCs can do it, especially with the 3090 and the following years' cards, but AAA seems mostly console-first (though e.g. Cyberpunk 2077 is PC-first). Cross-generation games still need the old workflow, so you don't save anything until a whole studio can go fully next-gen exclusive... so, perhaps 2-3 years after the PS6 generation launches in 7 years: 2030. Or the generation after that.

A couple of boundary counterpoints: most game engines aren't written in assembly despite it being faster; most CGI movies are fully raytraced (though looking at Soul, they seemed to choose a less realistic Special World that was easier to render).

pixel_fcker · 5 years ago
> though looking at Soul, they seemed to chose a less realistic Special World that was easier to render

Interestingly, my wife said the same thing when watching it, but I think the opposite is true. Everything in the Great Before is volumetric, so not easy to render at all.

2OEH8eoCRo0 · 5 years ago
As a total outsider in this space that's my takeaway. Ray tracing seems to "solve" lighting rather than have a multitude of different hacks for different situations.
jayd16 · 5 years ago
It's a truism that, by definition, the key benefit of realtime ray tracing is the real time, as you say. We already have baked/non-realtime ray tracing.

As for cinematographers or some other change...it's all the same. Studios already think about lighting and framing and everything else in the scene. This won't be a revolution in that aspect of game design.

nightcracker · 5 years ago
The author's argument fundamentally boils down to 'people want cutting-edge graphics, and that always involves squeezing the last bit of performance from hardware with complex tricks'.

And while true for now, I think the title is plain wrong in the longer term. There will come a time in which hardware is sufficiently powerful that one no longer needs these tricks to create top graphics. And physically based raytracing absolutely will simplify all of rendering to a couple core components when that time comes.

Const-me · 5 years ago
Optics is incredibly complicated, because matter is incredibly complicated.

Ray tracing is good for rendering surfaces, reflection and refraction, but what we see is not limited to these things.

Volumetric stuff is all over the place: fire and other plasma, smoke and other aerosols, other forms of mixed-state matter. We also have lots of liquid water on our planet; when it interacts with solid things and/or air, the visual complexity of the scene simply explodes. Look at boiling water, or streams of water, or a shoreline. And to complicate things even more, some visuals depend on the wave nature of light: a soap bubble, a rainbow, or a DVD disk.

Here's a roughly 15-year-old tech demo: https://www.youtube.com/watch?v=HsWh66MvqBg

Stuff like the brick wall with those shadows and reflections is an awesome fit for RT. The final scene is also good. However, for the close-up scenes of the ground with water pouring all over the stuff, or that glass window with dripping water at 1:30-1:45 - RT won't help a bit. And the rain / water / wet surfaces are about half of their shaders: https://developer.amd.com/wordpress/media/2012/10/ToyShop-Eu...

killtimeatwork · 5 years ago
How long does it take to render a frame from Disney's newest CGI movie release? 12 hours? So yeah, once hardware improvements bring that 12 hours down to 16 milliseconds, then we won't need the hacks...
cuban-frisbee · 5 years ago
If you take The Mandalorian, which is arguably Disney's hottest ongoing project, it happens almost entirely in realtime inside the Unreal engine, using their StageCraft LED dome screen.

https://www.starwars.com/news/the-mandalorian-stagecraft-fea...

asutekku · 5 years ago
Disney's latest CGI films are rendered at a much higher resolution, with multiple denoising passes and other settings maxed out to hide even the smallest instances of grain. One would get a similar result in much less time, and the differences would not be noticeable to 99% of people, if they were to turn down some settings a bit.
dahart · 5 years ago
While that gap between film and games has been large for a long time, it is definitely closing now, with the latest generation of film renderers based on ray tracing hardware and a large push in the film industry to incorporate real-time pre-vis workflows and reduce iteration times.

It's very common for film CG frames to take a few hours, but not usually 12 hours. You need to be able to start rendering jobs at the end of the day and have them ready for dailies in the morning. Jobs that take more than 4 hours risk clogging the queue and taking 2 days per iteration rather than 1, so people are motivated to keep it reasonable.

Another important difference between games and film is antialiasing techniques. Much of the difference in render time comes from fancier texture sampling and using far more samples than games are willing to use, or even need, for decent quality.

xiphias2 · 5 years ago
Actually, people don't expect realistic simulation. They expect better. I love the original Lion King, but the remake was just plain boring with its realistic graphics.
josefx · 5 years ago
> 16 milliseconds

Way too slow for VR, try 5 ms just to be sure.

bonzini · 5 years ago
Backdrops for the Mandalorian are rendered in real time.
kllrnohj · 5 years ago
16ms is too slow. Gaming has long since moved past 60Hz. 120Hz to 360Hz are the current targets; even the latest-generation consoles are pushing 120Hz modes.
forrestthewoods · 5 years ago
> There will come a time in which hardware is sufficiently powerful that one no longer needs these tricks to create top graphics.

I deeply wish this were true. But it doesn't seem likely to me.

Games are real-time. And frame rate expectations are getting higher (60, 90, and even 120Hz) which means frame times are getting shorter (16ms, 11ms, 8ms). Rasterization is likely to always be faster than raytracing so the tradeoff will always be there.

Pixar movies look better and better every year. The more compute they can throw at the problem, the better pixels they can render. Their offline rendering only gets more complex and higher quality every year. It's so complex and expensive that I'm genuinely afraid small animation studios simply won't be able to compete soon.

Maybe raytracing will be "fast enough" that most gamedevs can use the default Unity/Unreal raytracer and call it a day. But imho the author is spot on that AAA will continue to deeply invest in highly complex, bespoke solutions to squeeze every last drop and provide competitive advantage.

irjustin · 5 years ago
FWIW i think it's possible. The question is whether it's within our lifetime.

There's an upper limit to what's useful - the real world. If our minds can't comprehend it, you've hit the upper limit of practical use. Once in-game graphics become literally indistinguishable from real life, we'll start to plateau in terms of complexity of "tricks", and raw computing power will be able to catch up.

rebuilder · 5 years ago
Coming from the VFX field, I think that point is pretty far off still. Even with current gen offline renderers running on renderfarms, we're left doing cost-benefit analysis on rendertime vs. quality issues. (Noise being the big one). Real-time, fully generic, photorealistic rendering is decidedly not here yet, and I seriously doubt it's around the corner, either.
TwoBit · 5 years ago
Monitor resolution and frame rate inflation seem to have endpoints at 4K or 8K and 144Hz. Scene detail still has some way to go before diminishing returns and so that will consume years of hardware evolution. I agree with you in principle but I don't think we're within a decade of that yet.
modeless · 5 years ago
VR/AR will continue past 8K to 16K and beyond. 240+ Hz will be desirable there too. We're going to need to keep squeezing the hardware for all it's worth for the foreseeable future. I doubt there will be "sufficiently powerful" hardware to drive that kind of workload without heroic optimization work within my lifetime.
Ygg2 · 5 years ago
I think after 144Hz you hit a point of really diminishing returns. Is it even possible to feel 240Hz vs 144Hz in a blind test? It might give you something like a 1% in-game advantage for +100% the cost.
the8472 · 5 years ago
> Monitor resolution and frame rate inflation seem to have endpoints at 4K or 8K and 144Hz.

laughs in light fields

simion314 · 5 years ago
I've seen many indie games made with Unreal Engine or Unity that have terrible performance rendering a simple level, where an AAA game like Tomb Raider uses fewer resources and has 10x more detail. So when the hardware gets so accessible that indie developers can deliver graphics similar to today's AAA games, the AAA games will still do it 10x better by having engineers optimizing things.
ChiefOBrien · 5 years ago
It's not the engine's fault that simple levels perform terribly. Most indie devs don't bother with LODs, occlusion culling, or turning off unused features. Decades-old tricks like turning off unseen/far-away actors and merging meshes and materials to bring down draw calls are also alien to the minds behind asset flips. Both engines provide many monitoring tools and utilities to fine-tune performance and settings at each quality level...
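The distance-based part of those tricks fits in a few lines; the thresholds below are illustrative, not any engine's defaults.

```python
# Sketch of "decades old tricks": pick a level of detail (LOD) per
# object by camera distance, and cull objects past a draw distance
# entirely. Thresholds are invented for the example.

LOD_THRESHOLDS = [10.0, 30.0, 80.0]  # distances for LOD0/LOD1/LOD2
DRAW_DISTANCE = 150.0

def select_lod(distance):
    """Return the LOD index for an object, or None to cull it."""
    if distance > DRAW_DISTANCE:
        return None  # too far away: don't draw at all
    for lod, limit in enumerate(LOD_THRESHOLDS):
        if distance <= limit:
            return lod
    return len(LOD_THRESHOLDS)  # lowest-detail fallback mesh

print(select_lod(5.0))    # → 0 (full detail)
print(select_lod(50.0))   # → 2
print(select_lod(200.0))  # → None (culled)
```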
kyriakos · 5 years ago
One recently released game has shown that most games are simply unoptimized, and that is Doom Eternal. It performs substantially better than any other game released in 2020 at the same resolutions, even on old GPUs.
0-_-0 · 5 years ago
I don't think so, more rays will equal better graphics for a veeery long time. The Rendering Equation [0] is basically an infinite dimensional recursive integral, so computing it will be expensive. Today's Nvidia RTX hardware doesn't even come close to computing it properly, it can still only do bad approximations.

[0] https://en.wikipedia.org/wiki/Rendering_equation

makapuf · 5 years ago
In a different context, look at text rendering. It should be at the opposite end of the spectrum, stabilized long ago, yet we keep adding rendering tricks and simplifications (not at the scale of 3D rendering, but keep in mind how simple the problem looks by comparison).
crazygringo · 5 years ago
I think it's the opposite.

Text rendering used to involve complicated hinting and then subpixel rendering.

These days on Retina screens you don't need hinting or subpixel rendering.

You just render it like any set of curves, no longer any tricks about it being a font, because it's hi-res enough.

What rendering tricks are you suggesting are being added rather than being taken away, for fonts?

reitzensteinm · 5 years ago
I honestly wouldn't be surprised if Moore's Law dies off before we get to a point where a heavily optimized graphics pipeline isn't worth it.

I think the author makes a good point at the high end, but Raytracing is going to simplify the crap out of whatever we have that's like Unity in a decade or two (hopefully not Unity).

So many issues - transparency sorting, AO & GI, depth of field, baked and real-time shadows and reflections - whose complexities you basically don't have to deal with as an end user.

But you'll be leaving a lot of performance on the table with a one size fits all solution.

kaba0 · 5 years ago
Moore's law ended quite a few years ago - or do you mean it regarding GPUs? Because I don't know about that.

But even though parallelism can be increased, the gains from it are limited, as per Amdahl's law.
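Amdahl's point can be made concrete with the standard formula; the 95%/1024-processor numbers below are just an example.

```python
# Amdahl's law: with parallel fraction p of a workload and n processors,
# overall speedup is bounded no matter how large n grows.

def amdahl_speedup(p, n):
    """Speedup for parallel fraction p (0..1) on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel workload tops out fast: 1024 processors give
# less than 20x, and the n -> infinity limit is 1 / (1 - p) = 20x.
print(round(amdahl_speedup(0.95, 1024), 1))  # → 19.6
```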

Rapzid · 5 years ago
I don't believe that's going to happen in the foreseeable future. Just like in film CGI the hardware is always going to continue to chase the best images content creators would like to create.

Hybrid render pipelines and AI-enhanced RT passes are the future.


jeroenhd · 5 years ago
That all depends on innovation from hardware manufacturers. For a computer graphics course, I wrote a ray tracer in C# with some compute shaders that runs at 10 to 15 fps in a window on my 7700K and GTX 1080. Not particularly impressive, no, but thirty years ago games ran at this speed on similarly priced hardware.

Where traditional rendering code is all hacks and tricks that simulate real life, raytracing is simply a set of formulas borrowed from a physics textbook. An engine casting more realistic shadows than a ten-million-line rendering library can take less than a thousand lines of code.
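A taste of those textbook formulas, assuming nothing beyond the quadratic formula and Lambert's cosine law (a toy sketch, not the commenter's actual C# code):

```python
import math

# Intersect a ray with a sphere (quadratic formula) and shade with
# Lambert's cosine law - the core of a toy ray tracer in two functions.

def intersect_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None (direction must
    be normalized, so the quadratic's leading coefficient is 1)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def lambert(normal, light_dir):
    """Diffuse intensity: cosine of the angle between unit vectors."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# A ray straight down the -z axis hits a unit sphere centered at z = -4:
t = intersect_sphere((0, 0, 0), (0, 0, -1), (0, 0, -4), 1.0)
print(t)  # → 3.0
```

Loop that over every pixel and every light and you have a primitive ray tracer; the hard parts (acceleration structures, sampling, denoising) are what the rest of this thread is about.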

The current raytracing acceleration is intended to boost some highlights and shadows in practice. The hardware isn't powerful enough or optimised for rendering real, full-screen games. Even a 3090 will have trouble rendering some Minecraft scenes, for example.

If the right hardware ever becomes available for the right price with enough consumers, proper raytracing engines will be feasible and complicated render paths will eventually be much simplified. Perhaps not to the point that they can be, no, but they'd be much simpler than the engines we're using now. We won't be getting Disney levels of quality, even at our own computers' resolution, but we don't need that. Equivalent to today's graphics but with proper lighting everywhere is an amazing graphical advancement already. Time will tell if that will ever happen.

datenwolf · 5 years ago
> The current raytracing acceleration is intended to boost some highlights and shadows in practice.

A couple of days ago, while gaming with a few friends, I noticed a transparency ordering issue in the game we played, and while explaining to them how and why that happens, I realized something:

The raytracing accelerators we now have at our disposal could also be used to implement proper primitive-level transparency ordering on the GPU: even if you don't spawn secondary rays, just being able to do a sparse ray-triangle hit test through the whole scene lets you build the render-order list in situ.

DavidVoid · 5 years ago
Transparency is still very expensive, unfortunately. DICE spoke about it in a GDC talk last year and recommended that the any-hit shaders used for alpha testing (i.e., transparency) be avoided if possible, since they're rather expensive.

Their big problem is trees, since leaves are modeled as flat squares with the actual leaf shapes in textures (completely transparent around the leaves).

"Trees are our biggest problem in ray tracing. No matter what we do they're by far the slowest thing to trace, and they're much worse than on rasterizer, and if you have any ideas of how to ray trace trees efficiently, we're very happy to hear them. We've asked a lot of very smart people and no one has come up with a good answer yet. You can choose between geometry and alpha testing and both are kinda crap." - It Just Works: Ray-Traced Reflections in 'Battlefield V', GDC 2019.

Impossible · 5 years ago
Unfortunately, it's still extremely expensive to do so. The DXR documentation tells you to keep any-hit shaders simple.

It's also possible to fix transparency ordering in rasterization using OIT, and there is some hardware support for it (ROVs), but it's also expensive and can be hard to get right, so many games choose rough sorting.

the8472 · 5 years ago
Naive raytracers fall down on many edge-cases and need a ton of optimizations that might not quite qualify as hacks but still drive up complexity a lot. And then you can add proper hacks like ML-based denoising on top.
slx26 · 5 years ago
Hi, could you or someone else with the right knowledge give some examples of those optimizations/hacks for ray tracers, or point me to some place that explains them in more detail? Thanks in advance.
alkonaut · 5 years ago
Some time in the future we will be able to do real-time path tracing for a few megapixels (per eye if stereo) in under 5ms, and THEN you might think we could stop using smoke and mirrors for games.

But even then, someone will think "if I just use some smoke and mirrors I can quite easily get twice the performance, and those spare resources I can use for better AI/larger worlds/whatever".

Ray tracing (and physically based rendering in general) already simplifies things. Content creators can use "natural" parameters for materials/lights, and you don't need to insert fake lights to achieve realistic shadows and dynamic lighting. That's in a way "simplifying" things, but in the end it just gives more realistic games for the same or higher effort. Until everyone can use ray tracing, it'll also add another layer of complexity, because you need a separate path for ray tracing, so level editors and content creators need to ensure everything looks good in both cases.

zokier · 5 years ago
Weird article. Sure, rendering engines are very complex, but they produce very complex images, so you can't just compare old engines against new ones and say that simplification failed. When tech X is claimed to simplify rendering, the claim usually implies the simplification happens for comparable visual output; achieving raytracing-like lighting and effects in a conventional rasterizer would be even more complex, especially for dynamic scenes.
croes · 5 years ago
When the plough was invented, did they plough the same field in less time or in the same amount of time a larger field?
KineticLensman · 5 years ago
> When the plough was invented, did they plough the same field in less time

To be precise, before the plough was invented, they didn't plough. They often used hoes instead [0]. Intuitively, the plough would have let farmers process a greater area in the same time. But whether they did work a larger area might have depended on other factors such as land ownership - e.g. who owned the plot adjacent to yours. The actual history of agriculture is quite fascinating [1].

[0] https://en.wikipedia.org/wiki/Hoe-farming

[1] https://en.wikipedia.org/wiki/History_of_agriculture