This project generates synthetic computer vision training data. The arXiv paper has more detail, including some cool pictures of random creatures it can generate. The images are nice, but all of them are natural settings, so I assume one would have to supplement this kind of data with another dataset when training a computer vision model.
Maybe this IS the demoscene of today? I saw more insanely beautiful computer generated pictures in the last couple years than I saw in previous 10, AI or no AI.
This feels like we've got all the pieces of the puzzle to create a reality experience - I'm pretty sure with visuals like this and haptic feedback that your brain will fill in any gaps once you adapt to this given enough time.
You could use this with a VR headset, monitoring heart rate and temperature, and adapt the environment based on the experiencer's desires.
It feels like we're on the precipice of recreating an experience of reality, which may reveal more about our existing reality than we ever expected.
From the homepage it sounds like they've prioritised geometry fidelity for CV research rather than performance:
> Infinigen is optimized for computer vision research, particularly 3D vision. Infinigen does not use bump/normal-maps, full-transparency, or other techniques which fake geometric detail. All fine details of geometry from Infinigen are real, ensuring accurate 3D ground truth.
So I suspect the assets wouldn't be particularly optimised for video games. Perhaps a good starting point though!
I doubt it's a matter of prioritization. To get normal maps you usually first need a high-resolution mesh, but you then need additional steps to get good decimation for LODs and a normal bake. That's mostly extra work, not alternative work that wasn't prioritized. If by transparency they mean faking aggregates, you also need full geometry there before sampling and baking it down into planes or some other impostor technique.
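To make the workflow above concrete, here's a toy sketch (in NumPy, not tied to any particular DCC tool) of what a normal bake actually encodes: fine surface detail from a high-resolution representation is converted into a per-texel tangent-space normal, which a flat low-poly surface can then use to *fake* that geometry at shading time. The function name and the height-field setup are illustrative assumptions, not Infinigen's pipeline.

```python
import numpy as np

def bake_normal_map(height, scale=1.0):
    """Convert a high-res height field into a packed tangent-space normal map."""
    # Finite-difference gradients of the height field (rows, then columns).
    dy, dx = np.gradient(height * scale)
    # Tangent-space normal per texel: (-dh/dx, -dh/dy, 1), then normalize.
    n = np.dstack([-dx, -dy, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Pack components from [-1, 1] into [0, 1], as stored in an RGB texture.
    return 0.5 * (n + 1.0)

# Toy "fine detail": ridges that the decimated low-poly mesh wouldn't have.
ridges = 0.1 * np.sin(np.linspace(0, 8 * np.pi, 64))
height = np.tile(ridges, (64, 1))
nmap = bake_normal_map(height)  # 64x64x3 texture, detail now lives here
```

The point of the comment stands out in the code: the real geometry (the height field) is thrown away after the bake, so depth sensors or 3D ground truth would only see the flat base mesh. Infinigen keeps the detail as actual geometry instead.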
This looks extremely similar to something Unreal already supports natively. Here is a demo video from them - https://youtube.com/watch?v=8tBNZhuWMac
It looks great, but I'm missing what's innovative about this. AAA procedural foliage has been done for 20 years, terrain too. Blender has had procedural geometry nodes for a long time as well. What is so interesting here?
I just looked it up and WOW it has come a long way (and wasn't it free before? Maybe I misremember).
[1] https://arxiv.org/abs/2406.11824