Kleptine · 8 months ago
To answer some of the questions here: the reason this has not been used before is that the technique requires access to the quad definitions (i.e. which four vertices make up each quad) on the GPU.

Up until the recent arrival of Mesh Shaders, there was really no good way to send this data to the GPU and read back the barycentric coordinates you need in the fragment shader for each pixel.

The article offers several options for supporting older GPUs, like Geometry Shaders and Tessellation Shaders. This is good, but these are really, at best, Terrible Hacks(tm). Proof that old extensions can be contorted this way is not proof of reasonable performance!

Notably, geometry shaders are notorious for bad performance, so the fact that they're listed as a viable strategy for older devices makes it pretty clear the authors aren't thinking much about performance, only compatibility.

Still, I think this is very cool, and now that GPUs are becoming much more of a generic computing device with the ability to execute arbitrary code on random buffers, I think we are nearly at the point of being able to break from the triangle and fix this! We hit this triangulation issue several times on the last project, and it's a real pain.
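A tiny numeric illustration of the triangulation issue (plain Python rather than shader code; the per-vertex values are made up): interpolating a per-vertex value over a triangulated quad gives a different answer at the quad's centre than the bilinear interpolation the quad's author intended.

```python
def bilinear(v00, v10, v01, v11, u, v):
    """True quad interpolation of per-vertex values at parameter (u, v)."""
    return (v00 * (1 - u) * (1 - v) + v10 * u * (1 - v)
            + v01 * (1 - u) * v + v11 * u * v)

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / d
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / d
    return wa, wb, 1 - wa - wb

# Unit quad with a per-vertex scalar (think of it as a colour channel):
# v00 = 0, v10 = 1, v01 = 1, v11 = 0
center = (0.5, 0.5)

# Quad split along the v00-v11 diagonal; the centre lies in triangle (v00, v10, v11)
wa, wb, wc = barycentric(center, (0, 0), (1, 0), (1, 1))
tri_val = wa * 0.0 + wb * 1.0 + wc * 0.0

bi_val = bilinear(0.0, 1.0, 1.0, 0.0, *center)

print(tri_val)  # 0.0: the chosen diagonal flattens the interpolation
print(bi_val)   # 0.5: what the author of the quad would expect
```

Flipping the diagonal would give 1.0 at the centre instead of 0.0, which is exactly the diagonal-dependent artifact the article is about.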

hnuser123456 · 8 months ago
This is one of those things where a broken/half-assed/oversimplified implementation proliferated across the world a long time ago, and it took years for the right person to do a full-depth mathematical analysis revealing what we should've been doing all along. Similar to antialiasing and sharpening, texture filtering, color spaces and gamma correction, etc.

It reminded me of this article specifically: https://bgolus.medium.com/the-best-darn-grid-shader-yet-727f...

prideout · 8 months ago
The fact that triangles have proliferated is not due to half-assery. Hardware can rasterize them very quickly, and a triangle can have only one normal vector. Quads can be non-planar. It's true that quads are nice for humans and artists though!
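A small sketch of the one-normal argument (Python stand-in for the vector math; the lifted corner is an arbitrary choice): every triangle has a single well-defined normal, but the two triangular halves of a non-planar quad disagree.

```python
import math

def normal(p0, p1, p2):
    """Unit normal of triangle (p0, p1, p2): the cross product of two edges."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A quad with one corner lifted off the plane of the other three (non-planar)
v0, v1, v2, v3 = (0, 0, 0), (1, 0, 0), (1, 1, 0.5), (0, 1, 0)

n_a = normal(v0, v1, v2)  # normal of one triangular half
n_b = normal(v0, v2, v3)  # normal of the other half
print(n_a, n_b)           # the halves disagree: the quad has no single normal
```
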

As an aside, Catmull-Clark subdivision has been around since 1978, which, as a first step, breaks an arbitrary polyhedron into a mesh of quadrilaterals.
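A simplified sketch of that first step, for a single triangular face (with only one face there are no interior edges, so this uses the boundary rule edge point = edge midpoint rather than the full Catmull-Clark edge-point rule):

```python
def avg(*pts):
    """Component-wise average of points."""
    return tuple(sum(c) / len(pts) for c in zip(*pts))

# A single triangular face; boundary rule: edge point = edge midpoint
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

face_pt = avg(*tri)                                           # centroid of the face
edge_pts = [avg(tri[i], tri[(i + 1) % 3]) for i in range(3)]

# Each original vertex yields one quad: vertex, its two incident edge points,
# and the face point
quads = [(tri[i], edge_pts[i], face_pt, edge_pts[i - 1]) for i in range(3)]
print(len(quads))  # 3: the triangle became three quadrilaterals
```
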

hnuser123456 · 8 months ago
It's not so much that triangles are the primitive as that our logic for combining multiple triangles into a mesh and texturing, lighting, and deforming them in continuous ways clearly has some gaps. It's definitely not an easy problem, and it's a fun exercise to see how various silicon innovations unlocked increasingly accurate solutions, and what corners needed to be cut to hit 30 fps back in the day.
spookie · 8 months ago
Yeah, I don't think triangles will go away anytime soon. And, sometimes they're even preferred in certain cases by artists (think creases on jeans).
127 · 8 months ago
For someone who wrote textured triangles on a 386:

First rule of computer graphics: lie

Second rule of computer graphics: lie

Third rule of computer graphics: lie

TinkersW · 8 months ago
It in no way replaces triangles, and very few will use it for good reason.

Why?

In many cases modern renderers use triangles only a few pixels in size; you won't see a C1 discontinuity at that size.

All the outer edges of a quad still have C1 discontinuities against neighboring quads; all this fixes is the internal diagonal.

It has performance and complexity overhead.
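A sketch of the second point (hypothetical per-vertex values, plain Python): even with proper bilinear interpolation inside each quad, the interpolated value is continuous across a shared quad edge but its derivative still jumps.

```python
def bilerp(v00, v10, v01, v11, u, v):
    """Bilinear interpolation of per-vertex values on a unit quad."""
    return (v00 * (1 - u) * (1 - v) + v10 * u * (1 - v)
            + v01 * (1 - u) * v + v11 * u * v)

# Two unit quads sharing the edge x = 1; values chosen so the interpolated
# function is continuous across the edge but its slope is not.
def f(x, y):
    if x <= 1.0:                          # left quad: values 0,1,0,1 -> f = x
        return bilerp(0, 1, 0, 1, x, y)
    return bilerp(1, 3, 1, 3, x - 1, y)   # right quad: f = 1 + 2*(x - 1)

eps = 1e-6
left_slope = (f(1.0, 0.5) - f(1.0 - eps, 0.5)) / eps
right_slope = (f(1.0 + eps, 0.5) - f(1.0, 0.5)) / eps
print(left_slope, right_slope)  # ~1.0 vs ~2.0: value matches, derivative jumps
```
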

idbehold · 8 months ago
It's quite astonishing how complicated it is to draw lines in 3D graphics. As a novice I found it a little unbelievable that the primitives for drawing lines were effectively limited to a solid one-pixel-wide screen-space line. Want to draw a 2-pixel-wide line? Do it yourself with triangles.
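The usual do-it-yourself workaround looks roughly like this (a minimal sketch in plain Python, ignoring end caps, joins, and perspective): expand each segment endpoint along the screen-space perpendicular to build a quad, then emit it as two triangles.

```python
import math

def thick_line_triangles(p0, p1, width):
    """Expand a 2D segment into a width-px quad, returned as two triangles."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    # Perpendicular half-offset of length width/2
    nx, ny = -dy / length * width / 2, dx / length * width / 2
    a = (p0[0] + nx, p0[1] + ny)
    b = (p0[0] - nx, p0[1] - ny)
    c = (p1[0] + nx, p1[1] + ny)
    d = (p1[0] - nx, p1[1] - ny)
    return [(a, b, c), (b, d, c)]

tris = thick_line_triangles((0.0, 0.0), (10.0, 0.0), 2.0)
print(tris)  # two triangles spanning y in [-1, 1] along the segment
```
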
pixelesque · 8 months ago
Ironically, back in the OpenGL 2.0 days, it was a lot easier to do things like this.
meindnoch · 8 months ago
For most workflows this is a non-issue. When texturing a triangle mesh, the distortions are baked into the texture map, so no seams are visible at the quad diagonals.
AlienRobot · 8 months ago
This seems to happen really often! I think I remember another one about color blending being done in the wrong gamma space on GPUs?
bla3 · 8 months ago
How does this compare to https://jcgt.org/published/0011/03/04/paper.pdf? It seems superficially pretty similar.
westurner · 8 months ago
/? Barycentric

From "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :

> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.

Phase from second order amplitude FWIU

sabslikesobs · 8 months ago
Very interesting! This reminds me of how stumped I was learning about UV unwrapping for texturing. Even simple models are difficult to unwrap into easily editable textures. "Why can't I just draw on the model?"

Blender has a few plugins these days that make it a lot easier --- one that impressed me was Mio3 UV: https://extensions.blender.org/add-ons/mio3-uv/

meindnoch · 8 months ago
You can draw on a model: https://youtu.be/WjS_zNQNVlw
GolDDranks · 8 months ago
I am definitely not an expert in 3D graphics... but this looks like such an astonishingly simple and effective method that it makes me question why it wasn't already thought of and picked up.

I get that with fixed-pipeline GPUs you do what the hardware and driver make you do, but with the advent of programmable pipelines, you'd think improving stuff like this would be among the first things people do?

Anyway, gotta run and implement this in my toy Metal renderer.

somethingsome · 8 months ago
You want triangles in general; they behave way better (think, for example, of computing intersections).

Also, debugging colors on a 3D surface is not an easy task (debugging in 3D is not easy in general). So if the rendering is nice and seems correct, you tend to think it is.

And if it wasn't, and you didn't encounter anything that bothered you, it doesn't matter that much; after all, what's important is the final rendering style, not that everything is perfectly accurate.

meindnoch · 8 months ago
Because when working with a textured asset, these seam distortions simply don't occur. The inverse of the distortion is baked into the texture map of the asset. So the distortion between a triangle's world-space size vs. its texture-space size cancels out exactly, and everything looks correct.
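A toy illustration of that cancellation argument (hypothetical parameterisation and pattern, chosen only so the inverse is easy to write; real baking happens per texel, not analytically):

```python
def uv_of_world(p):
    """Hypothetical, deliberately non-uniform UV map for a surface patch."""
    x, y = p
    return (x * x, y)

def world_of_uv(uv):
    """Its inverse, used at bake time."""
    u, v = uv
    return (u ** 0.5, v)

def pattern(p):
    """The pattern the artist wants to appear in world space."""
    return 10 * p[0] + p[1]

def baked_texture(uv):
    # Baking: store, at each texel, the pattern value of the world-space
    # point that texel covers
    return pattern(world_of_uv(uv))

def rendered(p):
    # Rendering: sample the baked texture at the surface's UV coordinates
    return baked_texture(uv_of_world(p))

p = (0.3, 0.7)
print(pattern(p), rendered(p))  # agree up to rounding: the distortion cancels
```
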
GolDDranks · 8 months ago
Okay, so the same idea that I spitballed in a sibling thread:

> Btw. wouldn't it be possible in modern pipelines to remap or "disfigure" the texture when converting to triangles, so that it counters the bias accordingly? Ah, but that bakes in the shape of the quad, so it can't be modified runtime or it will get distorted again, right.

How does that work with animated meshes?

spookie · 8 months ago
Because there is no reason to not use triangles.

Look at prideout's reply in the thread: the argument about having just one normal vector, and the fact that a triangle can only describe one plane, is huge. Unless you want more edge cases to deal with (hehe, pun intended), you're better off sticking with tris.

GolDDranks · 8 months ago
I know about the advantages and uniqueness properties of triangles. However, if the article is correct that artists prefer quads when editing (I know absolutely nothing about 3D editing and didn't know that; I thought triangles were universal these days), something is clearly missing from the pipeline if neatly mapping textures to quads and then converting to triangles ends up messing up the interpolation.

Maybe we can continue to describe the geometry in triangles, but could use an additional "virtual fourth handle point" data (maybe expressed in barycentric coords of the other three, so everything is automatically planar) for stretching the interpolation in the expected way.

Anyway, I'm just getting started with Metal, and this provided for a nice theme for experimentation.

rnvannatta · 8 months ago
It's not really that simple: barycentric coordinate access is relatively recent. It means asking the rasterizer for information and transforming that information into barycentric coordinates, and the correspondence of barycentric coordinates to vertices is unstable without further hardware support or further shader trickery. On AMD GPUs, for instance, only RDNA2 and later have hardware support for stable barycentrics.

And you're right that this has been thought of. There are other approaches to bilinearly interpolating quads that have historically been used, but they require passing extra data through the vertex shader and thus often splitting vertices.

julian9499 · 8 months ago
This actually seems quite easy to implement. Any thoughts on the performance hit a program takes when going this route instead of using one of the workarounds?


MeteorMarc · 8 months ago
Is this really new? Will it become an option in Unity, Unreal and the like? The results seem convincing!