AMD, do support for NV_shader_buffer_load next! Shader Buffer Load brought "Buffer Device Address" / pointers to OpenGL/GLSL long before Vulkan was even a thing. It's the best thing since sliced bread: it lets you access all your vertex data with pointers, i.e., you don't need to bind any vertex buffers anymore. It also lets you easily draw the entire scene in a single draw call, since vertex shaders can just load data from wherever the pointers lead them. E.g., it makes GLSL vertex shaders look like this:
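Something along these lines (just a sketch; struct and uniform names like Node, Vertex and u_nodes are made up):

    #version 460
    #extension GL_NV_gpu_shader5 : require        // 64-bit types
    #extension GL_NV_shader_buffer_load : require // raw pointers into buffer memory

    struct Vertex { vec4 position; vec4 color; };
    struct Node   { mat4 transform; Vertex* vertices; };

    uniform Node* u_nodes;    // a GPU address uploaded as a 64-bit uniform
    uniform mat4  u_viewProj;

    out vec4 v_color;

    void main() {
        Node   node = u_nodes[gl_DrawID];          // per-draw data (gl_DrawID is core in GL 4.6)
        Vertex v    = node.vertices[gl_VertexID];  // vertex pulling, no VBO bound
        gl_Position = u_viewProj * node.transform * v.position;
        v_color     = v.color;
    }

On the CPU side you make the buffer resident and query its address (glMakeBufferResidentNV, glGetBufferParameterui64vNV with GL_BUFFER_GPU_ADDRESS_NV), then hand it to the shader with glUniformui64NV.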
This is the real killer feature of Vulkan/DX12: it makes writing a generalized renderer so much easier because you don't need to batch draw calls per vertex layout of individual meshes. Personally, I use Buffer Device Address for connecting Multidraw Indirect calls to mesh definitions to materials as well.
I just wish there was more literature about this, especially about the perf implications. Also, synchronization is very painful, which may be why this is hard to do at the driver level inside OpenGL.
Maybe I’m missing something, but isn’t this the norm in Metal as well? You can bind buffers individually, or use a single uber-buffer that all vertex shaders can access.
But I haven’t written OpenGL since Metal debuted over a decade ago.
I'm talking about OpenGL. Vulkan is too hard for my small mind to understand, so I'm still using OpenGL. And the extension that allows this in OpenGL came out in 2010, so long before Vulkan.
That is for SSBOs. u_nodes is a pointer to an SSBO in this case. That SSBO then has lots more pointers to various different SSBOs that contain the vertex data.
A little bit off topic but: GL_LINES doesn't have a performant analog on lots of other platforms, even Unity. Drawing a line properly requires turning the two endpoint vertices into a quad and optionally adding endcaps which are at least triangular but can be polygons. From my understanding, that requires a geometry shader since we're adding virtual/implicit vertices. Does anyone know if mesh shaders could accomplish the same thing?
Also I wish that GL_LINES was open-sourced for other platforms. Maybe it is in the OpenGL spec and I just haven't looked. I've attempted some other techniques like having the fragment shader draw a border around each triangle, but they all have their drawbacks.
To draw lines instead of a geometry shader you can use instancing, since you know how many vertices you need to represent a line segment's bounding box. Have one vertex buffer that just contains N vertices (the actual attribute data doesn't matter, but you can shove UVs or index values in there) and bind it alongside a buffer containing your actual line information (start, end, color, etc). The driver+GPU will replicate the 'line vertex buffer' vertices for every instance in the 'line instance buffer' that you bound.
This works for most other regular shapes too, like a relatively tight bounding box for circles if you're drawing a bunch of them.
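For the line case, the vertex shader sketch below shows the idea for 2D screen-space lines (all attribute and uniform names are made up):

    #version 330 core
    layout(location = 0) in vec2 a_corner;  // (0,0) (1,0) (0,1) (1,1), divisor 0
    layout(location = 1) in vec2 a_start;   // per-instance, divisor 1
    layout(location = 2) in vec2 a_end;     // per-instance, divisor 1
    layout(location = 3) in vec4 a_color;   // per-instance, divisor 1

    uniform vec2  u_viewportSize;  // in pixels
    uniform float u_thickness;     // in pixels

    out vec4 v_color;

    void main() {
        vec2 dir    = normalize(a_end - a_start);
        vec2 normal = vec2(-dir.y, dir.x);
        vec2 p      = mix(a_start, a_end, a_corner.x);            // pick the endpoint
        vec2 offset = normal * (a_corner.y - 0.5) * u_thickness;  // pick the side
        vec2 ndc    = (p + offset) / u_viewportSize * 2.0 - 1.0;  // pixels -> clip space
        gl_Position = vec4(ndc, 0.0, 1.0);
        v_color     = a_color;
    }

You set glVertexAttribDivisor(loc, 1) on the per-instance attributes and then draw every segment with a single glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, lineCount).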
In my experience, drawing quads with GL_POINTS in OpenGL was way faster than drawing quads with instancing in DirectX. That was noticeable with the DirectX vs. OpenGL backends for WebGL, where switching between the two resulted in widely different performance.
Drawing using GL_LINES is old-school fixed-function pipeline, and not really how modern graphics hardware works. If you want a single line, draw a small rectangle between V1 and V2 using geometry. The thickness is the distance between P1 and P2 (and P3 and P4) of the rectangle; a line itself has no thickness, as it's one-dimensional.
Draw in screen space based on projected points in world space.
Set gl_Color to your desired color vec and bam, line.
Oh man, that was about 3 lifetimes ago. I'm on linkedin (every couple months hah) if you ever want to chat.
It was a fun project, but we released it just in time for the Dot Bomb to wipe us out. Our last month, my partner and I got about $600 each from shareware game sales ($1200 with inflation today), enough to pay rent and my student loans for the first time. After the pop, the next month we got one sale. $12. The fun was over and never came back in the same way.
The powers that be conspired to end the 90s climb towards FU money and UBI, and there was no tech investment for about 6-7 years until 2007 when the iPhone and Facebook came out, which started the mobile bubble. Lots of people made $100k those first years, but mostly not established players, who had too much time, effort and money sunk into the old desktop platforms.
Then Google and Facebook took the lion's share of ad money, which removed the avenues to scale a business. Everyone moved into other models like in-game ads, in-app purchases and going viral through influencers, but none of those worked for the vast majority. To leave us where we are today, where long tail effects ensure a winner-take-all sales distribution.
Now that AI is here, even apps will begin disappearing. I predict that within 3 years, nobody will be using the web or buying software anymore. We'll just ask the AI to do everything, and it will make it so. Thus ends the mobile bubble too.
I basically missed every bubble due to unfortunate life choices. So let this be a cautionary tale to any young people who read this. You need to do the opposite of what people say. Nobody gets rich pulling themselves up by their bootstraps, they get rich by borrowing someone else's money and investing it in sure things like Bitcoin 15 years ago. If you feel yourself clinging to a project or situation because you have a lot invested in it or don't want to let someone down, that's the time to explore other options. You won't make yourself poor - your empathy will. That's why the rich demonize it. But empathy is where the true meaning in life is found. You can choose to skim money off other people's backs through investment, or "earn" it yourself, but there is a karmic cost no matter how you obtain it. So just have fun and do your best and at least pay your taxes so that society can progress forward, or we'll never stop repeating these systems of control and suffering IMHO.
When I was making Retro, I was trying to capture the feeling of the golden age of arcade video games, even if I maybe missed the mark. But I didn't realize that I was in a golden age at the time. I believe that we're in one of those now, maybe the last one. But also the first one, from a certain perspective.
Maybe it's time to dust off the old compiler and make a game..
I'm not sure exactly what you mean, but you can either output line primitives directly from the mesh shader or output mitered/capped extruded lines via triangles.
As far as other platforms go, there's VK_EXT_line_rasterization, which is a port of OpenGL line drawing functionality to Vulkan.
nvidium is using GL_NV_mesh_shader, which is only available on NVIDIA cards. This mod is the only game/mod I know of that uses mesh shaders and is OpenGL, so the new GL extension will let users of other vendors use the mod if it gets updated to use the new extension.
Pretty sure the base Minecraft rendering engine is still using OpenGL, and most of the improvement mods also just use OpenGL, so exposing this extension to them is probably important for a game where it's 50 billion simple cubes being rendered.
It's officially deprecated in favor of Vulkan, but it will likely live on for decades to come due to legacy CAD software and a bunch of older games still using it. I don't share the distaste many have for it, it's good to have a cross-platform medium-complexity graphics API for doing the 90% of rendering that isn't cutting-edge AAA gaming.
It's super frequently recommended as a starting point for learners because it's high level enough to get something on the screen in ten lines of code but low level enough to teach you the fundamentals of how the rendering pipeline works (even though GL's abstraction is rather anachronistic and differs from how modern GPUs actually work). Vulkan (requiring literally a thousand LoC worth of initialization to render a single triangle) is emphatically not any sort of replacement for that use case (and honestly not for 95% of hobbyist/indie use cases either unless you use a high-level abstraction on top of it).
The worst thing about OpenGL is probably the hilariously non-typesafe C API.
I don't think any major platform that ever supported OpenGL or OpenGL ES--including desktops, smartphones/tablets, and web browsers--has actually removed it yet. Apple will probably be the first to pull the plug, but they've only aggressively deprecated it so far.
This sounds pretty cool, but can anyone dumb this down for me? Mesh shaders are good because they are more efficient than the general purpose triangle shaders? Or is this something else entirely?
It's essentially a replacement for vertex shaders that maps more closely to how GPUs actually process big, complex triangle meshes: as small packets of vertices, in parallel. The job of splitting a complex mesh into such small packets is moved into an offline asset-pipeline step, instead of relying too much on 'hardware magic' like vertex caches.
AFAIK mesh shaders also get rid of (the ever troublesome) geometry shaders and hull shaders, but don't quote me on that :)
By far most traditional triangle rendering use cases should only see minimal performance improvements though, it's very much the definition of 'diminishing returns'.
It's definitely more straightforward and 'elegant' though.
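To make that a bit more concrete, here's a hedged sketch of a meshlet-expanding mesh shader in GL_EXT_mesh_shader-style GLSL (the Meshlet struct and buffer layout are assumptions, not taken from any particular engine):

    #version 460
    #extension GL_EXT_mesh_shader : require

    layout(local_size_x = 32) in;
    layout(triangles, max_vertices = 64, max_primitives = 124) out;

    struct Meshlet {
        uint vertexOffset;    // into Positions
        uint indexOffset;     // into Indices (meshlet-local triangle indices)
        uint vertexCount;     // assumed <= 64
        uint triangleCount;   // assumed <= 124
    };

    layout(std430, binding = 0) readonly buffer Meshlets  { Meshlet meshlets[]; };
    layout(std430, binding = 1) readonly buffer Positions { vec4 positions[]; };
    layout(std430, binding = 2) readonly buffer Indices   { uint indices[]; };

    layout(binding = 0) uniform Camera { mat4 viewProj; };

    void main() {
        Meshlet m = meshlets[gl_WorkGroupID.x];   // one workgroup expands one meshlet
        SetMeshOutputsEXT(m.vertexCount, m.triangleCount);

        // Each of the 32 threads writes a share of the meshlet's vertices...
        for (uint v = gl_LocalInvocationID.x; v < m.vertexCount; v += 32u) {
            gl_MeshVerticesEXT[v].gl_Position = viewProj * positions[m.vertexOffset + v];
        }
        // ...and a share of its triangles (indices are meshlet-local, 0..vertexCount-1).
        for (uint t = gl_LocalInvocationID.x; t < m.triangleCount; t += 32u) {
            uint i = m.indexOffset + t * 3u;
            gl_PrimitiveTriangleIndicesEXT[t] =
                uvec3(indices[i], indices[i + 1u], indices[i + 2u]);
        }
    }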
Oh, awesome! Yeah, that's a great introduction. Seems like it introduces a new abstraction that allows a single mesh to be mapped to much smaller groups of vertices so you can take advantage of BVHs and stuff like that on a more granular level, right in the shader code. Very cool stuff! Thanks for the info.
Fundamentally, for OpenGL, "getting shaders" meant moving from a fixed, built-in set of graphics effects to giving developers custom control over the graphics pipeline.
Imagine you hired a robot artist to draw.
Before Shaders (The Old Way): The robot had a fixed set of instructions. You could only tell it "draw a red circle here" or "draw a blue square there." You could change the colors and basic shapes, but you couldn't change how it drew them. This was called the fixed-function pipeline.
After Shaders (The New Way): You can now give the robot custom, programmable instructions, or shaders. You can write little programs that tell it exactly how to draw things.
The Two Original Shaders
This programmability was primarily split into two types of shaders:
Vertex Shader: This program runs for every single point (vertex) of a 3D model. Its job is to figure out where that point should be positioned on your 2D screen. You could now program custom effects like making a character model jiggle or a flag wave in the wind.
Fragment (or Pixel) Shader: After the shape is positioned, this program runs for every single pixel inside that shape. Its job is to decide the final color of that pixel. This is where you program complex lighting, shadows, reflections, and surface textures like wood grain or rust.
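A minimal pair of such programs in GLSL looks something like this (names are purely illustrative):

    // Vertex shader: runs per vertex, decides where the point lands on screen.
    #version 330 core
    layout(location = 0) in vec3 a_position;
    uniform mat4 u_mvp;   // model-view-projection matrix
    void main() {
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }

    // Fragment shader: runs per covered pixel, decides its final color.
    #version 330 core
    uniform vec4 u_color;
    out vec4 fragColor;
    void main() {
        fragColor = u_color;
    }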
As far as I understand, mesh shaders allow you to generate arbitrary geometry on the GPU. That wasn't possible with the traditional vertex pipeline, which only allowed specialized mesh transformations like tessellation.
For example, hair meshes (lots of small strands) are usually generated on the CPU from some basic parameters (basic hairstyle shape, hair color, strand density, curliness, fuzziness etc) and then the generated mesh (which could be quite large) is loaded onto the GPU. But the GPU could do that itself using mesh shaders, saving a lot of memory bandwidth. Here is a paper about this idea: https://www.cemyuksel.com/research/hairmesh_rendering/Real-T...
However, the main application of mesh shaders currently is more restricted: meshes are chunked into patches (meshlets), which allows for more fine-grained culling of occluded geometry.
Though most of these things, I believe, can already be done with compute shaders, although perhaps not as elegantly, or with some overhead.
It seems to me, as someone not so 3D-savvy, that 3D objects and shaders have a relationship similar to that between HTML structure and CSS.
Nowadays you need a structure of objects, yet the layout, color and behavior come from CSS.
In this regard, 3D scenes offer the elements, but shaders can design them much more efficiently than an engine ever could.
Is that accurate?
Btw, can objects modified by shaders signal collisions?
3D scenes (closest thing to the DOM) and materials (closest thing to CSS) are several abstraction layers above what modern 3D APIs provide, this is more 'rendering/game engine' territory.
3D APIs are more on the level of "draw this list of triangles, and the color of a specific pixel in the triangle is computed like this: (hundreds of lines of pixel shader code)" - but even this is slowly being replaced by even lower-level code which implements completely custom rendering pipelines entirely on the GPU.
Shaders are not layout. I don't think there is an HTML/DOM analogy here that works. But if you had to force one, shaders are more like Javascript. It's a terrible analogy though.
OpenGL was a very nice API, and despite its shortcomings, it is quite telling that Vulkan didn't fully replace it 10 years later.
Cross-vendor mesh shader support is great - we've had NV_mesh_shader for quite a while, but now it's also supported on AMD. It's good for voxel games like this - the shape of the vertex data is fairly fixed and very compressible, and mesh shaders can really cut down on the VRAM usage and help reduce overhead.
Most Minecraft optimisation mods generally try to reduce drawcalls by batching chunks (16x16x16) into bigger regions and use more modern OpenGL to reduce API overhead.
This mod does GPU-driven culling for invisible chunk sections (so the hidden chunks aren't rendered but without a roundtrip to the CPU) and also generates the triangles themselves with a mesh shader from the terrain data, which cuts down on the vertex size a lot.
(EDIT: I reworded this section because the mod does only a few drawcalls in total so my wording was inaccurate. Sorry!)
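As a rough sketch (not the mod's actual code; ChunkInfo and the buffer layout here are made up), GPU-side chunk culling with a GL_EXT_mesh_shader-style task shader can look like this:

    #version 460
    #extension GL_EXT_mesh_shader : require

    layout(local_size_x = 1) in;   // one task workgroup per chunk section

    struct ChunkInfo { vec4 aabbMin; vec4 aabbMax; uint firstMeshlet; uint meshletCount; uint pad0; uint pad1; };
    layout(std430, binding = 0) readonly buffer Chunks { ChunkInfo chunks[]; };
    layout(binding = 0) uniform Camera { vec4 frustumPlanes[6]; };

    struct Payload { uint firstMeshlet; };
    taskPayloadSharedEXT Payload payload;   // forwarded to the mesh shaders we launch

    bool inFrustum(vec3 bmin, vec3 bmax) {
        for (int i = 0; i < 6; ++i) {
            vec4 p = frustumPlanes[i];
            // test the AABB corner furthest along the plane normal
            vec3 corner = mix(bmin, bmax, greaterThan(p.xyz, vec3(0.0)));
            if (dot(p.xyz, corner) + p.w < 0.0) return false;
        }
        return true;
    }

    void main() {
        ChunkInfo c = chunks[gl_WorkGroupID.x];
        payload.firstMeshlet = c.firstMeshlet;
        bool visible = inFrustum(c.aabbMin.xyz, c.aabbMax.xyz);
        // zero means this chunk's meshlets never get generated this frame
        EmitMeshTasksEXT(visible ? c.meshletCount : 0u, 1u, 1u);
    }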
Sadly, optimising the game is a bit tricky due to several reasons - the first big one is translucency sorting, because there are translucent blocks in the game like stained glass, which have to be properly sorted for the blending to work. (the base game doesn't sort correctly either by default....)
The second is that it's quite overengineered, so improving it while also not breaking other mods and accidentally fixing vanilla bugs is quite hard.
There are further improvements possible but honestly, this is day and night compared to the vanilla renderer :)
For us mere mortals (not working at Unity or Unreal), the complexity is just too much. Vulkan tries to abstract desktop and mobile together, but if you're making an indie game, there's no value for you in that. The GL/GLES split was better because each could evolve to its strengths instead of being chained to a fundamentally different design.
The global state in OpenGL is certainly an annoyance, but I do not think that replacing it with fixed pipelines is an improvement, especially considering that most of that state is just a register write in desktop GPUs. Luckily, they eased up on that, but the API is still confusing, the defaults are not sane, and you need vendor-specific advice to know what's usable and what isn't. Ironically, writing Vulkan makes you more vendor-dependent in a sense, because you don't have OpenGL extension hell - you have Vulkan extension hell AND a bunch of incidental complexity around the used formats and layouts and whatnot.
On a more positive note, I seriously hope that OpenGL won't be entirely abandoned in the future, it has been a great API so far and it only really has small issues and driver problems but nothing really unfixable.
I think this is an extremely subjective take :) If you haven't been closely following OpenGL development since the late 1990s it is a very confusing API, since it simply stacks new concepts on top of old concepts all the way back to GL 2.0. E.g. if anything good can be said about Vulkan it's that at least it isn't such a hot mess of an API (yet) like OpenGL has become in the last 25 years ;)
Just look at glVertexAttribPointer()... it's an absolute mess of hidden footguns. A call to glVertexAttribPointer() 'captures' the current global vertex buffer binding for that attribute (very common source of bugs when working with vertex-input from different buffers), and the 'pointer' argument isn't a pointer at all, but a byte-offset into a vertex buffer. The entire API is full of such weird "sediment layers", and yes there are more recent vertex specification functions which are cleaner, but the old functions are still part of the new GL versions and just contribute to the confusion for new people trying to understand the API.
> I think this is an extremely subjective take.
Okay fair but that's all takes on this site :)
Yes, vertexAttribPointer is a footgun (in my project I wrote an analyser to generate a compiler error when you write it down...) but luckily in modern OpenGL it doesn't matter because you have separated vertex format. The names are confusing because it's legacy shit but the functionality is there. It's very much not as clean as other APIs but it gets the job done.
If you stick to the modern versions (so glBindVertexBuffer / glVertexAttribFormat / glVertexAttribBinding) and do one VAO per vertex format, it's quite nice. And just forbid using the old ones. ;)
More broadly, I admit it's a subjective thing but I find these issues much smaller than like, broader conceptual issues. You mix the function names up a few times then you learn not to do it. But when an API is just fundamentally unergonomic and inflexible, you can't really get past that. Maybe you get used to it after a while but the pain will always be there....
If you are using Slang, then you just access everything as standard pointers to chunks of GPU memory.
And it's mostly Intel and mobile dragging their feet on VK_EXT_descriptor_buffer ...
https://github.com/mattdesl/webgl-lines
https://hundredrabbits.itch.io/verreciel
PS— I still play Retro, and dream of resuscitating it :)
What is the current state of OpenGL? I thought it had faded away.
PS: this is a pretty good introduction I think https://gpuopen.com/learn/mesh_shaders/mesh_shaders-from_ver...
Collisions aren't part of a graphics API.
You can do occlusion queries though, which is a form of 2D collision detection similar to what home computer sprite hardware provided ;)
- https://alteredqualia.com/css-shaders/article/
- https://developer.mozilla.org/en-US/docs/Web/API/Houdini_API...
And why for ES? I thought ES was for less advanced hardware.
https://wikis.khronos.org/opengl/OpenGL_Extension#Extension_...