Alyssa Rosenzweig is a gift to the community that keeps on giving. Every one of her blog posts guarantees you'll learn something you didn't know about the internals of modern graphics hardware.
This endeavour proves to me, every single day, that skills beat talk. Just reading the blog posts sets my brain on fire; there is so much to unpack. The punch line is not the last sentence but the second one, yet you're still pulled down the rabbit hole until you find yourself enjoying one bit manipulation after another.
If there are ever benchmarks measuring eureka moments per paragraph, Alyssa will lead them all.
One day, Apple will deprecate OpenGL 3.3 core, and I guess everybody might end up deprecating it.
I've read that generally OpenGL is just easier to use than Vulkan. I don't know if that's true, but if something is too complicated it becomes too hard for less experienced devs to exploit the GPU, and that barrier to entry might discourage some indie game developers.
Everyone uses Unity and Unreal now; building things from scratch or using other engines is just considered weird, for some reason. It's really annoying, and it's been fun to see gamedev wake up after Unity tried to lock things down further.
Open source in gaming has always been stretched thin. Godot is there, but I doubt it can seriously compete with Unity and Unreal, much as I want it to; even if Godot is capable, indie gamedevs are more experienced with Unity and Unreal and will stick to those.
The state of open source in game dev feels really hopeless sometimes, and the rise of next-gen graphics APIs isn't making things any easier.
You're getting downvoted for some reason, but OpenGL is absolutely easier. It abstracts so much (and for beginners there's still a ton to learn even with all that abstraction!). With OpenGL, unlike Vulkan, there's no need to think about how to prepare pipelines, optimally upload your data, manually synchronize your rendering, and so on. The low-level nature of Vulkan lets you eke out every bit of performance, but for indie game developers, and for the majority of graphics development that doesn't depend on realtime PBR with giant amounts of data, OpenGL is still immensely useful.
If anything, an OpenGL-like API will naturally be developed on top of Vulkan for users who don't care about all that stuff. And once again, I can't stress this enough: OpenGL is still a lot for beginners. Shaders, geometric transformations, the fixed-function pipeline, vertex layouts, shader storage buffer objects, textures, mipmaps, instancing, buffers in general... there's so much to learn, and these foundations transcend OpenGL and apply to all graphics rendering. As a beginner, having OpenGL let me focus on the higher-level details was immensely beneficial in getting started on my graphics programming journey.
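To make the "abstracts so much" point concrete, here's a minimal sketch (hypothetical names: `draw_frame`, `program`, `vao`; assumes a GL loader like glad and a windowing library owning the context) of a per-frame draw in core-profile OpenGL. There are no pipeline objects, descriptor sets, or explicit synchronization to manage:

```cpp
// Minimal per-frame draw in core-profile OpenGL (C API, callable from C++).
// Assumes `program` (a linked GLSL program) and `vao` (a vertex array object with
// a bound vertex buffer) were created during initialization, and that a windowing
// library such as GLFW owns the context and handles buffer swaps.
#include <glad/glad.h>   // or any other OpenGL function loader

void draw_frame(GLuint program, GLuint vao) {
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(program);          // the driver builds and caches the real pipeline state
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    // No fences, barriers, or image layout transitions: the driver synchronizes
    // internally before the swap/present performed by the windowing library.
}
```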
This is a bit misleading. Much of the extra code that you'd have to write in Vulkan to get to first triangle is just that: a one-time cost. And you can use a third-party library, framework, or engine to take care of it. Vulkan merely splits out the hardware-native low level from the library support layer, which were conflated in OpenGL, and lets the latter evolve freely via a third-party ecosystem. That's just a sensible choice.
Drawing conclusions from a hello-world example is not representative of which API is "easier". You are also using lines of code as a measure of "ease" when it is really a measure of "verbosity".
Further, the OpenGL example does not follow modern graphics best practices and relies on OpenGL defaults, which cuts down the line count but is not practical in real applications.
Getting Vulkan initialized is a bit of a chore, but once it's set up, it's not much more difficult than OpenGL. GPU programming is hard no matter which way you put it.
I'm not claiming Vulkan initialization isn't verbose (it certainly is), but there are libraries to help you with that (e.g. vk-bootstrap, VMA). The init routine requires you to explicitly state which hardware and software features you need, reducing the "it works on my computer" problem that plagues OpenGL.
If you use a recent Vulkan version (1.2+), namely the dynamic rendering and dynamic state features, it's actually very close to OpenGL because you don't need to configure render passes, framebuffers, etc. This greatly reduces the amount of code needed to draw stuff. All of this is available on all desktop platforms, even on quite old hardware (~10-year-old GPUs) if your drivers are up to date. The only major difference is the need for explicit pipeline barriers.
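As a rough sketch of what that buys you (hypothetical names: `record_draw`, `cmd`, `swapchainView`, `pipeline`, `extent`; assumes the device was created with VK_KHR_dynamic_rendering or Vulkan 1.3 and that the image is already in the right layout), recording a draw needs no VkRenderPass or VkFramebuffer objects at all:

```cpp
// Sketch only: assumes a command buffer `cmd` in the recording state, a swapchain
// image view already transitioned to COLOR_ATTACHMENT_OPTIMAL, and a graphics
// pipeline built for dynamic rendering.
#include <vulkan/vulkan.h>

void record_draw(VkCommandBuffer cmd, VkImageView swapchainView,
                 VkPipeline pipeline, VkExtent2D extent) {
    VkRenderingAttachmentInfo color{};
    color.sType       = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
    color.imageView   = swapchainView;
    color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp     = VK_ATTACHMENT_STORE_OP_STORE;
    color.clearValue.color = {{0.1f, 0.1f, 0.1f, 1.0f}};

    VkRenderingInfo info{};
    info.sType                = VK_STRUCTURE_TYPE_RENDERING_INFO;
    info.renderArea           = {{0, 0}, extent};
    info.layerCount           = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments    = &color;

    vkCmdBeginRendering(cmd, &info);   // vkCmdBeginRenderingKHR on 1.2 + extension
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, 3, 1, 0, 0);        // one triangle; no render pass or framebuffer objects
    vkCmdEndRendering(cmd);
}
```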
Just to give you a point of reference, drawing a triangle with Vulkan, with the reusable framework excluded, is 122 lines of Rust code including the GLSL shader sources.
Another data point from my past projects: a practical setup for OpenGL is about 1500 lines of code, whereas Vulkan is perhaps 3000-4000 LOC, of which ~1000 LOC is trivial setup code for enabled features (verbose, but not hard).
As a graphics programmer, going from OpenGL to Vulkan has been a massive quality of life improvement.
FWIW Metal is actually easier to use than Vulkan in my opinion, as Vulkan is kind of designed to be super flexible and doesn't have as many niceties in it. Either way, OpenGL was simply too high level to be exposed as the direct API of the drivers. It's much better to have a lower-level API like Vulkan as the base layer, and then build something like OpenGL on top of Vulkan instead. It maps much better to how GPU hardware works this way. There's a reason why we have a concept of software layers.
It's also not quite true that everyone uses Unity and Unreal. Just look at the Game of the Year nominees from The Game Awards 2023: all six of them were built on in-house engines. Among indies there are also still a fair number of developers who build their own engines (e.g. Hades), but it's true that the majority will just use an off-the-shelf one.
Metal is probably the most streamlined and easiest to use GPU API right now. It's compact, adapts to your needs, and can be intuitively understood by anyone with basic C++ knowledge.
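As a small illustration of that compactness, here's a sketch using Apple's metal-cpp C++ wrapper (method names as I recall them from the wrapper, so treat this as an approximation rather than copy-paste code; one translation unit also has to define the NS/MTL `*_PRIVATE_IMPLEMENTATION` macros per the metal-cpp docs):

```cpp
// Sketch of basic Metal setup via the metal-cpp wrapper: everything is
// ordinary C++ method calls on the device, queue, and command buffer.
#include <Metal/Metal.hpp>

int main() {
    MTL::Device* device = MTL::CreateSystemDefaultDevice();   // pick the default GPU
    MTL::CommandQueue* queue = device->newCommandQueue();     // submission queue

    // A shared-storage buffer the CPU can fill and the GPU can read directly.
    MTL::Buffer* vertices = device->newBuffer(3 * 4 * sizeof(float),
                                              MTL::ResourceStorageModeShared);

    MTL::CommandBuffer* cmd = queue->commandBuffer();          // record GPU work here...
    cmd->commit();                                             // ...and hand it to the GPU
    cmd->waitUntilCompleted();

    vertices->release();
    queue->release();
    device->release();
    return 0;
}
```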
OpenGL is not deprecated; it is simpler and continues to be used where Vulkan is overkill. Using it for greenfield projects is a good choice if it covers all your needs (and if you don't mind the stateful render pipeline).
It kind of is: OpenGL 4.6 is the very last version, the Red Book only covers up to OpenGL 4.5, and some hardware vendors are now shipping OpenGL on top of Vulkan or DirectX instead of providing native OpenGL drivers.
While not officially deprecated, it is standing still: it won't get anything targeting hardware newer than 2017, and not even new extensions are being made available.
The focus has already moved to other APIs (Vulkan and Metal), and the side effect of this will be that bitrot sets in, first in OpenGL debugging and profiling tools (older tools won't be maintained, new tools won't support GL), then in drivers.
OpenGL has already been deprecated on macOS and iOS for a couple of years. It still works (nowadays running as a layer on top of Metal), but when building GL code for macOS or iOS you're spammed with deprecation warnings (they can be turned off with a define, though).
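For reference, the define in question is GL_SILENCE_DEPRECATION (GLES_SILENCE_DEPRECATION for OpenGL ES on iOS), set before Apple's GL headers are included:

```cpp
// Silence Apple's OpenGL deprecation warnings on macOS.
// Must be defined before the OpenGL headers are included
// (or pass -DGL_SILENCE_DEPRECATION on the compiler command line).
#define GL_SILENCE_DEPRECATION
#include <OpenGL/gl3.h>   // core-profile OpenGL on macOS

// On iOS the equivalent is GLES_SILENCE_DEPRECATION before the OpenGL ES headers.
```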
WGPU is kind of supposed to solve this problem by providing a cross-platform API that's more user friendly than Vulkan. The problem with OpenGL is that it is too far from how GPUs actually work, and it's hard to get good performance out of it.
It is hard to get the absolute best performance out of OpenGL but it isn't really hard to get good performance. Unless you're trying to make some sort of seamless open world game with modern AAA level of visual fidelity or trying to do something very out of the ordinary, OpenGL is fine.
A bigger issue you may face is OpenGL driver bugs but AFAIK the main culprit here was AMD and a couple of years ago they improved their OpenGL driver to be much better.
Also, at this point OpenGL still has no hardware raytracing extension/API, so if you need that you need to use Vulkan (either just for the RT bits with OpenGL interop, or switching to it completely). My own 3D engine uses OpenGL and while the performance is perfectly fine, I'm considering switching to Vulkan at some point in the future to get raytracing support.
My understanding is that one of the primary reasons Vulkan was developed was because OpenGL was not a good model for GPUs, and supporting it prevented people from taking advantage of the hardware in many cases.
It's because Vulkan is designed for driver developers and (to a lesser degree) for middleware engine developers. As far as APIs go, it's pretty much awful. I was very pumped for Vulkan when it was initially announced, but seeing the monstrosity the committee has produced has cooled down my enthusiasm very quickly.
> One day, Apple will deprecate OpenGL 3.3 core, and I guess everybody might end up deprecating it.
And here I am, recalling all the games and programs that failed once OpenGL 2.0 was implemented because they required OpenGL 1.1 or 1.2 but just checked the minor version number... time flies!
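A sketch of that classic bug (hypothetical helper names, but the pattern was common): the string from glGetString(GL_VERSION) starts with "major.minor", and comparing only the minor number breaks the moment the major number rolls over:

```cpp
// Hypothetical illustration of the classic GL version-check bug.
#include <cstdio>
#include <glad/glad.h>   // any GL loader that exposes glGetString

bool supports_gl_1_2_buggy() {
    int major = 0, minor = 0;
    std::sscanf((const char*)glGetString(GL_VERSION), "%d.%d", &major, &minor);
    return minor >= 2;               // BUG: "2.0" reports minor == 0 and "fails"
}

bool supports_gl_1_2_fixed() {
    int major = 0, minor = 0;
    std::sscanf((const char*)glGetString(GL_VERSION), "%d.%d", &major, &minor);
    return major > 1 || (major == 1 && minor >= 2);   // compare (major, minor) as a pair
}
```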
> I've read that generally OpenGL is just easier to use than Vulkan.
OpenGL mostly only makes sense if you followed its progress from the late 90's and understand the reasons behind all the accumulated design warts, sediment layers and just plain weird design decisions. For newcomers, OpenGL is just one weirdness after another.
Unfortunately Vulkan seems to be on the same track, which makes me think the underlying problem is organisational, not technical: both APIs are led by Khronos, resulting in the same 'API design and maintenance philosophy'. Frankly, that approach to API design was OpenGL's main problem, not that it didn't map to modern GPU architectures (which could have been fixed with a different API design without throwing the baby out with the bathwater).
But on Mac, what matters more is how OpenGL compares to Metal, and the answer is much simpler: Metal both has a cleaner design and is easier to use than OpenGL.
> How do we break the 4.1 barrier? Without hardware support, new features need new tricks. Geometry shaders, tessellation, and transform feedback become compute shaders. Cull distance becomes a transformed interpolated value. Clip control becomes a vertex shader epilogue. The list goes on.
I wonder how much of this work is in M1 GPU code, versus how much of the feature-implemented-on-another-feature work could be reused by others.
This feels very similar to what Zink does (running complex OpenGL capabilities via a more primitive Vulkan feature set), except there is no Vulkan backend to target for the M1. Yet.
More generally, you could execute complex OpenGL or Vulkan on some more-or-less arbitrary combination of CPU soft-rendering and hardware-specific native acceleration support. It would just be a matter of doing the work, and it could be reused across a wide variety of hardware - including perhaps older hardware that may be quite well understood but not usable on its own for modern workloads.
> Regrettably, the M1 doesn’t map well to any graphics standard newer than OpenGL ES 3.1. While Vulkan makes some of these features optional, the missing features are required to layer DirectX and OpenGL on top. No existing solution on M1 gets past the OpenGL 4.1 feature set.
I'm very curious to know the performance impact of this, particularly compared to using Metal on macOS. (I'm sure the answer is "it depends", but still.)
It's possible the article answers this question, but I didn't understand most of it. :(
There isn't necessarily much difference between implementing features in driver compute code versus GPU hardware support. Even the "hardware support" is usually implemented in GPU microcode. It often goes through the same silicon. Any feature could hit a performance bottleneck and it's hard to know which feature will bottleneck until you try.
Alyssa chooses some very odd language here, it seems to me. Yes, Apple GPUs do not support geometry shaders natively, because geometry shaders are a bad design and do not map well to GPU hardware (they are known to be slow even on hardware that allegedly supports them; there is a reason Nvidia went ahead and designed mesh shading). Transform feedback (the ability to write transformed vertex data back to memory) is another feature that often comes up in these discussions, but Apple GPUs can write to arbitrary memory locations from any shader stage, which makes transform feedback entirely superfluous.
The core of the issue is that Apple chose to implement a streamlined compute architecture, and in the process they cut a lot of legacy cruft and things that were known not to work well. I don't think the rhetoric of "M1 getting stuck at OpenGL 4.1" is appropriate. I stopped following OpenGL many years ago, so I don't know specifically which features past 4.1 she might be referring to. What I can say is that I'd be very surprised if there is something OpenGL offers that cannot be done in Metal, while there are plenty of things possible in Metal that cannot be done at all in OpenGL (starting with the fact that the Metal shading language has fully featured pointers).
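To sketch what "fully featured pointers" means in practice (hypothetical kernel and struct names), a Metal Shading Language compute kernel, MSL being a C++14-based dialect, can take raw device pointers and do ordinary pointer arithmetic on them, which has no direct GLSL equivalent:

```cpp
// Metal Shading Language sketch: device pointers are first-class values.
#include <metal_stdlib>
using namespace metal;

struct Particle { float3 position; float3 velocity; };

kernel void integrate(device Particle* particles [[buffer(0)]],
                      constant float&  dt        [[buffer(1)]],
                      uint id [[thread_position_in_grid]])
{
    device Particle* p = particles + id;   // plain pointer arithmetic on GPU memory
    p->position += p->velocity * dt;       // dereference and member access, like C++
}
```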
I think Apple bans third party kernel drivers? To write a proper Vulkan or OpenGL implementation you need a kernel counterpart for handling the GPU if I understand correctly. That's probably the reason no one bothers implementing native Vulkan for macOS.
But whether it's doable with Apple's driver, I'm not sure.
Because it's a cross-platform framework that thousands of programs (including Mac exclusives) rely on? There's so much OpenGL-only software that it should be Apple's moral imperative to support it anyways. I don't think anyone can honestly say that the graphics situation on Mac has improved in the absence of cross-platform APIs. Even Apple admits it with the Game Porting Toolkit.
I find it very amusing that transitioning out of bounds accesses from traps to returning some random data is called “robustness”. Graphics programming certainly is weird.
It makes sense from the perspective of writing graphics drivers, and aligns with Postel's law (also called the robustness principle). GPU drivers are all about making broken applications run, or run faster. Making your GPU drivers strict by default won't fix the systemic problems with the video game industry shipping broken code, it'll just drive away all of your users.
And on hardware where branches are generally painfully expensive, it sounds really useful to have a flag to tell the system to quietly handle edge cases in whatever way is most efficient. I suspect there are a lot of valid use cases for such a mode where the programmer can be reasonably sure that those edge cases will have little or no impact on what the user ends up seeing in the final rendered frame.
The out-of-bounds accesses don't necessarily trap without the robustness checks, so the robustness is about delivering known results in those goofy cases. It makes sense when you combine that with the fact that GPUs are pretty averse to traps in general. Carmack remarked once that it was a pain to get manufacturers interested in the idea of virtual memory when he was designing megatexture.
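In Vulkan terms this behaviour is literally an opt-in device feature. A sketch of requesting it at device creation (hypothetical function name; assumes the physical device reports support):

```cpp
// Sketch: opting into robust (bounds-checked, defined-result) buffer access in Vulkan.
#include <vulkan/vulkan.h>

VkDevice create_device_with_robustness(VkPhysicalDevice physicalDevice,
                                       uint32_t queueFamilyIndex) {
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo{};
    queueInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = queueFamilyIndex;
    queueInfo.queueCount       = 1;
    queueInfo.pQueuePriorities = &priority;

    // With robustBufferAccess enabled, out-of-bounds buffer reads return defined
    // results (e.g. zeros) instead of being undefined behaviour or faulting.
    VkPhysicalDeviceFeatures features{};
    features.robustBufferAccess = VK_TRUE;

    VkDeviceCreateInfo deviceInfo{};
    deviceInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos    = &queueInfo;
    deviceInfo.pEnabledFeatures     = &features;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device);
    return device;
}
```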
This is obviously very exciting, but—why not target Vulkan first? It seems like the more salient target these days and one on top of which we already have an OpenGL implementation.
OpenGL-on-Vulkan compat layers aren't magic. For them to support a given OpenGL feature, an equivalent feature must be supported by the Vulkan driver (often as an extension). That means you can't just implement a baseline Vulkan driver and get OpenGL 4.6 support for free; you must put in the work to implement all the OpenGL 4.6 features in your Vulkan driver if you want Mesa to translate OpenGL 4.6 to Vulkan for you.
Plus, this isn't Alyssa's first reverse engineering + OpenGL driver project. I don't know the details but I'd imagine it's much easier and quicker to implement a driver for an API you're used to making drivers for, than to implement a driver for an API you aren't.
They started with targeting older OpenGL to get a basic feature set working first. I guess from there, getting up to a more recent OpenGL was less work than doing a complete Vulkan implementation, and they probably learned a lot about what they'll need to do for Vulkan.
I thought something similar, but from their comments, to support OpenGL over Vulkan you need higher versions of Vulkan anyway and it's still a big effort. So they decided to go with (lower versions of) OpenGL first to get something functional sooner.
Just thanks!
[here's](https://learnopengl.com/code_viewer_gh.php?code=src/1.gettin...) an OpenGL triangle rendering example (~200 LOC)
[here's](https://vulkan-tutorial.com/code/17_swap_chain_recreation.cp...) a Vulkan triangle rendering example (~1000 LOC)
Yeah, it's fair to say OpenGL is a bit easier to use.
Whether OpenGL will actually stop working anytime soon is a different question, but it is not a supported API.
The original Mesa drivers for the M1 GPU were bootstrapped by doing exactly that: sending command buffers to Apple's AGX driver in macOS using IOKit.
https://rosenzweig.io/blog/asahi-gpu-part-2.html
https://github.com/AsahiLinux/gpu/blob/main/demo/iokit.c
So you'd need a bit more glue in Mesa to get the surfaces from the GPU into something you can composite onto the screen in macOS.
In domains where a "performance trumps safety" culture reigns, talking about other programming languages is like talking to a wall.