They're the ones writing most of the open source AI code.
Most AI developers don't actually use CUDA directly; instead, they use libraries like PyTorch that call CUDA under the hood to run tasks on the GPU in parallel.
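To make that concrete, here's a minimal sketch of what that abstraction looks like in practice: the user writes device-agnostic PyTorch code, and the library decides whether to dispatch to CUDA kernels or CPU routines. (The tensor sizes and the matmul are just illustrative.)

```python
import torch

# Pick the GPU if one is visible, otherwise fall back to CPU.
# The user never writes a line of CUDA either way.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)

# The same line dispatches to a cuBLAS kernel on "cuda"
# and to a CPU BLAS routine on "cpu".
c = a @ b
print(c.shape, c.device.type)
```

This is why "replace CUDA" really means "get every framework's backend to support your hardware": the dependency is buried inside the libraries, not in application code.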
CUDA is pretty much the standard and is supported anywhere it's relevant.
Just creating an alternative is pretty meaningless if it isn't actually supported anywhere.
Adding support isn't easy, and there are also stability issues, bugs, etc. People want something that works and is reliable (= CUDA, since it's battle-tested).
That's the same flawed argument that people have used to expect Huawei to replace Android/EUV.
I'm not saying whether retaining an edge is good or bad, nor that I'd have a different answer if one assumed it was good. Just curious what you guys think.