I’ll probably get downvoted for saying this . . .
Ever since using JWTs became a trend, I’ve found that almost every time I ask an engineer (or team) why they picked JWTs over old, boring, battle-tested sessions for a web app, I can’t get a useful answer. It seems that, just as with React, GraphQL, etc., a lot of the industry just loves jumping on bandwagons. I see so many companies adopting the new and shiny thing (or the thing attached to a big name) rather than the best tool for the job. Unless I encounter a specific use case that would be best served by JWTs, I’ll stick with the “old” Redis sessions model.
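For what it’s worth, here is a minimal sketch of what that “old” server-side session model looks like, with an in-memory dict standing in for Redis (all names here are hypothetical, not from any particular framework):

```python
# Sketch of opaque server-side sessions, the model the comment prefers over JWTs.
# A plain dict stands in for Redis; a real deployment would use redis-py with TTLs.
import secrets

SESSIONS = {}  # stand-in for Redis

def create_session(user_id: str) -> str:
    """Issue an opaque random token and store the session server-side."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user_id": user_id}
    return token

def get_user(token: str):
    """Look up the session; returns None if the token is unknown or revoked."""
    session = SESSIONS.get(token)
    return session["user_id"] if session else None

def revoke(token: str) -> None:
    """Instant server-side revocation, which stateless JWTs can't do cheaply."""
    SESSIONS.pop(token, None)
```

With real Redis you'd store the session under the token with `SETEX` so it expires automatically; the point is that the token itself carries no claims, so revoking it is a single delete.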
I guess you’re not a real engineer nowadays if you can’t say that your new app uses insert buzzword or trendy technology here . . .
OpenAI's Triton compiles down to CUDA at the moment (if I read their GitHub right), and only supports Nvidia GPUs.
PyTorch 2.0's installation page only mentions CPU and CUDA targets, so in practice that means Nvidia GPUs.
While all the frameworks and abstractions could, in theory, offer other back-ends, the story for anything ML-related on the other big name in the industry, AMD, is still poor.
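You can see how back-end-dependent this all is by probing what a given PyTorch build actually supports. A small sketch (assumes the `torch` package; the function degrades gracefully if it isn't installed, and `available_backends` is just a name I made up):

```python
# Sketch: probe which accelerator back-ends this PyTorch install can actually use.
import importlib.util

def available_backends():
    """Return a list of back-ends usable in this environment."""
    if importlib.util.find_spec("torch") is None:
        return ["torch not installed"]
    import torch
    backends = ["cpu"]  # CPU is always available
    if torch.cuda.is_available():
        backends.append("cuda")  # Nvidia CUDA (or AMD ROCm builds, which reuse this flag)
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        backends.append("mps")   # Apple Silicon Metal back-end
    return backends

print(available_backends())
```

On a stock pip install from the page mentioned above, you'd typically only ever see `cpu` and (with an Nvidia GPU) `cuda`, which is the point being made.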
If anybody is losing business because of bad decisions, it is AMD, not Nvidia, which leads the whole industry. I am not convinced anything will change in the near future.
Apple has done more to get Apple Silicon supported in the ML space in almost no time than AMD did over the past decade. Apple obviously has more resources, but I suspect they don't have a 1,000-person team working on this; instead it's a lean, commando-style operation. They committed a small number of dedicated people to building support and enabling the community, and it happened.