JonChesterfield · 2 years ago
I'm confused by this. It's open source, it's been forked. The code doesn't disappear because you delete one copy of it.

This move primarily makes AMD legal look frightened of Nvidia, which seems a bad thing to put out there on every axis.

RIMR · 2 years ago
Grab Version 3 source code from the releases section while you still can!

https://github.com/vosen/ZLUDA/releases

caeruleus · 2 years ago
... or mirror a recently updated fork. :)

https://github.com/vosen/ZLUDA/forks?include=active&page=1&p...

It seems the last commit to master was 9e56862.

Kim_Bruning · 2 years ago
I don't quite get the strategy at AMD here. This would have allowed them to compete directly with NVIDIA.
arghwhat · 2 years ago
And by none other than AMD themselves.
iforgotpassword · 2 years ago
Disappointing. Nvidia wouldn't have surprised me at all, but AMD seems to be turning more and more into this. Inevitable if you're successful enough?
0x_rs · 2 years ago
Not the first time this has happened. Another drop-in replacement for CUDA was also stopped in its tracks by them last decade.
arghwhat · 2 years ago
The surprise here is that AMD was sponsoring it.
api · 2 years ago
I wonder if we could train an LLM to port CUDA to Metal, OpenCL, etc?

Seems like the lock-in here actually isn’t that powerful. Fundamentally it’s math implemented in a C-like language.

Lockal · 2 years ago
This has already been done multiple times without using LLMs. For HIP, there are tools like hipify-clang, hipify-perl, and hipify (the Python-based tool in PyTorch). For SYCL, there is SYCLomatic.

The devil is in the details, though; at some point, all projects encounter non-portable code due to different instruction sets. For example, if the hardware does not support Warpgroup Level Multiply-and-Accumulate or a specific minifloat format, it is actively harmful to translate the code 'as is.' These platforms require software redesign, which is not something that LLMs are currently capable of handling.
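
To illustrate what those hipify tools actually do (a hypothetical minimal example, not output from any specific tool): the mechanical part of the translation is largely renaming the API surface, while the kernel body usually survives untouched.

```cuda
// CUDA source
#include <cuda_runtime.h>

__global__ void axpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// After hipify, roughly:
//   #include <hip/hip_runtime.h>
//   cudaMalloc  -> hipMalloc
//   cudaMemcpy  -> hipMemcpy
//   kernel<<<grid, block>>>(...) is kept (HIP supports the triple-chevron
//   launch syntax) or rewritten as hipLaunchKernelGGL(...)
// The __global__ kernel body itself is typically unchanged.
```

It's exactly the non-mechanical remainder, warp-size assumptions, tensor-core intrinsics, unsupported float formats, where the redesign work the parent comment describes begins.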

WithinReason · 2 years ago
You would need to translate CUDA already compiled for an Nvidia GPU into e.g. OpenCL that runs fast on an AMD GPU, which is close to AGI-level in difficulty.
pjmlp · 2 years ago
People keep forgetting CUDA is polyglot, and then there is the graphical tooling, IDE integration, and libraries ecosystem.
arghwhat · 2 years ago
The lock-in is mostly in libraries and tooling. It's not really your CUDA code that is the issue.
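
As a concrete illustration of the library point (a sketch, assuming the standard cuBLAS and hipBLAS APIs): porting a single BLAS call is trivial precisely because AMD mirrors the interface, but every CUDA-only library needs a mirrored equivalent of its own (e.g. MIOpen standing in for cuDNN).

```cuda
// cuBLAS call...
cublasHandle_t h;
cublasCreate(&h);
cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
            &alpha, A, lda, B, ldb, &beta, C, ldc);

// ...and its hipBLAS counterpart, near-identical by design:
//   hipblasCreate(&h);
//   hipblasSgemm(h, HIPBLAS_OP_N, HIPBLAS_OP_N, m, n, k,
//                &alpha, A, lda, B, ldb, &beta, C, ldc);
```

The moat is the breadth of that library and tooling surface, not the kernel language.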
ta8645 · 2 years ago
AMD makes it really hard to cheer for them, even with such a desperate need for competition to keep nVidia honest.
dpoljak · 2 years ago
I can understand AMD not actively working on this solution, since it gains them little while costing a hefty amount of developer time, especially considering that nVidia will be able to arbitrarily break this implementation with every update. What I don't understand is why they would take it down completely. Is there some safety concern, or are we in the 11-competing-standards xkcd again?
bryanlarsen · 2 years ago
zluda is a thin wrapper around ROCm. AMD is investing heavily into ROCm.
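
The "thin wrapper" idea can be sketched like this (a simplified illustration of the approach, not ZLUDA's actual code, which is written in Rust): expose the CUDA driver API symbols and forward each call to the corresponding HIP/ROCm routine.

```cuda
// Sketch only: a shim library that a CUDA application loads in place
// of libcuda, forwarding allocations to the HIP runtime underneath.
#include <hip/hip_runtime.h>

extern "C" int cuMemAlloc_v2(void **dptr, size_t bytesize) {
    // hipError_t success/failure codes line up closely enough with CUDA
    // result codes that a thin shim can mostly just forward the call.
    return (int)hipMalloc(dptr, bytesize);
}
```

So with ROCm itself getting the heavy investment, the wrapper layer is comparatively small, which makes dropping it cheap, if still puzzling, as the comments above note.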
