klft · 3 years ago
> Using Microsoft Olive and DirectML instead of the PyTorch pathway results in the AMD 7900 XTX going from a measly 1.87 iterations per second to 18.59 iterations per second!

So the headline should be Microsoft Olive vs. PyTorch and not AMD vs. Nvidia.
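For context, a minimal sketch of the two pathways being compared, assuming an SD 1.5 checkpoint and an already-Olive-optimized ONNX export (the model id and the "./sd15-olive" path are placeholders):

    import torch
    from diffusers import StableDiffusionPipeline
    from optimum.onnxruntime import ORTStableDiffusionPipeline

    # Pathway 1: the stock PyTorch pipeline (the slow baseline above).
    pt_pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")  # ROCm builds of PyTorch also expose the device as "cuda"

    # Pathway 2: an Olive-optimized ONNX export run through onnxruntime's
    # DirectML execution provider.
    ort_pipe = ORTStableDiffusionPipeline.from_pretrained(
        "./sd15-olive", provider="DmlExecutionProvider"
    )

    prompt = "a photo of an astronaut riding a horse"
    image = ort_pipe(prompt, num_inference_steps=30).images[0]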

mananaysiempre · 3 years ago
The results of the usual benchmarks are inconclusive between the 7900 XTX and the 4080, Nvidia is only somewhat more expensive, yet CUDA is much more popular than anything AMD is allowed to support. So I’d say this makes sense as an AMD vs Nvidia comparison as well.
dangus · 3 years ago
The existence of the 4090 is another issue.

I’m not sure which customer willing to spend $1000-1200 to do ML workflows isn’t willing to spend $1600 to get another 20%+ of performance and have the fastest card available.

I’m not saying people have unlimited budgets but it just seems like the choice most people in that price range would make.

ineedasername · 3 years ago
If it’s completely down to Olive and DirectML, then Nvidia should be able to use them for similar performance improvements. If not, then AMD is still a defining factor. A quick search didn’t bring up anything definitive on the question, though, so I guess we’ll have to wait for someone to try it out (or someone with faster Google-fu than me).
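For what it's worth, onnxruntime picks the execution provider at session creation, so the same Olive-optimized model should in principle be loadable on either vendor; a sketch ("model.onnx" is a placeholder path):

    import onnxruntime as ort

    # onnxruntime falls through this list to the first provider that is
    # actually available, so one Olive-optimized model can in principle
    # be benchmarked on Nvidia (CUDA) or via DirectML.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["CUDAExecutionProvider", "DmlExecutionProvider",
                   "CPUExecutionProvider"],
    )
    print(session.get_providers())  # shows which provider was actually chosen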
DarkmSparks · 3 years ago
Been watching this quite closely. As far as I can tell, the 7900XTX is the first (and only) desktop GPU from AMD that _might_ be worth buying. (They own the console gaming space, but that's a different story.)

Not Nvidia-beating, due to the CUDA issue, but a massive leap in the right direction.

Intel is also making _some_ progress with its Arc range.

It's going to be happy days for us users if/when AMD/Intel are competitive and cut some of that monopoly margin off Nvidia's pricing, but there's a way to go yet.

EtienneK · 3 years ago
This is not true at all. AMD GPUs have been consistently delivering better bang for the buck for a while now.

Edit: of course I am talking about gaming here since you mentioned consoles

PeterStuer · 3 years ago
For mid-tier gaming they are very competitive, but for consumer AI they were not even a player until very recently and still marginal at best.
anshukg · 3 years ago
I don't know; I think it will be a long time before Nvidia has a serious competitor on value: https://medium.com/@1kg/nvidias-cuda-monopoly-6446f4ef7375
Iulioh · 3 years ago
Eh.

For me the problem is technology and not raw performance.

DLSS and RT are massive for someone with a 4K screen, but right now there's no 4K gaming hardware outside of League of Legends lol

ekianjo · 3 years ago
Gaming is only a very small part of what you expect from GPUs these days
throwaway2990 · 3 years ago
> the 7900XTX is the first (and only) desktop GPU from AMD that _might_ be worth buying.

Unless you need RT for gaming, most of AMD's cards are better value.

dralley · 3 years ago
And if you use Linux and don't need CUDA, it's really no contest. The AMD experience on Linux is vastly better than the Nvidia one.
nixass · 3 years ago
> Unless you need RT for gaming then most of AMDs cards are better value.

Nobody "needs" RT for gaming. It's still in gimmicky phase and not worth neither performance hit nor the way it looks on screen.

zouhair · 3 years ago
I have a 6700 XT and I have extremely good times with it. I'm not gonna spend $1300 CAD on a card, that's insane.
pxmpxm · 3 years ago
> extremely good times with it

doing?

WanderPanda · 3 years ago
For us users it has always been relatively fine; datacenters are the ones that are really milked by Nvidia.
brucethemoose2 · 3 years ago
Well the problem is that Automatic1111 is not fast...

Other diffusers-based UIs with PyTorch Triton will net you 40%+ more performance.

Facebook AITemplate inference in VoltaML will be at least twice as fast as A1111 on a 3080, with support for LoRAs, ControlNet and such. It supports AMD Instinct cards too.

What I'm getting at is that people don't really care about A1111 performance on a 3080 because, for the most part, it's fast enough.
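The usual way diffusers-based UIs pick up that Triton speedup is torch.compile on the UNet; a minimal sketch, assuming a stock SD 1.5 checkpoint:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # torch.compile lowers the UNet to fused Triton kernels. The first
    # call pays the compilation cost; later calls get the speedup.
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

    image = pipe("a misty forest at dawn", num_inference_steps=30).images[0]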

kristopolous · 3 years ago
The extension ecosystem is what makes 1111 the winner for now. SegmentAnything, DreamBooth, ControlNet, OpenPose ... It's almost easy
brucethemoose2 · 3 years ago
SegmentAnything is the big one missing from other UIs, but IMO most of the other extensions are pretty niche, especially with how hackable diffusers is compared to the A1111/Comfy SAI backend.
lelandbatey · 3 years ago
The comments point out that AMD's strong showing in the table required the use of Microsoft Olive, and someone in the article comments implies that if you use Microsoft Olive with Nvidia instead of PyTorch with Nvidia, you'll see Nvidia jump in performance as well, largely rendering AMD's supposed leap irrelevant. Is that true? Can folks chime in?
Havoc · 3 years ago
Nearly bought one thinking AMD would sort itself out shortly, but it's hard to justify vs. a second-hand 3090 with 24GB and no CUDA hassles.
xigency · 3 years ago
It makes a lot of sense to invest in a 24GB card for the right price.
doctorpangloss · 3 years ago
I'm still basking in my good fortune buying a hundred 3090s from crypto miners at rock bottom prices.
cschmid · 3 years ago
Can I also interpret this as: 'AMD's PyTorch support is so abysmal that inference is 10x slower than it should be'?
croes · 3 years ago
Should it not say PyTorch's AMD support?
dannyw · 3 years ago
It takes two to tango. AMD is always welcome to contribute patches.

You also have to keep in mind that some latest-gen AMD GPUs don't even officially support ROCm on Linux. That's absurd.

AMD has a choice to invest more staff into ML support; they're choosing not to.

delusional · 3 years ago
I've been running PyTorch and ROCm (5.6 has support for gfx1100 if you compile it yourself) for at least 3 months at 18 it/s on a 7900 XTX. This has been possible for quite a while.

Could someone fill me in on what's actually new here, other than the specific technology used?
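For anyone wanting to verify a setup like that: ROCm builds of PyTorch reuse the CUDA API names, so a quick sanity check looks like this (the printed values are illustrative):

    import torch

    # A ROCm build of PyTorch reports the HIP version here (None on CUDA builds).
    print(torch.version.hip)              # e.g. "5.6.x"
    print(torch.cuda.is_available())      # True if the card is visible
    print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 7900 XTX"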

smoldesu · 3 years ago
Wait, why are they comparing Microsoft Olive on AMD to PyTorch on Nvidia? Nvidia supposedly shipped support for Olive recently, so there should be no problem getting a head-to-head comparison: https://www.tomshardware.com/news/nvidia-geforce-driver-prom...

This is a very strange comparison.

dragonwriter · 3 years ago
> Nvidia supposedly shipped support for Olive recently

I mean, they announced it with a more than 2x speedup for SD in May:

https://blogs.nvidia.com/blog/2023/05/23/microsoft-build-nvi...

lostmsu · 3 years ago
My understanding is that Olive is a compressor, so comparing Olive results to a raw model is invalid.
asu_thomas · 3 years ago
A head-to-head comparison would render less ad views.