mastax · 5 years ago
Rumors were that AMD wanted to push the performance and price so they weren't stuck being the discount option forever. This looks like it'll do that. Performance and power efficiency look good, and they implemented most of the Nvidia early adopter features. If they want to really compete with Nvidia as a premium option they'll need to match the non-performance features that Nvidia has:

- Driver stability. Nvidia is not perfect but the 5700 series was awful from AMD (had to return mine). They need to at least match Nvidia.

- DLSS. It started out as a gimmick but with DLSS 2.0 it's just 70%+ free performance improvement with the only downside being somewhat limited game support.

- Video Encoder. Nvidia's encoder has very high quality and is well supported by streaming and recording software. I wonder what sort of improvements the RX 6000 series has?

- CUDA, Tensor Cores, Ansel, etc. I don't really use these things but if I'm paying the same amount I want similar capabilities.

It's kinda crazy that they're using the same silicon in all three cards; the 6800 has 1/4 of the CUs disabled! It is a big chip for 7nm, though. AMD has said that availability won't be an issue (which could give them a leg up on Nvidia in the near term), but I do have to wonder about it.

The 80 CU 6900 XT was rumored months ago, and there was a lot of speculation and incredulity about the memory system needed to feed all those cores. GDDR6X is Nvidia-exclusive, so some predicted a 512-bit bus, or HBM2, or even a combination of HBM and GDDR6. The big cache is an interesting solution and I'm curious how it'll perform. It makes sense in a gaming context where a lot of buffers and textures get reused over the course of a frame. I'm a bit worried about tail latency affecting the 99th-percentile frame times, which have a huge impact on the subjective experience of smoothness. A cache-dependent design is also likely less of a win for massively parallel compute work.
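
As a rough sketch of why the cache could work, here is the bandwidth-amplification argument in back-of-the-envelope form. The 512 GB/s figure assumes a 256-bit GDDR6 bus at 16 Gbps; the hit rates are hypothetical illustration, not AMD's numbers:

    // Back-of-the-envelope bandwidth amplification from a large on-die cache.
    // Illustrative only: assumes a 256-bit GDDR6 bus at 16 Gbps; hit rates are made up.
    #include <cstdio>

    int main() {
        const double dram_gbps = 256.0 / 8.0 * 16.0;  // 256-bit bus * 16 Gbps = 512 GB/s
        for (double hit_rate : {0.0, 0.25, 0.5, 0.75}) {
            // If a fraction hit_rate of requests is served from cache, only
            // (1 - hit_rate) of the traffic reaches DRAM, so achievable effective
            // bandwidth scales by 1 / (1 - hit_rate) -- until the cache itself
            // (or tail latency on misses) becomes the limit.
            double effective = dram_gbps / (1.0 - hit_rate);
            printf("hit rate %.0f%% -> ~%.0f GB/s effective\n",
                   hit_rate * 100.0, effective);
        }
        return 0;
    }

The flip side of that arithmetic is exactly the tail-latency worry: every miss still pays full GDDR6 latency, so workloads with poor reuse see little benefit.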

PaulKeeble · 5 years ago
All of that bothers me too, but I have two further concerns based on the types of games I play. The first is Minecraft: AMD's OpenGL drivers are atrocious and effectively abandoned, and Minecraft, one of the most played games on the planet, regularly fails to reach 60 fps on AMD cards where it can hit 500+ on Nvidia. The second is virtual reality: AMD has lagged behind in VR technology, and doing things smarter, with support for foveated rendering and multi-pass optimisations, is really important for getting hard-to-run games playable.

Raw performance looks fine; it's the lack of news on features, and the zero acknowledgement that people play games other than the latest AAA titles that get benchmarked, which need to run well too. Not to mention all the bugs: if I buy an AMD card I can guarantee, based on history, that something weird will happen over the life of the card. It always has for the past 20+ years; I haven't yet had a clean experience with an ATI/AMD card, and I've owned a bunch of them.

What I really need AMD to say and do is acknowledge how important a good customer experience with their driver stack is, that every game and API matters, and that they understand customer experience is reality and are committing those problems to history for good. Then they can sell me a high-priced card; until they do, it has to be a lot cheaper to be worth the risk.

snvzz · 5 years ago
>The first is Minecraft, AMD has atrocious abandoned OpenGL drivers

Try that under Linux sometime. It's a different story there, thanks to Mesa.

>OpenGL

It is not the most popular API, and AMD had, until recently, limited economic resources, which they mostly directed towards their CPU team and towards hardware. For their proprietary Windows efforts, they needed laser focus on what matters most: the newest and most popular APIs.

There's money these days, and they have multiplied the size of their driver team, but money spent on OpenGL would indeed be wasted, as there's a pretty compelling solution for this API from the Mesa folks in the form of an OpenGL implementation that runs on top of Vulkan: Zink.

I expect future GPUs and drivers (for all platforms) to not bother with OpenGL at all and simply ship Zink instead.

kllrnohj · 5 years ago
> The first is Minecraft, AMD has atrocious abandoned OpenGL drivers and Minecraft regularly fails to reach 60 fps on AMD cards where it can get to 500+ on Nvidia, its one of the most played games on the planet.

I'm really not finding anything that supports this claim; can you substantiate it? I see scattered sub-60fps reports in Minecraft for both Nvidia & AMD. It seems to just be a "Minecraft's engine is shit" thing, not an AMD thing?

But I'm not seeing any actual comparisons or head-to-heads or anything showing that AMD in particular is struggling with Minecraft in particular. This benchmark of unknown quality is the only thing I'm finding for Minecraft: https://www.gpucheck.com/game-gpu/minecraft/amd-radeon-rx-57... and it's definitely not struggling to hit 500 fps numbers at 1080p, either.

And other OpenGL games & apps are not showing any severe AMD issues, although the only significant sample of those is on Linux, which is also a different OS. So there certainly doesn't appear to be anything systemically wrong with AMD's OpenGL as you're claiming. Unless for some reason they just never brought the much better Linux OpenGL driver over to Windows, which would be highly unexpected & definitely something that needs evidence to support.

> zero acknowledgement that people play games other than the latest AAA games

This is just completely false, and seems to be because your pet peeve is Minecraft in particular? Both Nvidia & AMD discuss popular games, not just the latest AAA games. It's usually in their latency section, since that's where it comes up as most relevant (esports games), but it's definitely not an entirely ignored category.

zamadatix · 5 years ago
I'm not really worried about Windows OpenGL performance; even the new Minecraft edition (which performs enormously better all around) uses DirectX. Browsers use ANGLE to convert GL to DirectX for a reason: it's a crap API on Windows all around (not that it had to be, but that's how it is).

The driver bugs are what I'm most worried about, though. I remember when I got a 4670 in 2008 there was a cursor corruption bug (google it, it's common), and if you didn't shake the mouse enough for it to fix itself it'd crash. Then I built a new PC with a 5770 and still had it. Later a new PC with a 6950: still had it. Then 2x 280X: still had it. It even occurred on Linux once. Then I went Nvidia for a few generations, and based on internet searches it seems AMD finally fixed it (maybe because of new hardware), but from 2008 to 2015 the bug was there. And that's just one example.

Not that Nvidia drivers have ever been perfect but I never ran into this kind of shit so often for so long on their cards. Hopefully I don't end up regretting the 6900 XT because of it.

officeplant · 5 years ago
Obviously with the range of settings and mods out there I'm not sure of everyone's Minecraft experience on AMD.

But my simple 3400G based rig using the integrated Vega11 plays stock minecraft just fine @ 1440p and 20 chunk viewing distance with everything else set to max.

dmayle · 5 years ago
I understand that Mesa (with the Zink OpenGL on top of Vulkan implementation) can be compiled for Windows. If you care that much, maybe give it a shot.
TwoBit · 5 years ago
I'm sorry, but OpenGL is dead on Windows. There are exceedingly few apps written against OpenGL, and if you write such an app then don't be surprised by the result. Vulkan is the portable solution, of course.
pojntfx · 5 years ago
> Driver Stability

Ironically, the exact opposite is the reason I use AMD on Linux. It "just works" without installing anything and everything is super stable, while Nvidia is a huge proprietary mess.

peatmoss · 5 years ago
I replaced an NVidia 1060 with a 5700XT and also feel it’s been a major improvement on Linux. Ubuntu mostly handles the NVidia driver installation and upgrades, but only mostly. Having the card supported in the kernel is excellent.

That, and Proton in Steam have made it possible to play some old favorite games as well as new titles with impressive performance.

I will say that real-time ray tracing has me at least considering one of those new cards...

M277 · 5 years ago
Am I the only one who notices a clear difference between DLSS 2 and the native image? I get that "native" is also flawed due to TAA, but DLSS still has some work to do with regard to image clarity, imho.
PaulKeeble · 5 years ago
It is one of those technologies whose effect is very well hidden behind YouTube's compression. Given how aggressive compression is on YouTube, the gap between what it looks like in front of you versus on YouTube has widened quite a bit. DLSS seems to be indistinguishable on YouTube, but in front of you on a monitor it clearly introduces some odd artifacts and doesn't look native. The gain in performance is usually worth it, but it's definitely not just magic free performance with no impact on image quality.
dragontamer · 5 years ago
I can see the visual downgrade.

IMO, VRS (variable rate shading), which is supported by all major GPU vendors (Nvidia, AMD, and even Intel iGPUs / Xe), provides the "upscaling-ish" performance gains that we want for the next generation.

It's much harder to see the difference between native and VRS.

https://devblogs.microsoft.com/directx/variable-rate-shading...

-----

It's not a "competitive" feature, because everyone is doing it. But it basically accomplishes the same thing: carefully shading at a coarser rate (one result per 2x2 block instead of per pixel) in locations where the gamer probably won't notice.
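
For reference, this is roughly what per-draw (Tier 1) VRS looks like through the D3D12 API. It's a fragment rather than a runnable sample: cmdList is assumed to be an existing ID3D12GraphicsCommandList5 on hardware reporting Tier 1 or better, and the Draw* helpers are hypothetical:

    // Fragment only -- assumes an existing ID3D12GraphicsCommandList5* cmdList
    // and a device reporting D3D12_VARIABLE_SHADING_RATE_TIER_1 or better.
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
    };

    // Shade once per 2x2 pixel block for content the player is unlikely to
    // scrutinize (distant or fast-moving geometry)...
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);
    DrawBackground(cmdList);  // hypothetical helper

    // ...then back to full per-pixel rate for detail-sensitive draws.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    DrawCharacters(cmdList);  // hypothetical helper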

gambiting · 5 years ago
For me personally, in Control, 720p upscaled with DLSS to 1080p looks better than the native 1080p presentation. I say that with my hand on my heart: the DLSS image is cleaner, the lines are sharper, and the texture filtering is just vastly superior. I have no idea how they are doing it; it seems like pure black magic. The traditional logic of "upscaled can never look as good as native" has been completely thrown out of the window - it doesn't just look as good, it looks better in my opinion.
izacus · 5 years ago
Well of course you'll notice, but do you notice it more than if you have to drop to a non-native resolution because your card keeps falling below your screen's refresh rate?

Because that's where the difference gets important. If I compare to my PS4 Pro, many games "look" better because they drive the UI at full native resolution while using checkerboard rendering to keep the framerate up. The same game on my 970GTX will chug, drop frames or have all the text and UI look blurry because I need to drop the resolution.

If DLSS 2.0 fixes this issue, it's a massive improvement.

dathinab · 5 years ago
What I had been wondering about is: what if you put DLSS-like technology into the screen itself?
m-p-3 · 5 years ago
> CUDA, Tensor Cores, Ansel, etc. I don't really use these things but if I'm paying the same amount I want similar capabilities.

CUDA might be a bit tough. It's hard for an equivalent to catch on if everyone is doing CUDA. It's the chicken-and-egg problem. There's ROCm* if that matters.

*: https://github.com/RadeonOpenCompute/ROCm
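
For what it's worth, ROCm's answer to CUDA's programming model is HIP, which deliberately mirrors CUDA syntax so existing kernels can be ported mostly mechanically. A minimal sketch, assuming a working ROCm install and compiling with hipcc; error checking omitted:

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // SAXPY kernel -- note the CUDA-style __global__ / threadIdx / blockIdx.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
        float *dx, *dy;
        hipMalloc((void**)&dx, n * sizeof(float));
        hipMalloc((void**)&dy, n * sizeof(float));
        hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);  // CUDA-style launch
        hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);  // expect 4.0
        hipFree(dx);
        hipFree(dy);
        return 0;
    }

Renaming cuda* calls to hip* is essentially what the hipify tools automate; as the replies below note, the hard part is the hardware support matrix, not the syntax.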

ZekeSulastin · 5 years ago
I thought ROCm didn't support RDNA1 yet, let alone RDNA2.
dathinab · 5 years ago
Honestly, at least for now, for the desktop market it doesn't matter that much.

Software devs might consider it as a factor because they might want to play around with it, but for everyone else it is an irrelevant feature.

RavlaAlvar · 5 years ago
They are 10 years behind; I doubt they will ever be able to catch up.
Kaze404 · 5 years ago
> - Driver stability. Nvidia is not perfect but the 5700 series was awful from AMD (had to return mine). They need to at least match Nvidia.

Seriously. I have the RX 5700 XT and at least once a week the driver crashes with the same error on `dmesg`, with the only solution being to reboot the machine because the 5700 XT for some ungodly reason doesn't support software reset. I love the card and I feel like I got what I paid for in terms of performance, but the driver instability is absurd.


alasdair_ · 5 years ago
>- DLSS. It started out as a gimmick but with DLSS 2.0 it's just 70%+ free performance improvement with the only downside being somewhat limited game support.

Watch Dogs Legion comes out tomorrow and I've been benchmarking it today, as have many others in the subreddit. DLSS is sometimes improving things but is also quite often leading to worse performance, especially if RTX is set to ultra. I have no idea if the issue is game specific or not but I'd be curious to know which games use it well.

ulber · 5 years ago
4K DLSS takes something like 1.5 ms to compute on a 2080 Ti IIRC, so the drop in render time has to be at least that to give any improvement. So it's quite situational and doesn't help when, for example, the frame rate is already high. Ray tracing at high settings would be expected to be a situation where it does help, though, so something might just be broken.
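
To make the break-even arithmetic concrete (the ~1.5 ms figure is the one quoted above; the render times are made-up illustrative numbers):

    #include <cstdio>

    int main() {
        const double upscale_ms = 1.5;   // fixed per-frame upscale cost (figure quoted above)
        const double native_ms  = 16.7;  // hypothetical 4K native render time (~60 fps)
        const double lowres_ms  = 9.0;   // hypothetical lower-resolution render time

        const double dlss_ms = lowres_ms + upscale_ms;
        printf("native: %.1f ms (%.0f fps), DLSS: %.1f ms (%.0f fps)\n",
               native_ms, 1000.0 / native_ms, dlss_ms, 1000.0 / dlss_ms);
        // DLSS only wins while (native_ms - lowres_ms) > upscale_ms; at already
        // high frame rates the saving shrinks below the fixed ~1.5 ms and the
        // net effect can flip negative.
        return 0;
    }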
zionic · 5 years ago
That _has_ to be game specific. Every other game sees major wins with DLSS2.0+
p1necone · 5 years ago
Given the framerates they're showing for games running at 4K without a DLSS equivalent, I'm not really sure it's necessary yet. Maybe when stuff comes out that really pushes the hardware, though (and when they announce lower-end cards).
rocky1138 · 5 years ago
Virtual reality is pushing the boundaries of performance.
snvzz · 5 years ago
And even then, higher performance would mean they can source their upscale from a higher resolution.
sundvor · 5 years ago
Speaking of features: Do the new AMD GPUs feature SMP, Simultaneous Multi-Projection?

Use case: iRacing on triple screens; it offloads the CPU in a CPU-limited title.

sundvor · 5 years ago
Looks like it's a big no, and the same for VR features as far as I can see. That takes AMD out of contention for me (iRacing is just about the only thing I play), but thankfully it is applying significant price and feature pressure to the market. I'll jump on a 3080 Ti when it is out and readily available. No rush.

Meanwhile I'm loving my 3900XT Ryzen CPU.

TheFlyingFish · 5 years ago
The announcement briefly mentioned that they are working on a "super resolution" feature, but specifically what that is has so far been left as an exercise for the viewer. It sounds like it might be a competitor for DLSS, but only time will tell.
PolCPP · 5 years ago
Which is a terrible name considering they already have a Virtual Super Resolution feature (https://www.amd.com/es/support/kb/faq/dh-010)
taurath · 5 years ago
Big shot across the bow of Nvidia - the whole 30-series stock problem seems like a big miscalculation right now. Granted, it's still early since this was just announced, so no real-world benchmarks, but their 3090 competitor coming in at $500 under Nvidia makes the 3090 a really tough sell - not that a lot of people have even gotten one at this point. The rumors of Nvidia releasing Ti versions literally months after the 30-series launch are probably true.
tomerico · 5 years ago
Addressing the price difference between RTX 3090 and 6900 XT:

The 3090 is priced at $1500 for its 24GB of RAM, which enables ML & rendering use cases (Nvidia is segmenting the market by RAM capacity).

AMD's 6900 XT has the same 16GB of RAM as the 6800 XT, with less support on the ML side. Their target market is gaming enthusiasts who want the absolute fastest GPU.

__alexs · 5 years ago
The 3090 is priced at $1500 because it is designed for whales who want the best regardless of price.
colejohnson66 · 5 years ago
But is 8 GB of GDDR-whatever worth almost $1000? If AMD can put 16 in at about $500, paying another $1000 for an extra 8 is IMHO outrageous. Nvidia is charging that much because many gamers hold to the idea that "Intel & Nvidia are the best" even when benchmarks say otherwise.
athorax · 5 years ago
Agreed, the 3090 is definitely meant for ML, but my guess is people are largely buying it for gaming purposes.
freeflight · 5 years ago
> Their target market is gamers enthusiasts who wanted the absolute fastest GPU.

That's also the target market for the RTX 3090: all the Nvidia marketing material describes it as a gaming card, it's GeForce-branded, and Ampere-based Quadros will be a thing.

There was/is also this whole thing: https://www.reddit.com/r/MachineLearning/comments/iz7lu2/d_r...

alfalfasprout · 5 years ago
Nope. Half-rate accumulate ops on the 3090. Definitely not an ML card.
taurath · 5 years ago
Good point - I wonder what the market split is there. Even so, wouldn’t it be far more economical to stack a bunch of 6800xts for ML, or is the ML support that far behind on AMD?


chrisjc · 5 years ago
> which enables ML

Are individuals buying graphics cards for ML? I would think it makes more sense to provision scalable compute on a cloud platform on an on-demand basis than to buy a graphics card with ML capabilities.

bitL · 5 years ago
AMD could release a 6950 with 32GB of RAM for the price of the 3090 at some point.
llampx · 5 years ago
I am a huge AMD supporter and recently bought their stock as well, but in terms of competing with NVIDIA for non-gaming purposes they have a lot of ground to cover. NVENC, for example, is great for video encoding and is supported in a lot of video editing programs. I use AMF in ffmpeg, but from my understanding NVENC is faster and, with the 3000 series, better quality.

Same with CUDA vs ROCm.

subtypefiddler · 5 years ago
It's not just CUDA vs ROCm; ROCm has come a long way and is pretty compelling right now. What they lack is the proper hardware acceleration (e.g. tensor cores).
floatboth · 5 years ago
H.265 quality on AMD's encoder is excellent (and has been since it appeared back in Polaris); since Navi it can do 4K60. H.264 is not as good, but totally usable.
quantummkv · 5 years ago
But will even the Ti versions work here? A 3090 Ti or 3080 Ti, for example, would have to launch at, at most, the same price as their base models to stay competitive. That would be a massive PR headache.

Also, would the Ti versions actually help? Even a $100 markup on a 3080 Ti would bring it close to 6900 XT pricing. And the 3080 Ti cannot come close to 3090/6900 XT performance for that markup, or it would risk cannibalizing Nvidia's own products.

Nvidia's only hope at this point is that either AMD fudged the benchmarks by a large margin or AMD gets hit with the same inventory issues.

dannyw · 5 years ago
NVIDIA can resort to paying developers to only optimise for NVIDIA performance and/or cripple AMD performance.
dathinab · 5 years ago
From what I heard, the 3090 has some driver features disabled that some of their GPU-compute-focused cards normally have enabled; they could probably sell a 3090 Ti for +$200 just by enabling those :=/
overcast · 5 years ago
When has there ever been enough stock of high-end GPUs?
Tuna-Fish · 5 years ago
It's not just that there isn't enough to meet the usual demand; the supply is definitely much lower than typical. I see numbers for a major European distributor network, and there is maybe a fifth of the supply there was for the 2000 series at the same time after launch.

Something is very definitely wrong.

mdoms · 5 years ago
Before Bitcoin, stock was rarely a problem.
freeflight · 5 years ago
Prior to the big cryptomining/ML hypes.

Back then these high-end GPUs were prestige projects that mostly existed for the "We have the fastest GPU!" marketing; the real money in the consumer market was made on the bulk of the volume in the mid-range.

bcrosby95 · 5 years ago
Yes. The exception was when bitcoin prices went crazy and miners were buying them all up.
deeeeplearning · 5 years ago
There appears to be some evidence that most of the 3xxx cards were bought out by bots. Most sites sold out more or less instantly, and there's now a large supply available on eBay/Amazon/etc. for way over retail prices.
read_if_gay_ · 5 years ago
Obviously YMMV but as an example, I don't remember it being that hard to get my 980Ti back when it was somewhat new.
m463 · 5 years ago
Right before a new generation comes out?


Koliakis · 5 years ago
Add to that, it's likely they won't have as many supply issues as nVidia because they're using TSMC and TSMC is totally on the ball when it comes to yields (unlike Samsung).
koffiezet · 5 years ago
If you read between the lines, it's AMD's allocation of TSMC capacity that pushed Nvidia out to Samsung...

TSMC has to spit out Zen 3, RX 6000, plus the custom chips for the Xbox Series X and PS5, all around the same time...

jonplackett · 5 years ago
I was thinking this too. Is this really just another TSMC victory, like AMD on TSMC vs Intel still not making it to 10nm, and Apple making gains with the A14 (soon to be compared vs Intel, no doubt)? It's the process that's driving efficiency, for sure.

How much of this one is really about TSMC 7nm vs Samsung 8nm?


tmaly · 5 years ago
26.8 billion transistors; that boggles my mind.

How do you even begin to think about testing hardware like that for correctness?

I remember my days as an intern at Intel, and someone using genetic algorithms to try to construct a test suite for some chip. But that was nowhere near this transistor count.

devonkim · 5 years ago
A long time ago, when I was writing Verilog and VHDL, what people were doing beyond the 1Bn-gate scale (I still keep using gates rather than transistors as a unit, funny) was statistical sampling methods to try to uncover stuck/flaky gates, so the riskier or more critical paths got tested a bit better. There are also error-correction mechanisms, more common nowadays, that can compensate to some extent. We don't test every single possible combination like was possible so long ago, full stop. Furthermore, GPUs aren't necessarily tested the same way as CPUs. GPUs don't really branch and hold state with exotic addressing modes the way CPUs do, so testing compute shaders is in many respects closer to testing a big, wide bit bus than a state machine. Heck, one doesn't even need to bother hooking up output devices like HDMI and HDCP decoders to a GPU, as long as the PCB is hooked up fine during assembly and the peripheral chips are tested to spec, too.

And for an extra measure of automation there are test suites that are programmed into hardware that can drive chips at line speed on a test bench.

dragontamer · 5 years ago
> How do you even begin to think about testing hardware like that for correctness?

From my understanding: Binary decision diagrams on supercomputers. (https://en.wikipedia.org/wiki/Binary_decision_diagram)

Hardware verification is a major business for all the major CPU players today. Fortunately, solvers for this NP-complete problem have benefited not only from faster computers but also from huge algorithmic improvements over the past 20 years.

thebruce87m · 5 years ago
You mean in production? Digital logic is easy, use scan/ATPG: https://en.m.wikipedia.org/wiki/Scan_chain

Basically chain all the logic together in a special test mode.

Analog blocks get tests written specifically for them and are either tested via external test hardware or linked internally.

For example, if you have a DAC and an ADC, provide a link internally and drive the ADC from the DAC to test both. You can also test comparators etc. using a combination of the DAC and ADC, trim bandgaps, and so on.

If you’re real smart, you do this at probe (wafer), massively in parallel.

marcosdumay · 5 years ago
You mean testing the hardware after it's produced against fabrication defects or testing the design before production against the specification?

Both are very different and very interesting in their own right. For the design, verification can be done by computers, and it's much more powerful than testing anyway. For the hardware, I imagine there are cores inside the chip with the sole task of testing sub-circuits (fab defects are much less varied than design flaws), but I stopped following that stuff around the era of a single billion transistors.

andy_ppp · 5 years ago
You break it down into smaller parts and test those as discrete units.
Tomte · 5 years ago
Not just that: you have dozens, hundreds or thousands of unit A, and of unit B, and of unit C. You test each of these units once, and then some measure of interplay.
blawson · 5 years ago
Technologies like Chisel can make this a lot easier I think: https://www.chisel-lang.org

I'm not super familiar with it, but perhaps one day hardware verification becomes as easy as software testing?

read_if_gay_ · 5 years ago
> I remember someone using genetic algorithms to try to construct a test suite for some chip.

Could you expand a bit on how that worked? Seems like an interesting application.

tmaly · 5 years ago
The idea was to try to find a smaller set of tests that could provide equivalent test coverage to that of a larger number of brute force generated tests.
Animats · 5 years ago
It's sort of funny. During the period when GPUs were useful for cryptocurrency mining, demand pushed the price of NVidia GPU boards up above $1000, about 2x the usual price. This was a previously unknown price point for gamer-level GPU boards. Once mining demand declined, NVidia kept the price of their new high-end products above $1000, and sales continued. Now $1000+ is the new normal.
jbay808 · 5 years ago
I think demand from machine learning applications has also helped keep prices high.
globular-toast · 5 years ago
It's actually insane. I haven't been running a high-end GPU for several years now. The last one I had was an 8800 GT. When I got that GPU I felt like I had "made it" in terms of being able to afford the hardware I always wanted as a child. Now, with all this news about new GPUs, I decided to look on eBay for a mid-range AMD chip from the previous generation. Still going for £200-300! I can't justify spending that much on games. Honestly, are gamers just expected to have no other hobbies any more?
arc-in-space · 5 years ago
I mean, it's a one-time purchase that lasts you 2-4 years minimum. If anything, gaming is on the reasonably cheap side of hobbies, especially if you still otherwise have uses for a desktop, like most people here do. Compare getting that mid-range AMD GPU to a Netflix subscription for two years (which I am guessing costs about as much, or at least in the same range), and I think it starts to look more favourable. You do still need to buy actual games, but their cost can be amortized across sales and the fact that people typically buy few games and play each one for a long time.
lhl · 5 years ago
A quick search shows the 8800GT was released in Oct 2007 and was 180 GBP. [0] Using an inflation calculator this comes out to 256 GBP today, [1] so about the same price as the cards you're looking at. That being said, it looks like even in 2019 average weekly wages (in GBP) have declined since 2008, so accounting for inflation, most Britons are maybe just simply poorer now [2] (2020 of course has been even worse. [3])

Anyway, ignoring all that, IMO gaming is still more affordable than it's ever been - you can get a used Xbox One S for about $100 on eBay, or for PC gaming, you can get a used RX 470 or GTX 1060 for a bit less than that, which are fine GPUs for 1080p gaming performance. Also, even relatively modern AAA games can now be played at low-medium settings at 720 or 1080p30 without a dedicated GPU on a late-model Ryzen APU.

[0] https://www.theregister.com/Print/2007/11/12/review_nvidia_g...

[1] https://www.hl.co.uk/tools/calculators/inflation-calculator

[2] https://www.bbc.com/news/business-49328855

[3] https://tradingeconomics.com/united-kingdom/wage-growth

deafcalculus · 5 years ago
Die size is up too. At over 600 mm2, it’s a huge chip.
jahabrewer · 5 years ago
Who cares? The RTX 3090 is not for gamers.
snvzz · 5 years ago
Who's it for, then?

They've got no ML drivers and capped FP32/64 performance.

detaro · 5 years ago
What's the story with ML stuff on AMD consumer GPUs nowadays? Gotten better, or still "buy NVidia for that"?
dragontamer · 5 years ago
ROCm is spotty for AMD GPUs. Definitely don't buy these GPUs until AFTER ROCm has declared support.

The NAVI line was never officially supported by ROCm (though ROCm 3.7 in August seemed to get some compiles working for the 5700 XT, a year after its release).

-------

Generally speaking, ROCm support only comes to cards that have a "Machine Intelligence" analog. The MI50 is similar to the Radeon VII, so both cards were supported in ROCm. The RX 580 was similar to the MI8, and therefore both were supported.

The RX 550 had no similar MI card and has issues that are still unresolved today. The RX 570 had no MI card either, but apparently it's similar enough to the 580 that I'm not hearing about many issues.

In effect, AMD only focuses their software development and support on the Machine Intelligence cards. The consumer cards that line up with the MI line happen to work... but the cards outside of it are spotty and inconsistent.

philjohn · 5 years ago
AMD is segmenting to Gaming (RDNA) and Compute (CDNA) - CDNA is essentially a continuation of Vega, which was a beast for compute.
bitL · 5 years ago
Both PyTorch and TensorFlow support ROCm, but there are still bugs that prevent running all CUDA code on AMD. Now that AMD has money, they can ramp up SW development there.
dogma1138 · 5 years ago
The issue is that AMD doesn't support ROCm on its newer consumer GPUs. It also doesn't want to release a ROCm-compatible driver for Windows, and you can't run ROCm in WSL/WSL2.

DirectML is still in its infancy and is quite slow right now; AMD really needs to step up their game if they want to compete.

bart_spoon · 5 years ago
The gap is closing, but Nvidia is still a clear leader.
snvzz · 5 years ago
Just be careful: the 3070/3080/3090 lineup are gamer cards and do not have ML drivers.

FP32/64 performance is going to be capped, as with prior generations.


deeeeplearning · 5 years ago
Miles away, basically; they're still taking baby steps.
xvf22 · 5 years ago
Performance is better than I expected, especially at 300W, and while not externally verified, it looks like AMD is finally competitive at the top end. If they can get the required share of TSMC volume, then Nvidia may feel a bit of pain, since Samsung seems unable to meet demand.
Tuna-Fish · 5 years ago
Note that the 6900 XT benchmark was running in "rage mode", and therefore was not within 300W. Then again, the card they are comparing it to is ~380W, so there is some room to play with while still being more power-efficient than the competition.
filmor · 5 years ago
"Rage" mode, what a nice name choice coming from ex-ATi :)
013a · 5 years ago
Also important to note that it had Smart Access Memory on, which requires a Ryzen CPU (and furthermore, did they say specifically a Ryzen 5xxx CPU?).
jsheard · 5 years ago
The performance numbers shown were cherry-picked, though, and conspicuously didn't include anything with ray tracing enabled.

Leaked 3DMark ray tracing benchmarks showed Big Navi lagging far behind Ampere, so I wonder how that's going to bear out in real games.

freeflight · 5 years ago
The cherry-picking happens with pretty much all of these kinds of presentations.

Nvidia did it by enabling DLSS and RTX in everything; that's how they ended up with "up to 2x 2080 performance", which in practice only seems to be the case with Minecraft RTX running at 30 fps instead of 15 fps.

throwaway2048 · 5 years ago
Very few games have implemented raytracing
nodonut · 5 years ago
A low price and energy draw with ultra 4K gaming meets the wants/needs of >90% of hobbyists. Assuming no issues in future testing/drivers, the 6000 line is forcing Nvidia into a more niche market: namely ML, ray tracing, and "flex" buyers.

I wouldn't be that worried if I were Nvidia; catering to the whales is good business. But I think we're looking at AMD winning the lion's share of the market.

redisman · 5 years ago
AMD's 3080 competitor is $50 less and its 3070 competitor $70 more. The flagship is cheaper. In the end, most PC gamers balk at GPUs over $200-300. These are all niche cards so far.
singhkays · 5 years ago
Surprised they were able to catch up to Nvidia's latest cards after foregoing high-end for a while. "Catch-up" seems to be AMD's forte lately.

But we'll need to wait for real world testing to see how accurate these claims are.

mrweasel · 5 years ago
Yeah, I want to see actual tests before I'm convinced that they're on par with Nvidia. If AMD turns out to have caught up with Nvidia I will be very impressed; that cannot be easy to do while managing the same on the CPU side at the same time.
devonkim · 5 years ago
There are enough breakdowns I'm seeing suggesting these new cards haven't really caught up with the newest RTX cards, because DLSS 2.0 is the big differentiator and stuff like Nvidia Broadcast just isn't there for AMD. The other suspicious comparisons are AMD benchmarks against the RTX 2080 Ti, when the newest RTX cards _all_ blow away the RTX 2080 Ti in almost every metric.

However, what I'm seeing out of this mess is that AMD is absolutely competitive on a performance-per-watt basis now. The other problem is that AMD is so far behind Nvidia in software (read: AI research mindshare) that it's not clear whether many future titles will take on ray tracing or adopt the work necessary to do ML-based upscaling with AMD as the baseline software stack rather than DLSS.

CivBase · 5 years ago
I don't think I'd characterize what they've been doing in the CPU market as "catching up" anymore. They caught up to Intel a few years ago. Now they are just asserting dominance for the immediate future.

They've definitely been far behind Nvidia for a while, though. I haven't seriously considered an AMD GPU in almost a decade outside of budget gaming rigs, and even then I ended up going with a used Nvidia card. Hopefully this is enough to give them a serious foothold in the high-end GPU market so they can give Nvidia competition for years to come.

p1necone · 5 years ago
They caught up with/surpassed Intel in multithreaded performance with the first gen of Ryzen (depending on price point, of course), but until Zen 3 Intel still had the advantage in single-threaded perf. And because a lot of games either heavily peg a single thread or only scale to a small number of threads, Intel still had the advantage in most games at a lot of price points. It's only with Zen 3 that Ryzen is beating Intel in single-threaded perf as well.
singhkays · 5 years ago
The reason I say "catch-up" is that before Zen 3, Intel still had an IPC lead. With Zen 3, it's the first time in a decade that AMD can claim the IPC lead.
01100011 · 5 years ago
Good showing from AMD. It will be interesting to see how Nvidia responds. I'm curious if AMD will also suffer from supply issues.

Does AMD allow running in "rage mode" without voiding the warranty? Is that something that a 3rd party mfg will offer to cover?

zamalek · 5 years ago
I assume so, given that Ryzen runs in "rage mode" all the time (and RDNA2 is the first architecture where people from the Zen team have been involved).

I also assume that you are going to need a high-end case (not to mention the PSU) to provide adequate cooling because, if it is anything like Ryzen, it will react strongly to the cooling available to it.

It will likely be noisy and, as always with OC, not all cards will have the same headroom: it allows them to avoid promising headroom that may not exist on your individual card.

kllrnohj · 5 years ago
> given that Ryzen runs in "rage mode" all the time

It doesn't. "Rage mode" would be equivalent to Ryzen's PBO. Which is definitely not on by default.

It's likely equivalent to just dragging the power limit slider to the highest it goes in MSI Afterburner: letting boost go beyond the spec power limits, but nothing more.

That can still give decent gains (look at how power-starved Ampere is, for example). But the only noteworthy thing here is that it's in the official control panel instead of a third-party app.

snvzz · 5 years ago
NVIDIA has simply made an expensive mistake[0] hoping TSMC would offer them lower prices or that Samsung's new process would be ready (it was not), resulting in their worst lineup in a hell of a long time[1].

[0]: https://www.youtube.com/watch?v=tXb-8feWoOE

[1]: https://www.youtube.com/watch?v=VjOnWT9U96g

arvinsim · 5 years ago
That "rage mode" nomenclature might potentially come back to bite them.
redisman · 5 years ago
It's also a really weird "feature" to highlight. Oh wow, you gave a name to moving the power slider up slightly - something users have been able to do for years, and it takes a few seconds.
zanny · 5 years ago
AMD's been buying 7nm wafers from TSMC for two years now. They should be better equipped to meet demand, or at least more knowledgeable from experience about what kind of supply they can field.

Not sure how in the world they plan on supplying both their new CPU and GPU series with a holiday-season launch, though.

01100011 · 5 years ago
Exactly. On the one hand they are well established with TSMC, on the other hand, they are trying to meet demand for both CPUs and GPUs simultaneously.

Who knows what they'll decide from a business perspective? I wonder how the margins compare between CPUs and GPUs? They could, say, plan on limiting their higher end GPU SKUs which gives them temporary bragging rights in the GPU space but reserves capacity for CPUs.