Is modern Radeon support on Linux still great? I've heard that Nvidia GeForce on Linux has been kind of bad forever, but that it has massively improved over the past 3-4 years. Is Radeon still the way to go for Linux?
EDIT: My use case is a Linux gaming PC. I was fortunate enough to score an RX 6800 during the pandemic, but moved from Windows 10 to Linux a month ago. Everything seems solid, but I'm looking to upgrade the whole PC soon-ish. (Still running a Ryzen 1800X, but Ryzen 9000 looks tempting.)
I have both a modern Radeon and Nvidia card in my PC.
Radeon overall offers a smoother desktop experience and works fine for gaming, including Proton-enabled games on Steam. OpenCL via ROCm works decently in Darktable, with some occasional crashes. Support for AI frameworks like JAX is, however, very lacking; I spent many days struggling to get it to work well before giving up and buying an Nvidia card. geohot's rants on X are indicative of my experience.
The Nvidia card works great for AI acceleration. However, there is micro-stuttering when using the desktop (GNOME Shell in my case), which I find extremely annoying. It's a long-known issue related to power management.
In the end, I use the Nvidia card as a dedicated AI accelerator and the Radeon for everything else. I think this is the optimal setup for Linux today.
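If anyone wants to replicate the split, the gist is just environment variables; a rough sketch below (the device index 0 and the script name are placeholders, and PRIME offload details vary by setup):

  # AI jobs see only the Nvidia card; check `nvidia-smi -L` for the right index.
  CUDA_VISIBLE_DEVICES=0 python train.py    # train.py is a placeholder

  # The desktop just runs on whichever GPU the monitors are attached to (the
  # Radeon here). To render a specific app on the Nvidia card instead, the
  # driver's PRIME render offload variables do the trick:
  __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxgears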
Both AMD and Intel have provided well-maintained open-source drivers for many years[1]. Nvidia hasn't. In practice you need to use closed-source drivers with Nvidia, which causes problems with kernel upgrades, various issues for which you don't get upstream help (because the kernel is now marked as tainted), and problems with Wayland. The latter stem from the fact that Nvidia refused[2][3] to support Wayland for a long time. Blaming implicit sync for putting Nvidia at a disadvantage is not appropriate: they could have participated a long time ago and either adapted to implicit sync or helped add explicit sync years back.
Furthermore:
* AMD gave us Vulkan as standard, which builds upon AMD's Mantle.
* AMD gave us FreeSync i.e. VRR, which is usable by others.
* AMD gave us FSR, which is usable by others.
What did Nvidia give us?
* High prices.
* Proprietary and expensive G-Sync.
* Refused to publish documentation and open-source drivers for many years.
* Own video-decoding libraries.
It's the documentation, the drivers, and Wayland. But beyond that, Nvidia's complete track record is bad. Their only argument is some benchmark wins and press coverage. The most important feature for all users is reliability. And Linus Torvalds[4] said it all.
If Nvidia changes its company policies and proves it over the years through good work, it may be possible to reevaluate this. I don't see that happening within the next few years.
[1] Well. Decades?
[2] https://web.archive.org/web/20101112213158/http://www.nvnews...
[3] https://www.omgubuntu.co.uk/2010/11/nvidia-have-no-plans-to-...
[4] https://www.youtube.com/watch?v=tQIdxbWhHSM
> Both AMD and Intel have provided well-maintained open-source drivers for many years[1]. Nvidia hasn't. In practice you need to use closed-source drivers with Nvidia.
I helped a friend install Linux last year; they had an NVIDIA card, aaaand they had problems. Both in X and Wayland. I was on a 6000 series at the time and it worked great, as you know. I am on a 7000 series now, and it's as smooth as ice. AMD is the way to go for Linux.
There's that new NVIDIA driver cooking, but I'm not sure what the status is on it.
BTW, even Ryzen 3000 is a substantial upgrade over the 1000 series. Any X3D processor (I think the 5000 series is when they first appeared) is even more so. Ryzen 5000 is also less sensitive to RAM timings. I applaud you for avoiding latest-and-greatest syndrome, but even upgrading to an older CPU generation would be a significant boost.
> I applaud you for avoiding latest-and-greatest syndrome, but even upgrading to an older CPU generation would be a significant boost.
Being out of a job for a while makes me very loath to upgrade hardware that still works OK. That 1800X still does what I want it to do, and does it 'fast enough', though how far into the future that will last is unclear. Cyberpunk 2077 being the most demanding game that I've played probably helps. :D
I run Arch on my home desktop, so I get the very latest stuff pretty quickly, and the latest drivers have improved the Nvidia situation a LOT. I've found everything pretty stable for the most part over the past month. Multi-monitor VRR was still having problems for me, but I just installed a driver update that supposedly has bug fixes for it, so I'll be trying that out later tonight.
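In case it's useful to anyone chasing the same VRR fixes, this is how I check which driver build is actually in use after an update (package names assume Arch):

  # Driver version as reported by the running driver:
  nvidia-smi --query-gpu=driver_version --format=csv,noheader

  # Version of the kernel module that is actually loaded (should match the above):
  modinfo -F version nvidia

  # What pacman thinks is installed, in case they disagree until the next reboot:
  pacman -Q nvidia nvidia-utils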
I had a laptop with a mobile GeForce RTX 4060 - it was a pain to run Linux on, mainly because of Nvidia.
I got rid of it and built myself a PC (not just because of Nvidia, gaming laptops are just a disappointing experience). And went all in with Team Red - Ryzen and Radeon. It's a joy to run Linux on, I work on it and have played several games - 0 problems.
Nvidia may be the leader in performance, but if it means you're stuck with Windows - I'd say it's not worth it.
I built my current home PC the day the 4090 was released, and set up Fedora on it. It was usable at first, with some hiccups in certain updates, where I needed to stay on an older kernel for some time.
Wayland on Nvidia was a mess, though.
But as time goes by, things get better and better.
Older AMD cards have excellent Linux support across the board.
Brand new AMD cards have great support -- in mainline Linux and Mesa. But if you're using a kernel and Mesa from a distro, you're gonna have a hard time for the first ~8 months. You'll either have to run a rolling release distro or build your own kernel, linux-firmware and Mesa packages. I had to do this when I got a 7900 XT card a few months after launch while running Ubuntu.
This is the Linux driver model working as intended, and it sucks.
I suspect you may also encounter issues with Flatpaks, since from my understanding these use the Mesa userspace from a runtime, so they won't work until the Flatpak runtime they use updates its Mesa. I don't think it helps to have a working Mesa on the host system. I'm not 100% certain about this though; I'd love some feedback from people with a better understanding of these things.
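A rough way to sanity-check the Flatpak side (not authoritative, and the 23.08 runtime branch below is just an example) is to compare the Mesa the host reports against the GL runtime the Flatpak apps actually pull in:

  # Mesa version on the host:
  glxinfo | grep "OpenGL version"

  # GL runtimes installed for Flatpak apps; their Mesa comes from these, not the host:
  flatpak list --runtime | grep GL
  flatpak info org.freedesktop.Platform.GL.default//23.08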
Like another poster, I tried to help a friend move to Linux recently and he had an nvidia card, and it was nothing but problems. He eventually went back to windows and I don't blame him.
I use a Radeon 7900 GRE for gaming on Linux Mint without any problems, and even tried some demoscene entries through Steam (they didn't work with plain Wine). I managed to run LM Studio and deepseek-llama-70B, but installing ROCm support was a little more involved than "just download this and run"; I had to do several steps and install some missing packages, which took me half an hour.
I resorted to scripts from smxi.org for installing the Nvidia drivers on Debian for my GTX 770, but the AMD RX 7800 XT is just fine with some firmware enabled.
Ubuntu seems to be even better. I turned my system into a Frankendebian for a day to try out ROCm; the AMD install scripts work out of the box on newer Ubuntu releases.
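For reference, the AMD route boils down to something like this on a recent Ubuntu; it's a sketch from memory, so check AMD's current docs for the exact repo/package steps of the ROCm release you want:

  # AMD's installer sets up the repos and pulls in the ROCm userspace:
  sudo amdgpu-install --usecase=rocm

  # Your user needs access to the GPU device nodes:
  sudo usermod -aG render,video "$USER"

  # After logging back in, the runtime should list the card:
  rocminfo | grep -i gfx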
Claims about the Nvidia drivers on Linux being bad are largely a myth. Nvidia has long been the gold standard on Linux (for decades). The main reason for complaints is that Nvidia was slow to support XWayland acceleration (mostly due to Linux mainline making it harder for them -- see dma-buf being exported as a GPL-only symbol), but that has since been fixed.
> support XWayland acceleration (mostly due to Linux mainline making it harder for them)
This is not true. There were many differences between how Nvidia had been doing things in their drivers (partly due to them reusing code across all platforms) and how Mesa/Wayland was already doing things, namely around explicit/implicit sync and GBM vs. EGLStreams, but there was no intention to make things hard for Nvidia.
Mesa/Wayland ended up supporting explicit sync so Nvidia's driver could work.
Contrary to what other people are saying, I've had no significant issues with nvidia drivers on Linux either, but I use X not Wayland so that may be why. Play tons of games, even brand new releases, without issues.
In my experience, with KDE Plasma, Nvidia is quite awful. Animations are laggy, and the desktop slows down when a new window opens. As well, there are a bunch of small issues I’ve only seen when using Nvidia.
Dunno, I have an Nvidia card in my laptop, and while it "works" on Linux (even Wayland), it also heats the laptop up so incredibly hot that the laptop's thermal settings just turn it off. No way to throttle it, increase fan speed, etc., at least not without a bunch of hackery I don't want to deal with. All for performance that's in many cases worse than the integrated Intel GPU (since the integrated GPU has access to 16 GB of RAM versus the dGPU's 4 GB).
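For what it's worth, on some machines you can at least cap the board power from userspace; whether the driver allows it on a given mobile GPU varies a lot, so treat this as a sketch (the 60 W value is arbitrary):

  # See what the card reports as its default/min/max power limits:
  nvidia-smi -q -d POWER

  # Try to cap the board power (needs root; plenty of mobile GPUs refuse it):
  sudo nvidia-smi -pl 60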
Nvidia drivers for laptops on Linux are still garbage tier. I either get working external displays but no hardware acceleration on Nouveau (which toasts my CPU), or working hardware acceleration but no external displays on the Nvidia drivers, which is useless for me.
I am definitely not buying an nVidia GPU ever again.
Even on Arch it was usually quite fine for the last 10 years or so. And if you go back that far, you could certainly have a lot of fun with fglrx (I know I did).
In a commercial setting, with a supported distro, it's really very solid for desktop use.
Agreed. I've been using Linux desktops with nvidia cards for like 25 years. Every now and then I think "Well, everyone insists that Radeons work better on Linux, I'll give one a try", and every single time it turns out to be worse and I end up switching back.
Yeah the drivers aren't open source but, like, they work?
Yeah I may have been held back from switching to Wayland for a while but... honestly I did not care.
With all that said, I would really like to see AMD compete particularly in the AI space where nvidia seems to be able to charge totally wild prices that don't seem justified...
Every Nvidia card has CUDA support from day one, regardless of who the market is for it. I wouldn't mind as much if all their slides weren't covered in AI, AI, AI, and it didn't ship with some stupid LLM chatbot to help you tune your GPU.
When I explained to my non-technical friend, who is addicted to ChatGPT, that she could run models locally, her eyes lit up and now she wants to buy a graphics card, even though she doesn't do any gaming.
I think having CUDA available in consumer cards probably played a big part in it being the de facto "AI framework." AMD would be wise to do the same thing.
Developers' introduction to a technical ecosystem is more than likely through the card they already own. And "it'll likely be supported eventually" just signals that they're not serious about investing in their software. AMD is selling enough CPUs to finance a few developers.
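As a concrete illustration of what "works on the card you already own" means in practice: a stock PyTorch install will tell you which backend it was built for (assuming PyTorch is installed; on ROCm builds the HIP backend is exposed through the same torch.cuda API):

  python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda, torch.version.hip)"
  # CUDA wheel on an Nvidia card       -> True <cuda version> None
  # ROCm wheel on a supported Radeon   -> True None <hip version>
  # Unsupported card or CPU-only wheel -> False None None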
The ROCm libraries were built for RDNA 4 starting in ROCm 6.3 (the current ROCm release). I'm not sure whether that means it will be considered officially supported in ROCm 6.3 or if it's just experimental in that release.
Can't Vulkan backends do the job? Not to defend AMD, but as long as perf/dollar stays above Nvidia anyhow, isn't that more than the bare minimum of effort for them?
RX 7900 (RDNA 3 flagships) are officially supported. I think the 6900 (RDNA 2) was supported at one point as well, and in both cases other cards with the same architecture could usually be made to work as well.
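"Could usually be made to work" mostly means overriding the gfx target ROCm thinks it's talking to. It's unofficial and can break in subtle ways, but these are the values people commonly use for RDNA 2 and RDNA 3 cards that aren't on the support list (the binary name is a placeholder):

  # RDNA 2 cards pretending to be gfx1030 (the officially supported 6900 XT):
  HSA_OVERRIDE_GFX_VERSION=10.3.0 ./my_rocm_app

  # RDNA 3 cards pretending to be gfx1100 (7900 XTX):
  HSA_OVERRIDE_GFX_VERSION=11.0.0 ./my_rocm_app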
The hardware itself seems like it will be welcome in the market. But I find AMD GPU launches so frustrating, the branding is all over the place and it feels like there is never any consistent generational series. Instead you get these seemingly random individual product launches that slot somewhere in the market, but you never know what comes next.
In comparison, Nvidia has had pretty consistent naming since the 200 series, with every generation feeling at least somewhat complete. The only major exception was (mostly) skipping the 800 series. Not saying they are perfect by any means in this regard, but AMD just feels like a complete mess.
Checking wikipedia, Radeon has recently gone through (in order):
* HD 7000 series
* HD 8000 series
* 200 series
* 300 series
* 400/500 series
* RX Vega
* RX 5000 series
* RX 6000 series
* RX 7000 series
* RX 9000 series
Like what happened to 8000 series? And also isn't it confusing to partially change the naming scheme? Any guesses what the next generation will be called?
AMD seriously needs to fire its marketing team. Think how you would do it if you could introduce a naming schema and think of how to name technical features - it would be very easy to improve on what AMD is doing.
They've got a pretty amazing APU that just came out and competes with the M-series MacBooks based on early benchmarks. The name of this APU: AMD Ryzen AI MAX 300 "Strix Halo".
> Like what happened to 8000 series? And also isn't it confusing to partially change the naming scheme? Any guesses what the next generation will be called?
I read somewhere that Radeon jumped to 9000-something for this generation to align with the current Ryzen 9000 series CPUs (Ryzen 8000 wasn't a thing either, last I checked). Let's see if next-gen Radeons match next-gen Ryzens.
EDIT: confirmed, found slides from CES posted in articles like these:
https://www.pcworld.com/article/2568373/amds-radeon-rx-9070-...
https://www.techspot.com/news/106208-amd-reveals-rdna4-archi...
https://www.notebookcheck.net/AMD-previews-RDNA-4-with-new-R...
The 8000 series are iGPUs, part of the AMD Strix Halo APUs (e.g. the AMD Ryzen AI Max+ 395). [1]
I do agree it's confusing though, when they were already using a three-digit naming convention for iGPUs (like the Radeon 880M -- the M made it clear it was a mobile chip).
[1] https://www.notebookcheck.net/AMD-Radeon-RX-8040S-Benchmarks...
In this release, AMD has aligned with Nvidia's naming. It feels like an acknowledgement that they've screwed this up and are giving up, so it's a genuine improvement. Hopefully they maintain this alignment, but I'm sure they won't.
The Radeon VII is really just an RX Vega variant made on TSMC rather than Samsung nodes, but it got entirely new branding that doesn't mention Vega in the name.
The Radeon R9 Fury, Fury X, R9 Nano and Radeon Pro Duo are really just 300 series cards, but all of them got entirely new branding that doesn't follow the 300 series convention in their names. You also can't tell at a glance which is better.
It is a fucking mess. They cannot for the life of them stick to a convention.
So about 20% better value than Nvidia comparing MSRPs. Not terrible but that’s what they offered last generation and got 10% market share as a result. If the market price of Nvidia cards remains 20-50% above MSRP while AMD can hit their prices, then maybe we have a ballgame.
MSRP doesn't mean anything right now, it's just a made up number at this point. Let's see the real sticker prices once the cards hit the scalpers cough-cough I mean shelves.
I'm pretty torn on self-hosting 70B AI models on the Ryzen AI Max with 128 GB of RAM. The market seems to be evolving fast. Outside of Apple, this is the first product to really compete in the self-hosted AI category. So I think a second generation will be significantly better than what's currently on offer today. Rationale below, with some rough numbers at the end...
For a max-spec processor with RAM at $2,000, this seems like a decent deal given today's market. However, this might age very fast, for three reasons.
Reason 1: LPDDR6 may debut in the next year or two; this could bring massive improvements to memory bandwidth and capacity for soldered-on memory. LPDDR6 vs LPDDR5:
- Data bus width: 24 bits vs 16 bits
- Burst length: 24 vs 16
- Memory bandwidth: up to 38.4 GB/s vs up to 6.7 GB/s
- CAMM RAM may or may not maintain signal integrity as memory bandwidth increases. Until I see it implemented for an AI use case in a cost-effective manner, I am skeptical.
Reason 2: It's a laptop chip with limited PCIe lanes and a reduced power envelope. Theoretically, a desktop chip could have better performance, more lanes, and be socketable (although I don't think I've seen a socketed CPU with soldered RAM).
Reason 3: What does this hardware look like when repurposed in the future, compared to alternatives?
- Unlike desktop or server counterparts, which can have higher CPU core counts and PCIe/IO expansion, this processor and its motherboard are limited in how they can be repurposed later down the line as a server to self-host other software besides AI. I suppose it could be turned into an overkill NAS with ZFS and an HBA controller card in a new case.
- Buying into the Framework Desktop is pretty limiting given the form factor. A next generation might be able to include a fully populated x16 slot and a 10G NIC; that seems about it if they're going to maintain the backward-compatibility philosophy given the case form factor.
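To put rough numbers on the 70B use case (back-of-the-envelope only; the 256-bit LPDDR5X-8000 memory configuration is the commonly reported spec for this APU, not something I've measured): a 4-bit quantized 70B model is on the order of 35-40 GB, and single-stream generation speed is roughly bounded by memory bandwidth divided by the bytes read per token:

  # Memory bandwidth: 256-bit bus at 8000 MT/s -> (256/8) bytes * 8000 MT/s
  echo $(( 256 / 8 * 8000 / 1000 ))   # ~256 GB/s

  # 70B parameters at ~0.5 bytes/param (4-bit quant, ignoring KV cache/overhead)
  echo $(( 70 / 2 ))                  # ~35 GB

  # Rough upper bound on decode speed: bandwidth / bytes touched per token
  echo $(( 256 / 40 ))                # ~6 tokens/s, ballpark

So it should run, just not quickly; that's the part I'd expect a second generation (or LPDDR6) to improve the most.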
The RX 9070 seems perfect for compact builds on a power budget, but I can't find a single two-slot, two-fan card from partners so far. They all look like massive three-slot cards.
The high-end Framework has 128GB, just like Project DIGITS. VRAM size has always been the problem, so I'm not comparing these APU SBCs to PCIe GPUs.
The comment was more about the timing of the announcements. Maybe Framework should have waited a year and then use an RDNA 4-based APU, developed and marketed in close coordination with AMD. The competitors are Apple and Nvidia, where both build their own chips and get to design their entire systems.
It seems very silly to me to change the naming scheme just for the 9000 series when they're going to have to change it again for the next series. Well I suppose they could pull an Intel and go with RX 10070 XT next time. I guess we can be thankful that they didn't call it the AMD Radeon AI Max+ RX 9070 XT.
Is that what this is? https://github.com/NVIDIA/open-gpu-kernel-modules
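That repo is NVIDIA's open-source kernel modules (the userspace driver stays proprietary). If you want to check which flavour a given install is running, this is the quick check I use, assuming I remember the license strings right:

  # The open kernel modules report "Dual MIT/GPL"; the classic ones report "NVIDIA":
  modinfo -F license nvidia

  # Loaded driver version, for reference:
  cat /proc/driver/nvidia/version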
* ECC on simple unbuffered desktop RAM, for 20 years now.
I've played quite a few games on Linux. Performance is still better on Windows, though.
But instead of random anecdotes, here's Valve's word on it: https://www.pcguide.com/news/nvidia-drivers-are-holding-back...
CUDA vs ROCm (Rusticl?) is probably another story, though.
https://www.phoronix.com/news/AMD-ROCm-RX-9070-Launch-Day
Edit: it just doesn't matter for launch day. It'll likely be supported eventually.
Yet their slides show INT8 and INT8-with-sparsity performance improvements, as well as "Supercharged AI Performance".
rocminfo:

  **** Agent 2 ****
    Name:                    gfx1201
    Uuid:                    GPU-cea119534ea1127a
    Marketing Name:          AMD Radeon Graphics
    Vendor Name:             AMD
    Feature:                 KERNEL_DISPATCH
    Profile:                 BASE_PROFILE
    Float Round Mode:        NEAR
    Max Queue Number:        128(0x80)
    Queue Min Size:          64(0x40)
    Queue Max Size:          131072(0x20000)

  [32.624s](rocm-venv) a@Shark:~/github/TheRock$ ./build/dist/rocm/bin/rocm-smi
  ======================= ROCm System Management Interface =======================
  ================================= Concise Info =================================
  Device  Node  IDs             Temp    Power  Partitions          SCLK  MCLK   Fan  Perf     PwrCap  VRAM%  GPU%
                (DID,    GUID)  (Edge)  (Avg)  (Mem, Compute, ID)
  =================================================================================
  0       2     0x73a5,  59113  N/A     N/A    N/A, N/A, 0         N/A   N/A    0%   unknown  N/A     0%     0%
  1       1     0x7550,  24524  36.0°C  2.0W   N/A, N/A, 0         0Mhz  96Mhz  0%   auto     245.0W  4%     0%
  =================================================================================
  ============================== End of ROCm SMI Log =============================
Ryzen 6000 series was all mobile.
The price is very competitive, and as long as benchmarks and QC are on point, this is a massive win for AMD.
NVDA, as the "market leader", dropped the ball with the 50 series. So many issues have turned me away from their products.
It's like a really dumb Easter egg.
RDNA 4 has a 2x performance gain over 3.5 (4x with Sparsity) at FP16.
It just makes it all harder (the picking and choosing). Let's see what Project DIGITS brings once it launches.