Isn't a lot of the "Arc is so bad" judgement quite premature?
I mean, it's all based on the state of the drivers used with pre-release GPUs, and often the worst GPU of the lineup.
You could say these are beta drivers, but that is somehow not something people mention.
I mean, sure, there was a launch of Arc mobile GPUs, but only in some specific regions which were neither the US nor the EU; from the dynamics involved it's, as far as I can tell, a bit like a closed beta release.
So shouldn't we wait until the "proper" market launch of the dedicated GPUs in the US before taking it apart as a catastrophe?
And sure, older games might not run as well, and some maybe never will (which doesn't mean they won't run well enough to be played nicely). Maybe except on Steam, because of the emulation of older DirectX versions being based on Vulkan; that will be interesting.
Intel has literally sent out engineers to LTT and Gamers Nexus to make this exact point. That being said, the cards are already out on the market and people can still buy them, so it's not like tech reviewers can hold off based on promised future performance.
FWIW I've heard DXVK actually can run on Windows, but you can't use it in AppX containers. Perhaps Intel bundling it with Arc's drivers would be a better option moving forward?
It's okay that those cards are reviewed according to their current state. They are pretty bad. But that doesn't mean the second or third generation of those cards has to be bad, when they remove the resizable bar requirement (so they can be used in older systems) and improve the drivers. DXVK would be a great route, but I doubt Intel can push this on a driver level - it would be more of a Steam or Windows thing.
I assume those cards will be a great option for Linux systems soon, where DXVK is the default anyway. It's good to have an alternative to AMD.
Yeah it seems pretty unrealistic to expect Intel to catch up to the NVIDIA and AMD duopoly on their first generation of (modern) cards.
Intel seems to have rightly recognized that the driver advantage is a huge moat for those guys - they have to instead compete on price and focus on having good support for the titles that will get them the biggest chunk of the market.
That said, man, if they could have released these a year ago, the wind would have been at their back way more than it is now, with GPU prices trending back towards MSRP.
idk man, Intel is one of the biggest, richest tech (chip) companies in the world; I don't feel like I have to give them chances. This is pretty unprofessional.
In a similar vein, I find that reviews from most outlets are written only for release day and are often either not revised (if in written form) or not addressed in later videos.
Case in point: Baldur's Gate: Dark Alliance 2 just saw a re-release where the devs didn't do much but port it to other systems and render the game at a higher resolution. It had a release-day bug that caused the game to crash, so [some review sites panned the game](https://xboxera.com/2022/07/21/review-baldurs-gate-dark-alli...), giving it low scores of sub-5-out-of-10. The devs fixed it the next day. Now, is that particular site going to revise their review? No. It sits at a 4/10 with a small note at the top saying the glitch has been fixed.
The court of public opinion is obviously a thing with Intel, and there's a lot of long-established fanboyism with "team green" and "team red", so there are a lot of people waiting for Intel to fail.
Also, I hope it's only a matter of time until companies embrace abstraction layers like DXVK.
It's better than them assuming issues will be fixed by launch day. I'm still pissed at PC Gamer for giving Hellgate: London a glowing review despite it being a buggy mess, making zero mention of it. That said, should a barebones $30 port of a game that was mediocre even at the time, 20 years ago, get that high a score to begin with?
I agree that we should wait for the final release to make any judgements but these drivers have been in development for three years or so. They're not going to magically improve in the next few months.
It was announced that the pricing would be based on the performance of tier-3 titles (the ones not receiving special attention), so it could be a good deal before the drivers improve.
Intel's GPUs have been exclusively UMA since the i740, so this doesn't seem like a particularly rare mistake to make if it was from someone used to all their previous ones.
Unified Memory Architecture, basically the normal integrated-graphics setup of using system RAM as VRAM.
It was really amusing to see Apple hype that term (which has been around in the PC industry for decades, synonymous with low cost and low performance) so heavily around the release of the M1.
The contrasting term is "DIS", i.e. discrete graphics.
I'm not one to shy away from dunking on Intel, and they very often deserve it, but I feel that Intel is getting a lot of undue shit in regards to Arc. I'm actually very excited for Arc and will probably buy one to play with if the price is right, if for no other reason than that Intel's Linux support has in the past been pretty solid.
Launching with broken drivers certainly did not help.
Promising to price their GPUs based on their performance in "Tier 3"* games will certainly help them win a lot of consumer goodwill though, especially with Intel targeting the critically underserved low and midrange gaming GPU market.
* For those OOTL, Intel has grouped all games into three tiers based on driver optimization and graphics API usage. Tier 1 is DX12/Vulkan titles they have specifically optimized for. Tier 2 includes all other DX12/Vulkan titles. Tier 3 is DX11/OpenGL/older DX. Nvidia and AMD have had a decade+ to optimize their drivers for higher-level APIs like DX11, including hundreds of game/engine-specific optimizations. The result is Arc GPUs performing notably worse in DX11 titles than equivalent hardware from Nvidia and AMD.
Intel's promised pricing structure means that your $250 Arc GPU will perform about as well in most Tier 3 games as a $250 RTX or Radeon card. Meanwhile, in Tier 1 and 2 DX12/Vulkan titles it will likely outperform competing cards in the same price range.
> Launching with broken drivers certainly did not help.
I get what you mean but people seem to be forgetting that Ryzen launched in an absolutely abysmal state and it took them a few iterations to get to the industry leader that we have today. I think Intel taking the loss leader perspective on this and essentially using people willing to buy as beta testers will pan out for them in the long run. They have been more transparent about it than AMD was with Ryzen or than Nvidia has been with literally anything which I appreciate.
Honestly, I like it. People are judging the products for how they work right now, which appears to be not very well. If Arc is eventually improved through software, I hope that a new round of reviews will come out then which tells the updated story. If reviews of Arc were undeservedly positive because there's a chance that driver improvements will make Arc better in the future, that would have been pretty disingenuous to people who wanna buy the product now.
If you want good reviews at launch, then launch a finished product.
Why _would_ you expect anything different to happen at a massive company?
It's not like the team who writes the drivers is likely to know of the team working on optimizing compilers, profilers, or anything at all really.
My experience has been that especially in companies working in diverse disciplines across disparate codebases, very little is shared. A team of 8 in a tiny company is just as likely to make the same mistakes as the team of 8 in a bigger company. At large companies with more unified codebases and disciplines, maybe one person or team has added some process which helps identify egregious performance issues at some point in the past. But such shared process or tooling would be really hard at a company like Intel where one team makes open-source Linux drivers while another makes highly specialized RTL design software, for example.
>Why _would_ you expect anything different to happen at a massive company?
Because a massive company has enough money around to put the processes in place and hire skilled people to do both deep[0] testing and system[1] testing.
[0] https://www.developsense.com/blog/2017/03/deeper-testing-1-v...
[1] The definition of "system testing" I'm using: "Testing to assess the value of the system to people who matter." Those include stakeholders, application developers, end users, etc.
Maybe they did profile it and this fix is the result. Or maybe Vulkan raytracing on Linux for an unreleased GPU is lower priority and they just recently got around to noticing it.
Massive companies are more prone to silly errors like this.
Source: I work for a similarly massive company. You would not believe the amount of issues similar to this. This one is getting attention because it happened in open-source code.
Does the company have people whose job description includes looking for deeper problems such as this one?
I don't know what your position or political standing in the company is, but I assume that with the tech job market the way it is, if you still work there you care about the company to some degree. So perhaps bringing this issue up with (more) senior management is the way to go.
And if they say there is no budget, or that it would take a bureaucratic nightmare to make space for it in the budget, ask them what the budget is for dealing with PR disasters such as this one.
That's really ignorant, given that Intel has thousands of software engineers supporting hundreds of open-source projects you use daily, including Linux, where Intel has consistently been a top-ten contributor for years.
This mistake could easily have been in other vendors' Linux GPU drivers; in the end those don't have nearly the same priority (and in turn resources) as the Windows GPU drivers. And it's a very easy mistake to find. And I don't know if anyone even cared about ray tracing with Intel integrated graphics on Linux desktops (and in turn no one profiled it deeply). I mean, ray tracing is generally something you're much less likely to do on an integrated GPU. And it's a really easy mistake to make.
And sure, I'm pretty sure their software department(s?) have a lot of potential for improvement; I mean, they have probably been hampered by the same internal structures which led to Intel face-planting somewhat hard recently.
Even so, the very first thing anybody learns about GPU programming is to use the VRAM on the card whenever possible, and to minimize transfers back and forth between VRAM and main memory. This is a super basic mistake that should have been caught by some kind of test suite, at least.
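To make that concrete, here is a minimal, hedged sketch of what "prefer VRAM" looks like at the Vulkan API level. The helper name is made up and this is not the actual Mesa/ANV code, just the textbook pattern of picking a DEVICE_LOCAL memory type when one is available:

```c
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Hypothetical helper (not the real driver code): pick a memory type that
 * satisfies the buffer's requirements and lives in device-local memory
 * (i.e. VRAM on a discrete card). Returns UINT32_MAX if none qualifies. */
uint32_t pick_device_local_type(VkPhysicalDevice phys,
                                const VkMemoryRequirements *reqs)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(phys, &props);

    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        int allowed = (reqs->memoryTypeBits >> i) & 1u;
        int local   = props.memoryTypes[i].propertyFlags &
                      VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
        if (allowed && local)
            return i;  /* GPU-resident: fast for shader-side access */
    }
    return UINT32_MAX;  /* caller falls back to host memory only if forced */
}
```

Forgetting that preference for data the raytracing shaders touch every frame is exactly the kind of oversight that doesn't crash anything but quietly costs orders of magnitude.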
Intel's high-level software teams are okay, and their hardware teams are great, but their firmware teams are a bit of a garbage fire. I assume that nobody really wants to work on firmware, and the organization does not encourage it.
I'm not sure this is something you'd easily find through profiling. The change was switching a memory allocation to use GPU memory rather than system memory. Allocating system memory probably isn't noticeably slower than allocating GPU memory, so the line that's at fault wouldn't show up when profiling. Instead, the GPU-side raytracing code is just slower whenever it accesses the allocated memory.
So you would have to profile GPU-side code, which is probably really hard; and you'd have to find slow memory accesses, not slow code or slow algorithms, which is even harder. And those memory accesses may be spread out, so that each instruction which uses the slow memory won't stand out; the effect may only be noticeable in aggregate.
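FWIW, the usual coarse-grained way to catch this in aggregate is to bracket the whole pass with GPU timestamp queries rather than trying to profile individual instructions. A rough sketch, with function and variable names of my own choosing (not Intel's tooling):

```c
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Bracket a GPU pass with two timestamps. `pool` must be a query pool
 * created with VK_QUERY_TYPE_TIMESTAMP and at least two slots. */
void record_timed_pass(VkCommandBuffer cmd, VkQueryPool pool)
{
    vkCmdResetQueryPool(cmd, pool, 0, 2);
    vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, pool, 0);
    /* ... record the raytracing dispatch in between ... */
    vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, pool, 1);
}

/* After the queue has drained, read both ticks back and convert them to
 * nanoseconds using VkPhysicalDeviceLimits::timestampPeriod. */
double read_pass_ns(VkDevice dev, VkQueryPool pool, float timestamp_period)
{
    uint64_t ticks[2];
    vkGetQueryPoolResults(dev, pool, 0, 2, sizeof(ticks), ticks,
                          sizeof(uint64_t),
                          VK_QUERY_RESULT_64_BIT | VK_QUERY_RESULT_WAIT_BIT);
    return (double)(ticks[1] - ticks[0]) * timestamp_period;
}
```

That only tells you the pass is slow overall, not why, but a pass taking far longer than the hardware should need is the kind of red flag that prompts a closer look.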
People working at big companies are ALWAYS worried about shipping the volume of code they need to fulfill some monthly or quarterly goal. The idea that they have time to profile, improve, or check results is inconsistent with reality. When you see real code produced at big companies, it is barely good enough to satisfy the requirements, forget about any sense of high quality.
Not just Intel but programmers in general have got to demand better tools and use the tools they already have. This is an obvious problem if you go looking for it. Profiling needs to be on every programmer's checklist.
That could be a GPU memory leak in an application, no? When an application allocates GPU memory, that's taken from main system memory on integrated chips, and the Intel driver would be responsible for that.
When looking at drivers or OSes, the hardware provides performance and the software takes it away. You should consider the ideal performance of the hardware as the baseline and measure the overhead relative to that.
Fair indeed, but I suspect the result would likely be the same. It's a bit like making a mess and cleaning it up - good on 'you', but don't expect praise
The article jumps right into that:
> This is something to be celebrated, of course. However, on the flip side, the driver was 100X slower than it should have been because of a memory allocation oversight.
Maybe this is where the phrase "It's a wash" comes from
Anyway, I'm hopeful things mature quickly - having worked with Intel people, I'm cautiously hopeful
Many years ago I released some software with a major algorithm replacement that delivered a roughly 10x speed improvement.
Within six months I discovered a bug in the new algorithm whose removal delivered roughly another 10x improvement. (That’s the most dramatic single speed up of a previously well designed algorithm I have delivered in my career.)
Numerical algorithm bugs can be tricky to detect when sandwiched between dramatic improvements like that!
https://www.phoronix.com/news/Intel-Vulkan-RT-100x-Improve
This being a mistake/bug, I think it's obviously underperforming rather than being optimized.
> This is why there shouldn't be such a bit. :) Default everything to local, have an opt-out bit!
yep. why was the default the slow path?
The only thing I can think of is that they reused the code from their integrated graphics drivers, which likely don't support dedicated GPU memory. So the default is the option that works on the millions of "GPUs" they have already shipped.
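Roughly the pattern being argued for above, with made-up flag names (not the actual Mesa/ANV flags): make device-local placement the default and make host-visible memory the explicit opt-out, so code carried over from an integrated-only driver can't silently land on the slow path.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical allocation flags -- illustrative only. */
enum alloc_flags {
    ALLOC_DEFAULT      = 0,      /* device-local (VRAM) when the GPU has it */
    ALLOC_HOST_VISIBLE = 1 << 0, /* explicit opt-out: CPU-mappable memory   */
};

struct gpu_buffer {
    uint64_t size;
    bool     device_local;
};

/* With this convention, passing no flags gives the fast placement on a
 * discrete card, while integrated parts (system memory only) just ignore
 * the distinction. The slow opt-out has to be spelled out, so it stands
 * out in code review. */
struct gpu_buffer alloc_buffer(uint64_t size, uint32_t flags, bool has_vram)
{
    struct gpu_buffer buf = { .size = size, .device_local = false };
    buf.device_local = has_vram && !(flags & ALLOC_HOST_VISIBLE);
    return buf;
}
```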
That said, the 810/815 that came after did have a (to my knowledge little-used) "display cache" feature, but they were otherwise still a UMA design.