tester756 · 3 months ago
https://www.phoronix.com/review/intel-arc-pro-b50-linux

>Overall the Intel Arc Pro B50 was at 1.47x the performance of the NVIDIA RTX A1000 with that mix of OpenGL, Vulkan, and OpenCL/Vulkan compute workloads both synthetic and real-world tests. That is just under Intel's own reported Windows figures of the Arc Pro B50 delivering 1.6x the performance of the RTX A1000 for graphics and 1.7x the performance of the A1000 for AI inference. This is all the more impressive when considering the Arc Pro B50 price of $349+ compared to the NVIDIA RTX A1000 at $420+.

swiftcoder · 3 months ago
IIRC, the RTX A1000 is an RTX 3050 8GB with ~10% of the shader cores disabled, retailing for double the price of a 3050?

I guess it's a boon for Intel that NVidia repeatedly shoots their own workstation GPUs in the foot...

eYrKEC2 · 3 months ago
They may not be disabling them maliciously -- they may be "binning" them -- running tests on the parts and then fusing off/disabling broken pieces of the silicon in order to avoid throwing away a chip that mostly works.
zamadatix · 3 months ago
Comparing price to performance in this space might not make as much sense as it would seem. One of the (very few) interesting qualities of the A1000 is that it's a single-slot, low-profile workstation GPU. Intel kept the "powered by the PCIe slot" aspect, but made it dual slot and full height. Needing a "workstation" GPU in a tiny form factor (i.e. a machine not meant to slot and power full-sized GPUs) was something one could charge a premium for, but the only selling point of this card is the price.
tizio13 · 3 months ago
I think you might be mistaken about the height of the card; if you look at the ports, they are mini-DP on a low-profile bracket. The picture also states that it includes both types of brackets.
derefr · 3 months ago
I'm still waiting for one of Nvidia/AMD/Intel to realize that if they make an inference-focused Thunderbolt eGPU "appliance" (not just a PCIe card in an eGPU chassis, but a sealed, vertically-integrated board-in-box design), then that would completely free them from design constraints around size/shape/airflow in an ATX chassis.

Such an appliance could plug into literally any modern computer — even a laptop or NUC. (And for inference, "running on an eGPU connected via Thunderbolt to a laptop" would actually work quite well; inference doesn't require much CPU, nor have tight latency constraints on the CPU<->GPU path; you mostly just need enough arbitrary-latency RAM<->VRAM DMA bandwidth to stream the model weights.)

(And yeah, maybe your workstation doesn't have Thunderbolt, because motherboard vendors are lame — but then you just need a Thunderbolt PCIe card, which is guaranteed to fit more easily into your workstation chassis than a GPU would!)
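
Quick back-of-envelope to put numbers on that (the bandwidths and model size below are assumptions, not measurements): the narrow Thunderbolt link mostly costs a few extra seconds loading the weights once, not steady-state decode speed.

  # Rough sketch with assumed numbers; Thunderbolt bandwidth is mostly a
  # start-up cost for inference, not a per-token bottleneck.
  TB_USABLE_GBS = 3.0    # ~3 GB/s usable PCIe bandwidth over Thunderbolt 4 (assumed)
  PCIE4_X16_GBS = 26.0   # ~26 GB/s usable over PCIe 4.0 x16 (assumed)
  MODEL_GB = 8.0         # e.g. a ~13B model quantized to 4-bit (assumed)

  def load_seconds(size_gb, bw_gbs):
      """One-time cost of streaming the weights from host RAM into VRAM."""
      return size_gb / bw_gbs

  print(f"Weight load over Thunderbolt: {load_seconds(MODEL_GB, TB_USABLE_GBS):.1f} s")
  print(f"Weight load over PCIe 4 x16:  {load_seconds(MODEL_GB, PCIE4_X16_GBS):.1f} s")
  # After the load, decode traffic is just token ids and sampled tokens (kilobytes),
  # so the slow link barely matters once the model is resident in VRAM.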

onlyrealcuzzo · 3 months ago
Theoretically, they should be able to launch something more competitive in that form-factor next year, though, right?
bsder · 3 months ago
Put 32GB on that card and everybody would ignore performance issues.

With 16GB everybody will just call it another in the long list of Intel failures.

Moto7451 · 3 months ago
This is not that sort of workstation.

My first software job was at a place doing municipal architecture. The modelers had and needed high-end GPUs in addition to the render farm, but plenty of roles at the company simply needed anything better than what the Intel integrated graphics of the time could produce in order to open the large, detailed models.

In these roles the types of work would include things like seeing where every pipe, wire, and plenum for a specific utility or service was in order to plan work between a central plant and a specific room. Stuff like that doesn’t need high amounts of VRAM since streaming textures in worked fine. A little lag never hurt anyone here as the software would simply drop detail until it caught up. Everything was pre-rendered so it didn’t need large amounts of power to display things. What did matter was having the grunt to handle a lot of content and do it across three to six displays.

Today I’m guessing the integrated chips could handle it fine, but even my 13900K’s GPU only does DisplayPort 1.4 and at most three displays on my motherboard. It should do four, but it’s up to the ODMs at that point.

For a while Matrox owned a great big slice of this space, but eventually everyone fell by the wayside except NVidia and AMD.

tracker1 · 3 months ago
It's already got 2x the ram and roughly 1.5x the performance of the more expensive NVidia competitor... I'm not sure where you are getting your expectations from.
snowram · 3 months ago
I wonder why everyone keeps saying "just put more VRAM" yet no cards seem to do that. If it is that easy to compete with Nvidia, why don't we already have those cards?
colechristensen · 3 months ago
#3 player just released something that compares well in price/performance ratio with the #1 player's release from a year and a half ago... yep
tinco · 3 months ago
No. The A1000 was well over $500 last year. This is the #3 player coming out with a card that's a better deal than what the #1 player currently has to offer.

I don't get why there are people trying to twist this story or come up with strawmen like the A2000 or even the RTX 5000 series. Intel is coming into this market competitively, which as far as I know is a first, and it's also impressive.

Coming into the gaming GPU market had always been too ambitious a goal for Intel; they should have started by competing in the professional GPU market. It's well known that Nvidia and AMD have always been price gouging this market, so it's fairly easy to enter it competitively.

If they can enter this market successfully and then work their way up the food chain, that seems like a good way to recover from their initial fiasco.

blagie · 3 months ago
Well, no. It doesn't. The comparison is to the A1000.

Toss a 5060 Ti into the comparison table, and we're on an entirely different playing field.

There are reasons to buy the workstation NVidia cards over the consumer ones, but those mostly go away when looking at something like the new Intel. Unless one is in an exceptionally power-constrained environment, yet has room for a full-sized card (not SFF or laptop), I can't see a time the B50 would even be in the running against a 5060 Ti, 4060 Ti, or even 3060 Ti.

KeplerBoy · 3 months ago
"release from a year and a half ago", that's technically true but a really generous assessment of the situation.

We could just as well compare it to the slightly more capable RTX A2000, which was released more than 4 years ago. Either way, Intel is competing with the EoL Ampere architecture.

tossandthrow · 3 months ago
... At a cheaper current-day price.

There are huge markets that do not care about SOTA performance metrics but need to get a job done.

moffkalast · 3 months ago
> 1.7x the performance of the A1000 for AI inference

That's a bold claim when their acceleration software (IPEX) is barely maintained and incompatible with most inference stacks, and their Vulkan driver is far behind it in performance.

mythz · 3 months ago
Really confused why Intel and AMD both continue to struggle and yet still refuse to offer what Nvidia won't, i.e. high-RAM consumer GPUs. I'd much prefer paying 3x cost for 3x VRAM (48GB/$1047), 6x cost for 6x VRAM (96GB/$2094), 12x cost for 12x VRAM (192GB/$4188), etc. They'd sell like hotcakes and software support would quickly improve.

At 16GB I'd still prefer to pay a premium for NVidia GPUs given their superior ecosystem. I really want to get off NVidia, but Intel/AMD isn't giving me any reason to.

fredoralive · 3 months ago
Because the market of people who want huge RAM GPUs for home AI tinkering is basically about 3 Hacker News posters. Who probably won’t buy one because it doesn’t support CUDA.

PS5 has something like 16GB unified RAM, and no game is going to really push much beyond that in VRAM use, we don’t really get Crysis style system crushers anymore.

bilekas · 3 months ago
> PS5 has something like 16GB unified RAM, and no game is going to really push much beyond that in VRAM use, we don’t really get Crysis style system crushers anymore.

This isn't really true from the recreational card side; nVidia themselves are reducing the number of 8GB models as a sign of market demand [1]. Games these days are regularly maxing out 6 and 8 GB when running anything above 1080p at 60fps.

The recent prevalence of Unreal Engine 5, often with poor optimization for weaker hardware, is also causing games to be released basically unplayable for most.

For recreational use the sentiment is that 8GB is scraping the bottom of the requirements. Again, this is partly due to bad optimization, but games are also being played at higher resolutions, which requires more memory for larger textures.

[1] https://videocardz.com/newz/nvidia-reportedly-reduces-supply...

epolanski · 3 months ago
I really doubt your claim considering how many people I've seen buy $5k MacBook Pros with 48+ GB of RAM for local inference.

A $500 32GB consumer GPU is an obvious best seller.

Thus, let's call it what it is: they don't want to cannibalize their higher-end GPUs.

paool · 3 months ago
Maybe today, but the more accessible and affordable they become, the more likely people can start offering "self hosted" options.

We're already seeing competitors to AWS, but only ones targeting things like Qwen, DeepSeek, etc.

There are enterprise customers who are subject to compliance laws and literally want AI but cannot use any of the top models, because everything has to run on their own infrastructure.

Rohansi · 3 months ago
> PS5 has something like 16GB unified RAM, and no game is going to really push much beyond that in VRAM use

That's pretty funny considering that PC games are moving more towards 32GB RAM and 8GB+ VRAM. The next generation of consoles will of course increase to make room for higher quality assets.

jantuss · 3 months ago
Another use for high RAM GPUs is the simulation of turbulent flows for research. Compared to CPU, GPU Navier-Stokes solvers are super fast, but the size of the simulated domain is limited by the RAM.
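
A rough illustration of how quickly that limit bites, assuming eight double-precision fields per cell (the field count is an assumption, not from any particular solver):

  # VRAM estimate for a uniform-grid Navier-Stokes solver: n^3 cells with a handful
  # of scalar fields (velocity components, pressure, work arrays) in double precision.
  def grid_vram_gb(n, fields=8, bytes_per_value=8):
      return n**3 * fields * bytes_per_value / 1e9

  for n in (256, 512, 1024):
      print(f"{n}^3 grid: ~{grid_vram_gb(n):.1f} GB")
  # ~1.1 GB, ~8.6 GB, ~68.7 GB: doubling the resolution needs 8x the memory,
  # which is why the domain size is capped by card VRAM.
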
FirmwareBurner · 3 months ago
>Because the market of people who want huge RAM GPUs for home AI tinkering is basically about 3 Hacker News posters

You're wrong. It's probably more like 9 HN posters.

fnord77 · 3 months ago
Marketing is misreading the room. I believe there's a bunch of people buying no video cards right now who would if there were high-VRAM options available.
wpm · 3 months ago
This isn’t a gaming card, so what the PS5 does or has is not relevant here.
daemonologist · 3 months ago
This card does have double the VRAM of the more expensive Nvidia competitor (the A1000, which has 8 GB), but I take your point that it doesn't feel like quite enough to justify giving up the Nvidia ecosystem. The memory bandwidth is also... not great.

They also announced a 24 GB B60 and a double-GPU version of the same (saves you physical slots), but it seems like they don't have a release date yet (?).

jjkaczor · 3 months ago
Yeah, I am currently building a new PC, and have been waiting and waiting for B60 Pro release dates from partners - here is an offering from ASRock:

https://www.asrock.com/Graphics-Card/Intel/Intel%20Arc%20Pro...

HelloNurse · 3 months ago
They don't have a release date yet because they need to see sales results to commit to such a new product.
cmxch · 3 months ago
Maxsun does offer a high VRAM (48GB) dual Arc Pro B60, but the only US availability has it on par with a 5090 at ~$3000.
PostOnce · 3 months ago
I think that's actually two GPUs on one card, not a single GPU with 48GB of VRAM.
Ekaros · 3 months ago
I am not sure there is significant enough market for those. That is, one selling enough consumer units to cover all the design and other costs. From a gamer perspective, 16GB is now a reasonable point; 32GB is the most one would really want, and even then not at more than, say, a $100 higher price point.

This to me is the gamer perspective. This segment really does not need even 32GB, let alone 64GB or more.

drra · 3 months ago
Never underestimate bragging rights in the gamer community. The majority of us run unoptimized systems with that one great piece of gear, and as long as the game runs at decent FPS and we have some bragging rights, it's all ok.
imiric · 3 months ago
> I am not sure there is significant enough market for those.

How so? The prosumer local AI market is quite large and growing every day, and is much more lucrative per capita than the gamer market.

Gamers are an afterthought for GPU manufacturers. NVIDIA has been neglecting the segment for years, and is now much more focused on enterprise and AI workloads. Gamers get marginal performance bumps each generation, and side effect benefits from their AI R&D (DLSS, etc.). The exorbitant prices and performance per dollar are clear indications of this. It's plain extortion, and the worst part is that gamers accepted that paying $1000+ for a GPU is perfectly reasonable.

> This segment really does not need even 32GB, let alone 64GB or more.

4K is becoming a standard resolution, and 16GB is not enough for it. 24GB should be the minimum, and 32GB for some headroom. While it's true that 64GB is overkill for gaming, it would be nice if that would be accessible at reasonable prices. After all, GPUs are not exclusively for gaming, and we might want to run other workloads on them from time to time.

While I can imagine that VRAM manufacturing costs are much higher than DRAM costs, it's not unreasonable to conclude that NVIDIA, possibly in cahoots with AMD, has been artificially controlling the prices. While hardware has always become cheaper and more powerful over time, for some reason, GPUs buck that trend, and old GPUs somehow appreciate over time. Weird, huh. This can't be explained away as post-pandemic tax and chip shortages anymore.

Frankly, I would like some government body to investigate this industry, assuming they haven't been bought out yet. Label me a conspiracy theorist if you wish, but there is precedent for this behavior in many industries.

zdw · 3 months ago
I doubt you'd get linear scaling of price/capacity - the larger capacity modules are more expensive per GB than smaller ones, and in some cases are supply constrained.

The number of chips on the bus is usually pretty low (1 or 2 of them on most GPUs), so GPUs tend to have to scale out their memory bus widths to get to higher capacity. That's expensive and takes up die space, and for the conventional case (games) isn't generally needed on low end cards.
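
To make the bus-width arithmetic concrete, here's a sketch assuming typical 2 GB (16 Gbit) GDDR6 chips, one per 32-bit channel, or two in clamshell mode:

  # Why capacity tracks bus width: capacity = channels * chip size, and "clamshell"
  # mode doubles the chips per channel. Chip density is an assumed typical value.
  def vram_gb(bus_width_bits, chip_gb=2.0, clamshell=False):
      channels = bus_width_bits // 32
      return channels * chip_gb * (2 if clamshell else 1)

  print(vram_gb(128))                  # 8 GB  - a typical low-end card
  print(vram_gb(128, clamshell=True))  # 16 GB - same die, doubled-up chips
  print(vram_gb(384, clamshell=True))  # 48 GB - needs a much wider (pricier) die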

What really needs to happen is someone needs to make some "system seller" game that is incredibly popular and requires like 48GB of memory on the GPU to build demand. But then you have a chicken/egg problem.

Example: https://wccftech.com/nvidia-geforce-rtx-5090-128-gb-memory-g...

YetAnotherNick · 3 months ago
> I'd much prefer paying 3x cost for 3x VRAM

Why not just buy 3 cards then? These cards don't require active cooling anyways and you can just fit 3 in a decent-sized case. You will get 3x VRAM speed and 3x compute. And if your use case is LLM inference, it will be a lot faster than 1 card with 3x the VRAM.

zargon · 3 months ago
We will buy 4 cards if they are 48 GB or more. At a measly 16 GB, we’re just going to stick with 3090s, P40s, MI50s, etc.

> 3x VRAM speed and 3x compute

LLM scaling doesn’t work this way. If you have 4 cards, you may get a 2x performance increase if you use vLLM. But you’ll also need enough VRAM to run FP8. 3 cards would only run at 1x performance.
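
For reference, a minimal sketch of how that multi-card split is typically configured in vLLM (the model name is a placeholder); tensor_parallel_size has to divide the model's attention-head count evenly, which is part of why an odd card count like 3 doesn't buy you much:

  # Minimal vLLM tensor-parallel sketch; the model name below is a placeholder.
  from vllm import LLM, SamplingParams

  # Shard the weights across 4 GPUs; the head count must be divisible by this.
  llm = LLM(model="some-org/some-70b-instruct", tensor_parallel_size=4)
  outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
  print(outputs[0].outputs[0].text)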

_zoltan_ · 3 months ago
because then, instead of RAM bandwidth, you're dealing with PCIe bandwidth, which is way lower.
0x500x79 · 3 months ago
I think it's a bit of planned obsolescence as well. The 1080 Ti has been a monster with its 11GB of VRAM up until this generation. A lot of enthusiasts basically call out that Nvidia won't make that mistake again since it led to longer upgrade cycles.
akvadrako · 3 months ago
AMD Strix Halo has about 100GB of VRAM for around $1500 if that's all you care about.
kristopolous · 3 months ago
You want an M3 ultra Mac studio
ginko · 3 months ago
That only runs Mac OS so it's useless.
doctorpangloss · 3 months ago
they don't manufacture RAM, so none of the margin goes to them
nullc · 3 months ago
Even if they put out some super-high-memory models and just passed the RAM through at cost, it would increase sales -- potentially quite dramatically -- increase their total income a lot, and give them a good chance of transitioning to being a market leader rather than an also-ran.

AMD has lagged for so long because of the software ecosystem, but in the current climate they'd only need to support a couple of popular model architectures to immediately grab a lot of business. The failure to do so is inexplicable.

I expect we will eventually learn that this was about yet another instance of anti-competitive collusion.

kube-system · 3 months ago
They sell the completed card, which has margin. You can charge more money for a card with more vram.
wewewedxfgdf · 3 months ago
The new CEO of Intel has said that Intel is giving up competing with Nvidia.

Why would you bother with any Intel product with an attitude like that? It gives zero confidence in the company. What business is Intel in, if not competing with Nvidia and AMD? Is it giving up competing with AMD too?

jlei523 · 3 months ago

  The new CEO of Intel has said that Intel is giving up competing with Nvidia.
No, he said they're giving up competing against Nvidia in training. Instead, he said Intel will focus on inference.

That's the correct call in my opinion. Training is far more complex and will span multi data centers soon. Intel is too far behind. Inference is much simpler and likely a bigger market going forward.

IshKebab · 3 months ago
I disagree - training enormous LLMs is super complex and requires a data centre... But most research is not done at that scale. If you want researchers to use your hardware at scale you also have to make it so they can spend a few grand and do small scale research with one GPU on their desktop.

That's how you get things like good software support in AI frameworks.

SadTrombone · 3 months ago
AMD has also often said that they can't compete with Nvidia at the high end, and as the other commenter said: market segments exist. Not everyone needs a 5090. If anything, people are starved for options in the budget/mid-range market, which is where Intel could pick up a solid chunk of market share.
pshirshov · 3 months ago
Regardless of what they say, they CAN compete in training and inference; there is literally no alternative to the W7900 at the moment. That's 4080 performance with 48GB of VRAM for half of what similar CUDA devices would cost.
Mistletoe · 3 months ago
I’m interested in buying a GPU that costs less than a used car.
ksec · 3 months ago
>What business is Intel in, if not competing with Nvidia and AMD?

The foundry business. The latest report on discrete graphics market share has Nvidia at 94%, AMD at 6%, and Intel at 0%.

I may still have another 12 months to go. But in 2016 I made a bet against Intel engineers on Twitter and offline, suggesting GPUs are not a business they want to be in, or at least that they were too late. They said at the time they would get 20% market share minimum by 2021. I said I would be happy if they did even 20% by 2026.

Intel is also losing money, and they need cash flow to compete in the foundry business. I have long argued they should have cut the GPU segment when Pat Gelsinger arrived; it turns out Intel bound themselves to GPUs through all the government contracts and supercomputers they promised to deliver. Now that they have delivered all or most of it, they will need to think about whether to continue or not.

Unfortunately, unless the US points guns at TSMC, I just don't see how Intel will be able to compete, as Intel needs to be in a leading-edge position in order to command the margin required for Intel to function. Right now, in terms of density, Intel 18A is closer to TSMC N3 than N2.

baq · 3 months ago
The problem is they can’t not attempt or they’ll simply die of irrelevance in a few years. GPUs will eat the world.

If NVidia gets as complacent as Intel became when it had the market share in the CPU space, there is opportunity for Intel, AMD, and others in NVidia's margin.

MangoToupe · 3 months ago
> Unfortunately unless US point guns at TSMC

They may not have to, frankly, depending on when China decides to move on Taiwan. It's useless to speculate—but it was certainly a hell of a gamble to open a SOTA (or close to it—4 nm is nothing to sneeze at) fab outside of the island.

grg0 · 3 months ago
Zero confidence why? Market segments exist.

I want hardware that I can afford and own, not AI/datacenter crap that is useless to me.

ryao · 3 months ago
I thought that he said that they gave up at competing with Nvidia at training, not in general. He left the door open to compete on inference. Did he say otherwise more recently?
mathnode · 3 months ago
Because we don't need data centre hardware to run domestic software.
MangoToupe · 3 months ago
I don't really want an nvidia gpu; it's too expensive and I won't use most of it. This actually looks attractive.
ocdtrekkie · 3 months ago
NVIDIA cards are unironically over $3,500 at the store in some cases...
jasonfrost · 3 months ago
Isn't Intel the only largely domestic fab?
high_na_euv · 3 months ago
Wtf? Source on that?
Alifatisk · 3 months ago
A feature I haven't seen anyone comment on yet is Project Battlematrix [1][2] with these cards, which allows for multi-GPU AI orchestration. It's something Nvidia offers for enterprise AI workloads (Run:ai), but Intel is bringing it to consumers.

1. https://youtu.be/iM58i3prTIU?si=JnErLQSHpxU-DlPP&t=225

2. https://www.intel.com/content/www/us/en/developer/articles/t...

jazzyjackson · 3 months ago
Huh, I didn't realize these were just released. I came across them while looking for a GPU with AV1 hardware encoding, and I've been putting a shopping cart together for a mini-ITX Xeon server for all my ffmpeg shenanigans.

I like to Buy American when I can, but it's hard to find out which fabs various CPUs and GPUs are made in. I read Kingston does some RAM here and Crucial some SSDs. Maybe the silicon is fabbed here, but everything I found is "assembled in Taiwan", which made me feel like I should get my dream machine sooner rather than later.

dangus · 3 months ago
I have the answer for you, Intel's GPU chips are on TSMC's process. They are not made in Intel-owned fabs.

There really is no such thing as "buying American" in the computer hardware industry unless you are talking about the designs rather than the assembly. There are also critical parts of the lithography process that depend on US technology, which is why the US is able to enforce certain sanctions (and due to some alliances with other countries that own the other parts of the process).

Personally I think people get way too worked up about being protectionist when it comes to global trade. We all want to buy our own country's products over others but we definitely wouldn't like it if other countries stopped buying our exported products.

When Apple sells an iPhone in China (and they sure buy a lot of them), Apple is making most of the money in that transaction by a large margin, and in turn so are you since your 401k is probably full of Apple stock, and so are the 60+% of Americans who invest in the stock market. A typical iPhone user will give Apple more money in profit from services than the profit from the sale of the actual device. The value is really not in the hardware assembly.

In the case of electronics products like this, almost the entire value add is in the design of the chip and the software that is running on it, which represents all the high-wage work, and a whole lot of that labor in the US.

US citizens really shouldn't envy a job where people sit at an electronics bench doing repetitive assembly work for 12 hours a day in a factory, wishing we had more of those jobs in our country. They should instead be focused on making high-level education more available/affordable so that the country stays on top of the economic food chain, with most/all of its citizens doing high-value work, rather than letting education stay expensive and begging foreign manufacturers to open satellite factories to employ our uneducated masses.

I think the current wave of populist protectionist ideology essentially blames the wrong causes for declining affordability and increasing inequality for the working class. Essentially, people think that bringing the manufacturing jobs back and reversing globalism will right the ship on income inequality, but the reality is that the reason equality was so good for Americans in the mid-century was because the wealthy were taxed heavily, European manufacturing was decimated in WW2, and labor was in high demand.

The above of course is all my opinion on the situation, and a rather long tangent.

jazzyjackson · 3 months ago
Thanks for that perspective. I'm just puzzled as to why none of this says Made in USA on it. I can get socks and t-shirts woven in North Carolina, which is nice, and furniture made in Illinois. That's all a resurgence of 'arts & crafts' I suppose, valuing a product made in small batches by someone passionate about quality instead of just getting whatever is lowest cost. Suppose there's not much in the way of artisan silicon yet :)

EDIT: I did think about what the closest thing to artisan silicon would be, thought of the POWER9 CPUs, and found out those are made in the USA. The Talos II is also manufactured in the US, with the IBM POWER9 processors fabbed in New York while the Raptor motherboard is manufactured in Texas, which is also where their systems are assembled.

https://www.phoronix.com/review/power9-threadripper-core9

mschuster91 · 3 months ago
the thing with iPhone production is not about producing iPhones per se, it's about providing a large volume customer for the supply chain below it - basic stuff like SMD resistors, capacitors, ICs, metal shields, frames, god knows what else - because you need that available domestically for weapons manufacturing, should China ever think of snacking Taiwan. But a potential military market in 10 years is not even close to "worth it" for any private investors or even the government to build out a domestic supply chain for that stuff.
bane · 3 months ago
You may want to check whether your Xeon already supports hardware encoding of AV1 in the iGPU. I saved a bundle building a media server when I realized the iGPU was more than sufficient (and more efficient than chucking a discrete GPU in the case).

I have a service that runs continuously and reencodes any videos I have into h265 and the iGPU barely even notices it.
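
Something like this sketch is all it takes (directories are placeholders, and encoder availability, hevc_qsv vs av1_qsv, depends on your ffmpeg build and driver):

  # Re-encode loop on the iGPU via Quick Sync; paths below are placeholders.
  import pathlib, subprocess

  SRC = pathlib.Path("/media/incoming")
  DST = pathlib.Path("/media/encoded")

  for video in SRC.glob("*.mp4"):
      subprocess.run([
          "ffmpeg", "-hwaccel", "qsv", "-i", str(video),
          "-c:v", "hevc_qsv",       # or "av1_qsv" on Arc / Core Ultra iGPUs
          "-global_quality", "25",  # ICQ-style quality target
          "-c:a", "copy",
          str(DST / (video.stem + ".mkv")),
      ], check=True)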

jazzyjackson · 3 months ago
Looks like Core Ultra is the only chip with an integrated Arc GPU with AV1 encode. The Xeon series I was looking at (the LGA 1700 socket, so the E-2400s) definitely doesn't have an iGPU. (The fact that the motherboard I'm looking at only has VGA is probably a clue xD)

I'll have to consider pros and cons with Ultra chips, thanks for the tip.

jeffbee · 3 months ago
What recent Xeon has the iGPU? Didn't they stop including them ~5 years ago?
Havoc · 3 months ago
If you don't need it for AI shenanigans then you're better off with the smaller Arcs for under $100... they can do AV1 too.
jauntywundrkind · 3 months ago
I don't know how big the impact really is, but Intel is pretty far behind on encoder quality for most codecs. On AV1, though, they actually seem pretty competitive? Neat.

Apologies for the video link, but here's a recent, pretty in-depth comparison: https://youtu.be/kkf7q4L5xl8

dale_glass · 3 months ago
What about the B60, with the 24GB VRAM?

Also, do these support SR-IOV, as in handing slices of the GPU to virtual machines?

wqaatwt · 3 months ago
SR-IOV is allegedly coming in the future (just like the B60).
cmxch · 3 months ago
It’s sort of out there but being scalped by AIBs.
Havoc · 3 months ago
Good pricing for 16GB of VRAM. I can see that finding a use in some home servers.
syntaxing · 3 months ago
Kinda bummed that it’s $50 more than originally said. But if it works well, a single slot card that can be powered by the PCIe slot is super valuable. Hoping there will be some affordable prebuilds so I can run some MoE LLM models.
jeffbee · 3 months ago
Competing workstation cards like the RTX A2000 also do not need power connectors.
syntaxing · 3 months ago
"competing" means 12GB of VRAM at a 600ish price point though...
mrheosuper · 3 months ago
Am I missing anything? Because it looks like a double-slot GPU.