Night_Thastus · a year ago
We'll have to wait for third-party benchmarks, but they seem decent so far. A 4060 equivalent at $200-$250 isn't bad at all. I'm curious if we'll get a B750 or B770 and how they'll perform.

At the very least, it's nice to have some decent BUDGET cards now. The ~$200 segment has been totally dead for years. I have a feeling Intel is losing a fair chunk of $ on each card though, just to enter the market.

rbanffy · a year ago
I'd love to see their GPGPU software support under Linux.
knowitnone · a year ago
I don't know the numbers, but the manufacture of the chip and the cards can't be that expensive... the design was probably much more expensive. Hopefully they are at least breaking even, and ideally making money. Nobody goes into business to lose money. Shareholders would be pissed!
Night_Thastus · a year ago
>Nobody goes into business to lose money. Shareholders would be pissed!

Uh...yes they do? I mean, look at all the VC startups these days like Uber/Doordash/Rivian/WeWork/Carvana/etc. Some of those have been bleeding hundreds of millions per quarter for years and keep getting more pumped in over and over.

But even when it's not for insane VC reasons, companies burn money to get market share all the time. It's a textbook strategy. They just don't do it forever. Eventually, the shareholders can get impatient and decide to bail on the idea.

declan_roberts · a year ago
I think a graphics card tailored for 2K gaming is actually great. 2K really is the Goldilocks zone between 1080p and 4K graphics, before you start creeping into diminishing returns.
giobox · a year ago
For sure, it's been a sweet spot for a very long time for budget-conscious gamers looking for the best balance of price and frame rates, but 1440p-optimized parts are nothing new. Both NVidia and AMD make parts that target 1440p display users too, and have done for years. You can argue even previous Intel parts were tailored for 1080p/1440p use, given their comparative performance deficit at 4K etc.

Assuming they retail at the prices Intel is suggesting in the press releases, you maybe save 40-50 bucks here over a roughly equivalent NVidia 4060.

I would also argue, like others here, that with tech like frame gen, DLSS etc., even the cheapest discrete NVidia 40xx parts are arguably 1440p optimized now; it doesn't even need to be said in their marketing materials. I'm not as familiar with AMD's range right now, but I suspect virtually every discrete graphics card they sell is "2K optimized" by the standard Intel used here, and it also doesn't really warrant explicit mention.

philistine · a year ago
I'm baffled that PC gamers have decided that 1440p is the endgame for graphics. When I look at a 27-inch 1440p display, I see pixel edges everywhere. It's right at the edge of losing the visibility of individual pixels, since I can't perceive them at 27-inch 2160p, but not quite there yet for desktop distances.

Time marches on, and I become ever more separated from gaming PC enthusiasts.

goosedragons · a year ago
Nvidia markets the 4060 as a 1080p card. Its design makes it worse at 1440p than past X060 cards too. Intel has XeSS to compete with DLSS and is reportedly coming out with its own frame gen competitor. $40-50 is a decent savings in the budget market, especially if Intel's claims are to be believed and it's actually faster than the 4060.
tacticus · a year ago
The 4060 is an "excellent" Nvidia product where they managed to release something slower than the cheaper previous-gen 3060.
icegreentea2 · a year ago
2k usually refers to 1080p no? The k is the approximate horizontal resolution, so 1920x1080 is definitely 2k enough.
layer8 · a year ago
Actual use is inconsistent. From https://en.wikipedia.org/wiki/2K_resolution: “In consumer products, 2560 × 1440 (1440p) is sometimes referred to as 2K, but it and similar formats are more traditionally categorized as 2.5K resolutions.”

“2K” is used to denote WQHD often enough, whereas 1080p is usually called that, if not “FHD”.

“2K” being used to denote resolutions lower than WQHD is really only a thing for the 2048 cinema resolutions, not for FHD.

antisthenes · a year ago
2K usually refers to 2560x1440.

1920x1080 is 1080p.

It doesn't make a whole lot of sense, but that's how it is.

teaearlgraycold · a year ago
Please say 1440p and not 2k. Ignoring arguments about what 2k should mean, there’s enough use either way that it’s confusing.
laweijfmvo · a year ago
Can it compete with the massive used GPU market though? Why buy a new Intel card when I can get a used Nvidia card that I know will work well?
knowitnone · a year ago
Warranty. Plus, do you want a second-hand GPU that was used for cryptomining 24/7?
teaearlgraycold · a year ago
To some, buying used never crosses their mind.
leetharris · a year ago
I see what you're saying, but I also feel like ALL Nvidia cards are "2K" oriented cards because of DLSS, frame gen, etc. Resolution is less important now in general thanks to their upscaling tech.

rmm · a year ago
I put an A360 card into an old machine I turned into a Plex server. It turned it into a transcoding powerhouse. I can do multiple independent streams now without it skipping a beat. The price-performance ratio was off the charts.
baq · a year ago
Intel has been a beast at transcoding for years, it’s a relatively niche application though.
ThatMedicIsASpy · a year ago
My 7950X3D's GPU does 4K HDR (33Mb/s) to 1080p at 40fps (Proxmox, Jellyfin). If these GPUs supported SR-IOV I would grab one for transcoding and GPU-accelerated remote desktop.

Untouched video (Star Wars 8): 4K HDR (60Mb/s) to 1080p at 28fps.

c2h5oh · a year ago
All first-gen Arc GPUs share the same video encoder/decoder, including the sub-$100 A310, which can handle four (I haven't tested more than two) simultaneous 4K HDR -> 1080p AV1 transcodes at high bitrate with tone mapping while using 12-15W of power.

No SR-IOV.
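
For anyone wanting to reproduce that kind of workload, the usual route on Arc is ffmpeg's QSV (oneVPL) path. A rough sketch of the idea, wrapped in Python; the file names, bitrate, and filter options are illustrative assumptions (tone mapping left out), and an ffmpeg build with QSV support is assumed:

    import subprocess

    # Illustrative hardware 4K HEVC -> 1080p AV1 transcode on Intel Arc via QSV.
    # Assumes an ffmpeg build with QSV/oneVPL enabled; flags and bitrate are examples.
    cmd = [
        "ffmpeg",
        "-hwaccel", "qsv", "-hwaccel_output_format", "qsv",  # decode on the GPU
        "-i", "input_4k_hdr.mkv",
        "-vf", "scale_qsv=w=1920:h=1080",                     # scale on the GPU
        "-c:v", "av1_qsv", "-b:v", "8M",                      # hardware AV1 encode
        "-c:a", "copy",
        "output_1080p.mkv",
    ]
    subprocess.run(cmd, check=True)

Running several of these in parallel is roughly what the multi-stream claim above corresponds to.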

kridsdale1 · a year ago
Any idea how that compares to Apple Silicon for that job? I bought the $599 MacBook Air with M1 as my plex server for this reason. Transcodes 4k HEVC and doesn’t even need a fan. Sips watts.
2OEH8eoCRo0 · a year ago
All Intel Arc cards, even the $99 A310, have hardware-accelerated H.265 and AV1 encoding.
machinekob · a year ago
Apple Silicon still doesn't support AV1 encoding, but it's good enough for a simple Jellyfin server; I'm using one myself.
aryonoco · a year ago
Apple's hardware encode/decode for AV1 is, quite literally, shit.
2OEH8eoCRo0 · a year ago
How's the Linux compatibility? I was tempted to do the same for my CentOS Stream Plex box.
bjoli · a year ago
Amazing. It is the first time I have plugged any GPU into my Linux box and had it just work. I am never going back to anything else. My main computer uses an A750, and my Jellyfin server uses an A310.

No issues with Linux. The server did not like the A310, but that is because it is an old Dell T430 and it is unsupported hardware. The only thing I had to do was tweak the fan curve so that it stopped going full tilt.

jeffbee · a year ago
Interesting application. Was this a machine lacking an iGPU, or does the Intel GPU-on-a-stick have more quicksync power than the iGPU?
6SixTy · a year ago
A not inconsequential possibility is that both the iGPU and dGPU are sharing the transcoding workload, rather than the dGPU replacing the iGPU. It's a fairly forgotten feature of Intel Arc, but I don't blame anyone because the help articles are dusty to say the least.
theshrike79 · a year ago
Good to know. I'm still waiting for UNRaid 7.0 for proper Arc support to pull the trigger on one.
jmward01 · a year ago
12GB max is a non-starter for ML work now. Why not come out with a reasonably priced 24GB card, even if it isn't the fastest, and target it at the ML dev world? Am I missing something here?
fngjdflmdflg · a year ago
I think a lot of replies to this post are missing that Intel's last graphics card wasn't received well by gamers due to poor drivers. The GT 730 from 2014 has more users than all Arc cards combined according to the latest Steam survey.[0] It's entirely possible that making a 24gb local inference card would do better since they can contribute patches for inference libraries directly like they did for llama.cpp, as opposed to a gaming card where the support surface is much larger. I wish Intel well in any case and hope their drivers (or driver emulators) improve enough to be considered broadly usable.

[0] https://store.steampowered.com/hwsurvey/videocard/ - 0.19% share
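
On the llama.cpp point: its SYCL and Vulkan backends both run on Arc, and the Python bindings expose the same GPU-offload knob, so a local-inference card mainly needs those backends kept healthy. A minimal sketch, assuming llama-cpp-python was built with one of those backends and a local GGUF file (the path is a placeholder):

    from llama_cpp import Llama

    # n_gpu_layers=-1 asks the backend to offload every layer it can to the GPU;
    # this only helps if the package was built with the SYCL or Vulkan backend.
    llm = Llama(model_path="./models/model.gguf", n_gpu_layers=-1, n_ctx=4096)

    out = llm("Q: What is Battlemage? A:", max_tokens=64)
    print(out["choices"][0]["text"])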

enragedcacti · a year ago
> Am I missing something here?

Video games

rs_rs_rs_rs_rs · a year ago
It's insane how out of touch people can be here, lol
dgfitz · a year ago
ML is about to hit another winter. Maybe Intel is ahead of the industry.

Or we can keep asking high computers questions about programming.

PittleyDunkin · a year ago
> ML is about hit another winter.

I agree ML is about to hit (or has likely already hit) some serious constraints compared to breathless predictions of two years ago. I don't think there's anything equivalent to the AI winter on the horizon, though—LLMs even operated by people who have no clue how the underlying mechanism functions are still far more empowered than anything like the primitives of the 80s enabled.

seanmcdirmid · a year ago
Haven't people been saying that for the last decade? I mean, eventually they will be right, maybe "about" means next year, or maybe a decade later? They just have to stop making huge improvements for a few years and the investment will dry up.

I really wasn't interested in computer hardware anymore (they are fast enough!) until I discovered the world of running LLMs and other AI locally. Now I actually care about computer hardware again. It is weird, I wouldn't have even opened this HN thread a year ago.

throwaway48476 · a year ago
The survivors of the AI winter are not the dinosaurs but the small mammals that can profit by dramatically reducing the cost of AI inference in a minimal-capex environment.
HDThoreaun · a year ago
Selling cheap products that are worse than the competition is a valid strategy during downturns as businesses look to cut costs
layer8 · a year ago
The ML dev world isn’t a consumer mass market like PC gaming is.
hajile · a year ago
Launching a new SKU for $500-1000 with 48GB of RAM seems like a profitable idea. The GPU isn't top-of-the-line, but the RAM would be unmatched for running a lot of models locally.
ignoramceisblis · a year ago
Okay, Nvidia.

You're right: nobody's doing ML these days. /s

Look at the value of a single website with well over a million users where they publish and run open-weights models on the regular: back in August of 2023, Huggingface's estimated value was $4.5 billion.

bryanlarsen · a year ago
These are $200 low end cards, the B5X0 cards. Presumably they have B7X0 and perhaps even B9X0 cards in the pipeline as well.
zamadatix · a year ago
There has been no hint or evidence (beyond hope) Intel will add a 900 class this generation.

The B770 was rumoured to match the 16 GB of the A770 (and to be the top-end offering for Battlemage), but it is said to not even have been taped out yet, with rumours it may end up being cancelled completely.

I.e., don't hold your breath for anything consumer from Intel this generation that's better for AI than the A770 you could have bought 2 years ago. Even if something slightly better is coming at all, there is no hint it will be soon.

hulitu · a year ago
> These are $200 low end cards

Hm, I wouldn't consider $200 low end.

ggregoire · a year ago
> 12GB max is a non-starter for ML work now.

Can you even do ML work with a GPU not compatible with CUDA? (genuine question)

A quick search showed me the equivalent of CUDA in the Intel world is oneAPI, but in practice, are the major Python libraries used for ML compatible with oneAPI? (I was also going to ask if oneAPI can run inside Docker, but apparently it can [1].)

[1] https://hub.docker.com/r/intel/oneapi
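
For what it's worth, recent PyTorch releases ship an Intel GPU ("xpu") backend built on oneAPI, alongside Intel's own intel-extension-for-pytorch. A minimal sketch of device selection, assuming a PyTorch build with XPU support is installed (the fallback chain below is my assumption, not something from Intel's docs):

    import torch

    # Pick the best available accelerator: CUDA (NVIDIA), XPU (Intel oneAPI), or CPU.
    # torch.xpu only exists in recent PyTorch builds with Intel GPU support.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        device = torch.device("xpu")
    else:
        device = torch.device("cpu")

    # The rest of the code stays device-agnostic: tensors and models just move to `device`.
    x = torch.randn(4096, 4096, device=device)
    y = x @ x.t()
    print(device, y.shape)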

suprjami · a year ago
There is ROCm and Vulkan compute.

Vulkan is especially appealing because you don't need any special GPGPU drivers and it runs on any card which supports Vulkan.

shrewduser · a year ago
These are the entry-level cards; I imagine the coming higher-end variants will have the option of much more RAM.

Implicated · a year ago
I was wondering the same thing. Seems crazy to keep pumping out 12GB cards in 2025.
whalesalad · a year ago
I still don't understand why graphics cards haven't evolved to include SODIMM slots so that the VRAM can be upgraded by the end user. At this point memory requirements vary so much from gamer to scientist that it would make more sense to offer compute packages with user-supplied memory.

tl;dr GPUs need to transition from being add-in cards to being a sibling motherboard. A sisterboard? Not a daughter board.

kimixa · a year ago
One of the reasons GPUs can have multiples of CPU memory bandwidth is that they avoid the difficulties of pluggable DIMMs: direct-soldered memory can run at much higher frequencies at lower power.

It's one of the reasons why ARM MacBooks get great performance per watt: the memory is even "closer" than mainboard-soldered RAM, so it gets more of those benefits, though naturally with less flexibility.

heraldgeezer · a year ago
GDDR does not exist in a SODIMM form factor.

Intel and AMD integrated GPUs can use normal system RAM, but they are slower for that reason and many others.

PhasmaFelis · a year ago
> Am I missing something here?

This is a graphics card.

davrosthedalek · a year ago
Sir, this is a Wendy's
mort96 · a year ago
Who cares?
tofuziggy · a year ago
Yes exactly!!
heraldgeezer · a year ago
This is not an ML card... this is a gaming card... Why are you people like this?
Implicated · a year ago
12GB memory

-.-

I feel like _anyone_ who can pump out GPUs with 24GB+ of memory that are usable for py-stuff would benefit greatly.

Even if it's not as performant as the NVIDIA options - just to be able to get the models to run, at whatever speed.

They would fly off the shelves.

elorant · a year ago
Would it though? How many people are running inference at home? Outside of enthusiasts I don't know anyone. Even companies don't self-host models and prefer to use APIs. Not that I wouldn't like a consumer GPU with tons of VRAM, but I think that the market for it is quite small for companies to invest building it. If you bother to look at Steam's hardware stats you'll notice that only a small percentage is using high-end cards.
tokioyoyo · a year ago
This is the weird part, I saw the same comments in other threads. People keep saying how everyone yearns for local LLMs… but other than hardcore enthusiasts it just sounds like a bad investment? Like it’s a smaller market than gaming GPUs. And by the time anyone runs them locally, you’ll have bigger/better models and GPUs coming out, so you won’t even be able to make use of them. Maybe the whole “indoctrinate users to be a part of Intel ecosystem, so when they go work for big companies they would vouch for it” would have merit… if others weren’t innovating and making their products better (like NVIDIA).
JohnBooty · a year ago

    Would it though? How many people are running inference at home? 
I don't know how to quantify it, but it certainly seems like a lot of people are buying consumer nVidia GPUs for compute, and the relatively paltry amounts of RAM on those cards seem to be the number one complaint.

So I would say that Intel's potential market is "everybody who is currently buying nVidia GPUs for compute."

nVidia's stingy consumer RAM choices also seem to be a fairly transparent ploy to create a protective moat around their insanely-high-profit-margin datacenter GPUs. So that just seems like kind of an obvious thing for Intel or AMD to consider tackling.

(Although, it has to be said, a lot of commenters have pointed out that it's not as easy as just slapping more RAM chips onto the GPU boards; you need wider data busses as well etc.)
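
To make the bus-width point concrete, memory bandwidth scales with bus width times data rate, so just adding capacity without widening the bus doesn't help much. A back-of-the-envelope calculation (the 19 Gbps / 192-bit and 256-bit figures are illustrative assumptions for this class of card, not official specs):

    # Rough GDDR6 bandwidth estimate: bus width (bits) / 8 * data rate (Gbps) = GB/s.
    def gddr_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
        return bus_width_bits / 8 * data_rate_gbps

    # Illustrative configurations (assumed, not official):
    print(gddr_bandwidth_gbs(192, 19))  # 456.0 GB/s for a 192-bit bus
    print(gddr_bandwidth_gbs(256, 19))  # 608.0 GB/s for a hypothetical 256-bit bus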

ModernMech · a year ago
It's a chicken and egg scenario. The main problem with running inference at home is the lack of hardware. If the hardware was there more people would do it. And it's not a problem if "enthusiasts" are the only ones using it because that's to be expected at this stage of the tech cycle. If the market is small just charge more, the enthusiasts will pay it. Once more enthusiasts are running inference at home, then the late adopters will eventually come along.
cowmix · a year ago
100% - this could be Intel's ticket to capture the hearts of developers and then everything else that flows downstream. They have nothing to lose here -- just do it Intel!
bagels · a year ago
They could lose a lot of money?
rafaelmn · a year ago
You can get that on a Mac mini and it will probably cost you less than an equivalent PC setup. It should also perform better than a low-end Intel GPU and be better supported. It will use less power as well.
m00x · a year ago
You can just use a CPU in that case, no? You can run most ML inference on vectorized operations on modern CPUs at a fraction of the price.
marcyb5st · a year ago
My 7800x says not really. Compared to my 3070 it feels so incredibly slow that it gets in the way of productivity.

Specifically, waiting ~2 seconds vs ~20 for a code snippet is much more detrimental to my productivity than the time difference would suggest. In ~2 seconds I don't get distracted; in ~20 seconds my mind starts wandering and then I have to spend time refocusing.

Make a GPU that is 50% slower than a two-generations-older mid-range GPU (in tokens/s) but runs bigger models and I would gladly shell out $1000+.

So much so that I am considering getting a 5090 if nVidia actually fixes the connector mess they made with the 4090s, or even a used V100.

evanjrowley · a year ago
Maybe that's not too bad for someone who wants to use pre-existing models. Their AI Playground examples require at minimum an Intel Core Ultra H CPU, which is quite low-powered compared to even these dedicated GPUs: https://github.com/intel/AI-Playground
bongodongobob · a year ago
I don't know a single person in real life that has any desire to run local LLMs. Even amongst my colleagues and tech friends, not very many use LLMs period. It's still very niche outside AI enthusiasts. GPT is better than anything I can run locally anyway. It's not as popular as you think it is.
rubatuga · a year ago
I run a 12GB model on my 3060 and use it to help answer healthcare questions. I'm currently doing a medical residency. (No, I don't use it to diagnose.) It helps comply with any HIPAA-style regulations. I sometimes use it to fix up my emails. Not sure why people are longing for a 128GB card; just download a quantized model and run it with LM Studio (https://lmstudio.ai/). At least two of my colleagues are using ChatGPT on a regular basis. LLMs are being used in the ER department. LLMs and speech models are being used in psychiatry visits.
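
For anyone curious what that workflow looks like in code: LM Studio can expose a local OpenAI-compatible server (by default at http://localhost:1234/v1), so standard client code works against it. A minimal sketch, assuming the server is running with a model loaded; the model name and prompt are placeholders:

    from openai import OpenAI

    # Point the standard OpenAI client at LM Studio's local server instead of the cloud.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="local-model",  # LM Studio serves whichever model you've loaded
        messages=[{"role": "user", "content": "Rewrite this email to be more concise: ..."}],
    )
    print(response.choices[0].message.content)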
dimensi0nal · a year ago
The only consumer demand for local AI models is for generating pornography
throwaway48476 · a year ago
I want local copilot. I would pay for this.
Havoc · a year ago
Who is the target audience for this?

Well-informed gamers know Intel's discrete GPU is hanging by a thread, so they're not hopping on that bandwagon.

Too small for ML.

The only people really happy seem to be the ones buying it for transcoding and I can't imagine there is a huge market of people going "I need to go buy a card for AV1 encoding".

ddtaylor · a year ago
Intel has earned a lot of credit in the Linux space.

Nvidia is trash tier in terms of support and only recently making serious steps to actually support the platform.

AMD went all in nearly a decade ago and it's working pretty well for them. They have mostly caught up to Intel-grade support in the kernel.

Meanwhile, Intel has been doing this since I was in college. I was running the i915 driver in Ubuntu 20 years ago. Sure their chips are super low power stuff, but what you can do with them and the level of software support you get is unmatched. Years before these other vendors were taking the platform seriously Intel was supporting and funding Mesa development.

tristan957 · a year ago
The AMD driver has been great on my Framework 13, but the 6.10 series was completely busted. 6.11 worked fine. I can't remember a series where any of my Intel laptops didn't work for that long.
ryao · a year ago
This is repeated often, but I have had very good support from Nvidia on Linux over the years. AMD on the other hand gives lousy support. File a bug report about a problem and expect to be ignored, especially if it has anything to do with emulation. Intel’s Linux support on the other hand has been very good for me too.
zamalek · a year ago
If it works well on Linux there's a market for that. AMD are hinting that they will be focusing on iGPUs going forward (all power to them, their iGPUs are unmatched and NVIDIA is dominating dGPU). Intel might be the savior we need. Well, Intel and possibly NVK.

Had this been available a few weeks ago I would have gone through the pain of early adoption. Sadly it wasn't just an upgrade build for me, so I didn't have the luxury of waiting.

sosodev · a year ago
AMD has some great iGPUs but it seems like they're still planning to compete in the dGPU space just not at the high end of the market.
sangnoir · a year ago
> Too small for ML.

What do you mean by this - I assume you mean too small for SoTA LLMs? There are many ML applications where 12GB is more than enough.

Even w.r.t. LLMs, not everyone requires the latest & biggest models. Some "small", distilled and/or quantized LLMs are perfectly usable with <24GB.
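
As a rough sanity check on what fits in a given amount of VRAM, the usual back-of-the-envelope estimate is parameters times bytes per weight, plus overhead for KV cache and activations. An illustrative calculation (the 20% overhead figure and model sizes are assumptions, not measurements):

    # Very rough VRAM estimate for LLM inference: weights plus a fudge factor for
    # KV cache / activations. Real usage depends on context length, runtime, etc.
    def estimated_vram_gb(params_billion: float, bits_per_weight: int, overhead_frac: float = 0.2) -> float:
        weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
        return weights_gb * (1 + overhead_frac)

    # Illustrative: an 8B-parameter model at 4-bit quantization vs. 16-bit.
    print(round(estimated_vram_gb(8, 4), 1))   # ~4.8 GB -> fits in 12 GB
    print(round(estimated_vram_gb(8, 16), 1))  # ~19.2 GB -> needs 24 GB class hardware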

Havoc · a year ago
If you're aiming for usable, then sure, that works. The gains in model ability from doubling size are quite noticeable at that scale, though.

Still... tangibly cheaper than even a second-hand 3090, so there is perhaps a market for it.

71bw · a year ago
>Well informed gamers know Intel's discrete GPU is hanging by a thread, so they're not hoping on that bandwagon.

If Intel's stats are anything to go by, League runs way better than it did on the last generation, and it's the only game that had issues on the last gen that's still left running on DX9. CS:GO was another notable one, but CS2 has launched since and the game has moved to DX12/VK. This was, literally, the biggest issue they had; drivers were also wonky, but they seem to have ironed that out as well.

marshray · a year ago
I'm using an Intel card right now. With Wayland. It just works.

Ubuntu 24.04 couldn't even boot to a tty with the Nvidia Quadro thing that came with this major-brand PC workstation, still under warranty.

mappu · a year ago
> Intel's discrete GPU is hanging by a thread, so they're not hoping on that bandwagon

Why would that matter? You buy one GPU, in a few years you buy another GPU. It's not a life decision.

Havoc · a year ago
>Why would that matter?

The game devs are going to spend all their time & effort targeting AMD/Nvidia. Custom code paths etc.

It's not a one-size-fits-all world. OpenCL and similar abstractions are good at covering up differences, but not that good. So if you're the player with <10% market share, you're going to have an uphill battle just to be on par.

epolanski · a year ago
Cheap gaming rigs.

They do well compared to AMD/Nvidia at that price point.

Is it a market worth chasing at all?

Doubt.

Narishma · a year ago
It's for the low end gaming market which Nvidia and AMD have been neglecting for years.
screye · a year ago
All-in-one machines.

Intel's customers are 3rd-party PC assemblers like Dell & HP. Many corporate bulk buyers only care if 1-2 of the apps they use are supported. The lack of wider support isn't a concern.

qudat · a year ago
If you go on the Intel Arc subreddit, people are hyped about Intel GPUs. Not sure what the price is, but the previous gen was cheap and the extra competition is welcome.

In particular, Intel just needs to support VFIO and it'll be huge for homelabs.

spookie · a year ago
It's cheap, and there's plenty of market when the others have forgotten the segment.
red-iron-pine · a year ago
Something like 60% of the GPUs by volume on Steam are on the mid-to-low end, with a non-trivial number still running 750 Tis. There are a lot of gamers, and most aren't well-off tech bros.

There is a niche for this, though it remains to be seen if it'll be profitable enough for a large org like Intel.

jmclnx · a year ago
>Battlemage is still treated to fully open-source graphics driver support on Linux.

I am hoping these are open in such a manner that they can be used in OpenBSD. Right now I avoid all hardware with an Nvidia GPU. That makes for somewhat slim pickings.

If the firmware is acceptable to the OpenBSD folks, then I will happily use these.

rbanffy · a year ago
They are promising good Linux support, which kind of implies, at least, that everything but opaque blobs is open.
rbanffy · a year ago
For me, the most important feature is Linux support. Even if I'm not a gamer, I might want to use the GPU for compute and buggy proprietary drivers are much more than just an inconvenience.
zokier · a year ago
Sure, but open drivers have been AMD's selling point for a decade, and even nVidia is finally showing signs of opening up. So it's a bit dubious whether these new Intels really can compete on this front, at least for very long.
whalesalad · a year ago
I welcome a new competitor. Sucks to really only have one valid option on Linux atm. My 6600 is a little long in the tooth. I only have it because it is dead silent and runs a 5K display without issue, but I would definitely like to upgrade it for something that can hold its own with ML.