Funny enough, I predicted this in a previous HN comment 3 months ago when they cut their server systems business.
> "My guess is that they're cutting the portfolio down to feature only chipsets and add-in cards. Just CPUs, GPUs, FPGAs, ASICs, networking. It makes sense if anything - it really focuses on their core business. No more side-projects with questionable profitability.
> In the last 2 years since Pat Gelsinger took over, they've cut RealSense, Movidius, Optane Memory, IPO'd Mobileye, and now exited the Server Systems business. The only odd-ball left is their NUC business."
If any hedge funds are looking for analysts, I'm always open to offers...
It's sad to see the NUCs go, but it was inevitable. They made products for customers, while simultaneously competing with said customers. It's hard to build any decent partnerships on that premise. I'm currently typing this on a Serpent Canyon NUC though, so it's certainly bittersweet.
Sad news. NUCs could have been an excellent platform if they had tried to work a bit more on cooling.
In third-party cases, NUCs were great, but with the stock cases some models were too noisy. Still, they provided great value.
Nonetheless, I agree it's better for Intel to focus on their core business and leave this market to others. Some niche PC makers such as Cirrus7 offer great NUC-like systems.
NUCs, depending on the model, had decent cooling. I've owned a few of them over the years; my Skull Canyon has been my longest-serving desktop computer ever and always had great Linux support. Funny enough, the fan on that unit was recalled and Intel happily replaced it well out of warranty. But with the advent of Minisforum, Beelink and all the other random options on AliExpress, I would gather it's been getting harder and harder to command a premium price because it says Intel on the box. In fact, I just ordered a Topton SFF that STH recommended for a new OPNsense build; I had been waiting until the i3-N305 was available.
I wonder how this impacts emerging markets for Intel. If they've got no outlets to test their own product uptake and get people excited about the brand, I feel like that's a missed opportunity and a cost of doing business. I saw a lot of NUC devices in data centers in the VMware heyday because they were so widely popular on TinkerTry and Virtually Ghetto (i.e. William Lam) at the time.
I saw security vendors copy that play, making SFF firewalls for stakeholders to run at home, which turned into 7-figure deals over the long term.
I don't think Gelsinger is doing a lot of good for Intel. You didn't need him to cut costs. He's had plenty of time behind the wheel at this point, Intel should be on a more interesting trajectory by now.
If Intel had 'worked more on [thermals]' we probably wouldn't have the M2 chip.
At nearly every phase of Intel's history they've had the second-worst chips for instructions per watt, occasionally taking over first place.
In a world where a large and ever-growing percentage of all CPUs are not fronted by giant cooling fans... It's like Intel decided what they wanted on their tombstone in 1995 and haven't felt the need to change it since.
Things like the Dell Optiplex Micro and Lenovo ThinkCentre Tiny can provide a similar experience. Generally I have found that they haven't had any major noise issues, but part of that was being very conservative with clock rates.
NUC isn't exactly the same as those, it is designed to be much more user upgradable and the price was very competitive. But even nowadays it is surprising to see them in the wild because they just aren't that common.
I bought the Intel Euclid (basically a RealSense with integrated compute), and out of the box it had a fan curve that would cause it to thermal throttle within 5 minutes, which would also cause the wifi to drop out.
On a regular consumer device you pick a tradeoff that will make it quieter, but really the only thing anyone used the Euclid for was robotics, where if you're using the CPU at all it's to run SLAM or object recognition nets, in which case you need 100% of the performance 100% of the time...
I've been running a NUC7 for several years and the onboard fan has failed four times now.
Replacing it is a 15 minute job, but the timing is always terrible.
Now I have an alert set for when the CPU is heavily thermal throttling or when the temp sensors hit certain thresholds, and I keep a spare fan on hand at all times.
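For anyone wanting to wire up the same kind of alert, here is a minimal sketch, assuming a Linux host exposing the standard sysfs thermal zones; the threshold value and the print-based notification are placeholders, not what the poster above actually runs.

```python
#!/usr/bin/env python3
"""Minimal sketch: warn when any thermal zone crosses a threshold.

Assumes Linux with /sys/class/thermal/thermal_zone*/. The 85 C threshold
and the print-based "alert" are placeholders; a real setup would run this
from cron/systemd and notify via mail, ntfy, Prometheus, etc.
"""
import glob
import pathlib

THRESHOLD_C = 85.0  # placeholder threshold, tune for your hardware

def zone_temps():
    """Yield (zone type, temperature in C) for every sysfs thermal zone."""
    for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
        zone = pathlib.Path(zone)
        try:
            kind = (zone / "type").read_text().strip()
            millideg = int((zone / "temp").read_text().strip())
        except (OSError, ValueError):
            continue  # zone vanished or is unreadable; skip it
        yield kind, millideg / 1000.0

def main():
    hot = [(k, t) for k, t in zone_temps() if t >= THRESHOLD_C]
    for kind, temp in hot:
        # Placeholder "alert": swap in whatever notification you use.
        print(f"ALERT: {kind} at {temp:.1f} C (>= {THRESHOLD_C} C)")

if __name__ == "__main__":
    main()
```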
I always saw the NUCs as proof of concept for small form factor PCs and to that end I think they have succeeded. There are a ton of different form factors now.
Didn’t Intel used to be pretty up front that the NUC was a way to push computer manufacturers to be more innovative? Sort of like a widely available reference computer.
By making them without RAM and SSD they were never going to be that profitable. It was nice being able to buy boxes without wasted parts you knew you didn’t want.
The problem is that the AMD APU is just better and more balanced now. You get a lower TDP + better graphics. So people would just buy the AMD counterpart.
Nooooo! This is a tragedy. I love my NUC. It's totally handy to have a little computer on my desk to load up with whatever Linux distro I please and play around. A NUC is way better than a Raspberry Pi.
Isn’t this basically the innovator’s dilemma? You can call it focusing the core business, but you can also call it dropping investment in long term cash flows.
The dilemma is when you have tech A: Profitable and large. Tech B: Unprofitable and small, but the tech itself improves at a faster rate than tech A.
GPUs were their true dilemma. GPUs were only good for gamers, who are picky and extremely value-sensitive, whereas CPUs could be sold to datacenters and enterprises at very high profit margins. Hence CPUs were core at Intel; no one had the balls to bet on GPUs and go all in.
But CPUs ran out of performance improvements, while GPUs continue to scale up because of parallelism.
Then suddenly, new valuable applications started to be built on GPUs, first crypto, now AI. Now CPUs are completely commoditized, Nvidia dominates the money-printing GPU market, and it's worth 8x Intel.
NUCs are not some rapidly improving tech, they are just some minor market that is profitable but never that large.
The NUC was never going to be a mass market product.
Sure, Apple sells the Mac Mini. But the entire Mac line is only 10% of Apple's revenue, and most of that is laptops. And Apple has to support its ecosystem as its only supplier; Intel doesn't.
And besides, a NUC based on x86 chips is by definition going to suck at either performance or heat.
> It's sad to see the NUCs go, but it was inevitable. They made products for customers, while simultaneously competing with said customers.
That is true, but they're already a decade into shipping NUCs. Maybe it wasn't a problem after all. It could also be just a move to show the stock market that they focus on getting leaner.
You can then argue that they've spent the last decade slipping into irrelevance, so perhaps the NUC focus was a distraction from their core issues. They're at an inflection point in the company where either they turn the ship around and successfully become the second largest foundry, closing their technology gap with TSMC, or they continue losing market share. They can't fund new fabs and new nodes without a massive reduction in their cost center.
Depends on the portfolio and strategy, but personally I think Intel is a Buy. They're burning money building new fabs with little indication that they can execute on their node or product roadmaps, but they've really taken a chunk out of their cost centre, to the point where even during one of their worst quarters in the last 30+ years in terms of sales, they're still turning a profit. Sapphire Rapids Xeon is finally out and making money after 3+ years of delays. The client products are the best they've looked in 15 years. They've got some big customers lined up for IFS. Considering the market was happy to price them at ~$64 just two years ago without all that information, I believe $33 is a poor price for where they're at. That's near enough their asset price.
A bit off the mark on networking, looking at Tofino there. Could have maybe left Barefoot alone, but no: buy it, hype it, actually tape out and start delivering the next gen, then cancel before people rack them.
With regards to Tofino, I think they've looked at the portfolio and realized they can fill that niche with their FPGA products. It looks like they've stripped out all the useful IP from Barefoot and repackaged it to be used on Stratix/Agilex chips, or as tiles on their other devices.
It's worse than that. We built Tofino powered boxes. We have multiple paying customers who bought and deployed them. Intel then cancelled it, screwing us and our customers.
As an everyday user of an Intel Compute Stick I can empathize with this.
I think you're wrong about the Intel portfolio though; Arc GPUs are another line that is going to be unceremoniously cut in a few years.
I mean, I could be wrong, but Intel has gotta be looking at Nvidia's Hopper or AMD's MI300 data-center-oriented architectures and wondering why the Arc team is trying to make $750 gaming computers. Xe GPUs are going a different direction from consumer Arc GPUs already anyways.
> I mean, I could be wrong, but Intel has gotta be looking at Nvidia's Hopper or AMD's MI300 data-center-oriented architectures and wondering why the Arc team is trying to make $750 gaming computers.
That's the same strategy AMD has tried with ROCm (de-facto pro-only compute ecosystem) and it hasn't worked. Nobody is going to buy a $5000 accelerator card (or even rent AWS time) just to tinker with your ecosystem unless it's already known to be the bees' knees. NVIDIA built their success by making sure their cards were the first thing people reached for when they wanted to build a high-performance application on a compute accelerator.
Further, there's a lot of redundant work and overlapping lines of business here. You have to develop game drivers for the iGPU for anyone to take you seriously, so at that point why not make the dGPU cards and increase the ROI multiplier of your work? Or are you planning on outsourcing the whole shebang and just licensing a hardware SIP core and a driver stack? There are really only two other names with a credible Windows/Linux driver stack... NVIDIA and AMD. Imagination/PowerVR don't have any Windows presence, and Intel already tried this back in the early Atom era and it fucking sucked.
Like yea you've actually pointed out the exact reason they won't cut it: Hopper or MI300 is where the HPC and performance-compute segments are going and you can't be credible in that space without an internal solution for the accelerator half. It doesn't have to be GPUs, or they don't need to have graphics pipelines, but once you are doing all the work to develop the GPGPU side, you might as well make a variant with a pixel pipeline and display outputs and sell it to enthusiasts too.
The semiconductor space has a lot of these overlaps in product verticals, and if you choose not to be in them, you're leaving revenue on the table that is relatively "cheap" to access. The same CPU uarchs that work in enterprise work in consumer, and in the grand scheme of things re-using the same uarch for a low-margin product is fine. You'd never pay to develop the product from scratch just for the consumer market, but once you've done the work, you might as well sell it to consumers too. If you're doing consumer, you probably want to be doing laptop too, but those need iGPUs. Which need drivers, and if you're doing drivers you might as well do dGPUs too.
Intel doesn't need to be exactly AMD, but AMD has gone through this and already cut to the bone on everything they didn't need. You can draw a circle around the product verticals they're in and they've pretty much identified the core business requirements for the CPU/GPU markets.
You're right though that Gelsinger is obviously stripping the business down to his own vision of those essentials. I just think right now the evidence indicates that GPUs are still a part of that vision. You can't be competitive in HPC without GPGPU, you can't do laptop without iGPU, they might as well do gaming GPUs too, especially since a lot of the GPGPU research will overlap. The drivers don't, but you have to do them for iGPU anyway for laptops.
The fabs are the rough one though. They're expensive as shit, but all of Intel's legacy IP uses them, and the only thing the fabs run is Intel's IP. I think he's serious about building the wall between IP and fab at Intel, and about bringing in external business, but right now there is no hope of even a GloFo-style spinoff working. It would absolutely take down both halves of the company to even try.
He's definitely got a tough road, Intel's pain is only just beginning and it's going to be a long time to profitability.
I just said that as a joke; you couldn't get me into finance unless you had a gun to my head. I do think, long term, Intel makes for an interesting investment though. It plays in about the 70th percentile for stock volatility for the year, which for a name like Intel is quite surprising. Analysts put it somewhere between -50% and +100% by this time next year. I've read of hedge funds making big bets on way smaller spreads than that. They're up 25% YTD, so it's by no means "nothing".
> They made products for customers, while simultaneously competing with said customers. It's hard to build any decent partnerships on that premise.
I once talked to an AMD engineer and asked why they didn't just build a barebones NAS chassis with some of their ryzen embedded stuff since Ryzen is quite popular but there's very few Ryzen products in that segment. Obviously QNAP and Synology have some decent demand. That's basically what they said, didn't want to compete with customers.
GPUs are sort of the opposite example where I don't really feel any particular attachment to MSI, Gigabyte, Asus, PowerColor, etc. EVGA and Sapphire are the only ones that have managed to claw together some consumer mindshare through their warranties. But the rest are essentially customizing an AMD/NVIDIA reference design PCB and a cooler that's perfectly forgettable and interchangeable with any other offering in their price class. They're not even allowed to do double VRAM configs anymore because that would cut into Radeon Pro and Quadro revenue etc. In that scenario I don't see a lot of value to having another middleman taking a 10% cut, and to me the value of products being available at MSRP, with no diffusion of blame through the supply chain, would be worth losing the partners over.
And while systems integration (NUCs, servers, etc) is clearly something where there's a lot of value from this kind of diversity, motherboards really are not. AMD and Intel have both killed off third-party chipsets like nForce, Abit, etc, and locked everything down to the one ecosystem they control, much like GPUs. It's not quite as pronounced as GPUs, and previously there was a lot of diversity in features, but PCIe 5.0 motherboards tend to reverse this: almost every board has exactly 2 PCIe slots and more or less the exact same featureset elsewhere. And the trend is towards more and more being onboard the CPU itself anyway (on AMD chips the chipset is literally just an IO expander and is completely uninvolved in management tasks), and once onboard voltage regulators (FIVR, DLVR, etc) really start to take off in the next 10-15 years the motherboard is only going to become dumber and dumber. And in that world there's less and less of a need for a partner anyway - the CPU is all self-contained and locked down, partners can't experiment and do cool things, they are a dumb pipe that pumps in 2V for the DLVR to step down at point-of-use. So why pay a 10% margin for their "value-add"?
It slays me that AMD is talking a big game about "open platform" and everyone is still locking down third-party chipsets and even the Platform Lock. If you want to lock the chip to the board for security, fine, but locking it to the brand doesn't do anything except ruin the secondhand market. Oh no I have to swap it out with another lenovo chip if I steal it, what exactly does that accomplish? And if you accept the conceit of the PSP there's no reason it has to be permanent anyway, you can allow the PSP to unlock it instead of permanently blowing e-fuses. And again, third-party chipsets are where a ton of innovation happened, it's better for the market if third parties can make those double-VRAM models and undercut AMD/NVIDIA's ridiculous margins on those workstation products, or shanghai a Celeron 300A into being a dual-socket system like Abit. What we have right now is tivoization in support of product segmentation, branded as a "security feature".
Also, to be clear, the Management Engine is far, far worse. On Intel the chipset boots the chip; Ryzen is always an SoC. For Intel, the ME is intimately involved with processor bringup in ways that can't be exposed to third parties anymore. They literally can't open anything while the ME is on the chipset, and they're flailing at homeostasis, let alone big re-architectures of their processor's brainstem towards the possibility of third-party control of the bringup.
The chipset is a pure IO expander for AMD. AMD still is doing way better at that, it's just things like X300 ("the chipset is no chipset") being restricted to industrial/embedded, or board partners not being allowed to pursue things they want that break AMD's segmentation. PCIe 4 enablement (including opt-in) on select X370/X470 boards was something partners wanted for example. And there was no technical reason for X399/TRX40 to be segmented and even WRX80 could have been shoehorned on with "everything works just not optimally" level compatibility backwards and forwards. Partners could have done that if it wasn't denied/locked out. They did it on the Socket SP3 flavor.
Partners should ideally just get the freedom to play, and if they can make something work, cool. Let's have more Asrock/Asrock Rack and ICY DOCK design shenanigans again. Clamshell VRAM cards should be sold relatively close to actual cost rather than being gated by both AMD and NVIDIA. Etc. Partners should have the ability to configure the product in any way the product could reasonably be engineered to work. If features are being explicitly segmented by product tier it should be enforced by e-fuse feature-fusing at launch and that's the deal, no taking AVX-512 out after it launched.
This is sad, because the Intel NUCs were some of the last computers with adequate technical documentation.
For most modern computers you cannot be 100% sure about what you get for your money, until you have them in your hands and you can run tests yourself. By then, if you discover that the computer is not exactly what you want, it is too late. You cannot get your money back because the computer is not defective, it is just different from what you expected. There are helpful published product reviews, but those do not cover all products, especially not most of the cheaper versions.
In recent years, many small Chinese companies have introduced various small computers that are much more innovative than the latest Intel NUC models, but they have little or no documentation (at least in English) and they frequently have manufacturing quality problems, so the Intel NUCs will be missed.
Moreover, until now nobody has made good computers with 35 W CPUs and with a volume of less than 0.5 L or good computers with 45 W CPUs and with a volume of less than 0.7 L, except Intel. All the alternatives use either bigger cases or less powerful CPUs.
I had the same feeling of disappointment when Intel exited the consumer motherboard business. I bought one of their Ivy Bridge generation motherboards for my PC build at the time because it was the only one that I could confirm had the features I wanted. The motherboard manual listed details like the part numbers for the SuperIO, Firewire and audio chips, the location of the temperature sensors on the board, the current ratings for the fan headers. I knew exactly what I was getting, and knew there was no reason to shell out for a more "high end" motherboard because there wouldn't have been any improvement to any functionality I cared about.
Man do I miss the days of DFI motherboards when their engineers would pop into the overclocking community and share code snippets of relevant BIOS firmware. The things were so well documented there were something like 3 or 4 third-party BIOS distributions.
Then there was their community and RMA manager Dona. That woman had to be one of the hardest working people I've ever encountered. She must have slept like 5 hours a day. People could post on some obscure forum about a defect and she'd find the post and make it right. One time, when I was around 16 in 2007, I broke a very niche and specific overclocking record on DDR2 latency that still passed Memtest x86, and Dona hooked me the hell up. Suddenly I started getting all sorts of companies sending me heatsinks, RAM, PSUs, etc. to "review" if I wanted to. Sidenote to this tangent - I have to say, the ultra-fine grit sandpaper kit with 12 sheets in increasing grit was absolutely my favorite. Learned how to sand a heatsink down to a near mirror finish and shaved a few degrees Celsius off my de-capped AMD Opteron 165.
DFI BIOSes were famous for exposing the most settings for regulating RAM/CPU/chipset/GPU/SATA/PCI-E/IDE communication and lanes. This guide, and participating in the DFI overclocking community, probably taught me more about the fundamentals of computer science, and how all those electrons move around to make the sand think, than anything else. https://forums.overclockersclub.com/topic/100835-the-definit...
> For most modern computers you cannot be 100% sure about what you get for your money, until you have them in your hands and you can run tests yourself. By then, if you discover that the computer is not exactly what you want, it is too late. You cannot get your money back because the computer is not defective, it is just different from what you expected.
Can you give an example? Clearly you are concerned with more than just the listed specs. Do you mean like RAM and SSD models? Or even lower level than this?
For most laptops (some gaming machines excepted) you don't find out the CPU and GPU power limits until after you install third-party software to inspect those settings. The turbo power limits affect system performance far more than minor clock speed differences between different Intel SKUs with similar core counts; it's not uncommon to find that the difference between an i5 and an i7 matters less than the unspecified thermal and power delivery limits.
The spec sheets that are provided for pre-built consumer machines will at best identify the two or three most expensive chips and give you a vague idea of what class of part they selected for up to a dozen other components. You will definitely never get the specificity of a PCPartPicker parts list. You won't be able to audit the list of components for ones that are known to be problematic for Linux use.
And as a bonus, "You cannot get your money back because the computer is not defective, it is just different from what you expected" isn't true for them; you can return for any/no reason in the first two weeks.
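On the power-limit point above: on Linux you can usually read the configured long-term/short-term power limits straight from sysfs without third-party tools. A rough sketch follows, assuming the intel_rapl powercap driver is loaded; the exact domains and constraints exposed vary by platform, so treat it as illustrative rather than authoritative.

```python
#!/usr/bin/env python3
"""Rough sketch: print Intel RAPL power limits (PL1/PL2-style constraints).

Assumes Linux with the intel_rapl powercap driver loaded, i.e. entries
under /sys/class/powercap/intel-rapl:*/. Values are reported in microwatts.
"""
import glob
import pathlib

def read(path):
    try:
        return pathlib.Path(path).read_text().strip()
    except OSError:
        return None  # not present, or needs root on some systems

for domain in sorted(glob.glob("/sys/class/powercap/intel-rapl:*")):
    name = read(f"{domain}/name") or domain
    print(f"Domain: {name}")
    for limit_file in sorted(glob.glob(f"{domain}/constraint_*_power_limit_uw")):
        idx = limit_file.split("constraint_")[1].split("_")[0]
        label = read(f"{domain}/constraint_{idx}_name") or f"constraint {idx}"
        uw = read(limit_file)
        if uw is not None:
            # Convert microwatts to watts for readability.
            print(f"  {label}: {int(uw) / 1_000_000:.1f} W")
```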
I've bought many NUCs over the years. I'm literally sitting in my office right now with 8 NUCs set up -- 1 as a workstation, and 7 as servers performing various tasks. I know people have always complained that they are a bit more expensive than the alternatives, but I'm willing to pay a premium for the excellent Linux compatibility and reliability.
That said, I've been expecting this for quite some time. I doubt the NUC line made a lot of money for them -- their stated goal with NUC was for these machines to be demonstrations of what can be done with their hardware. In their current situation, I'm not surprised that they decided this was needed to increase their focus.
I guess I'll be shopping around for alternatives at some point.
> but I'm willing to pay a premium for the excellent Linux compatibility and reliability.
Yeah, it is quite strange. Even the VisionFive 2 RISC-V board has better software support than your usual ARM SBC. The big caveat is that the open-source drivers only support Vulkan, but Imagination has planned on releasing a Vulkan-based OpenGL implementation. Of course, application software support is worse than ARM because of recompilation, but that is a trivial barrier. If the hardware is any good, distros and developers will compile their packages for the new ISA.
I call BS on them not being a good market fit; Apple does totally fine with the Mac mini. Also, these are just laptops without a screen and battery. IMO the real reason may be worries around mini PCs finally getting good enough that they may cannibalize the higher-margin PC market. BTW, there are other vendors like Beelink that make semi-decent tiny computers, and I have one of those as my teenage kid's PC; works just fine.
> BTW, there are other vendors like Beelink that make semi-decent tiny computers, and I have one of those as my teenage kid's PC; works just fine.
The issue is, Intel's aiming more upmarket (based on their pricing) for something that doesn't offer a lot more. Intel's done interesting/fun things, such as building the entire motherboard into a single slot sized unit, but going out and buying a Beelink is a better proposition for most people.
If anything, I think Intel can exit this market because they succeeded. They led the way, they begat a new form factor, and it's won. But the market has grown much more competitive, and it's hard to see what Intel would do to remain relevant & important in this market. Their current efforts are interesting, and even good, but not really enough to clearly differentiate, not enough to command a huge lead in this market, and certainly not at the somewhat above-average prices Intel has been asking.
This follows a lot of other Intel examples. Intel left the DRAM market in 1985. Intel left SSDs in 2020, in a similar situation to NUCs here: they basically created the mass consumer market, by pioneering NVMe & creating amazingly high-value products that used to be ultra-expensive proprietary boutique items.
This seems like a classic Innovator's Dilemma situation, of Intel having a strong hand creating mass-market products that define the industry, but then being chased out by down-market competition, once the offerings really become a true everyday commodity. I'm not sure if there even is another way Intel ought to behave here.
> If anything, I think Intel can exit this market because they succeeded.
They led the creation of a market but failed to maintain any competitive edge in it. That's a story of initial success that ultimately ran through their fingers, like so many other examples you've presented. Calling this fully "a success" seems very generous.
> If anything, I think Intel can exit this market because they succeeded.
You could spin the entire unit off so it could compete on its own terms. There are plenty of success stories to be found this way, and Intel's habit of doing this time after time means that I have very little confidence in any new product they bring to market outside of CPU cores; I view them mostly as a consumer electronics company now.
I completely agree with your analysis. I'm writing this on a Beelink (running aftermarket Linux) and couldn't be happier about the value proposition. I started my shopping looking for a NUC, but they were all too pricey for what I wanted/needed.
I think it's a bigger issue that the next-gen NUCs will be good enough to cannibalize the rest of the desktop market. Look at the recent AMD offerings and the Mac mini with M1. They can replace my desktop fully. The only reason I have a desktop now is for gaming.
I don't think it's the form factor, I think it's more about competing with their own customers. There are similar offerings (including AMD-based) from several other vendors. I've been far more interested in the AMD-based ones the past few generations... currently using a Minisforum model with a 5900HX for home server duties.
There are some great Chinese mini PCs on Amazon, many of them using Intel processors. Some of these are very compelling against ARM mini PCs such as the Raspberry Pi 400.
Yes. I think that is a solid NUC alternative with one exception: the lack of USB4/ Thunderbolt as standard. Intel was very good about including TB on its NUC line and also did a better job upgrading to 2.5GbE.
They're overpriced for what you are buying. I'm not interested in paying $400 for a bare bones i5 with no ram or disk which adds another $150-200.
I bought a quad-core i5 NUC on sale off Newegg for $300. I also ordered parts to build a little AMD 6-core APU ITX setup for 50 bucks more. If the NUC wasn't on sale I would not have bothered to even look at it. And for 50 bucks more I got waaaaay more compute power. The only reason I even bought the NUC and not another ITX was simply the very small size.
The Hardkernel ODROID-H3 with a quad-core Celeron, dual 2.5 GbE and an M.2 M-key slot costs $130 without the case/PSU. Intel can do better on pricing.
I would bet the majority of Mac minis sold are used as CI/CD servers for iOS and Mac development. I would also bet the Mac mini is the third lowest-selling Mac, with the Mac Studio and Mac Pro being the least.
I do my day-to-day work on a Mac mini; people are losing out if they're only using them for testing purposes. They're not terribly expensive (when compared to other Apple products), with a really good form factor and volume, i.e. a Mini can do a lot of stuff even though it is, well, not that big. Plus a Mini doesn't consume all that much power, quite the contrary.
The Mac Minis are a great value compared to the rest of Apple's lineups. The NUC is overpriced, like why would I get that instead of a generic laptop? It was just a poor value for what it offered.
Is there data on how well they actually sell? I always vaguely assumed it was a pretty low-volume product that they couldn't actually get rid of due to niche customer applications. They've historically often been very slow to _update_ it; they definitely don't treat it as one of their more important products.
> Is there data on how well they actually sell? I always vaguely assumed it was a pretty low-volume product that they couldn't actually get rid of due to niche customer applications. They've historically often been very slow to _update_ it; they definitely don't treat it as one of their more important products.
The Mac mini has always been Apple's low-cost trojan horse product for existing PC users who already have a keyboard, mouse and display. The issues with keeping it updated in the past had more to do with the Intel processors available to Apple.
In the short Apple Silicon era, the Mac mini has been upgraded multiple times. It remains an important product.
How do you know this? I personally decided against a mini when I specced one out and a MBP wound up cheaper. The pricing strategy certainly made me _feel_ like they were pushing me away from the mini.
TBH I don't own a Mini, but I have used one and it seems fine. Online reviews are also decent if you go above the baseline 8GB. But it's targeted towards Mac users, who are already in a higher price bracket, so not too bad. Plus it's better on power consumption and noise level.
Overall I feel that there is enough integration density in processors that laptop form factors will be good enough for most common (& semi-advanced) uses. So this seems like an odd decision to move out of that market (technically), but if you factor in the business then it starts to make sense.
Assuming you have an external monitor already, one can get pretty nice deals on Mac minis. I paid less than $1500 for an Apple Silicon Mac mini and a similarly specc'd MBP cost something like $800-$1000 more. Later on, I can continue to upgrade by replacing with a newer Mini or Apple Studio at a lower price (and better cooling) than buying a new laptop.
I wish they had stayed out. People are talking about how they started this great market segment but mini-ITX was a real standard motherboard size started by VIA and thin mini-ITX would be more popular if Intel didn't just make whatever they wanted and set a precedent of consumer trash instead of components for this size class.
The Mac Mini is a totally different market: it 1) runs macOS 2) has Apple Silicon 3) has no competitors for running macOS 4) is the cheapest entry point into iOS development which is a high paying job
Is that supposed to be positive or negative? Because the only thing I'm reading is "I can think of nothing positive to say, but if you absolutely have to use macos, it totally runs that.".
I bought one with an AMD 4800U in early 2022 to be used as a cheap, dedicated Linux device for work (mostly Remote Desktop plus some local docker development environments) and it’s been rock solid for me.
Your mileage may vary, but I’ve been happy enough with mine for the price that I had already decided I would be choosing Beelink for my next mini PC over a NUC.
That said, I have seen some mixed reports of some issues around thermal throttling and eGPU support for some of the newer gaming focused ones, but I think if you have realistic expectations given the form factor (and especially if you’re not going for gaming) then they are fairly reliable and sturdy little devices.
* Also note, I bought mine barebones and added my own RAM and SSD. I can't speak to the quality of what they ship with, but I also haven't heard any complaints from others in that regard.
I've deployed about two dozen of them for small business clients who needed the form factor.
Everyone has loved them. I wound up getting one to manage my print farm, and no complaints there, either. Knock on wood, but they Just Work, so far. Noise and heat have not been an issue, even with the oldest units.
The one I have for my daughter is okay; it gets the job done, but I wouldn't use it for gaming etc. I do however want to get the Beelink GTR7 [1]. That seems amply powered, with decent build quality/noise level.
They only targeted enthusiasts but there's incredible competition above and below.
Down market, there are way cheaper 1L business PCs which are mass-produced for businesses but work great for consumers wanting small, quiet (or not; 65 W options are also available) systems.
Up market, there's all manner of small form factor gaming PCs and others.
Intel did have a knack for interesting twists & innovations, but they were rarely must-have advantages. Building the whole PC on a card was an interesting & useful move for their gaming Extreme series. Skull Canyon back in 2016 showed a power density and level of integration that the world didn't know had been possible. The price has always been high, too high for critical success, but I thank Intel for pushing new things & pushing the market. https://www.anandtech.com/show/10343/the-intel-skull-canyon-...
I buy 2-4 year old Dell (or similar) micro desktops just for this reason. Just enough upgradability (SSD/RAM, sometimes even CPU) and just enough power but runs cool and quiet.
Recently bought a lot of 10 Dell Optiplex Micros for around $1,200 USD all ready to go (9th gen / 16GB DDR4 / 512GB SSD, Win10 Pro) for a small business that works on spreadsheets and basic data entry all day. Replaced old desktop towers and employees were happy to have more free desk/working space.
Every time I looked at getting a NUC when I wanted a SFF system I ran into this problem. If I just needed a low power Linux box I could get a cheapo Atom device or even just a Raspberry Pi. If I needed more power a SFF PC, a laptop, or even a Mac mini was usually an overall better buy accounting for my time invested.
They are only cheaper if you are buying the higher-end units and comparing them to the higher-end NUCs like the Skull Canyon.
If you get down into the Celeron models, the price really cannot be beat.
I have bought a TON of Celeron NUCs to run as kiosks, signage, and other single-application machines, and never found anything from Lenovo, Dell, etc. that could beat the price.
I always wanted to like these, but they seemed consistently overpriced by like 30%. I'd price one out with all the necessary parts, and by the end it was always like, why am I not just buying a mac mini (or an RPi, for lower-intensity applications)?
I would say there's a huge performance gap between the most powerful Pi and even a NUC from 6 years ago. Hard to consider them in the same ballpark, use-case-wise.
The NUCs had a pretty wide range of processor options, and often fairly old ones with low core counts from lower-end processor lines were still for sale—but never at what felt like the correct price to me, especially since they usually needed memory and a disk (the ones that came with those seemed like an even worse deal).
I’ve done Coremark scores across a range of different systems including NUCs and Pi’s.
A Pi 4 gets a single thread score of about 10k and a new entry level NUC gets about 20k.
If you look at older systems, a Beelink T4 Pro Mini (using a much older CPU) gets about 11k, well within the same ballpark as a Pi 4. Comparing that to a similarly aged Pi, a Pi 3B gets about a 4k score.
For comparison, a high-end AMD Ryzen gets about 45k.
A NUC with an i3 Raptor Lake CPU and 16 GB DRAM is slightly less than EUR 400, a NUC with an i7 Raptor Lake CPU and 64 GB DRAM is slightly less than EUR 800, both with all taxes included.
If you buy the cheapest MB, the cheapest case and PSU and the cheapest Pentium or Celeron desktop CPU, you can get a desktop that is cheaper than a NUC, but with fewer peripheral interfaces, i.e. without the Thunderbolt ports included in a NUC, and which will not be faster than a NUC.
On the other hand if you compare a NUC with a laptop that has a similar number of peripheral interfaces and a similar CPU speed, then you discover that the laptops are incredibly overpriced and only mobile workstations or top gaming laptops of $3000 or more can match a NUC.
None of the laptops available from a major vendor has as many peripheral ports as a NUC. None of the laptops that use the same CPU as a NUC has a comparable speed to the NUC. The reason is that NUCs have much better cooling. In the recent NUCs, the CPU can dissipate 35 W indefinitely, without overheating, even though the CPUs have a nominal TDP of only 28 W.
Because of this I have stopped upgrading my Dell Precision mobile workstation and I have replaced it with a NUC together with a 17" portable monitor and a compact keyboard. This combo has less than half the price of a comparable mobile workstation, it is lighter by more than 1 kg than a 17" laptop, it needs less volume in my backpack, and because it has more peripheral ports I carry fewer dongles.
I have always used my laptop on a desk, connected to mains power, wherever I have to go on a business trip, so using a NUC changes nothing from this POV.
They make for nice little quiet home linux servers if you need more power than an RPi. A mac mini would be a good choice, too, but then you're stuck on macOS.
We bought a couple of i7 NUCs every time a new generation was launched, mainly to have access to the latest GPU features.
That is, we hoped to use Intel's MFX Media SDK for transcoding h264 (or even h265) video. Unfortunately both the SDK and the hardware were too much of a moving target, going from software to hardware-only, Windows-only, then later Linux-only on CentOS, then only on Ubuntu, then (partially) open source. Dropping support for older generations, breaking API changes, opaque licensing policies.
At some point we gave up and simply went with ffmpeg. Works great on AMD too, or ARM for that matter.
Still using a NUC as main development machine, extremely fast and mostly quiet.
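For what it's worth, the appeal of the plain-ffmpeg route is that a pure software transcode only needs CPU cores, so the same invocation runs on Intel, AMD, or ARM. A hypothetical minimal wrapper is sketched below; the filenames, preset, and CRF are placeholders, not the poster's actual pipeline.

```python
#!/usr/bin/env python3
"""Sketch: CPU-only H.264 transcode via ffmpeg, portable across x86/ARM.

Assumes ffmpeg with libx264 is on PATH; filenames and quality settings
(preset/CRF) are placeholders.
"""
import subprocess

def transcode(src: str, dst: str, crf: int = 23, preset: str = "medium") -> None:
    """Re-encode src to H.264/AAC using only the CPU (no vendor SDKs)."""
    cmd = [
        "ffmpeg",
        "-y",               # overwrite output if it exists
        "-i", src,          # input file
        "-c:v", "libx264",  # software H.264 encoder
        "-preset", preset,  # speed/size trade-off
        "-crf", str(crf),   # constant-rate-factor quality target
        "-c:a", "aac",      # re-encode audio to AAC
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    transcode("input.mp4", "output.mp4")
```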
Haven't tried it, but I don't expect it to be competitive. Note that software transcoding means CPU only, and the M1/M2 is still just a bunch of ARM cores.
We've tested on Amazon's Graviton 2 and 3 systems and so far found them to be on par with or slightly cheaper than their x86-64 counterparts. But this is mostly due to AWS's aggressive pricing of ARM systems, which is quite the contrary for Apple hardware.
Mac Minis are currently approximately twice as expensive as the Intel i7 NUC, Gigabyte Brix and similar hardware we bought. Also, dev tools targeting server software development on Apple hardware (i.e. Linux, please!) are still wanting.
Same story. I looked at a NUC as a possible SFF PC, balked at the price, and ended up buying a Lenovo M93p Tiny off a guy on Craigslist for a lot less.
I fell in love with Lenovo and HP SFF/'tiny pcs' a couple years ago. It can be tricky to find ample display outs in a preferred hardware configuration but otherwise they've been cheap, quiet, reliable and efficient. I realized as much as I like the tech and even idea of gaming, I don't game much anymore at all so without the need for a proper GPU these things have really simplified computing in my life to the point I have very capable extra hardware for projects and experiments ready to just plug in and go.
10/10 will buy more again
Same here, I wanted to have a mini desktop to transplant the hard drive of a dying laptop, so I was looking into them, and exactly because of that I gave up.
> "My guess is that they're cutting the portfolio down to feature only chipsets and add-in cards. Just CPUs, GPUs, FPGAs, ASICs, networking. It makes sense if anything - it really focuses on their core business. No more side-projects with questionable profitability. In the last 2 years since Pat Gelsinger took over, they've cut RealSense, Movidius, Optane Memory, IPO'd MobileEye, and now exited the Server Systems business. The only odd-ball left are their NUC business."
If any hedge funds are looking for analysts, I'm always open to offers...
It's sad to NUCs go, but it was inevitable. They made products for customers, while simultaneously competing with said customers. It's hard to build any decent partnerships on that premise. I'm currently typing this on a Serpent Canyon NUC though, so it's certainly bittersweet.
On third-party cases, NUCs were great. But on stock cases, some models were too noisy. Still, they provided great value.
Nonetheless, I agree it's better for Intel to focus on their core business and leave this market to others. Some niche PC makers such as Cirrus7 offer great NUC-like systems.
I wonder how this impacts emerging markets for Intel. If they've got no outlets to test their own product uptake and get people excited about the brand I feel like that's a missed opportunity and the cost of doing business. I saw a lot of NUC devices in data centers in the VMware hayday because they were so widely popular on TinkerTry and Virtually Gheto (i.e. William Lam) at the time.
I saw security vendors copy that play making SFF firewalls to incent stakeholders to run at home that turned into 7-figure deals over the long term.
I don't think Gelsinger is doing a lot of good for Intel. You didn't need him to cut costs. He's had plenty of time behind the wheel at this point, Intel should be on a more interesting trajectory by now.
If Intel had 'worked more on [thermals]' we probably wouldn't have the M2 chip.
At nearly every phase of Intel's history they've had the second-worst chips for instructions per watt, occasionally taking over first place.
In a world where a large and ever-growing percentage of all CPUs are not fronted by giant cooling fans... It's like Intel decided what they wanted on their tombstone in 1995 and haven't felt the need to change it since.
NUC isn't exactly the same as those, it is designed to be much more user upgradable and the price was very competitive. But even nowadays it is surprising to see them in the wild because they just aren't that common.
I bought the Intel Euclid (basically a Realsense with integrated compute) and out of the box it had a fan curve that would cause it to thermal throttle within 5 minutes, which would also cause the wifi to drop out.
On a regular consumer device you pick a tradeoff that will make it quieter, but really the only thing anyone used the Euclid for was robotics, where if you're using the CPU at all it's to run SLAM or object recognition nets, in which case you need 100% of the performance 100% of the time...
Replacing it is a 15 minute job, but the timing is always terrible.
Now I have an alert set for when the CPU is heavily thermal throttling or when the temp sensors hit certain thresholds and I keep a spare fan on hands at all time.
Intel just slapped a chip in a generic case.
RIP NUC :'(
No business decision is inevitable and we should stop acting otherwise.
How would you invest based on this prediction?
$INTC has done a whole lot of nothing for the past 9 months; I don't think any hedge funds care, short or long.
I once talked to an AMD engineer and asked why they didn't just build a barebones NAS chassis with some of their ryzen embedded stuff since Ryzen is quite popular but there's very few Ryzen products in that segment. Obviously QNAP and Synology have some decent demand. That's basically what they said, didn't want to compete with customers.
GPUs are sort of the opposite example where I don't really feel any particular attachment to MSI, Gigabyte, Asus, PowerColor, etc. EVGA and Sapphire are the only ones that have managed to claw together some consumer mindshare through their warranties. But the rest are essentially customizing an AMD/NVIDIA reference design PCB and a cooler that's perfectly forgettable and interchangeable with any other offering in their price class. They're not even allowed to do double VRAM configs anymore because that would cut into Radeon Pro and Quadro revenue etc. In that scenario I don't see a lot of value to having another middleman taking a 10% cut, and to me the value of products being available at MSRP, with no diffusion of blame through the supply chain, would be worth losing the partners over.
And while systems integration (NUCs, servers, etc) are clearly something where there's a lot of value from this kind of diversity, motherboards really are not. AMD and Intel have both killed off third-party chipsets like nForce, Abit, etc, and locked everything down to their one ecosystem they control, much like GPUs. It's not quite as pronounced as GPUs, and previously there was a lot of diversity in features, but PCIe 5.0 motherboards tend to reverse this, almost every board has exactly 2 PCIe slots and more or less the exact same featureset elsewhere. And the trend is towards more and more being onboard the CPU itself anyway (AMD chips the chipset is literally just an IO expander and is completely uninvolved in management tasks) and once onboard voltage regulators (FIVR, DLVR, etc) really start to take off in the next 10-15 years the motherboard is only going to become dumber and dumber. And in that world there's less and less of a need for a partner anyway - the CPU is all self-contained and locked down anyway, partners can't experiment and do cool things, they are a dumb pipe that pumps in 2V for the DLVR to step down at point-of-use. So why pay a 10% margin for their "value-add"?
It slays me that AMD is talking a big game about "open platform" and everyone is still locking down third-party chipsets and even the Platform Lock. If you want to lock the chip to the board for security, fine, but locking it to the brand doesn't do anything except ruin the secondhand market. Oh no I have to swap it out with another lenovo chip if I steal it, what exactly does that accomplish? And if you accept the conceit of the PSP there's no reason it has to be permanent anyway, you can allow the PSP to unlock it instead of permanently blowing e-fuses. And again, third-party chipsets are where a ton of innovation happened, it's better for the market if third parties can make those double-VRAM models and undercut AMD/NVIDIA's ridiculous margins on those workstation products, or shanghai a Celeron 300A into being a dual-socket system like Abit. What we have right now is tivoization in support of product segmentation, branded as a "security feature".
The chipset is a pure IO expander for AMD. AMD still is doing way better at that, it's just things like X300 ("the chipset is no chipset") being restricted to industrial/embedded, or board partners not being allowed to pursue things they want that break AMD's segmentation. PCIe 4 enablement (including opt-in) on select X370/X470 boards was something partners wanted for example. And there was no technical reason for X399/TRX40 to be segmented and even WRX80 could have been shoehorned on with "everything works just not optimally" level compatibility backwards and forwards. Partners could have done that if it wasn't denied/locked out. They did it on the Socket SP3 flavor.
Partners should ideally just get the freedom to play, and if they can make something work, cool. Let's have more Asrock/Asrock Rack and ICY DOCK design shenanigans again. Clamshell VRAM cards should be sold relatively close to actual cost rather than being gated by both AMD and NVIDIA. Etc. Partners should have the ability to configure the product in any way the product could reasonably be engineered to work. If features are being explicitly segmented by product tier it should be enforced by e-fuse feature-fusing at launch and that's the deal, no taking AVX-512 out after it launched.
Deleted Comment
For most modern computers you cannot be 100% sure about what you get for your money, until you have them in your hands and you can run tests yourself. By then, if you discover that the computer is not exactly what you want, it is too late. You cannot get your money back because the computer is not defective, it is just different from what you expected. There are helpful published product reviews, but those do not cover all products, especially not most of the cheaper versions.
In recent years, many small Chinese companies have introduced various small computers that are much more innovative than the latest Intel NUC models, but they have little or no documentation (at least in English) and they frequently have manufacturing quality problems, so the Intel NUCs will be missed.
Moreover, until now nobody has made good computers with 35 W CPUs and with a volume of less than 0.5 L or good computers with 45 W CPUs and with a volume of less than 0.7 L, except Intel. All the alternatives use either bigger cases or less powerful CPUs.
Then there was their community and RMA manager Dona. That woman had to be one of the hardest working people I've ever encountered. She must have slept like 5 hours a day. People could post on some obscure forum about a defect and she'd find the post and make it right. One time, when I was around 16 in 2007, I broke a very niche and specific overclocking record on DDR2 latency that still passed Memtest x86, and Dona hooked me the hell up. Suddenly started getting all sorts of companies sending me heatsinks, RAM, PSU's, etc to "review" if I wanted to. Sidenote to this tangent - I have to say, the ultra-fine grit sandpaper kit with 12 sheets in increasing grit, was absolutely my favorite. Learned how to sand a heatsink down to a near mirror finish and shaved a few degrees celsius off my de-capped AMD Opteron 165.
DFI BIOSes were famous for exposing the most settings for tuning RAM/CPU/chipset/GPU/SATA/PCIe/IDE communication and lanes. This guide, and participating in the DFI overclocking community, probably taught me more about the fundamentals of computer science, and how all those electrons move around to make the sand think, than anything else. https://forums.overclockersclub.com/topic/100835-the-definit...
Fun trip down memory lane.
Can you give an example? Clearly you are concerned with more than just the listed specs. Do you mean like RAM and SSD models? Or even lower level than this?
The spec sheets that are provided for pre-built consumer machines will at best identify the two or three most expensive chips and give you a vague idea of what class of part they selected for up to a dozen other components. You will definitely never get the specificity of a PCPartPicker parts list. You won't be able to audit the list of components for ones that are known to be problematic for Linux use.
Apple
The volume of a Mac Mini is 1.4 L.
A Mac Mini is huge compared with a classic slim NUC, and it has double the volume of a tall NUC or a Skull Canyon NUC.
At the size of a Mac Mini it is trivial to cool even a 65 W CPU, because that is possible even in a smaller 1 L case.
So no, what I have said about Intel is correct.
I've bought many NUCs over the years. I'm literally sitting in my office right now with 8 NUCs set up -- 1 as a workstation, and 7 as servers performing various tasks. I know people have always complained that they are a bit more expensive than the alternatives, but I'm willing to pay a premium for the excellent Linux compatibility and reliability.
That said, I've been expecting this for quite some time. I doubt the NUC line made a lot of money for them -- their stated goal with NUC was for these machines to be demonstrations of what can be done with their hardware. In their current situation, I'm not surprised that they decided this was needed to increase their focus.
I guess I'll be shopping around for alternatives at some point.
Yeah, it is quite strange. Even the RISC-V VisionFive 2 has better software support than your usual ARM SBC. The big caveat is that the open-source GPU drivers only support Vulkan, though Imagination has said it plans to release a Vulkan-based OpenGL implementation. Of course, application software support is worse than on ARM because everything needs recompiling, but that is a trivial barrier: if the hardware is any good, distros and developers will compile their packages for the new ISA.
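To make the "recompilation is a trivial barrier" point concrete, here's a minimal sketch, using Go purely as an example of a toolchain with built-in cross-compilation; the file and binary names are made up, not anything from the thread:

    // hello.go - the same source builds natively on x86-64 or for riscv64
    // with nothing more than two environment variables, e.g.:
    //   GOOS=linux GOARCH=riscv64 go build -o hello-riscv64 hello.go
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Prints the ISA this binary was compiled for, e.g. "riscv64"
        // when cross-compiled and run on a board like the VisionFive 2.
        fmt.Println("built for:", runtime.GOARCH)
    }

Distro package builds are the same idea at scale: the same sources, rebuilt by the build farm for the new architecture.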
The issue is that Intel is aiming more upmarket (based on their pricing) with something that doesn't offer a lot more. Intel has done interesting/fun things, such as building the entire motherboard into a single slot-sized unit, but going out and buying a Beelink is a better proposition for most people.
If anything, I think Intel can exit this market because they succeeded. They led the way, they begat a new form factor, and it's won. But the market has grown much more competitive, and it's hard to see what Intel would do to remain relevant & important in this market. Their current efforts are interesting, and even good, but not really enough to clearly differentiate, not enough to command a huge lead in this market, and certainly not at the somewhat above-average prices Intel has been asking.
This follows a lot of other Intel examples. Intel left the DRAM market in 1985. Intel left SSDs in 2020, in a situation similar to NUCs here: they basically created the mass consumer market, by pioneering NVMe & creating amazingly high-value products in a category that used to be ultra-expensive, proprietary boutique items.
This seems like a classic Innovator's Dilemma situation: Intel has a strong hand creating mass-market products that define the industry, but then gets chased out by down-market competition once the offerings become a true everyday commodity. I'm not sure there even is another way Intel ought to behave here.
They led the creation of a market but failed to maintain any competitive edge in it. That's a story of initial success that ultimately ran through their fingers, like so many of the other examples you've presented. Calling this fully "a success" seems very generous.
> If anything, I think Intel can exit this market because they succeeded.
You could spin the entire unit off so it could compete on its own terms. There are plenty of success stories to be found this way, and Intel's habit of doing this time after time means I have very little confidence in any new product they bring to market outside of CPU cores, and I view them mostly as a consumer electronics company now.
Serve the Home has an entire series on small systems:
* https://www.servethehome.com/tag/tinyminimicro/
I bought a quad-core i5 NUC on sale off Newegg for $300. I also ordered parts to build a little AMD 6-core APU ITX setup for 50 bucks more. If the NUC hadn't been on sale I would not have bothered to even look at it, and for 50 bucks more I got waaaaay more compute power. The only reason I bought the NUC and not another ITX build was the very small size.
The Hardkernel ODROID-H3 with a quad-core Celeron, dual 2.5 GbE and an M.2 M-key slot costs $130 without the case/PSU. Intel can do better on pricing.
It's also the exact purpose of my home Mac mini.
Is there data on how well they actually sell? I always vaguely assumed it was a pretty low-volume product that they couldn't actually get rid of due to niche customer applications. They've historically often been very slow to _update_ it; they definitely don't treat it as one of their more important products.
The Mac mini has always been Apple's low-cost trojan horse product for existing PC users who already have a keyboard, mouse and display. The issues with keeping it updated in the past had more to do with the Intel processors available to Apple.
In the short Apple Silicon era, the Mac mini has been upgraded multiple times. It remains an important product.
"The Mac Mini remains a product in our lineup" was the phrase that stuck into everyone's mind on how Apple views it.
How do you know this? I personally decided against a mini when I specced one out and an MBP wound up cheaper. The pricing strategy certainly made me _feel_ like they were pushing me away from the mini.
Overall I feel that there is enough integration density in processors that laptop form factors will be good enough for most common (& semi-advanced) uses, so this seems like an odd decision to move out of that market (technically), but if you factor in the business side then it starts to make sense.
Besides, I'd use an external display anyway, since I don't work at cafés.
I haven’t owned a laptop in years. I travel with my iPad pro.
Your mileage may vary, but I’ve been happy enough with mine for the price that I had already decided I would be choosing Beelink for my next mini PC over a NUC.
That said, I have seen mixed reports of issues around thermal throttling and eGPU support on some of the newer gaming-focused ones, but I think if you have realistic expectations given the form factor (and especially if you're not going for gaming) then they are fairly reliable and sturdy little devices.
* Also note, I bought mine barebones and added my own RAM and SSD. I can't speak to the quality of what they ship with, but I also haven't heard any complaints from others in that regard.
Everyone has loved them. I wound up getting one to manage my print farm, and no complaints there, either. Knock on wood, but they Just Work, so far. Noise and heat have not been an issue, even with the oldest units.
1. https://www.bee-link.com/beelink-gaming-pc-gtr6900hx-1994384...
ETA Prime and ServeTheHome do regular reviews of these on YouTube.
I only need iGPUs for a zippy UI, because I don't game on my work devices, and don't want to have any added noise because of a dGPU.
This might be the right move for Intel, but not for users. NUCs are awesome, cost-effective devices.
Down market, there are way cheaper 1 L business PCs which are mass-produced for businesses but work great for consumers wanting small, quiet (or not; 65 W options are also available) systems.
Up market, there's all manner of small form factor gaming PCs and others.
Intel did have a knack for interesting twists & innovations, but they were rarely must-have advantages. Building the whole PC onto a card was an interesting & useful move for their gaming Extreme series. Skull Canyon back in 2016 showed a power density and level of integration that the world didn't know was possible. The price has always been high, too high for critical success, but I thank Intel for pushing new things & pushing the market. https://www.anandtech.com/show/10343/the-intel-skull-canyon-...
Recently bought a lot of 10 Dell Optiplex Micros for around $1,200 USD all ready to go (9th gen / 16GB DDR4 / 512GB SSD, Win10 Pro) for a small business that works on spreadsheets and basic data entry all day. Replaced old desktop towers and employees were happy to have more free desk/working space.
They are only cheaper if you are buying the higher-end units and comparing them to the higher-end NUCs like the Skull Canyon.
If you get down into the Celeron models, the price really cannot be beat.
I have bought a TON of Celeron NUCs to run as kiosks, signage, and other single-application machines, and never found anything from Lenovo, Dell, etc. that could beat the price.
A NUC with an i3 Raptor Lake CPU and 16 GB DRAM is slightly less than EUR 400, a NUC with an i7 Raptor Lake CPU and 64 GB DRAM is slightly less than EUR 800, both with all taxes included.
If you buy the cheapest motherboard, the cheapest case and PSU and the cheapest Pentium or Celeron desktop CPU, you can get a desktop that is cheaper than a NUC, but with fewer peripheral interfaces (i.e. without the Thunderbolt ports included in a NUC), and it will not be faster than a NUC.
On the other hand, if you compare a NUC with a laptop that has a similar number of peripheral interfaces and a similar CPU speed, you discover that laptops are incredibly overpriced and only mobile workstations or top gaming laptops of $3000 or more can match a NUC.
No laptop available from a major vendor has as many peripheral ports as a NUC. No laptop that uses the same CPU as a NUC reaches a comparable speed, because the NUCs have much better cooling. In the recent NUCs the CPU can dissipate 35 W indefinitely without overheating, even though the CPUs have a nominal TDP of only 28 W.
Because of this I have stopped upgrading my Dell Precision mobile workstation and have replaced it with a NUC together with a 17" portable monitor and a compact keyboard. This combo costs less than half the price of a comparable mobile workstation, it is more than 1 kg lighter than a 17" laptop, it takes up less volume in my backpack, and because it has more peripheral ports I carry fewer dongles.
I have always used my laptop on a desk, connected to mains power, wherever I have to go on a business trip, so using a NUC changes nothing from this POV.
The DN2820FYKH was great.
That is, we hoped to use Intel's MFX Media SDK for transcoding H.264 (or even H.265) video. Unfortunately both the SDK and the hardware were too much of a moving target: going from software to hardware-only, Windows-only, then later Linux-only on CentOS, then only on Ubuntu, then (partially) open source. Dropping support for older generations, breaking API changes, opaque licensing policies.
At some point we gave up and simply went with ffmpeg. Works great on AMD too, or ARM for that matter.
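For anyone curious, the ffmpeg route really is as hardware-agnostic as it sounds, since libx264/libx265 are pure CPU encoders. Here's a minimal sketch of what such a vendor-neutral software transcode can look like, driven from Go purely as an illustration; the file names and encoder settings are assumptions, not the original pipeline:

    // transcode.go - a sketch: shell out to ffmpeg with a CPU-only encoder,
    // so the same code runs unchanged on Intel, AMD or ARM hosts.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Hypothetical file names; swap libx264 for libx265 to get H.265 output.
        cmd := exec.Command("ffmpeg",
            "-i", "input.mp4",     // source file (placeholder)
            "-c:v", "libx264",     // software H.264 encoder, no vendor SDK needed
            "-preset", "veryfast", // speed/quality trade-off
            "-crf", "23",          // constant-quality target
            "-c:a", "copy",        // pass the audio through untouched
            "output.mp4",          // destination (placeholder)
        )
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("ffmpeg transcode failed: %v", err)
        }
    }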
Still using a NUC as main development machine, extremely fast and mostly quiet.
We've tested on Amazon's Graviton 2 and 3 systems and so far found them to be on par with, or slightly cheaper than, their x86-64 counterparts. But this is mostly due to AWS's aggressive pricing of ARM systems, which is quite the opposite of Apple hardware.
Mac minis are currently approximately twice as expensive as the Intel i7 NUC, Gigabyte BRIX and similar hardware we bought. Also, dev tools targeting server software development on Apple hardware (i.e. Linux, please!) are still wanting.
...but in practice I never got one, because they were always so expensive. The same hardware can be found in a laptop for much less.