This article states the following power consumption:
> The 10900K has a 125-watt TDP, for example, while AMD's Ryzen 9 3900X's is just 105-watts.
My understanding from other articles (like [1]), however, is that Intel had to _massively_ increase the amount of power the CPU consumes when turboing under load:
> Not only that, despite the 125 W TDP listed on the box, Intel states that the turbo power recommendation is 250 W – the motherboard manufacturers we’ve spoken to have prepared for 320-350 W from their own testing, in order to maintain that top turbo for as long as possible.
Somehow it feels like back in the Pentium 4 days again.
[1]: https://www.anandtech.com/show/15758/intels-10th-gen-comet-l...
Intel's TDP is just at base clock, which Intel keeps at 3.7 GHz. It's pretty close to a hoax.
Power increases linearly with frequency and squarely with voltage. Higher freq. needs higher voltage. This applies to pretty much any CPU/GPU. Also AVX loads are significantly more power hungry.
230W would be a low estimate for a fully clocked one.
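For a rough sense of scale, here's the back-of-the-envelope version of that scaling (a sketch only; the turbo frequency and both voltages below are illustrative guesses, not measured figures):

    # Dynamic power scales roughly as P ~ C * f * V^2.
    # Baseline: 125 W at the 3.7 GHz base clock; the voltages are assumed, not measured.
    def scaled_power(p_base_w, f_base_ghz, f_turbo_ghz, v_base, v_turbo):
        return p_base_w * (f_turbo_ghz / f_base_ghz) * (v_turbo / v_base) ** 2

    # e.g. 3.7 GHz @ ~1.00 V -> 5.1 GHz all-core @ ~1.25 V (illustrative values)
    print(round(scaled_power(125, 3.7, 5.1, 1.00, 1.25)), "W")  # ~269 W

Even with fairly tame assumptions you land well past 230 W, which is why that's a low estimate.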
It would be interesting if someone would take an Intel CPU and an AMD CPU and calculate the electricity bill difference someone would pay in a year considering 40 hours / week run time.
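A quick sketch of that math (every input is an assumption - sustained package power, 40 h/week at full load, and a $0.15/kWh rate - so treat it as order-of-magnitude only):

    # Yearly electricity cost difference under sustained load, 40 h/week.
    hours_per_year = 40 * 52           # 2080 h
    rate_usd_per_kwh = 0.15            # assumed electricity price
    intel_watts, amd_watts = 250, 142  # assumed sustained package power under load
    delta_kwh = (intel_watts - amd_watts) / 1000 * hours_per_year
    print(f"~${delta_kwh * rate_usd_per_kwh:.0f} per year")  # ~$34/yr with these numbers

At idle the two are much closer, so the real gap depends heavily on how often the machine actually sits at full load.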
> Power increases linearly with frequency and squarely with voltage.
Going by the common use of "squarely," that could also be read as power scaling linearly with voltage. I take it you meant that power increases in proportion to the voltage squared.
(For the lurkers: drag increases in proportion to velocity squared, as does kinetic energy. These are all quite useful things to know, and should be part of the takeaways from your basic science education.)
The real draw under turbo is likely much higher. My i9-9900K, for instance, has a rated TDP of 95 W, but it can only boost to about 4.5 GHz all-core when limited to that TDP.
I run it at a 5.2 GHz all-core overclock and it pulls nearly 200 W under load (yes, beefy watercooling). From memory it was around 150 W @ 5 GHz, but I haven't played around with the OC settings for a while since I got it tuned right.
What sort of water cooling setup do you have? From all that I've read, the typical AIO water cooling setup is only barely better than a high-end air/fan cooling setup.
> Somehow it feels like back in the Pentium 4 days again.
What we've got here is actually the mysterious, long-overdue Pentium 5 (which was supposed to hit greater-than-5 GHz clock speeds and was never released due to its huge heat dissipation issues). History repeats, but this time it actually got released.
This is the cycle with Intel, though. They innovate, blow out the market, and then stagnate until AMD or someone else comes out with an architecture that's a category improvement on whatever Intel's offering.
This looks like they're turning every knob to 11 on the current architecture to try to get something that they can sell until they can come up with whatever will succeed the Core line.
It looks almost identical to the AthlonXP/64 vs Pentium 4 days when Intel just kept bumping the clocks and power consumption. Until they came out with Conroe.
Jim Keller might be doing just that with Intel right now. I only hope whatever they come up with, AMD is ready and able to keep up. Otherwise we're going to see a rerun of the following period too: Intel tweaking the same architecture for years at a time, and squeezing low single digit performance improvements that come with questionable security sacrifices and deeply unpleasant sticker prices.
AMD's not really any better when it comes to fudging specs. At least Intel CPUs actually limit their sustained power draw to the TDP when running in their factory default configuration, even if enthusiast motherboards usually override it.
AMD CPUs OTOH run significantly higher power than their spec'd TDP right out of the box. For example, that "105 W" 3900X actually ships with a power limit of 142 W, and it is quite capable of sustaining that under an all-core load.
AMD's turbo numbers are also pure fantasy. If AMD rated turbo frequencies the same way Intel does, that 3900X would have a max turbo of 4.2 or 4.3 GHz instead of 4.6 GHz. When Intel says their CPU can hit a given turbo frequency, they mean it will actually hit that and stay there so long as power/temperature remains within limits. Meanwhile, AMD CPUs may or may not ever hit their max turbo frequency, and if they do, it's only for a fraction of a second while not under full load.
The outrage over Intel CPUs' power consumption is pretty silly when you realize that the only reason AMD CPUs don't draw just as much is that their chips would explode if you tried to pump that much power into them. If you care about power consumption, just set your power limits as desired and Intel CPUs will deliver perfectly reasonable power efficiency and performance.
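(For what it's worth, that 142 W figure isn't arbitrary: on AM4 the socket power limit, PPT, is commonly cited as ~1.35x the advertised TDP. A quick sanity check, assuming that ratio:)

    # On AM4 the package power tracking (PPT) limit is roughly 1.35x the advertised TDP.
    for tdp_watts in (65, 105):
        print(f"{tdp_watts} W TDP -> ~{round(tdp_watts * 1.35)} W PPT")  # 65 -> 88, 105 -> 142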
> AMD CPUs OTOH run significantly higher power than their spec'd TDP right out of the box. For example, that "105 W" 3900X actually ships with a power limit of 142 W, and it is quite capable of sustaining that under an all-core load.
I'm pretty sure you've got that exactly reversed, AMD's power draw tends to stay closer to the rated TDP than Intel's.
> The outrage over Intel CPUs' power consumption is pretty silly when you realize that the only reason AMD CPUs don't draw just as much is that their chips would explode if you tried to pump that much power into them. If you care about power consumption, just set your power limits as desired and Intel CPUs will deliver perfectly reasonable power efficiency and performance.
I'm not even sure what this means. AMD CPUs are on a more advanced and physically smaller process, with smaller wires and a different voltage-frequency-response curve. Of course they would "explode" if you tried to pump them with voltages the process isn't designed to operate at, voltages that the Intel CPU with its larger process can handle. But power draw isn't really the "point" of a CPU -- performance is.
Imagine thinking that the Pentium 4 or Bulldozer was better than their contemporaries because they were capable of drawing incredible amounts of power.
I'm pretty certain that I remember watching videos from GN, LTT, or Bitwit (can't recall which) where they noted that AMD chips were turboing up to a few hundred MHz above the specced numbers.
> When Intel says their CPU can hit a given turbo frequency they mean it will actually hit that and stay there so long as power/temperature remains within limits.
Have you actually tried to do that? When Intel says their CPU can hit a Turbo frequency, it's usually pure fantasy because they can't actually do it within the power/temp constraints.
Yes, they'll get to 5-whatever GHz on maybe half the cores if you're lucky, but full all-core load at the top stated Turbo? No chip will stay there, it will throttle down, likely by a lot.
And AVX loads will bring it to, or close to, maximum non-turbo clocks, that's how hot they run.
It could maybe be done with extremely good cooling and undervolting, and completely removing power limits (in which case it will probably consume close to 500W if it doesn't fry something first :D).
I'm surprised they _still_ don't have a working 10nm desktop chip. They're 5 years behind schedule (and still counting) at this point! This is just a reskinned 9900K with 2 more cores and a much higher (>25%) TDP of 125 watts, which is a generously low estimate of how much power it will actually suck. I briefly had their "140 watt" 7820X chip (returned for refund) that would gladly suck down more than 200 watts under sustained load. Intel plays such games with their single-core turbo at this point that the 5.3 GHz means very little, and it's the same tired architecture they've been rehashing since Sandy Bridge (2011).
This is an incredibly poor showing and if I were an investor I would be seriously questioning the future of Intel as a company.
I've always been puzzled by this whole Jim Keller situation. Is one man literally the only reason we have technological advancements in microprocessors?
Intel's problems are not solely tech-related, though their missteps here have not helped the situation.
Intel does not know how to compete from a marketing, sales, and business standpoint. For a few decades they only really competed with their own product line; now that they have actual competition in the market, they are unable to react to it properly even if they had the tech stack to do so.
Base clocks are irrelevant on desktop; it's not the Sandy Bridge days anymore. No modern (desktop) processor runs at base clock in practice.
The 10900KF will do 4.8 GHz all-core at $472, the 3900X will do it at around 4-4.1 GHz[0]. AMD has a slight IPC advantage in productivity, Intel has a slight IPC advantage in gaming[1].
The market for the 10900K/KF is basically someone who wants the fastest gaming rig, but then as a secondary priority wants more cores for productivity. Kind of an overlap with the 3900X, sure, but the 10900K/KF will still outperform it in gaming, where the 3900X tilts a little more towards productivity.
There are of course exceptions: CS:GO loves Zen2's cache, and Zen2's latency makes DAWs run like trash, so results will vary with your specific task.
I'd personally point at the 10700F as being the winner from the specs, 4.6 GHz all-core 8C16T with slightly higher IPC in gaming than AMD, for under $300. That's competitive with the 3700X: little higher cost, and more heat, but more gaming performance too. The 10900 series and 10600 series are a little too expensive for what they offer, the lower chips are too slow to really have an advantage.
But really it's not a good time to buy these anyway. Rocket Lake and Zen3 will probably launch within the next 6 months; if you can stretch another 6 months there will be discounts on Zen2 and more options for higher performance if you want to pay.
[0] https://www.pcgamesn.com/amd/ryzen-9-3900x-overclock
[1] https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-...
These days you can't say anything positive about Intel even if it is factually true. I am personally excited about the 10900K and the competition with AMD is bringing Intel prices down.
People seem to have a deep hatred for Intel, exactly the way they had a deep hatred for AMD pre-Conroe (when Intel released Conroe in 2005/06, people were rooting so hard for Intel that it was hard to find any positive comments about AMD). I don't get it. Both are multi-billion-dollar companies innovating like crazy. We can't just sit back and feel the utter awe of what it takes to make computer chips, whether it's Intel or AMD. You know, it's engineers like anyone else - they have good intentions and work hard to make all of this possible - and the fans have this egregious entitlement that is so infuriating and dismissive.
>That's competitive with the 3700X: little higher cost, and more heat, but more gaming performance too.
I wouldn't call the cost « little higher » if you factor in the motherboard (and possibly the cooler). Any AM4 board will run a 3700X whereas Intel's using yet another socket for their new chips.
> I'd personally point at the 10700F as being the winner from the specs, 4.6 GHz all-core 8C16T with slightly higher IPC in gaming than AMD, for under $300
I think the 10700F would only be the winner if you have a motherboard that either isn't following Intel's recommendations for turbo or lets you ignore them.
Otherwise the 10700F's 4.6 GHz all-core isn't going to happen with a 65 W limit kicking in after some unknown number of seconds. That TDP limit is going to really cripple that chip. Whether or not you can ignore that limit will, I guess, decide whether the i7-10700KF is worth the extra $50.
But the 9700 appears to be almost nonexistent. I think the 10700 will end up in the same camp. The 10700K will likely be the only chip with any meaningful coverage/reviews in the i7 lineup.
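(For context on how those limits behave when a board actually follows Intel's guidance: the chip can draw up to PL2 for roughly tau seconds, then gets clamped to PL1, which equals the TDP. A minimal sketch - the PL2 and tau values here are assumed/illustrative for a locked 65 W part, not official numbers:)

    # Simplified model of Intel's default turbo budget (the real mechanism uses a moving
    # average of power, but the net effect is similar): PL2 for ~tau seconds, then PL1 (= TDP).
    PL1_W, PL2_W, TAU_S = 65, 224, 28   # PL2/tau are assumed values for a locked 65 W part

    def power_budget(t_seconds):
        return PL2_W if t_seconds < TAU_S else PL1_W

    for t in (5, 30, 300):
        print(f"t = {t:>3} s -> {power_budget(t)} W budget")

A 4.6 GHz all-core load needs well over 65 W, so at spec-following defaults it can't outlast that window - which is the crippling effect described above.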
What do you mean by this? I researched Intel vs. AMD gaming performance pretty extensively last summer, and I paid particular attention to CS:GO benchmarks. AFAIK, the 9700K/9900K are a little faster than the 3900X in CS:GO at stock speeds, and a good bit faster on a typical overclock.
As a desktop user I don't care about power consumption, and I care very little that it has x% more power than last year's processors (at least when x < 100), because current processors have enough power for anything you throw at them. But what I really do care about is this:
- Do they still pack that PoS Intel Management Engine inside the CPU? And are these new CPUs still vulnerable to Meltdown? Because if either of those questions is answered with "yes", then no amount of cores, GHz, and performance % is going to change my mind away from Ryzen.
Well, power consumption is still kinda important on a desktop. All that power has to go somewhere and you need a cooler that can keep up or you'll be losing more performance from thermal throttling.
I agree, it's a sad one. But compared to Intel's flurry of independent security incidents in past years, it's but a bucket in a lake.
I am not praising AMD, don't get me wrong, I am merely choosing the lesser evil. In this case the lesser evil happens to be better and cheaper. ¯\_(ツ)_/¯
I have no horse in the CPU race but I do have a different perspective here. I have a SFF home server with <100W cooling. I am compute-bound, have no discrete GPU, and have no interest in Meltdown as I run only trusted code. My box sports a 6-core i7 8700, and I am tempted by the 10-core i9 10900.
Why not AMD? Answer: iGPU. AMD's best iGPU offering seems to be the Ryzen 9 4900HS, which has fewer cores, and you can't just order it - it seems to be for OEMs only. So staying with Intel.
It feels like we are moving towards ARM becoming the primary consumer CPU, with x86/x64 being used only for niche use cases (dev, audio/video, gaming, etc.) or servers. Can anyone working in the space confirm/deny?
Define "nowhere close"? Apple's cpu's are competitive with x86 when looking at low power applications. Microsoft has multiple examples of windows running on arm. This seems pretty close in the grand scheme of things.
Aren't we? How many people's primary computing devices are ARM-based at this point? I know plenty of people who rarely touch a laptop, let alone a desktop. Expansion of the computing market is mostly happening in developing markets, through predominantly ARM-based devices, too.
My Raspberry Pi 4 could be an entirely usable desktop replacement for casual usage if the GPU was a bit better.
My gf already uses her Samsung S8 as primary "computer". Only for a few things does she reach for her laptop, and that's mostly due to the screen and webpages not being mobile friendly.
Isn’t the open rumor that Apple is gonna start switching over their Mac line next year? Apple switching from Intel to Arm across the board would drastically change the situation practically overnight.
Well, if you consider phones, sure. But for people who still need desktop/laptop computers for gaming, programming, video editing, photo editing, and heavy business use (spreadsheets, desktop publishing, etc), then Intel is still, by far, in the lead.
There are ARM Windows laptops. I don't think they are successful, so I don't really see ARM becoming the primary consumer CPU for computers. It might change in the upcoming years, when Apple releases ARM computers.
Too bad they still have only 2-channel memory controllers and the same 32/32 KB L1 caches. That means all that power is still wasted waiting for memory (a max memory bandwidth of 45.8 GB/s, seriously?).
Not sure why people are so excited about these processors.
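For reference, that 45.8 GB/s is simply the ceiling of a dual-channel DDR4-2933 controller. The arithmetic (approximate; the spec-sheet figure presumably uses a slightly different effective transfer rate):

    # Peak theoretical DRAM bandwidth: channels * bus width (bytes) * transfers per second
    channels, bus_bytes, transfers_per_s = 2, 8, 2933e6   # dual-channel DDR4-2933
    print(channels * bus_bytes * transfers_per_s / 1e9, "GB/s")  # ~46.9 GB/s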
They also still have only 16 PCI-E lanes from the CPU, which is disappointing. Ryzen's 20 isn't exactly lavish, but it's enough for x16 to the GPU plus x4 to the primary NVMe drive.
For typical desktop workloads memory bandwidth is not that important. They'll likely release Xeon-W counterparts later with similar frequencies but more memory channels and PCI-E lanes for those who need them.
Does anybody need those 5.3 GHz at 300 watts? I believe the next big thing is "massively parallel distributed processing": thousands or maybe even millions of tiny processing units with their own memory and reasonable set of instructions run in parallel and communicate with each other over some sort of high bandwidth bus. It's like a datacenter on a chip. A bit like GPU, but bigger. I think this will take ML and various number crunching fields to the next level.
Those are desktop processors. Most applications won't even use 64 cores. I agree this is the future, but today a faster single core will speed up your day a lot.
There is, but keep in mind this is yet another refresh of Skylake, not really a "new" CPU.
Supposedly the new socket is PCI-E 4.0 capable, which the next CPU "Rocket Lake" will supposedly enable.
But since it's still 16 lanes from the CPU only, that means you'll only get PCI-E 4.0 to likely a single slot. And likely not any of the M.2 drives, which typically are connected to the chipset instead.
PCIe 4.0 is likely to be short lived given that PCIe 5.0 seems to be around the corner.
On the AMD side, where they already have 4.0, few care for it (I do, though - I just ordered a mobo and processor that use it): 4.0 NVMe drives aren't faster for common use cases, and it mostly matters when going multi-GPU, which is only seen in some professional use (and even then you can use a Threadripper and just have a lot of 3.0 lanes). Then next ~year Ryzen 5000, which will require a new socket, seems likely to already support 5.0. I'm guessing Intel will just skip 4.0 altogether.
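For scale, the approximate per-direction link rates work out as below (theoretical figures after encoding overhead; real drives rarely sustain anything close in everyday workloads):

    # Approximate usable PCIe bandwidth per lane, per direction (after 128b/130b encoding)
    per_lane_gb_s = {"3.0": 0.985, "4.0": 1.969}
    for gen, bw in per_lane_gb_s.items():
        print(f"PCIe {gen}: x4 NVMe link ~{bw * 4:.1f} GB/s, x16 GPU link ~{bw * 16:.0f} GB/s")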
> I run it at a 5.2 GHz all-core overclock and it pulls nearly 200 W under load.
When you overclock, no wonder you get more power draw than the official spec says.
> This looks like they're turning every knob to 11 on the current architecture to try to get something that they can sell until they can come up with whatever will succeed the Core line.
AMD just has a good architecture and is using good fabbing facilities.
Intel can have the best architecture in the world, but that won't matter much if they're stuck with their fabbing at the current 14 nm process.
> Jim Keller might be doing just that with Intel right now.
Then maybe they'll come up with a Core-like design again.
> I'm surprised they _still_ don't have a working 10nm desktop chip.
If the rumors are true, there _never_ will be a 10nm desktop chip; the process is broken. It seems 7nm will be "it", late next year.
I'll take one AMD Ryzen 9 3900X, thanks MicroCenter!
> Base clocks are irrelevant on desktop
It's relevant for the TDP number.
Reading this makes me feel like I need to build a new PC now lol. Any idea what percentage of desktop PCs are still on Sandy Bridge?
> Do they still pack that PoS Intel Management Engine inside the CPU?
Sadly, AMD CPUs have something pretty similar too: https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...
> For typical desktop workloads memory bandwidth is not that important.
Perhaps; I need to take that as an axiom, right? The i7-5820K was non-typical in that regard, then.
And yes, a Xeon-W at 5.3 GHz? Tell me more :)
> Supposedly the new socket is PCI-E 4.0 capable, which the next CPU "Rocket Lake" will supposedly enable.
You should go AMD if you want PCIe 4.0 currently. Which is a bit funny, as Intel sells certain peripherals that support it.