For anyone looking at a new AM4 build with high speed memory, be warned that Linus' review stated he could not POST with memory clocked past 2666MHz[0]
Really not wanting to poop on anyone's parade, but an off-lease Dell/HP/Lenovo workstation bought used from eBay will beat the pants off anything else for homelab. You can get an E3-1240 V2 with 8GB RAM for the kingly sum of ... $200. Including motherboard, chassis and PSU.
ECC is a feel-good feature, but is mostly useless. Memory either is corrupted (which can be detected by running MemTest) or it works perfectly fine without ECC. I have over 50 Itanium servers, each 10+ years old with 192GB of ECC RAM, running 24/7, and there are only 2 ECC errors logged in total.
To word your statement differently: without ECC, your memory either works fine or is corrupted. But you don't know which one it is, until you run memtest. How often do you do that?
Personal anecdote: Upgraded RAM in my PC, ran memtest, everything fine. A year later, I need to move and so I consolidate all my data on a big hard disk. As the disk is new, I run some verification and notice some non-matching checksums. Long story short, the new RAM had become defective, and a new memtest showed a handful of errors after a few hours. Had I had ECC on that machine, I probably would have seen notifications about memory errors even before that day (plus the errors would likely have been corrected).
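Roughly the kind of verification I mean, as a minimal sketch (the paths and layout are made up for illustration): hash every file under the source tree and compare it against its copy, flagging anything that doesn't match.

    # Hash each source file and its copy; silent corruption (bad RAM, bad
    # cable, bad disk) shows up as a checksum mismatch.
    import hashlib
    from pathlib import Path

    def sha256_of(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk)
                if not block:
                    break
                h.update(block)
        return h.hexdigest()

    src_root = Path("/data/original")       # placeholder
    dst_root = Path("/mnt/bigdisk/backup")  # placeholder

    mismatches = [
        src for src in src_root.rglob("*")
        if src.is_file()
        and sha256_of(src) != sha256_of(dst_root / src.relative_to(src_root))
    ]
    print(f"{len(mismatches)} file(s) failed verification")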
If Ryzen allows us to build a powerful PC with ECC for little more money than a non-ECC one, I'll happily pay for this "feel good feature". The current ECC-capable alternative from the Intel universe (Xeon CPU & motherboard) is simply too expensive for most people.
I run a few more modern servers and we see corrected ECC errors fairly often, but typically not randomly distributed across the boxes. This makes me suspect bad memory devices. We've replaced modules here and there with positive results.
Anyway as a former memory subsystem designer and a former employee of a memory chip maker, I like ECC. It provides some protection against bus noise errors even if you have full confidence in the memory chips themselves.
> Memory either is corrupted (which can be detected by running MemTest) or it works perfectly fine without ECC.
While there is reason to think soft errors are much less common than often claimed and therefore ECC memory has been much less necessary than people think (at least within the atmosphere), there's now a pretty good reason to prefer ECC memory because of rowhammer.
That's well below the median of around 1+ per year per server, and might indicate you are not logging all ECC errors found, or just not using most of your memory heavily. However, ECC is much more useful for early detection of memory problems than for actual recovery from memory problems. Further, if you do large-scale measurements you will get vastly higher numbers, as the average is dominated by the worst offenders.
If you value the integrity of your data, particularly in longer term archive / recall / processing scenarios, then you value having ECC RAM as /part/ of a solution for detecting and correcting errors.
How much hardware have you used? I regularly find machines that crash, see errors reported in mcelog, and then see no errors when I run memtest. I'm guessing that memtest doesn't use challenging enough access patterns, or something about power saving/clocking runs it a bit slower.
In any case I LOVE ECC. Sure, random radiation-caused bit flips are relatively rare. But it's REALLY nice to see high levels of ECC errors when something died and not have to wonder if it's the flash drive, power supply, graphics card, etc. Sure, sometimes it's the DIMM, sometimes it's the motherboard, sometimes it's the CPU. But with ECC errors you can much more quickly/easily tell if it changes when you swap things around.
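For anyone curious, on Linux the corrected/uncorrected counters are exposed through the EDAC sysfs tree, so a quick script like this (a rough sketch; it assumes an EDAC driver is loaded, and per-DIMM attribute names vary by driver and kernel) lets you watch whether errors follow a DIMM when you swap parts around.

    # Read corrected (ce_count) / uncorrected (ue_count) ECC counters from the
    # Linux EDAC sysfs tree. Requires an EDAC driver for your memory controller.
    from pathlib import Path

    EDAC = Path("/sys/devices/system/edac/mc")

    def count(path):
        try:
            return int(path.read_text().strip())
        except (OSError, ValueError):
            return None

    for mc in sorted(EDAC.glob("mc*")):
        print(f"{mc.name}: corrected={count(mc / 'ce_count')} "
              f"uncorrected={count(mc / 'ue_count')}")
        # A per-DIMM breakdown helps tell a bad module from a bad channel/CPU.
        for dimm in sorted(mc.glob("dimm*")):
            print(f"  {dimm.name}: corrected={count(dimm / 'dimm_ce_count')} "
                  f"uncorrected={count(dimm / 'dimm_ue_count')}")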
Not that it's likely to matter for these systems but there has been some recent work on using ECC to recover from errors caused by refreshing less often. This would use less power so I suspect we'll see someone try it in phones or laptops soon.
ECC is great for detecting errors more than correcting them. Then one can do thorough memtests and decide if it's time to replace memory modules. It doesn't really prevent problems (there are unrepairable ECC errors too) as much as it leads to early detection and repair. Well worth it when you're optimizing for MTTR.
It's not a silver bullet if the user is just paranoid about data integrity (nothing beats end-to-end integrity checks built into your system, while ECC only addresses one step in the process).
Wow. AMD has really pulled through on this one. They are really giving Intel a run for their money. This is going to be great for consumers. What is interesting is the HUGE jump in multi-threaded performance on AMD. Very interesting.
> What is interesting is the HUGE jump in multi-threaded performance on AMD. Very interesting.
Indeed. Single threaded performance, however, seems bad. Virtual cores most likely share less of the actual core than on Intel parts, allowing one to compete less with the other and get more done. On my i7 laptop, on anything more demanding, I see half the "cores" idling, probably because one is using the actual underlying core while the other is waiting.
Still, while AMD obviously did a good job, it will not make the Intel guys lose any sleep for now.
The server parts are another story. If they manage to reliably outperform Xeons (even if it's only on a per watt basis), it'll be fun to watch.
Hyper-Threading is not a virtual core, and as it doesn't exist as a separate core it's hard for it to be idling. Bad analogies make for less than adequate understanding.
And? Intel's $1,600 6950X loses to their own $170 i3-7350K in single-threaded benchmarks. When you pay extra for a CPU with more cores, you almost always get less single-threaded compute because of power/heat/design constraints forcing the clock speed down. Dual cores will often beat quad-cores, which will almost always beat hex/octo-cores.
What's happening here is that AMD has priced Ryzen so inexpensively that it actually broke the usual market segmentation by driving high-thread-count workstation CPUs (usually $1k+) into the same price range as high-end gaming CPUs (~$300-400). Think of it as being able to buy an industrial dump truck for the same price as a pickup truck. The dump truck is clearly a better value in terms of hauling capacity, but most people would be far better off with a pickup truck for their daily traveling.
AMD's pickup truck (the 1400X) doesn't launch until later this year. The 7700K will probably still beat it on single-threaded workloads due to the difference in clock speed, but it should get you most of the way there for substantially less money, around $200 if the leaks are accurate.
The R7 1700 is within shouting distance of the 6900K in most serious tasks - rendering, encoding, compilation etc. A $330 65W CPU is a credible alternative to an $1100 140W CPU. That is absolutely remarkable. The cherry on the cake is that the cheapest AM4 motherboards are half the price of the cheapest 2011-v3 boards.
For the first time in a decade, AMD are seriously competing at the high end. That's great news for everyone except Intel.
> But it's still losing in every web and office single threaded benchmark.
By ~15% compared to >160% for Bulldozer.
And this is day zero before any applications have been optimized for the new microarchitecture, compared with Core which everything has been targeting for more than a decade.
They're coming in at half the price of equivalent Intel chips. They don't need to win, necessarily (which I didn't think was going to happen), they need to be competitive for a lot less money.
EDIT, in response to the other edit: sorry about that, I wasn't sure which comparison you were making. Still, the difference is slim enough that if I were in the market, I'd go with this one just to put the heat on Intel.
But it wins in every case where performance truly matters: multi-threaded calculations, archiving, encoding. You know, the things that actually do take up time.
Do you really care if your webpage renders for half a second longer?
By a pretty small margin compared to equivalent Intel CPUs from what I've seen[1]. And the ryzen chips are a whole lot cheaper and beat Intel for parallel tasks.
I'm about to buy a beefy machine for FPGA development, I'm definitely considering AMD. Seems like I'd get better performance for a much lower price. I'm still waiting for more in-depth reviews though.
Ryzen 7 chips focus on heavily multithreaded work. When you are dealing with everyday apps and games you are more concerned with IPC (instructions per cycle), and I'm sure we'll see better performance in that arena with Ryzen 3 and 5.
I think people run enough browser tabs (+ their office suite + background OS tasks) that the core difference will outshine the single-core speed. It's not hard to overwhelm 4 real cores.
> ... this means that the base memory controller in the silicon should be able to support ECC. We know that it is disabled for the consumer parts, but nothing has been said regarding the Pro parts.
ECC should be a standard feature. If you don't want it you don't have to use it but to disable simply for market segmentation is lame.
Thanks for pointing this out. I found this on the specification page[0] for the ASRock "X370 Killer SLI/ac" AM4 board:
> AMD Ryzen series CPUs support DDR4 2667/2400/2133 ECC & non-ECC, un-buffered memory
So the board seems to accept ECC memory, hopefully that means the memory controller actually performs ECC? I found a reddit comment claiming that the boards accept ECC memory but don't perform error correction[1]. Not a great source but seemingly possible.
Do you have any more sources suggesting that ECC is actually being implemented?
For example, BIOSes often have ECC settings; if we had some screenshots of an AM4 board BIOS showing an ECC setting, that would be strong evidence.
IIRC just because it boots with ECC RAM doesn't mean it is actually error correcting. I'm under the impression Intel spends a considerable amount of money to prove their memory controllers actually error-correct, and historically AMD hasn't spent the money on documenting theirs.
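One heuristic on Linux (an assumption on my part, not a guarantee): the amd64_edac driver normally refuses to register a memory controller unless the hardware reports ECC as enabled, so checking whether any EDAC memory controllers showed up gives a hint as to whether correction is actually happening.

    # If an EDAC memory-controller node exists, the kernel driver believed ECC
    # was active; if the directory is empty, ECC DIMMs may have been accepted
    # at POST without any actual error correction taking place.
    from pathlib import Path

    mcs = sorted(p.name for p in Path("/sys/devices/system/edac/mc").glob("mc*"))
    if mcs:
        print("EDAC registered:", ", ".join(mcs), "- ECC reporting appears active")
    else:
        print("No EDAC memory controllers found - ECC may not actually be enabled")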
Is that buffered ECC or unbuffered ECC? Because last time I checked, my AM3 board had support only for unbuffered, which is really hard to find, and I just didn't bother.
Yes, I have not been able to find any data directly from AMD on whether ECC is or is not supported. In any case I am disappointed. Desktops, workstations and servers are all getting more and more memory. The chances of corruption are only going up. And getting an Opteron for each workstation is kind of silly.
Edit: If no motherboard supports it, in Linux one can use ecc_enable_override (a similar thing might be available in Windows). That is, if the CPU supports it. It can USE ECC memory, but we don't know if it can use the ECC checksums; most motherboards have a NO here.
>ECC should be a standard feature. If you don't want it you don't have to use it but to disable simply for market segmentation is lame.
I agree ECC should be a standard feature, but it follows the same marketing patterns (in that industry) as disabling multi-processor support or artificially limiting clock speeds.
I think ECC is in a different category than MHz or core count because with a lower frequency or fewer cores you may get your result slower but it will be the same result as a top end chip.
Without ECC you may get a completely wrong answer when the increasingly small, vulnerable, and numerous dram cell is flipped.
Actually no, enabling external cache coherency (eg. QPI) incurs a significant cost in terms of pin count, power, die space and clock speeds. The same does not transfer to ECC memory.
This is done so that whitebox system builders don't buy consumer-level motherboards and CPUs, shove ECC in there plus a cheap RAID controller, and sell this setup as servers. Next thing you know, all the AMD servers on the market are failing because of shoddy parts, since consumer-level QA is at a lower level than enterprise, not to mention the lowered performance of consumer-level hardware. Now AMD has a bad reputation for reasons completely unrelated to its chip quality or performance, and collapses in the market.
I don't like this situation, but I certainly understand it. AMD needs the server market to raise revenues. This means its server offerings, from soup to nuts, need to be rock solid. Letting whitebox resellers junk up their reputation is not within their interests.
Not to mention, enterprise sales often subsidize consumer sales. Those big enterprise markups go into the same pot of money that allows AMD to sell consumer chips at lower prices. Anything that threatens enterprise sales could affect the consumer space. AMD needs to be very careful here as, compared to Intel, it needs to prove the value of its brand every chip generation. They also need to overcome a lot of bad mojo they've developed from questionable benchmarking and marketing of its previous two or more generations of chips. Bulldozer, I believe, had half the single-core performance of the Intel equivalent at the time, yet it benchmarked well due to its extra cores. The market doesn't forget shenanigans like these.
That said, the market allows 'workstation' level cpus and motherboards that support ECC. A little legwork can find you decent deals, but no, they're not going to be $59 NewEgg specials and you won't be able to use the consumer Ryzen on it. Consumer CPUs are almost always gimped anyway - smaller caches, disabled features, slower clocks, etc. The ECC limitation isn't too different from this, but it is annoying.
Impressive! While it doesn’t utterly destroy Intel, AMD does offer a MUCH better price/performance ratio. Things have gotten much more interesting indeed. Intel’s dominance is being called into question. As a result, we all profit.
On the other end, ARM is also nibbling at Intel with AArch64 chips that are closing in on lower-end Core performance and building in server- and high-end-workstation-grade features like virtualization.
It offers a much better price/performance ratio if seriously multi-threaded use scenarios matter for you (that is where you compete with Intel's seriously overpriced >$1000 chips).
It seems that in common use (web browsing, office, gaming), fewer but stronger cores still shine and even the fastest Ryzen is slower and more expensive than Intel's offering.
Well... for the time being. When the R5 and R3 get released in Q2, we will have 6 and 4 core versions with higher clock speeds, so that might help a bit. Plus, based on the pricing, I'm gonna bet a 4 core Ryzen will be a bit cheaper than a 4 core Kaby Lake.
No it doesn't. Chips much slower than Ryzen aren't struggling with those common use cases at all. All you're doing is losing 8C/16T and spending the same amount or more to do it.
The 1700 is really well placed, often seems to beat the i7 7700k in benchmarks whilst drawing less power and costing roughly the same/cheaper. Probably even better value if the stock coolers are as solid as they sound. Definitely going to be a popular chip with gamers (I'm very tempted myself).
Hopefully more software starts to make use of multiple cores more effectively. It's not like it's a new feature of chips any more.
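To illustrate the pattern (a toy, CPU-bound example, not anything application-specific): split a job into independent chunks and hand one to each core, which is exactly the kind of work where 8C/16T pays off.

    # Toy CPU-bound workload: count primes in disjoint ranges, one task per core.
    import os
    from concurrent.futures import ProcessPoolExecutor

    def count_primes(bounds):
        lo, hi = bounds
        def is_prime(n):
            if n < 2:
                return False
            return all(n % d for d in range(2, int(n ** 0.5) + 1))
        return sum(1 for n in range(lo, hi) if is_prime(n))

    if __name__ == "__main__":
        ncpu = os.cpu_count() or 1
        step = 200_000
        chunks = [(i * step, (i + 1) * step) for i in range(ncpu)]
        with ProcessPoolExecutor(max_workers=ncpu) as pool:
            print("primes found:", sum(pool.map(count_primes, chunks)))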
Especially in games I think it's pretty clear that even the 1800X is not amazing - really good, but behind a 7700K while being more expensive. If you are not solely a gamer, though, it's the better choice!
According to AMD CEO Lisa Su at https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea... : "there are some games that are using code optimized for our competitor... we are confident that we can work through these issues with the game developers who are actively engaging with our engineering teams."
Also, another AMD employee chimed in:
"But what's also clear is that there's a distribution of games that run well, and a distribution of games that run poorly. Call it a "bell curve" if you will. It's unfortunate that the outliers are some notable titles, but many of these game devs (e.g. Oxide, Sega, Bethesda) have already said there's significant improvement that can be gleaned. We have proven the Zen performance and IPC. Many reviewers today proved that, at 1080p in games. There is no architectural reason why the remaining titles should be performing as they are." Source: https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...
Unfortunately you can bet Intel is going to put all the spin they can into "AMD underperforms for gaming", while game developers slowly optimize for Zen, up to the point where this perf discrepancy is fully fixed but people still have the mindset that "AMD underperforms for gaming"... sigh
Also, super important: another reason games underperform is that Windows load-balances threads across CCXs (CPU complexes). The 8 cores of Ryzen are split across 2 CCXs. The 16MB of L3 cache is also split, with 8MB per CCX. So when the OS scheduler arbitrarily decides to move a thread from one core to another, it might accidentally resume the thread on the other CCX, causing the thread to lose all its L3-cached data, incurring a high perf penalty.
I haven't found one yet, but as the chip is rated at 65W and the i7 7700k at 91W, it's pretty likely to draw less (at load anyway). I'd be curious to see what they draw whilst idle though.
If I read the specs correctly, Ryzens don't have GPUs integrated. Are there any motherboards out there with GPUs on board? Or do you always have to add a 150EUR+ graphics card to it (assuming I want at least 2x 4K display support)?
The 1800X does seem like a killer value for the money for us C++ people :)
Low-end GPUs are a very poor value proposition. For $89 you could have an RX 460 with nearly 3x the performance of the R7 250 plus HDMI 2.0b and DisplayPort 1.4.
The fact that it doesn't have an integrated gpu is a GREAT THING. This is an enthusiast/server CPU, if you can't be bothered to buy a cheap dedicated graphics card, wait for the mobile/apu Zen.
It's mostly also a packaging issue - I don't need 3D capabilities, so a basic 4K-supporting Intel GPU works well and doesn't require me to add a GPU to a chassis (which can then be mini-ITX).
Hence my question about having GPU integrated to the motherboard like older boards had it.
I think it'd be great if we could rely on every part having a standard, reasonable, GPU, even if we just use it as a clever vector unit and never push a single pixel through it.
I'm someone who's still running an ancient (2007, IIRC) Nehalem i7-920 based workstation for daily use and was eagerly waiting for this launch to finally decide on a new build. After reviewing all the day one benchmark data, I've come to the odd decision that I'm probably going for an i7-7700k, especially if Intel drops the prices by even a tiny amount.
I say "odd" because, apart from basic daily tasks and occasional gaming, my workload mostly consists of scientific computing. Ryzen's performance on AVX2 is precisely as bad as I expected from the architecture (effectively, a 50% penalty) and given the thermal constraints on 6900k's, a lightly overclocked 7700k (usually easy to get to 4.8-5 Ghz after delidding) seems to deliver by far the best price/performance ratio even on multithreaded workloads as long as vectorization is a significant component of the workload. Not to mention the added benefit of delivering the undisputed best single threaded performance money can buy. Ryzen also seems to have issues with memory bandwidth that may or may not be solved by future microcode or MB firmware upgrades, and very limited overclocking headroom.
It's still nice to see AMD bring competition back to the CPU marketplace, and I'm looking forward to their server CPUs in Q2. I'd also love to be proven wrong on anything I said above and reconsider my build options, so please point out any oversights on my part.
Are there any benchmarks that show Ryzen at 50% slower on AVX2? Intel does AVX in one clock cycle, but drastically cuts clock speeds when AVX instructions are executing. You can overclock a lot, but the chip will then reduce the multiplier/clock the second AVX appears, to keep from killing itself.
If we assume a similar AVX IPC (with Ryzen at half rate for AVX2), you'll double your cores with Ryzen, so the AVX2 throughput per clock would be about the same. The 7700K clocks higher, but then downclocks again when AVX instructions are encountered. It seems like in practice they are about the same for performance with AVX, but Ryzen is ahead in every other multi-threaded bench.
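A back-of-envelope version of that argument, with assumed per-core FMA widths (2 x 256-bit on Kaby Lake vs. 2 x 128-bit on Zen) and guessed all-core AVX clocks - these are not measurements:

    # Peak single-precision GFLOPS = cores * GHz * FLOPs per cycle per core.
    # 32 FLOPs/cycle assumes 2 x 256-bit FMA pipes; 16 assumes 2 x 128-bit.
    def peak_gflops(cores, ghz, flops_per_cycle):
        return round(cores * ghz * flops_per_cycle)

    print("i7-7700K ~", peak_gflops(4, 4.4, 32), "GFLOPS")  # ~563
    print("R7 1800X ~", peak_gflops(8, 3.7, 16), "GFLOPS")  # ~474

Under those assumed numbers the two land in the same ballpark for peak AVX throughput, which matches the "about the same" intuition.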
There's already a few benchmarks that are directly relevant, take a look at the FFTW performance, for instance [1] (sourced from [2]), which consists mostly of single precision AVX instructions. The 1800X is handily beaten even by a deliberately downclocked 7700k, which costs a bit over half the price.
But that's only half the story. The thing is, a decent 7700K (after fixing the thermal interface via delidding and using high-end air or low to mid-end water cooling) is actually capable of running at 4.8-5 GHz without downclocking for AVX. You might get an unlucky sample and only do 4.6 GHz, but it's still cost effective. A 6900K, on the other hand, doesn't have the thermal headroom to go much past its stock frequencies on AVX-heavy workloads, unless you go full crazy with sub-ambient cooling.
So, in the end, it looks like the 7700k has at least double the price/performance ratio of a 1800X in AVX-heavy workloads along with vastly superior single thread performance. It looks to me like Ryzen is a great leap in price/performance only insofar as your workloads are highly multithreaded, yet don't make much use of vectorization at all.
You're right about the AVX2 part, if that's important to you. Otherwise I'd have to disagree: waiting all those years and going to a quad-core is sad. This is a monumental shift, to see $330 8C/16T chips start to be mainstream. I also hate that Intel went with that cheap TIM that requires delidding. You're in a tough spot given the focus on AVX2, IMO. My thoughts on Ryzen in general, though, are that the base 1700 is the chip for everyone building completely from scratch in 2017 and beyond. I personally won't be building another quad-core from this point onward. It just feels too much like being a sucker to do so, and I do think it's foolish (even for gaming! ... where almost everyone's system is going to be GPU bound at 1080P or higher anyway) unless you have some specific need like what you're describing with a massive 50% penalty affecting you.
Ultimately, the number of cores doesn't mean anything by itself, it's the aggregate performance that matters. We'd all happily take a 16 Ghz single core CPU over a 4 Ghz quad core any day, if such a thing was possible.
It's true that I'm in a very niche spot in that most of my workloads are vectorization-heavy, but I don't think it's quite as niche as people think. If you look at the benchmarks, quite a few workloads (check out x265 performance on ffmpeg, or audio encoding, or compression...) are sufficiently vectorized for the 7700k to be more cost efficient than the 1800X. Realizing that does feel a bit sad, though, I agree.
For pure gaming, I'd still get the fastest Kaby Lake i5 and be done with it. It doesn't matter what happens with DX12, etc., games are still going to be GPU bound for the foreseeable future and the cheap i5 won't significantly bottleneck you for many years to come.
What is the deal with 16 PCIe lanes? With the move to PCIe storage, this is disappointing. I am happy to see the price/performance is good (which is surely going to force Intel to be competitive again), but if this is targeting the high-end, I can not understand 16 PCIe lanes.
The CPU has 24 lanes and the chipset can add up to 8 additional lanes; see the first table in [1].
It's a pity that they did not include at least 32 lanes on-chip to allow two full x16 GPUs at least on the 1800X, but I hope there is a 1900X in the pipeline with more lanes.
Q7: Will all Zen products have all of the instruction sets and platform extensions, or could lower-end chips lose features like virtualization?
A7: In the consumer client space we have no plans to turn off virtualization or features.
Q9: Does AM4 / consumer Zen support ECC memory?
A9: ECC is enabled on Ryzen and AM4.
[https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...]
edit: added link
[0]https://youtu.be/9wJQEHNYE7M?t=4m43s
From what I've been reading about the upcoming Ryzen, no motherboard so far claims to support ECC.
PS: Radiation also makes a huge difference.
http://thememoryguy.com/using-ecc-to-reduce-power/
Right, that's exactly what happens with Hyper-Threading. If you can fully utilize the physical core, Hyper-Threading can't do much.
They did good.
[1] http://core0.staticworld.net/images/article/2017/03/ryzen_ci...
The 7700k is losing in all aspects below w.r.t. the Ryzen 7 1800X, except absolute single-threaded perf. Intel is losing in:
- absolute multi-threaded performance
- relative multi-threaded performance per $
- memory bandwidth
- L3-cache bound workloads (AMD has twice the L3)
- price of motherboards
Web and office? Single-threaded?
Why are these the benchmarks, especially in 2018?
The Ryzen quads will come. I'm not at all sure what your point is here.
Might as well ask her about the possibility of coreboot while we are at it.
https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...
Edit: Account made, instructions received. If all goes well I should be asking later today.
[0] http://www.asrock.com/mb/AMD/X370%20Killer%20SLIac/index.asp...
[1] https://www.reddit.com/r/Amd/comments/5v0cqo/ryzen_supports_...
Has Asrock explicitly stated that it works with all CPUs, or just that all their boards can do it (IF the CPU also supports it)?
http://webcache.googleusercontent.com/search?q=cache:wO1MKok...
Very interesting times in Silicon.
So it really depends. A lot.
The fix is to have a driver to help Windows treat a CCX almost as if it was its own socket. See https://forums.anandtech.com/threads/official-amd-ryzen-benc...
In theory a workaround would also be to disable cores 4-7 (and keep 0-3) in the BIOS. Sadly no Ryzen reviewers tried it...
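A user-space alternative (on Linux) to disabling cores in the BIOS would be pinning the process to a single CCX with sched_setaffinity. The assumption below that logical CPUs 0-7 are the first CCX's four cores plus their SMT siblings depends on how the OS enumerates them, so check the topology first.

    # Restrict this process (and anything it launches) to one CCX so threads
    # never migrate across the L3 split. The CPU numbering is an assumption -
    # verify it with your system's topology tools before relying on it.
    import os
    import subprocess
    import sys

    FIRST_CCX = set(range(8))            # assumed: cores 0-3 + SMT siblings
    os.sched_setaffinity(0, FIRST_CCX)   # 0 = the current process

    # Children inherit the mask, so launch the game/benchmark from here:
    subprocess.run(sys.argv[1:] or ["/bin/true"])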
tomshardware: http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,49...
You can get a Radeon R7 250 for under $50, which absolutely destroys the i7-6700K's integrated HD 530.
(Technically HDMI 1.4 also supports 4K but only at 24hz, which is fine for TVs but a miserable experience on a PC desktop)
If neither monitor has HDMI 2.0 then an RX470 would be required to get dual DisplayPort outputs.
http://www.intel.com/content/dam/www/public/us/en/documents/...
[1] http://media.bestofmicro.com/ext/aHR0cDovL21lZGlhLmJlc3RvZm1...
[2] http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,49...
[1] https://arstechnica.com/gadgets/2017/03/amd-ryzen-review/
http://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-...