kersplody · 10 months ago
12vhpwr has almost no safety margin. Any minor problem with it rapidly becomes major. 600W is scary, with reports of 800W spikes.

12V2x6 is particularly problematic because any imbalance, such as a bad connection on a single pin, will quickly push things over spec. For example, at 600W, 8.3A is carried on each pin in the connector. Molex Micro-Fit 3.0 connectors are typically rated to 8.5A -- that's almost no margin. If a single connection is bad, current per pin goes to 10A and we are over spec. And that's if things are mated correctly: 8.5-10A over a partially mated pin will rapidly heat it to the point of melting solder. Hell, the 16 gauge wire typically used is pushing it for 12V/8.5A/100W -- that's rated to 10A. I'd really like to see more safety margin with 14 gauge wire.
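A quick back-of-the-envelope sketch of the per-pin math above (the 8.5A figure is the rating cited in this comment, not an official datasheet quote):

```python
# Per-pin current on a 12V-2x6 connector at 600W, and what happens
# if one of the six supply pins loses contact. Rating figure is the
# one cited in the comment above.
POWER_W = 600
VOLTS = 12
PINS = 6            # six 12V supply pins
PIN_RATING_A = 8.5  # typical Micro-Fit 3.0 rating per the comment

total_a = POWER_W / VOLTS           # 50 A total
per_pin = total_a / PINS            # ~8.33 A when shared evenly
one_pin_bad = total_a / (PINS - 1)  # 10 A across the remaining five

print(f"all pins good: {per_pin:.2f} A/pin (margin {PIN_RATING_A - per_pin:.2f} A)")
print(f"one pin bad:   {one_pin_bad:.2f} A/pin -> over the {PIN_RATING_A} A rating")
```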

In short, 12V2x6 has very little safety margin. Treat it with respect if you care for your hardware.

ckozlowski · 10 months ago
Great summary. Buildzoid over on YouTube came to a similar conclusion back during the 4xxx series issues[1], and looks like he's released a similar video today[2]. It's worth a watch as he gets well into the electrical side of things.

It's been interesting to realize that we've probably been dealing with poor connections on the older Molex connectors for years, but because of the ample margins, it was never an issue. Now, with the high-power spec, the underlying issues with the connectors in general are a problem. While the use of sense pins sorta helps, I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink. That will make connectors more expensive, no doubt, but much of the ATX spec and surrounding ecosystem was never designed for "expansion" cards pushing 600-800W.

[1] - 12VHPWR failures (2023) https://youtu.be/yvSetyi9vj8?t=1479 [2] - Current issues: https://www.youtube.com/watch?v=kb5YzMoVQyw

exmadscientist · 10 months ago
> I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink.

There are tons of high-power connectors out there, and they look and work pretty much the same as the current ones (to the untrained eye). They are just more expensive.

Though at 40A+ you tend to see more "banana" type connectors, with a cylindrical piece that has slits cut in it to deform. Those can handle tons of current.

opello · 10 months ago
This [1] is also a good deep dive into the space covering the spec, limits, and materials details. For example:

> The specification for the connector and its terminals to support 450 to 600W is very precise. You are only within spec if you use glass fiber filled thermoplastic rated for 70°C temperatures and meets UL94V-0 flammability requirements. The terminals used can only be brass, never phosphor bronze, and the wire gauge must be 16g (except for the side band wires, of course).

[1] http://jongerow.com/12VHPWR/

amluto · 10 months ago
And yet plenty of things around the house use far more than 800W and work fine. The secret is to use a more reasonable voltage.

30V or 36V or even 48V would leave a decent margin for touch safety and have dramatically lower current and even more dramatically lower resistive loss.

tcdent · 10 months ago
This is the most informative assessment in this thread.

You'd expect the rated capacity to be 125% of the load, as is common in other electrical systems.

Ratings for connectors and conductors come with a temperature spec as well, indicating the intended operating temperature at a given load. I'm sure, with this spec being near the limit of the components already, that the operating temperatures near full load are not far from the limit, either.

Couple that with materials that may not have even met that spec from the manufacturer, and this is what you get: cheaper ABS plastic on the Molex instead of nylon, PVC insulation on the wire instead of silicone, and you just know the amount of metal in the pins is the bare minimum, too.

zamalek · 10 months ago
"3rd-party connectors" is being waved around by armchair critics. The connectors on the receiving end of all of this aren't some cheap knock-off; they are from a reputable manufacturer and probably exceed the baseline.
zamalek · 10 months ago
> Any imbalance

I watched der8auer's analysis this morning, and you've seemingly hit the nail on the head. Even on his test bench, it looks like only two of the wires are carrying all of the power (instead of all of them - I think 4 would be nominal?), using a thermal camera as a measuring tool. The melted specimen also has a melted wire.

Maybe 24V or 48V should be considered - and thicker, lower-gauge wires, yes.

mjevans · 10 months ago
It would be _lovely_ if, instead of the 12V-only spec, we went to 48V for internal distribution. Though that would require an ecosystem shift. USB-PD 2.0~3.0 would also be better supported https://en.wikipedia.org/wiki/USB_hardware#USB_Power_Deliver...

As others have no doubt mentioned, power loss (watts) in a cable is P = I (amps) * dV (volts, the drop along the wire).

dV = I*R ==> P_loss = I * dV = I * I * R -- that is, other things being equal, amps squared is the dominant factor in how much power is lost over a cable. In the low-voltage realm most insulators are effectively the same and there's very little change in resistance relative to the voltages involved, so it's close enough to ignore.

600W @ 12V? 50A ==> 2500 * R, while at 48V ~12.5A ==> 156.25 * R

A 48V system would have only 1/16th (~6%) of the resistive losses over the cables (more importantly, at the connections!); though offhand I've heard DC-to-DC converters are most efficient in the range of a 1/10th step-down. I'm unsure if ~1/25th would incur more losses there, nor how well common PC PCB processes handle 48V layers.
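The I²·R scaling can be sketched in a few lines (assuming a fixed cable-plus-connector resistance R; quadrupling the voltage cuts resistive loss to 1/16):

```python
# P_loss = I^2 * R: at fixed delivered power, current falls with
# voltage, and loss falls with the square of current. The "loss
# factor" below is multiplied by R (ohms) to get watts lost.
def loss_factor(power_w, volts):
    amps = power_w / volts
    return amps ** 2

f12 = loss_factor(600, 12)  # 50 A    -> 2500 * R
f48 = loss_factor(600, 48)  # 12.5 A  -> 156.25 * R
print(f12, f48, f48 / f12)  # 2500.0 156.25 0.0625 (i.e. 1/16)
```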

https://en.wikipedia.org/wiki/Low_voltage#United_States

""" In electrical power distribution, the US National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as up to 49 V.

The NFPA standard 79 article 6.4.1.1[4] defines distribution protected extra-low voltage (PELV) as nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases.

Standard NFPA 70E, Article 130, 2021 Edition,[5] omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements of work involving electrical hazards when an electrically safe work condition cannot be established.

UL standard 508A, article 43 (table 43.1) defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits. """

The UK is similar, and the English Wikipedia article doesn't cite any other country's codes, though the International standard generally talks at the power grid distribution level.

magicalhippo · 10 months ago
From what I can gather, one challenge with 24V and higher is that switched-mode converters, such as the buck converters used in the power stage, get a lot more inefficient when operating at high ratios.

You can see this effect in figure 6 in this[1] application note, where it's >90% efficient at ratios down to 10:2.5, but then drops to ~78% at a ratio of 10:1.

So if one goes for a higher voltage, perhaps 48V would be ideal, and then just accept that the GPU needs two-stage power conversion: one stage from 48V to 12V and the other as today.

The upside is that this would more easily allow for different ratios than today, for example 48V to 8V, then 8V to 1.2V, so that each stage has roughly the same ratio.

[1]: https://fscdn.rohm.com/en/products/databook/applinote/ic/pow... (page 14)
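The "roughly equal ratios" idea works out to each stage handling the square root of the overall step-down; a quick sketch (the 1.2 V core rail and the intermediate voltage are illustrative assumptions, not figures from the app note):

```python
import math

# Splitting a 48 V -> 1.2 V conversion into two stages of equal
# ratio: each stage handles sqrt(48 / 1.2) ~= 6.3:1, keeping both
# stages in the efficient region rather than one stage at 40:1.
v_in, v_out = 48.0, 1.2
per_stage = math.sqrt(v_in / v_out)  # ~6.32:1 per stage
v_mid = v_in / per_stage             # ~7.6 V intermediate rail

print(f"{per_stage:.2f}:1 per stage, intermediate rail ~= {v_mid:.1f} V")
```

This matches the 48V-to-8V-then-8V-to-1.2V example above to within rounding.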

crote · 10 months ago
> I think 4 would be nominal?

6 or 12, depending on how you count. There are 6 12V supply wires, and 6 GND return wires. All of them should be carrying roughly the same current - just with the GND wires in the opposite direction from the 12V ones.

snuxoll · 10 months ago
> Really would like to see more safety margin with 14 gauge wire.

The wire itself really isn't the issue; the NEC in the US is notoriously cautious, and even it allows 15A continuous on 14AWG conductors. Poor connectors that don't ensure good physical contact are the real problem here, and I really fail to understand the horrid design of the 12VHPWR connector. We went decades with the traditional PCIe 6-pin and 8-pin power connectors with relatively few issues, and what does 12VHPWR do better? Save a little bulk?

exmadscientist · 10 months ago
This can't be Micro-Fit 3.0, those are only sized to accept up to 18AWG. At least, with hand crimp tooling, and that's dicey enough that I'd be amazed if Molex allowed anything larger any other way. The hand crimper for 18AWG is separate from the other tools in the series, very expensive, and a little bit quirky. Even 18AWG is pushing it with these terminals.

This has to be some other series.

Xelbair · 10 months ago
There were some tests showing the current draw being imbalanced across the pairs.

Two cables carried 22A, heating the PSU connector near them to 150°C.

The rest carried 2-3A. der8auer has a video on that on YouTube.

First-party cables, too.

The connector is also a disaster design-wise.

choilive · 10 months ago
I would bet a lot of money that more than one engineer at Nvidia flagged this as a potential issue. If you were going to run this close to the safety margin, I would at minimum add current sensing on each pin.
amelius · 10 months ago
Yes, and by the way I also think typical GPU cables are way too stiff for such a small and fragile connector.
opello · 10 months ago
Shouldn't there be a bit better margin if we subtract the 75W from the slot itself? Down to ~7.3A/pin in the 600W example.
HexPhantom · 10 months ago
At what point do we stop blaming user error and start admitting the design itself is the problem?
caycep · 10 months ago
RTX 6000 Ada Gen had EPS12V 8 pin or something?
Dalewyn · 10 months ago
What is the reason the spec keeps specifying next to no headroom? Clearly that was the fundamental problem with 12VHPWR, and it's being repeated with 12V2x6.

Any engineer worth his salt knows that you should leave plenty of headroom in your designs; you are not supposed to stress your components to (almost) their maximum specifications under nominal use.

bcrl · 10 months ago
I found it hilarious when a friend went to use a Tesla Supercharger on his F-150 Lightning. As the cable is only long enough to reach the charge port on the corner of a Tesla, he had to block 2 parking spaces and almost 3 chargers to use it. Oops... I hope all the money "saved" on copper was worth it.
nvarsj · 10 months ago
I would love to get some insight from Nvidia engineers on what happened here.

The 3 series and before were overbuilt in their power delivery system. How did we get from that to the one-shunt-resistor, incredibly dangerous fire-hazard design of the 5 series? [1] Even after the obvious problems with the 4 series, Nvidia actually doubled down and made it _worse_!

The level of incompetence here is actually astounding to me. Nvidia has some of the top, most well paid EE people in the industry. How the heck did this happen?

1: https://www.youtube.com/watch?v=kb5YzMoVQyw

MBCook · 10 months ago
Maybe having cards drawing >500/600 watts is just a bad idea.

Add in a CPU and such and we're quickly approaching the maximum power that can be drawn continuously from a standard 15 amp circuit (12 amps, per the 80% continuous-load rule).

shawnz · 10 months ago
What's the proposal here, just convince everyone to stop demanding more powerful hardware?
rangestransform · 10 months ago
Not sure if this is related but a friend there said a lot of people retired
hipsterstal1n · 10 months ago
My boss has a few neighbors who are Nvidia and former Nvidia employees, and they're all driving their shiny, fancy sports cars and living the high life thanks to the stock they got. It doesn't surprise me that churn and/or retirement is cutting into their workforce. Probably not enough to be substantial, but enough to count for something.
Joel_Mckay · 10 months ago
In general, poorly designed cables will use steel instead of tin-plated copper conductors. The difference becomes more relevant at higher power levels, as energy losses within the connectors are higher. Thus, temperatures rise above the limits of the poorly specified, unbranded plastic insulator, and a melted mess eventually trips the power supply into shutting off.

I would assume a company like NVIDIA wouldn't make a power limit specification mistake, but cheap ATX power-supplies are cheap in every sense of the word.

Have a great day =3

kllrnohj · 10 months ago
derbauer did an analysis of the melted connector. It was nickel & gold, not steel.

Buildzoid's analysis showed Nvidia removed all ability to load-balance the wires, which they had on the 30xx cards that introduced this connector - and they had even fancier solutions for the multi 6/8-pin setups before that.

The specification also has an incredibly low safety factor, much, much lower than anything that came before.

They seem to be penny-pinching on shunt resistors on a $2000 GPU. Their engineering seems suspect at this point, not worthy of the benefit of the doubt.

ulfbert_inc · 10 months ago
Der8auer and Buildzoid on YouTube made nice, informative videos on the subject, and no, it is not "simply a user error". So glad I went with the 7900 XTX - should be all set for a couple of years.
enragedcacti · 10 months ago
Summary of the Buildzoid video courtesy of redditors in r/hardware:

> TL;DW: The 3090 had 3 shunt resistors set up in a way which distributed the power load evenly among the 6 power-bearing conductors. That's why there were no reports of melted 3090s. The 4090/5090 modified the engineering for whatever reason, perhaps to save on manufacturing costs, and the shunt resistors no longer distribute the power load. Therefore, it's possible for 1 conductor to bear way more power than the rest, and that's how it melts.

> The only reason why the problem was considered "fixed" (not really, it wasn't) on the 4090 is that apparently in order to skew the load so much to generate enough heat to melt the connector you'd need for the plug to not be properly seated. However with 600W, as seen on der8auer video, all it takes is one single cable or two making a bit better contact than the rest to take up all the load and, as measured by him, reach 23A.

https://old.reddit.com/r/hardware/comments/1imyzgq/how_nvidi...

c2h5oh · 10 months ago
- Most 12VHPWR connectors are rated to 9.5-10A per pin. 600W / 12V / 6 pin pairs = 8.33A. The spec requires a 10% safety factor: 9.17A.

- 12VHPWR connectors are compatible with 18ga or at best 16ga cables. For 90°C-rated single-core copper wires I've seen maximum allowed ampacities of at most 14A for 18ga and 18A for 16ga - less in most sources. Near the connectors, those wires are so close together they can't be considered single-core for purposes of heat dissipation.
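Checking the numbers in those two bullets (pin ratings as quoted above, not from a datasheet):

```python
# Per-pin current at 600W over six 12V pin pairs, plus the spec's
# 10% safety factor, against the quoted 9.5-10 A pin rating.
PIN_RATING_LOW_A = 9.5          # lower end of the quoted range

per_pin = 600 / 12 / 6          # 8.33 A nominal
with_factor = per_pin * 1.10    # 9.17 A including the 10% safety factor

print(f"{per_pin:.2f} A nominal, {with_factor:.2f} A with 10% factor "
      f"(rating {PIN_RATING_LOW_A}-10 A)")
```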

leeter · 10 months ago
Honestly with 50A of current we should be using connectors that screw into place firmly and have a single wipe or a single solid conductor pin style. Multi-pin connectors will always inherently have issues with imbalance of power delivery. With extremely slim engineering margins this is basically asking for disaster. I stand by what I've said elsewhere: If I was an insurance company I'd issue a notice that fires caused by this connector will not be covered by any issued policy as it does not satisfy reasonable engineering margins.

edit: replaced power with current... we're talking amps not watts

CarVac · 10 months ago
Derbauer: https://www.youtube.com/watch?v=Ndmoi1s0ZaY

Buildzoid: https://www.youtube.com/watch?v=kb5YzMoVQyw

I went 7900 GRE, not even considering Nvidia, because I simply do not trust that connector.

whalesalad · 10 months ago
Enjoying my 7900XTX as well. I really don't understand why nvidia had to pivot to this obscure power connector. It's not like this is a mobile device where that interface is very important - you plug the card in once and forget about it.
lmm · 10 months ago
The new cards need unprecedented amounts of power, the old connectors can't deliver enough.
slightwinder · 10 months ago
> So glad I went with 7900 XTX - should be all set for a couple of years.

Really depends on the use case. For gaming, normal office, smaller AI/ML or video-work, yeah, it's fine. But if you want the RTX 5090 for the VRAM, then the 24GB of the 7900 XTX won't be enough.

wing-_-nuts · 10 months ago
Honestly, the smart play in that case is to buy two 3090s and connect them with NVLink. Or... and hear me out: at this point you could probably just invest your workstation build budget and use the dividends to pay for RunPod instances when you actually want to spin up and do things.

I'm sure there are some use cases for 32GB of VRAM, but most of the cutting-edge models people are running day-to-day on local hardware fit in 12 or even 8GB of VRAM. It's been a while since I've seen anything bigger than 24GB but smaller than 70GB.

dualboot · 10 months ago
So funny because the only reason the 5090 has 32GB of RAM is because AMD put 24GB on the card they released against the 4080.
pshirshov · 10 months ago
Radeon Pro W7900
SketchySeaBeast · 10 months ago
Yeah, I expect my next card will be AMD. I'm happy with my 3080 for now, but the cards have nearly doubled in price in two generations and I'm not going to support that. I can't abide the prices nor the insane power draw. I'm OK with not having DLSS.
asmor · 10 months ago
It'll probably be fine for years, longer if you can stand looking at AI-generated, upscaled frames. Uplift in GPU power is so expensive, we might as well be back in the reign of the 1080. The only thing that'll move the needle will be a new console generation.
hibikir · 10 months ago
I agree about the prices, but you say you have a 3080, which has a higher max power draw than the 5080 or 4080. Power requirements actually went down.
Hamuko · 10 months ago
I had two AMD 7900 XTX and they both overheated like mad. Instant 110°C and throttle. Opted for a refund instead of a third GPU.
shmerl · 10 months ago
Sapphire Nitro has a really nice heatsink for 7900 XTX. It works well even under full load.
seanw444 · 10 months ago
Probably reference cards, yeah? I think the common advice is to not buy the reference cards. They rarely cool enough. I made that mistake with the RX 5700 XT, and will never again.
asmor · 10 months ago
It's too bad AMD will stop even aiming for that market. But also, I bought a Sapphire 7900 XTX knowing it'd be in my machine for at least half a decade.
dralley · 10 months ago
People are acting like this is some long-term position. There's no evidence of that. AMD didn't give up on the high end permanently after the RX 480 / 580 generation.

What AMD does need right now is:

* Don't cannibalize advanced packaging (which big RDNA4 required) from high-margin and high-growth AI chips

* Focus on software features like upscaling tech (which is a big multiplier and allows midrange GPUs to punch far above their weight) and compute drivers (which they badly need to improve to have a real shot at taking AI chip marketshare)

* Focus on a few SKUs and execute as well as possible to build mindshare and a reputation for quality

"Big" consumer GPUs are increasingly pointless. The better upscaling gets, the less raw power you need, and 4K gaming is already passable (1440p gaming very good) on the current gen with no obvious market for going beyond that. Both Intel and Nvidia are independently suffering from this masturbatory obsession with "moar power" causing downstream issues. I'm glad AMD didn't go down that road personally.

If "midrange" RDNA4 is around the same strength as "high-end" RDNA3, but $300 cheaper and with much better ray tracing and an upscaling solution at least on par with DLSS 3, then that's a solid card that should sell well. Especially given how dumb the RTX 5080 looks from a value perspective.

emsixteen · 10 months ago
This same thing happening on the 40 series cards was good enough vindication for me not 'upgrading' to that at the time. I'd rather not burn my house down with my beloved inside.

Can't believe the same is happening again.

the__alchemist · 10 months ago
I can relate to this perspective! It's important to step out of the system and recognize priorities like this; balancing risk and reward.

I think, in this particular case, perhaps the risk is not as high as you state? You could imagine a scenario where these connectors could lead to a fire, but I think this is probably a low risk compared to operation of other high-power household appliances.

Bad connector design that's easy to over-current? Yes. Notable house-fire risk? Probably not.

cranium · 10 months ago
There is a real problem with the connector design somewhere: der8auer tested with his own RTX 5090FE and saw two of the cable strands reach concerning temperatures (>150ºC).

Video timestamp: https://youtu.be/Ndmoi1s0ZaY?t=883

gambiting · 10 months ago
I've tested my own 5090FE the same way he did (Furmark for at least 5 minutes at max load, but I actually ran it for 30 minutes just to be mega sure) and with an infrared thermometer the connector is at 45°C on the GPU side and 32°C on the PSU side. I have no idea what's happening in his video, but something is massively wrong, and I don't understand why he didn't just test with another PSU/cable.
perching_aix · 10 months ago
More peeking and prodding would definitely have been welcome, but it's still a useful demonstration that the issues around the card's power balancing are not theoretical, and can definitely be the culprit behind the reports.
minzi · 10 months ago
Are you using the splitter that Nvidia provided, or a 600W cable? Also, what PSU?

I've been using mine remotely, so I'm trying to figure out how much I should panic. I'm running off the SF1000 and the cable it came with. It will be a few weeks before I can measure temperatures.

dcrazy · 10 months ago
Is NVIDIA breaching any consumer safety laws by pumping twice the rated current through a 24ish gauge wire? Perhaps by violating their UL certification?
RobotToaster · 10 months ago
CE marking in Europe could be an issue. There's potential for a fine or forced recall.
michaelt · 10 months ago
Aren't 12VHPWR cables like https://www.overclockers.co.uk/seasonic-pcie-5.0-12vhpwr-psu... 16AWG ?

Sure, there are problems with the connector. But 600W split over a 12-pin connector is 8.3A per wire, and a 16AWG / 1.5mm² wire should handle that no problem.

zamadatix · 10 months ago
You're correct about 16AWG however "But 600W split over a 12-pin connector is 8.3A per wire" is only what _should_ ideally be occurring, not what Roman aka Der8auer _observed_ to occur. Even with his own 5090, cable, PSU, and test setup:

> Roman witnessed a hotspot of almost 130 degrees Celsius, spiking to over 150 degrees Celsius after just four minutes. With the help of a current clamp, one 12V wire was carrying over 22 Amperes of current, equivalent to 264W of power.

mirthflat83 · 10 months ago
Made some 12v-2x6 custom cables for fun and 99% sure the melting problems are from the microfit female connectors themselves. A lot of resistance going through the neck
mminer237 · 10 months ago
UL is a private company. There are no laws requiring it or penalizing violations. I would think the only legal consequences would be through civil product-liability or breach-of-warranty claims. Plus, like, losing the certification would mean most stores would no longer stock it.
dcrazy · 10 months ago
Many products sold in the United States must be tested in a CPSC-certified lab for conformity, of which UL is the best known. But consumer electronics don’t seem to be among that set, unless they are roped in somehow (maybe for hazardous substances?).
singleshot_ · 10 months ago
Seems like if you filled your house with non-UL-compliant stuff and your house burned down, the first fact would be material to your insurance carrier (you know, the Underwriter to which the UL name refers…)
sidewndr46 · 10 months ago
You might want to do some research on what you can buy and legally plug into your own home. It's more or less you get UL listing, or the product isn't available
mpreda · 10 months ago
Step the GPU voltage up to 48V. (You're making a new connector that's incompatible with existing PSUs anyway. Why not actually fix the problem at the same time, once and for all? [48V should be enough for anybody, right?])
jmrm · 10 months ago
Not a bad idea IMHO. There are already computers (servers mostly, but also some integrated models) that only have 12V power connections, with the mainboard doing the step-down voltage conversion, and IIRC some companies wanted to do the same for regular desktops.

I would be totally happy if the next gen of computers had 12V outputs to the mainboard and CPU and 48V to the GPU and other power-hungry components. This would make the PCBs of those cards a bit bigger, but on the other hand there would be less power loss and less risk of overheated connectors.

B1FF_PSUVM · 10 months ago
> Step the GPU voltage up to 48V.

Meh. Might as well ask for its own AC cable and be done with it.

mpreda · 10 months ago
But I want fewer cables, not more.
Kirby64 · 10 months ago
Until the US changes their AC power connectors, we just don’t have a use case for it frankly. When the entire system is going to always top out at 1200W or so (so you have an extra few hundred watts for monitors and such), we’re pretty limited to maximum amperage.
ielillo · 10 months ago
The USA has 240 volt plugs. They are only used for high power appliances such as AC or ovens. If you want, you could add a plug for your high powered space heater AKA gaming PC.
mpreda · 10 months ago
Now I must use lots of rather thick cables in my desktop (because I run GPUs).

Imagine that the GPU would instead draw all the power it needs through the PCIe connector, without all those pesky cables. (Right now PCIe can provide 75W at 12V, i.e. 6.25A; that same current would provide 300W at 48V.)

RetpolineDrama · 10 months ago
I pulled a fresh 20A (120V) circuit just for my 5090 build.
lyu07282 · 10 months ago
It's strange how Nvidia just doubled down on a flawed design for no apparent reason. It doesn't even do anything, the adapter is so short you still have the same mess of cables in the front of the case as before.
formerly_proven · 10 months ago
This connector somehow has its own Wikipedia page, and most of it is about how bad it is. Look at the table at the end: https://en.wikipedia.org/wiki/16-pin_12VHPWR_connector#Relia...

The typical way to use these is also inherently flawed. On the Nvidia FE cards, they use a vertical connector which has a bus bar connecting all pins directly in the connector. Meanwhile, the adapter has a similar bus bar onto which all the incoming 12V wires are soldered. This means you have six pins per potential connecting two bus bars. Guess how this ensures relatively even current distribution? It doesn't, at all. It relies entirely on the contact resistances of the pins matching.

Contrast this with the old 8-pin design, where each pin would have its own 2-3 ft wire to the PSU, which adds series resistance to each pin. That in turn reduces the influence of contact resistance on current distribution. And all cards had separate shunts for metering and actively balancing current across the multiple 8-pin connectors used.

The 12VHPWR cards don't do this, and the FE cards can't for design reasons: they all have a single 12V plane. Only one ultra-expensive custom ASUS layout is known to have per-pin current metering and shunts (but it still has a single 12V plane, so it can't actively balance current), and it's not known whether it is even set up to shut down when it detects a gross imbalance indicating connector failure.
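The bus-bar-versus-per-pin-wire point can be illustrated with a toy current-divider model (all resistance values below are made up for illustration, not measured):

```python
# Current through parallel pins divides in proportion to conductance.
# With only contact resistance (bus bar to bus bar), one well-seated
# pin hogs the current; adding per-pin wire resistance in series
# swamps the contact-resistance mismatch and evens things out.
def pin_currents(total_a, contact_mohm, series_mohm=0.0):
    conductances = [1.0 / (rc + series_mohm) for rc in contact_mohm]
    total_g = sum(conductances)
    return [total_a * g / total_g for g in conductances]

contacts = [5, 5, 5, 5, 5, 1]  # five so-so pins, one seated well (mOhm)

# No series resistance: the good pin carries 25 A, the rest 5 A each.
print(pin_currents(50, contacts))
# With ~15 mOhm of per-pin wire in series: ~8 A vs ~10 A, nearly even.
print(pin_currents(50, contacts, series_mohm=15))
```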

johnwalkr · 10 months ago
Was the old design actively balanced? I think the current of each pin was only monitored.
onli · 10 months ago
I was under the impression it saves them money. Is that correct?

It is also a power play. By introducing a PSU connector that AMD and Intel do not use, they abuse their market power to limit interoperability.

Plus probably some internal arrogance about not admitting failures.

incrudible · 10 months ago
> By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

They are free to use them, they just don’t because it is a stupid connector. The cards that need 600W are gonna need an enormous amount of cooling, therefore they will need a lot of space anyway, no point in making the connector small.

Yes, NVIDIA created an amazingly small 5090 FE, but none of the board partners have followed suit, so most customers will see no benefit at all.

numpad0 · 10 months ago
That's the majority understanding, but I suspect it was a simple "update" to the "same" connector - the old one was a product called Molex Mini-Fit, and the new one is their newer Micro-Fit connector.
bayindirh · 10 months ago
> Plus probably some internal arrogance about not admitting failures.

Arrogance is good. Accelerates the "forced correction" (aka cluebat) process. NVIDIA needs that badly.

formerly_proven · 10 months ago
I doubt engineering a new connector (I think it's new? Unlike the Mini-Fit Jr, which has been around for 40-50 years) and standing up a supply chain for it could be offset by the potentially slightly lower BOM cost of using one specialty connector instead of three Mini-Fit Jr 8-pins. However, three of those would not have been enough for the 4090, never mind the 5090.
rcarmo · 10 months ago
It saves them money on a four-digit MSRP. I think they could afford to be less thrifty.
beeflet · 10 months ago
>By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

I suppose but this could be overcome by AMD/Intel shipping an adapter cable

fredoralive · 10 months ago
The connector is a PCI spec, it's not an Nvidia thing, it's just they introduced devices using it first.
NekkoDroid · 10 months ago
Can't have 4 connectors going into 1 video card, that would look ridiculous :/

- Nvidia