I've had this happen before, on a modular 1100 W PSU feeding my 8800 Ultra.
The system was unstable under high graphics load, and it took me a long time to find the fault: the PCI-E power connector had gone high-resistance at the PSU end, and it was completely melted by the time I found it. The instability came from the graphics card browning out, since the high contact resistance was starving it of power.
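To put numbers on the brown-out mechanism: the heat dissipated in a contact scales with the square of the current, so a degraded crimp turns into a little heater. A rough Python sketch, with all values being illustrative assumptions rather than measurements:

```python
# Illustrative numbers (assumptions, not measurements): a GPU pulling
# 300 W from a single 12 V PCI-E power connector.
gpu_power_w = 300.0
rail_v = 12.0
current_a = gpu_power_w / rail_v              # 25 A through the connector

def contact_effects(resistance_ohm: float) -> tuple[float, float]:
    """Return (voltage drop in V, heat dissipated in the contact in W)."""
    v_drop = current_a * resistance_ohm       # V = I * R
    heat_w = current_a ** 2 * resistance_ohm  # P = I^2 * R
    return v_drop, heat_w

# A healthy crimp is a few milliohms; a degraded one can reach tens of mΩ.
for r in (0.002, 0.010, 0.050):
    v_drop, heat = contact_effects(r)
    print(f"{r * 1000:4.0f} mΩ -> drop {v_drop:.2f} V, heat {heat:5.1f} W")
```

At 50 mΩ the contact is dropping over a volt and dissipating tens of watts in a space the size of a pin, which is plenty to soften the housing and also explains the card browning out under load.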
A loose connector on my RX580 did the same to me; I could never explain the weird resets, even with power-draw diagnostics, until I opened the case and just went "wtf".
Sounds like the issue can happen due to a sharp bend in the cable or too many mating cycles. Given the size of these video cards, the placement of the connector, and the size of computer cases, I don't see why they didn't opt for right-angle connectors so no bend is needed.
edit: Looks like the Nvidia-designed version has the connector angled toward the front at 45 degrees, while the non-Nvidia versions in the article don't.
Nvidia has historically used the power cable orientation as a form of market segmentation. Their high-end products that go into server cases have the power connectors on one end of the card, parallel with the card. Consumer GPUs have the power connectors sticking out of the top of the card, and even in 4U server cases that can cause the connectors to rub against the top of the case or be blocked entirely.
I hadn't even noticed that. And you're completely right. My A5000 has pin holes on the board for a top mount, but its power connector is on the side, facing toward the front panel of the case.
This 'fact' gets batted around so often, and it's incredibly misunderstood. The rating says that in the worst case, where everything goes wrong, the cable will still last no less than X cycles. Also, it's not 16.
Also, these new connectors have the exact same rating as the prior 8-pin connectors.
Under the typical use case (i.e. you install it and leave it there for a few years), 16 cycles is more than enough. Now, if you're an enthusiast running benchmarks or whatever, it might cause issues, but it isn't really "insane" under the typical use case. Also, I'm not sure where the "16 cycles" number comes from; other commenters linked to an article that says it's rated for 40 cycles.
Right angle connections pointing in which direction? Some PSUs are on the bottom and some are on the top. It would be worse having to connect a right angle and then routing over or around the card to get to the PSU.
If it's angled toward the front of the case, it'd cover both layouts. Interestingly enough, the Founders Edition (i.e. the Nvidia-designed cards) seems to have the connector angled toward the front at 45 degrees. However, the non-Founders cards in the article have it sticking out horizontally to the side like previous cards. Seems like something Nvidia should have communicated to its partners.
Interesting that the connector (12VHPWR) in question seems to be safe for only 40 (forty) cycles. That's really not a lot, as these things go.
Of course it's an internal connector and not something you're supposed to plug/unplug all that often, but I still think it sounds like a really low number.
Keep in mind that 40 cycles is the minimum. 99.9% of the plugs will be fine after 40 cycles and probably will do 400 without issue. But you're outside what the manufacturer guarantees it'll last.
And in my experience, 40 is absolutely plenty. Almost all plugs in a computer will only ever see 10 cycles at most (and half of them only two: during setup and at EOL).
While we're at it, can we get a new motherboard standard that acknowledges the fact that video cards are at least as complex as the motherboard itself? With the strong preference for a ring configuration for central cooling, like the trash can Mac Pros. Or a star one.
Still don't know why a smaller cable was needed. This card is the size of a game console, why does it need a slim cable? I would much rather buy some AIB partner 4090 that has four 8-pin connectors instead of one new connector.
The old Mini-Fit connector had evolved to be kind of sketchy: a six-pin connector with two extra pins you hold on the side to turn it into an 8-pin connector? Not pretty.
And the new connector also adds 'sideband pins' so the power supply can tell the card how much power is available. That should reduce problems with random shutdowns due to drawing too much power.
With that said, a connector that melts clearly isn't an improvement.
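For context, the sideband scheme is simple: two sense pins that the PSU either grounds or leaves open, with the four combinations mapping to power limits. A sketch of the lookup, using the commonly cited ATX 3.0 table (treat the exact mapping as an assumption and verify against the actual spec):

```python
# Sideband power advertisement on the 12VHPWR connector.
# Each sense pin is either tied to ground by the PSU ("gnd") or left
# floating ("open"). The mapping below follows the commonly cited
# ATX 3.0 table; check the spec before relying on it.
SENSE_TO_WATTS = {
    ("gnd",  "gnd"):  600,   # full 600 W available
    ("gnd",  "open"): 450,
    ("open", "gnd"):  300,
    ("open", "open"): 150,   # default / safe minimum
}

def advertised_power(sense0: str, sense1: str) -> int:
    # An unrecognized state falls back to the safe 150 W floor.
    return SENSE_TO_WATTS.get((sense0, sense1), 150)

print(advertised_power("gnd", "gnd"))    # a PSU grounding both pins
```

The design choice worth noting is the fail-safe default: an unplugged or miswired sideband reads as "open/open", so the card limits itself to 150 W rather than overdrawing the supply.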
The 8-pin connector is also just an outdated design. It's unnecessarily bulky, and rated at a measly 150 W. The idea wasn't for you to put 4 of these (32 pins and wires) together to power a single device.
For reference, an Anderson PP45 connector pair (from 1966!) is smaller than the original 8-pin connector and would handle 660 W at 12V reliably at up to 10000 cycles (and can even handle hot-unplug).
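To compare the ratings above in concrete terms: dividing the rated wattage by the 12 V rail and the number of +12 V conductors gives the current each pin must carry. A quick sketch (pin counts follow the usual published pinouts; contact-level derating is not modeled):

```python
from dataclasses import dataclass

@dataclass
class PowerConnector:
    name: str
    rated_w: float
    power_pins: int  # number of +12 V conductors

    def amps_per_pin(self) -> float:
        # Current per +12 V pin at the full rated load.
        return self.rated_w / 12.0 / self.power_pins

# PCIe 8-pin: 3 of 8 pins carry +12 V; 12VHPWR: 6 +12 V, 6 GND, plus sideband.
pcie_8pin = PowerConnector("PCIe 8-pin", 150.0, 3)
hpwr_12v = PowerConnector("12VHPWR", 600.0, 6)

for c in (pcie_8pin, hpwr_12v):
    print(f"{c.name}: {c.amps_per_pin():.2f} A per +12 V pin")
```

This shows why the new connector is less forgiving: each 12VHPWR pin carries roughly twice the current of an 8-pin's, through a physically smaller contact, so any single bad contact runs much closer to its limits.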
"a six-pin connector with two pins you hold on the side to turn it into an 8-pin connector"
No, most modern PSUs come with solid 8-pin connectors (as well as the 6+2-pin connectors you mention). If you watch the YouTube PC-building channels, you'll see no one uses the latter for cards with 8-pin connectors.
Besides aesthetics and ease of integration in the chassis, air flow / cooling is a non-trivial issue, particularly with the wattages systems are running now. More or wider cables increase chassis air flow impedance which requires higher pressure fans to overcome which translates to a louder overall system.
The Nvidia Founders Edition cards have tiny PCBs, due to their relatively small coolers and the flow through design. There isn't space for 3 8-pin connectors. And yes, that was definitely a choice but they're sticking to it.
Even easier to use the old standard instead of making a new solution to a problem that didn't exist in the first place (except for aesthetic reasons).
The old connectors were solid and had plenty of headroom if used within spec, with the new cards you'd still just need 3-4 to be well within spec - no monitoring circuitry needed.
The PCI-SIG maximum safe rating for the old and janky 8-pin connector was 150 W, despite how bulky it was. The 4090 would need 4 of those (5 if you account for when it exceeds 600 W).
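The connector-count arithmetic here is just a ceiling division over the 150 W per-connector rating:

```python
import math

def connectors_needed(gpu_power_w: float, per_connector_w: float = 150.0) -> int:
    """How many 150 W-rated 8-pin connectors a card of a given
    power draw would need under the PCI-SIG rating."""
    return math.ceil(gpu_power_w / per_connector_w)

print(connectors_needed(600))  # a 600 W card
print(connectors_needed(660))  # once transients push past 600 W
```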
A new connector was absolutely necessary, even for more reasonable GPU power levels. That the current connector is broken is a different problem, one that could likely be fixed through manufacturing changes rather than a whole new connector design.
Yes, by all means, add extra components, complexity, and digital logic to detect the overheating of an underspecced component. That sounds much more reasonable than using a connector that can carry the required current without heating up.
So happy it didn't burn up now!
[1] https://en.overclocking.com/12vhpwr-adaptator-is-a-consumabl...
In those kinds of cases, cable management and space/size tolerances can be very difficult - swapping four connectors for one makes a big difference
https://imgur.com/a/VqTU1QR
Why would you still put the power connector on the side where the card will definitely get very close to the side of the case?