First, this already happened to some extent. Outside of drives and fans, virtually all of the power used in your system is exclusively 12v; either from the 12v pins on your ATX24, P4, or EPS12 connectors, or via the PCI-E connectors; very little non-drive or non-fan power is supplied via 3.3 or 5v rails.
Second, almost all PSUs today use 12v DC to 3.3/5v DC converters to increase efficiency, given how little 3.3v and 5v is required. Internally, the PSU is often already a high-efficiency, multi-module, single-rail 12v-only design.
Third, Intel is not in a position to dictate anything. Intel has tried to float many changes to the standards (such as the WTX board standard (with a matching incompatible mobo plug) and the BTX board standard (using ATX plugs)), and all failed. The only thing that stuck was using 4 and 8 pin Molex Mini-Fit Jr plugs (the same family the ATX, PCI-E, and countless other plugs come from), which are, annoyingly, somehow not compatible with PCI-E's pinout.
Fourth, Intel is also not the dominant player in the field now; AMD is. They failed to get major changes through as the dominant player, so what makes them think anyone cares to hear what they have to say today? It is unlikely AMD machines will ever ship without normal ATX unless they just jettison the ATX24 altogether.
Side note: 12v capacity: ATX24 144w, P4 192w, EPS12 336w, PCI-E 6 75w, PCI-E 8 150w, and the PCI-E slot itself 75w.
A modern desktop could theoretically be run off a dual EPS12 and a PCI-E 8+8 for the GPU: modern mobo designs run the CPU and RAM VRMs entirely off the EPS12, modern GPU designs run the GPU VRMs off the PCI-E plugs and the VRAM VRMs off the PCI-E slot power, and very little power of any voltage is supplied to the system via the ATX24.
Another side note: Any board that has a legitimately clean 5v rail for USB is not feeding it off the PSU directly. Any board that has absolutely garbage USB power probably committed this sin.
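To put that capacity side note in perspective, here is a quick sketch of the current each connector has to carry at 12v. This is just W/V arithmetic on the figures quoted above; the figures themselves come from the comment and aren't re-checked against any spec here.

```python
# Rough sketch: the current each connector must carry at 12 V,
# using the capacity figures quoted in the side note above.
capacities_w = {
    "ATX24": 144,
    "P4": 192,
    "EPS12": 336,
    "PCI-E 6-pin": 75,
    "PCI-E 8-pin": 150,
    "PCI-E slot": 75,
}

for name, watts in capacities_w.items():
    print(f"{name:12s} {watts:4d} W -> {watts / 12:5.1f} A at 12 V")
```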
This is delusional - AMD’s market share is like 15%.
I agree it's delusional, but to give some hard data, Mercury Research published its quarterly findings on x86 market share as of Q4 2019[0]. Specifically:
- On the server side, AMD market share is only 4.5% (up by 1.4 percentage points from last year)
- For desktop, market share is 18.3%, up by 2.4 pp from last year.
- For laptop, market share is 16.2%, up by 4.0 pp from last year.
- For overall x86 chips, market share is 15.5%, up by 3.2 pp from last year.
So it's pretty clear that AMD isn't quite leading the market, although there is an overall upward trend in market share, and the rate at which AMD is gaining market share seems to be on the rise. Maybe in 4-5 years, AMD will be the dominant player. But it's still too early to call them that.
[0]: https://www.techpowerup.com/263612/amd-desktop-processor-mar...
Sure, market share changes slowly as people keep their PCs for more than 5 years (remember that Ryzen was released 3 years ago and only Ryzen 3000 really beats Intel).
But AMD is dominating when you look at CPUs sold: https://www.reddit.com/r/AMD_Stock/comments/dbvs45/amd_score...
> Intel is also not the dominant player in the field now, AMD is.
Only dominant in enthusiast land. ~20% of desktop, but their server market share is still in the 5-10% range, with many analysts estimating below 5%. (Numbers from rumors of Mercury's next report, to be released soon.) Likely to blow past Intel after the next hardware cycle; I'd guess 2021-22, since Epyc Zen 2 launched at the end of 2019. Big players plan hardware on a multi-year schedule (OEM, VPS/hosting).
As someone who reads a lot of PC specifications in detail, I can assure you, Intel is in an excellent position to dictate PC specifications; it routinely does so, both independently and as a very active member of the boards that write them.
Yeah, this is just rearranging some things to avoid California's new efficiency regulations. If you move the inefficient stuff to the motherboard, the PSU magically gets more efficient!
Fujitsu have been building PCs for a number of years with proprietary 12v-only PSUs, with SATA power being provided from an auxiliary connector on the motherboard.
Basically every OEM small PC (everything below SFF from Lenovo, HP, Dell, Fujitsu, etc. that does not accept PCIe expansion cards and is pretty much built from laptop parts) is now powered from a laptop brick PSU, which means everything inside the desktop is derived from a single 19-20V DC line. SFFs still use an internal PSU, but they're all custom, and while I haven't opened one up recently, I'm sure they have dropped as many of the rails as possible since they have full control of the mobo/PSU design. In general, the only PCs that still use ATX PSUs are full towers.
Modularized redundant server PSUs are usually built this way: the modules are 12v only, and dumb as bricks, with the 3.3v and 5v supplied by the PSU's backplane; unfortunately, that makes the backplane a single point of failure, so a backplane failure still takes the server down.
Some SAS backplanes for cases that fit a bunch of drives have their own 5v VRMs and take only a standard 12v connector of some sort. That helps when PSUs typically don't supply enough 5v to feed a few dozen drives but supply more than enough 12v.
Based on my experience selling USB 3.0 PCIe cards, the ones that supply 5.0V directly off the PSU are more likely to maintain voltage at higher currents than ones with an on-board buck circuit. And it's one less part that can fail. I have not seen much in the way of stability issues either. The 5V from the PSU is clean enough and voltage drop is minimal at these currents over the standard 18 AWG PSU wires.
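For a rough sense of why the drop stays small at USB-scale currents, here is a back-of-the-envelope sketch. The cable length and load current are my own illustrative assumptions, not measurements of any particular card.

```python
# Back-of-the-envelope check on the "voltage drop is minimal" claim.
# Assumptions (illustrative): 18 AWG copper at ~6.4 ohm per 1000 ft,
# a 6 ft cable counting both the supply and return conductors,
# and a 0.9 A load (roughly one USB 3.0 port at 5 V).
ohms_per_ft = 6.4 / 1000
cable_resistance = ohms_per_ft * 6 * 2   # out and back, ~0.077 ohm
load_current = 0.9                       # amps
drop = load_current * cable_resistance   # volts
print(f"~{drop * 1000:.0f} mV of drop, {drop / 5 * 100:.1f}% of a 5 V rail")
```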
Generally the ripple is audible on USB-powered audio devices that don't use isolated 5v rails.
The ones that don't have audible issues either have their own VRMs on the boards or they have a sufficiently isolated 5v rail but still use PSU power. A good board should not have issues even under extreme cases. Neither solution is particularly more expensive than the other.
If I need to maintain voltage without droop under high load, I probably should be looking at legitimate chargers, not something powered off the computer.
>Outside of drives and fans, virtually all of the power used in your system is exclusively 12v; either from the 12v pins on your ATX24, P4, or EPS12 connectors, or via the PCI-E connectors; very little non-drive or non-fan power is supplied via 3.3 or 5v rails.
I'm curious which fans don't use 12v - all the fans I know of in a typical computer operate on 12v except perhaps GPU fans. Occasionally I'll run a fan on 5v if I need it to be quieter and I don't have any PWM headers available, but that's about it.
Some systems control fans via voltage (the old way of doing it), instead of modern PWM (which has been the standard for about a decade); and even with PWM, it should be viewed as an average of the voltage instead of merely as the peak voltage during pulses.
No matter how you end up controlling your fans, you're not giving them straight unfettered 12v: either you're controlling them via voltage (and they spend most of their time at around 5-7v), or you're controlling them via PWM (with a significantly reduced duty cycle).
Some ultralight laptops have switched to 5v for their fans, but that does not seem to be any sort of standard. I have not seen a GPU that uses 5v for fans, and by the time GPUs needed fans big enough to need speed control, they were exclusively PWM; I am not aware of a GPU that used voltage control on its fan.
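To make the duty-cycle point above concrete, a minimal sketch, assuming the fan drive can be treated as the time-average of the 12v supply (which is the simplification the comment itself uses). The duty-cycle values are made-up examples.

```python
# Minimal sketch of the "average of the voltage" point: treating a PWM'd
# 12 V fan as seeing the time-averaged supply. Duty cycles are made-up
# examples, not values from any fan or motherboard spec.
supply_v = 12.0
for duty in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"{duty:4.0%} duty cycle -> ~{duty * supply_v:4.1f} V average")
```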
I love computers with 12VDC native power jacks. Why? My solar-powered van (interior only, the engine is still diesel) is 12VDC everywhere, and that's good clean battery DC, not some noisy-ass AC/DC conversion.
I love it so much that 2 of the last 3 computers I built (on an ATX form factor) were 12VDC native. I have external AC/DC power supplies for them, but when travelling/living in a solar-powered environment, I get to just plug them straight into the clean juice.
Sadly, monitors used to be this way too - they came with external power bricks and were 12VDC native. But sometime in the mid-2000s, something changed. My guess is that someone figured out how to shrink the necessary converters to a size that made it more "sensible" to put them inside the monitor. Result: it's almost impossible now to find a monitor you can connect to a 12VDC supply. The one I have in the van took me months to find on eBay back in 2015. The only exception is "TV" monitors specifically made for RVs, which is great and all, except that most of them are crap compared to a modern monitor.
One small problem with plugging 12VDC native computers into a solar system: it's not atypical for the solar charge controller to drive the charge voltage over 14V, at which point most internal "PSUs" designed for 12V will shut down. You need a voltage regulator between you and the battery-terminal voltage.
Do you find that it's more power efficient to run a custom built machine like that? I would think that using a laptop as your daily driver would stretch your battery bank further, since it's designed from the ground up for battery power. I lived off solar power for a while, and that was my thinking. I never did the math to verify it though.
What I will say is that the machine I used to use in the van lost its video output (the perils of onboard graphics), and I've found it impossible to source a reasonable replacement mobo. As a result, I've switched back to using a laptop (lenovo Y700) in the van.
What I notice as a potential improvement is that it effectively lets me store an extra X hours of power for computing, because I'm adding the laptop battery to the overall storage. Compared to my "house batteries", the laptop batteries are of course very small. But getting (say) 3 hours of work time stored up in the laptop does seem like a plus, especially on cloudier days.
https://www.jonnyguru.com/blog/2018/07/03/seasonic-prime-ult...
Is 30 dB SNR insufficient for a digital system?
> My guess is that someone figured out how to shrink the necessary converters to a size that made it more "sensible" to put them inside the monitor
I’m under the impression that this was demand/market forces. It used to be much cheaper to have an external brick rather than engineer a power supply into a monitor. However, the engineering cost was lower than the missed sales due to consumers not wanting external power supplies. It’s just another thing to lose. Another unique plug you’ll never be able to reasonably replace with what’s on hand.
"noisy" in this context referred to the output signal, not the acoustic levels.
Also, the 12VDC native monitor that I did buy didn't have a unique plug. It used the fairly standard ring/core 12VDC jack and plug found in many different contexts, though not a lot of digital ones.
I'm just waiting for electric vehicles and their technology to invade the van and RV market.
It would be really cool to have a solar/battery powered vehicle you can power from DC. A bus-sized RV could easily generate 5kW from current solar panels just on its roof.
Current RVs do have both 110v AC and 12v DC systems, but they're not easily integrated.
12v is too low to power bigger stuff like air conditioners (1500 watts = 125 amp wiring!) and converting DC to AC requires expensive inverters.
A really cool solution would be high-voltage DC powering DC appliances directly and stepping down to clean lower voltage DC close to where it's needed.
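To spell out the arithmetic behind the "1500 watts = 125 amp wiring" aside, here's a quick sketch. The 1500 W figure is from the comment above; the higher bus voltages are illustrative examples, not anything proposed in the thread.

```python
# Worked numbers for the wiring problem above: current needed to deliver
# a 1500 W air-conditioner load at different DC bus voltages.
load_w = 1500
for bus_v in (12, 48, 120, 350):
    print(f"{bus_v:4d} V bus -> {load_w / bus_v:6.1f} A")
```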
Electric vehicles are a long way from the sort of mileage that would allow them to effectively replace vans and RVs. We routinely drive 500-800 miles in a day in our van. This isn't going to be possible until battery replacement takes over from battery charging.
And yes, our van has 120VAC (1kW) available, but I prefer to use it only for things where there's no alternative (e.g. our toaster, or recharging laptops, sigh).
As far as rooftop solar goes, we have the long Sprinter with about as much solar as you can get on the roof, given that you need (at least) an exhaust fan. That gets us to 540W. The most I've ever seen on a van was 3kW and that was an ... insane ... physical setup.
I was recently wondering if 12VDC shouldn't become the new current standard for inside homes. Everything is still 230VAC (in the EU at least, I think the US uses 110V?), which is way too much for a lot of household appliances. The only reason modern LED bulbs are hot and expensive is that they need to convert down to 12V from 230V. It's mostly washing machines and kitchen appliances that need that much (and some of those even need 380V).
Almost everything in the home needs adapters and power bricks to convert it to something sensible, which usually is 12VDC. So why not just make that the standard?
Power loss in cables at low voltage / high amps.
Monitors were still 12V inside last time I checked (5 years ago); they're basically integrating the PSU into the monitors because they use less power and can be smaller/run cooler.
It's very likely you can just bypass the internal PSU on any monitor and power it straight from your battery.
You'll just need to take it apart, which can be hard depending on the housing.
> I built (on an ATX form factor) were 12VDC native
I want this! Mainly to run from batteries efficiently, transforming from DC to AC and then back to DC is so absurd. Can you give any photos/links/recommendations on how to start?
Sadly, I mistyped the above. I used ITX mobos, not ATX. However, you can get started by taking a look here: http://mini-box.com/ and in particular the "picoPSU" section.
A few images and details of one of the systems I built here: https://sprinter-source.com/forum/showpost.php?p=289185&post...
In one sense this feels like pushing the efficiency problem under the rug. There's a requirement for power supplies to meet a certain efficiency standard, and generating the 3.3v and 5v rails pushes that number down. Solution? Move that task to the motherboard, where it won't count against the power supply efficiency requirement.
Of course, from a purely engineering perspective, it really does make sense and it's overdue.
The motherboard "knows more" than the PSU though, it should be able to better deal with these idle power situations by shutting down things that aren't used anymore and powering them back up when needed. An external PSU would have tighter constraints regarding power availability and wouldn't be able to optimize the power profile quite as aggressively since they don't really know exactly what they're driving.
This is an interesting thought and a different perspective on the situation.
I was thinking something of that kind: instead of being able to rely on very high quality components and very high expertise for power supply design being centralized in the PSU, every other manufacturer (motherboard / graphics card / hard drive) must now become able to design, and willing to implement, high-efficiency, high-quality power-regulation circuitry in their own products.
This also affects the price of those components, while it probably does not make the PSU I get to buy in a webshop any cheaper.
From an engineering viewpoint, it probably does make sense, especially for the lower voltages, as less copper may be required.
Exactly. I'm pretty sure that once the DC-DC converters are mainly placed on the motherboard, they won't be audited for efficiency anymore and global efficiency will plummet.
Decreasing the efficiency of the motherboard would mean it needs more power, meaning they would require larger power supplies in addition to the new specification.
Makes sense. Should have happened long ago. All those old voltages date from when +5 from the power supply was used directly by the logic. Facebook's Open Compute rack delivers only +12 to the boards and drives, and that's from 2011. There's a power supply in the base of the rack that takes in three-phase AC or 48 VDC or whatever and puts out +12VDC. All other conversion is on the board. This is just catching up desktops to where the data center went years ago.
A shame
Ah, they did.[1] The power busbars are in the same place, so you can't have both 12V and 48V in the same rack. The connectors are intentionally incompatible so you can't plug into the wrong voltage.
[1] http://files.opencompute.org/oc/public.php?service=files&t=8...
> PSU vendors don’t want to release ATX12VO products for DIY builders until there are motherboards that support ATX12VO. Motherboard vendors don’t want to create products until power supply makers support them.
Seems like that could be addressed with one of two adapters:
- A passive adapter cable with a female ATX connector (with the 3.3V and 5V pins disconnected) and male ATX12VO connector
- An active adapter cable/board with a female ATX12VO connector, the necessary circuits to step voltages down to 3.3V and 5V, and a male ATX connector.
One or both of these could happen today and immediately solve that chicken-and-egg problem for the custom market, no?
https://j-hackcompany.com/?product=j-hack-m2427-for-corsair-...
https://smallformfactor.net/forum/threads/m2427-cable-manage...
> An active adapter cable/board with a female ATX12VO connector, the necessary circuits to step voltages down to 3.3V and 5V, and a male ATX connector.
I think this will definitely happen, as supplies of classic ATX PSUs start to dry up, and people need to keep powering their old machines. This situation already exists for old AT style vintage machines, being powered by ATX PSUs.
It was bound to happen eventually, there has been a long trend towards point-of-use regulation displacing multiple output supplies. I'm just surprised they went with 12V, and not 19V which has lower rectification losses, less power loss to wire/trace resistance, and a large existing design ecosystem from laptops where 19V has been the norm for years.
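As a rough illustration of the wire/trace-loss point, a sketch under assumed numbers; the load and cable resistance below are made up purely to show the scaling between 12V and 19V.

```python
# Rough comparison of I^2 * R cable loss at 12 V vs 19 V for the same
# delivered power over the same wire. The 300 W load and 0.02 ohm cable
# resistance are made-up illustration values.
power_w = 300
r_cable = 0.02
for bus_v in (12.0, 19.0):
    current = power_w / bus_v
    loss = current ** 2 * r_cable
    print(f"{bus_v:4.1f} V: {current:5.1f} A, ~{loss:4.1f} W lost in the cable")
# Loss scales as (12/19)^2 ~= 0.40, i.e. roughly 60% less at 19 V.
```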
To me it seems like the ATX12VO specification is more of an endorsement of the direction PC power supplies seem to be moving in rather than a from-scratch redesign. It would be nice to have higher voltage (19V, 24V or perhaps higher) for a single voltage supply, but it would be a far bigger change to the overall PC ecosystem.
Why is it 19V for laptops? I had suspected it's to compensate for voltage drop at linear regulators. Batteries are either 11.1V (3.7V, 3 cells) or 14.8V (3.7V, 4 cells, 1-3 in parallel), and these voltages don't seem immediately consistent with ATX12V to me.
The nominal voltage of a battery isn't the maximum voltage, 3.7V lithium cells are typically charged to around 4.2V.
A 4S battery (or equivalent) will have a nominal voltage of 14.8V and a maximum voltage of 16.8V, which leaves 2.2V extra, probably to account for losses.
Why 19V ended up being the typical voltage rather than rounding it up to something like 20V or 24V is beyond me (probably a matter of compliance, try checking IEC 60950-1 or related standards).
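A quick sanity check of those pack numbers, using the widely quoted 3.7 V nominal / 4.2 V full-charge figures per cell; the headroom comparison against a 19 V adapter is just back-of-the-envelope.

```python
# Sanity check: 4S lithium pack voltages versus a 19 V adapter.
cells = 4
nominal = cells * 3.7        # 14.8 V nominal
full_charge = cells * 4.2    # 16.8 V at full charge
adapter = 19.0
print(f"nominal {nominal:.1f} V, full charge {full_charge:.1f} V, "
      f"headroom under a 19 V adapter: {adapter - full_charge:.1f} V")
```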
I don't think we'd want to put those in our laps if they were dropping that much over linear regulators... Roughly (Dropped Voltage)*(Current through regulator)=(Power Dissipated)
12V makes so much more sense to be sending around - the losses at lower voltages are rough. It would be nice to replace legacy molex connectors with a more modern standard, however.
Any example of the modern one? I personally have no issues with the current one. I am not sure those molex connectors should be called legacy.
The SATA power connector is the obvious successor. Most random doodads (e.g. internal fan controllers) have moved from using molex connectors in the past to SATA power connectors now.
Idk if it's MOLEX or whoever is knocking off their designs but the plastics they use are very brittle. I've had the clip on a number of modular cables snap off while inserting it into the motherboard and GPU connectors.
I also think the design is large and dated, and I think the modular nature of the connectors is unnecessary. But I don't know how small a wire gauge you can safely use for the current delivery on some of these 500-1k W PSUs.
ATX power uses 18 AWG copper cable. 18 AWG copper cable has 6.4 ohms of resistance per thousand feet, so a 6 foot 18 AWG cable has about 0.0384 ohms of resistance. Let's say you have a pretty heavy load of 10A; on a 3.3V rail that's about a 0.33 ohm load. The cable would dissipate close to 4W (I^2 * R), roughly a 10% loss.
Not nothing, though at more typical loads the cable loss would be lower. 5V rails are more typically used and they would have close to half the loss.
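Redoing that arithmetic as a quick sketch, under the same assumptions as the comment (single 6 ft conductor, 10 A, 3.3 V rail); treat it as a ballpark, not a measurement.

```python
# Cable loss estimate mirroring the comment's numbers: 18 AWG at
# ~6.4 ohm per 1000 ft, a 6 ft run treated as a single conductor
# (double it if you count the return path), 10 A from a 3.3 V rail.
r_cable = 6.4 / 1000 * 6          # ~0.0384 ohm
current = 10.0                    # amps
loss_w = current ** 2 * r_cable   # ~3.8 W dissipated in the cable
load_w = 3.3 * current            # 33 W delivered at 3.3 V
print(f"cable loss ~{loss_w:.1f} W, ~{loss_w / (loss_w + load_w):.0%} of the total")
# At 5 V the same power takes less current, so the loss drops by
# (3.3/5)^2 ~= 0.44 -- the "close to half" in the follow-up comment.
```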
I wonder how easy this would make server local UPS systems. A battery with a single buck/boost converter should be enough as a power supply. Just need to add a mains fed charge circuit.
Then every power supply could be its own UPS, and handle charging the battery.
It would need to be a pretty fat wire, but that's handleable.
It's a 300W-400W load going to near zero and back within microseconds.