@LukeAlmighty
I was kinda thinking in this general direction yesterday. My PC is a bit older, so it doesn't draw 1000W like a proper gaming rig. I doubt I can make it draw 400W, but it has a proper THICK boy cable to the wall socket for those 400W. Yet we expect some thin, probably braided, cables to do 600W for a GPU? And why 12 pins, if all you're giving the GPU is the same 12V on all of them?
Fucking redesign PSUs from the ground up already, fewer pins for GPU, but have them massive, cause at this point the GPU is the main power user in your build.

@alyx @LukeAlmighty
600W = 12V * 50A

the 12-pin connector IIRC has 6 positive and 5 negative pins (or the other way around) plus 1 reserved pin, so 5 pins on the narrower pole

50A / 5 wires = 10A per wire, perfectly fine for 1.5mm² (roughly 15 AWG).
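A quick sanity check of that arithmetic in Python, using the wire count assumed in the post above (for reference, the shipping 12VHPWR connector actually has 6 +12V and 6 ground power pins plus 4 sense pins, which lowers the per-wire figure):

# Per-wire current for a 600 W GPU connector, using the post's numbers.
POWER_W = 600.0
VOLTS = 12.0
POSITIVE_WIRES = 5    # as assumed above; the real 12VHPWR has 6

total_amps = POWER_W / VOLTS                 # 50 A
amps_per_wire = total_amps / POSITIVE_WIRES  # 10 A (~8.3 A with 6 wires)
print(f"{total_amps:.0f} A total, {amps_per_wire:.1f} A per +12V wire")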

As to why not fewer, thicker pins, I'm guessing it's easier/cheaper to just add more of the wires you were already using.


@wolf480pl @LukeAlmighty
>perfectly fine
Tell that to the 4090 connectors that have started to melt.
Sure, theoretically you could push the wires to be even thinner. The connectors are the actual weak link, and thin wires mean small, flimsy connectors. And we're already seeing what just 450W can do right now to the new connector spec.


@alyx @LukeAlmighty the melting connectors were on the cable side, not the GPU side, and only certain PSU manufacturers had this issue AFAIU. But yeah, it is near the limit...

@wolf480pl @LukeAlmighty @alyx it's the crimp job for the pins in all likelihood. I've seen those fail inside appliances before

2x connectors on the GPU is fine. 14 AWG stranded wire is still reasonably flexible and everything can remain at a very safe-to-handle 12 volts. plus no need for a new spec and no backwards incompatibility
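A rough sketch of why that works out, assuming a hypothetical 600W card fed by two classic 8-pin connectors with 3 +12V wires each (the 8-pin's official 150W rating is ignored here; it is widely considered conservative):

# Hypothetical: 600 W split evenly over two 8-pin connectors,
# each carrying current on 3 +12V wires.
POWER_W = 600.0
VOLTS = 12.0
CONNECTORS = 2
WIRES_PER_CONNECTOR = 3

amps_per_connector = POWER_W / VOLTS / CONNECTORS          # 25 A
amps_per_wire = amps_per_connector / WIRES_PER_CONNECTOR   # ~8.3 A
print(f"{amps_per_connector:.0f} A per connector, {amps_per_wire:.1f} A per wire")

# 14 AWG is about 2.1 mm^2 of copper, comfortably more than ~8 A needs.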

a 1000W space heater is already pretty bad in terms of fan noise and heating up the room. I don't think there's room for consumer cards to go significantly higher than they currently are. so no need to change anything

at one point nvidia was talking about datacenter cards that would require higher voltages and thus different PSUs. I don't know if that ever materialized though. there are power density (i.e. cooling) problems at some point

@roboneko @LukeAlmighty @wolf480pl
I don't really see the point of higher voltage cards. It's not like you're actually pushing 12V into the chip. You'd just be forced into having more voltage stepdown circuitry on the card.
The only thing it would help with is pushing the same wattage with fewer amps, and maybe that's more efficient or something, but I'm not enough of an electrical engineer to know.
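For the efficiency question, a back-of-the-envelope sketch (the cable resistance here is an assumed round number, not a measurement): resistive loss in the cable is I²R, so at a fixed wattage, raising the voltage cuts the current, and the loss drops with the square of that cut.

# Cable loss at the same wattage but different supply voltages.
CABLE_R_OHMS = 0.01   # assumed total round-trip cable resistance
POWER_W = 600.0

for volts in (12.0, 48.0):
    amps = POWER_W / volts
    loss_w = amps**2 * CABLE_R_OHMS
    print(f"{volts:.0f} V: {amps:.1f} A, {loss_w:.1f} W lost in the cable")

# 12 V: 50.0 A -> 25.0 W lost; 48 V: 12.5 A -> ~1.6 W lost.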

@alyx @LukeAlmighty @wolf480pl yes that is exactly what it is meant for. using a single cable to power an absolutely ridiculously power hungry card. which is why I mentioned that power density becomes a problem. you can't just go sticking 3x 1000 watt (or whatever ridiculous TDP they were) GPUs into an existing rackmount case with an otherwise stock setup and expect everything to go well. neither can you quadruple the power budget per rack without remodeling the datacenter in various ways

@roboneko @alyx @LukeAlmighty
to add to your answer:

Watts = Volts x Amps
Cable thickness is directly proportional to amps.
With the same wattage, more volts => fewer amps => thinner cable.
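Worked through with rough numbers (the current density figure is a common rule-of-thumb assumption, not from any spec):

# Same wattage at different voltages: required copper cross-section
# scales with current if you hold current density constant.
POWER_W = 600.0
CURRENT_DENSITY = 5.0  # A/mm^2, a typical rule of thumb for hookup wire

for volts in (12.0, 24.0, 48.0):
    amps = POWER_W / volts
    area_mm2 = amps / CURRENT_DENSITY
    print(f"{volts:.0f} V -> {amps:.1f} A -> {area_mm2:.1f} mm^2 of copper")

# 12 V -> 50 A -> 10 mm^2; 24 V -> 25 A -> 5 mm^2; 48 V -> 12.5 A -> 2.5 mm^2.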
