I'm going to be adding another EVGA GTX 580 DS Superclocked video card to my rig in a couple of weeks, so I need to upgrade my power supply; I currently have a Corsair HX750. I was looking at the Corsair HX1050, but it has 87.5A on the 12V rail, and both cards running at full load could draw as much as 84A, which is a little close. I know the machine will never run at 100% CPU and SLI load under normal conditions, outside of maybe synthetic benchmarks.
The Corsair AX1200 has 100.4A on the 12V rail, which would support my system in a worst-case scenario.
What do you guys think? Is the HX1050 enough, or should I just spend the extra $40 and get the AX1200?
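Here's the margin math in a quick Python sketch (numbers straight from the figures above; "headroom" here just means rated capacity minus worst-case draw, as a percentage of the rating):

# Headroom on the +12V rail, using the worst-case 84A figure above.
worst_case_draw = 84.0  # amps: both GTX 580s plus the rest of the system at full load
for name, rating in [("HX1050", 87.5), ("AX1200", 100.4)]:
    headroom = (rating - worst_case_draw) / rating * 100
    print(f"{name}: {rating}A rated -> {headroom:.1f}% headroom")
# HX1050: 4.0% headroom; AX1200: 16.3% headroom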
See system specs below:
I don't know, MOE...$40 extra seems like pretty cheap insurance. (That's what...about 3% more these days?)
Remember that the +12V rail also supplies some high-current spike-demand devices (e.g. HDDs spinning up after being 'asleep' or at power-on), which might be a surprise factor depending on configuration. And if I had devices that could even theoretically run close to the limits, I'd feel safer with a higher margin. I prefer a 20% power margin on any given parameter even with quality PSUs...but then that's me.
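To put rough numbers on that 20% rule against the 84A worst case from the first post (two possible readings of "20% margin", both just arithmetic):

# A 20% margin rule applied to the 84A worst-case draw:
worst_case = 84.0
print(worst_case * 1.2)  # 100.8A -> the AX1200's 100.4A rail just barely misses this
print(worst_case / 0.8)  # 105.0A if you instead cap the load at 80% of the rating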
edit to add:
It would be interesting to see graphics card reviewers include real-time DC current draw graphs while running various games and benchmarks at different settings.
I am currently running a Corsair TX850 with a Q9550, an HD4870, and a bunch of other stuff. I definitely have the peace of mind that I have enough headroom if I wanted to get a second card and OC.
I would say spend the extra dough to get the AX1200 so you're set down the road if you ever want to add a second GTX 580 and OC both the CPU and GPU. If not, you will have plenty of power for any new build afterwards.
Also, with headroom your PSU will run cooler because it is not being stressed by running at or near its full capacity. The closer any component runs to its maximum capacity, the higher its running temps. More watts = more heat generated.
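As a rough illustration (the efficiency figures here are my own assumptions, in the ballpark of a typical 80 Plus unit near full load versus half load, where efficiency usually peaks):

# Waste heat inside the PSU = output power * (1/efficiency - 1).
output_watts = 800.0
for load_point, eff in [("near full load", 0.84), ("around half load", 0.88)]:
    waste = output_watts * (1 / eff - 1)
    print(f"{load_point}: ~{waste:.0f}W dissipated inside the PSU")
# near full load: ~152W; around half load: ~109W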
This is a bit off topic, but do any of you know if CUDA works over SLI?
The reason I ask is that I'm in the process of ripping my family's DVD collection and streaming the movies from my home server. I can transcode a 4GB ISO to MP4 in about 13 minutes on the CPU and 9 minutes with my single GTX 580 using a CUDA transcoder.
If I could transcode an ISO even faster, it would save me a lot of time. Currently it takes ~25 minutes per DVD to rip the ISO and then transcode it to MP4.
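In the meantime, one way I could cut the per-disc time: since the rip is optical-drive-bound and the transcode is CPU/GPU-bound, I could overlap ripping disc N+1 with transcoding disc N, so each DVD would cost roughly max(rip time, transcode time) instead of their sum. A rough Python sketch of that pipeline; my_ripper and my_cuda_transcoder are placeholders for whatever ripper and CUDA transcoder I actually end up using:

# Overlap ripping the next disc with transcoding the current ISO.
import queue
import subprocess
import threading

iso_queue = queue.Queue()

def rip_discs(titles):
    for title in titles:
        iso = f"{title}.iso"
        input(f"Insert the disc for {title} and press Enter...")
        # Placeholder ripper invocation -- substitute your actual command.
        subprocess.run(["my_ripper", "--output", iso], check=True)
        iso_queue.put(iso)
    iso_queue.put(None)  # sentinel: no more discs coming

def transcode_isos():
    while True:
        iso = iso_queue.get()
        if iso is None:
            break
        mp4 = iso.replace(".iso", ".mp4")
        # Placeholder CUDA transcoder invocation -- substitute your actual command.
        subprocess.run(["my_cuda_transcoder", "-i", iso, "-o", mp4], check=True)

ripper = threading.Thread(target=rip_discs, args=(["MOVIE_01", "MOVIE_02"],))
encoder = threading.Thread(target=transcode_isos)
ripper.start()
encoder.start()
ripper.join()
encoder.join()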