Tesla Personal Supercomputing GPUs



Message boards : Number crunching : Tesla Personal Supercomputing GPUs

1 · 2 · Next
Author Message
NX
Joined: 15 Apr 09
Posts: 4
Credit: 119,902
RAC: 0
United States
Message 1310972 - Posted: 28 Nov 2012, 10:14:32 UTC
Last modified: 28 Nov 2012, 10:23:18 UTC

I have been pondering, ever since the supercomputing GPUs came out a few years ago, whether it is really, and I mean really, worth forking out $4,000 for one of those cards. With all the video cards out there, the GTX 500s and GTX 600s look like cheap alternatives to Nvidia's supercomputing GPUs, and CUDA technology keeps improving. I want to build a super cruncher. But which one will give me good bang for my buck?

Tesla Kepler cards: are they worth the price, and for SETI? Any suggestions?

http://www.nvidia.com/object/personal-supercomputing.html

I have an ASUS P7P55 WS SuperComputer motherboard and 8 GB of RAM (going to 16 GB soon). I want to get my computer running at its full potential, not just as a gaming system, since it is a workstation mobo.

http://setiathome.berkeley.edu/show_host_detail.php?hostid=6844622

Thanks in advance :-)

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1310997 - Posted: 28 Nov 2012, 11:55:10 UTC
Last modified: 28 Nov 2012, 12:13:55 UTC

Tesla GPUs are not for everyone; they are very expensive and their price/performance for crunching is not good. Go for a Tesla only if your software really needs one. Their target is the corporate market, not the end-user market.

For crunching in the Nvidia universe, the high-end GTX 690 has a far better price/performance tag. But here is something you need to know: 4 GTX 690s do not work on the same MB. It's a Windows limitation; they use too many resources to work.

That is, unless you are a "crazy or extremely rich person" and want to attempt a BIOS rewrite on a >US$1000 GPU. There is a way to do it, but I know at least one (old and highly experienced) cruncher who tried and almost "lost" his board, even with Nvidia support helping.

If you can handle the mechanical and hot-airflow problems, the real practical maximum is 3x690 + 1x680 on an MB like yours (actually, 3x680 + 1x590 works), or 4x590 if you can find them, but that will produce a lot more heat and draw a lot more power. The 3-GPU limit only applies to the 690 model.

But to put that many GPUs on an MB, you will need an ultra-big PSU (each 690 draws 300 W, a 680 about 170 W, and a 590 even more, >350 W, not sure). So that is 1100 W just for the GPUs; add 300 W for the rest of the host and we are talking about 1400 W. You might think a 1500 W PSU will drive that. No, you would be wrong; there are many other things to consider: PSU efficiency, W/VA conversion, capacitor/component lifetime, etc. So you need no less than a 2000 W PSU, or even bigger if you are thinking of running 24/7.

In these cases it is better to go with a hybrid solution: 2 PSUs in the same host, something like a big 1250 W or larger PSU for the MB and 2x690, plus a 1000 W PSU for the remaining 690+680 (many more combinations are possible). You might think I'm crazy, but that configuration will let you run the PSUs with a comfortable margin and avoid trouble.
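juan's wattage arithmetic can be sketched as a quick check. The per-card figures below are the thread's own rough estimates, not measured values, and the 70% maximum-load target is a common derating rule of thumb rather than a spec:

```python
# Rough PSU sizing for the 3x690 + 1x680 build sketched above.
# Per-card wattages are the thread's own estimates, not measured values.
GPU_DRAW_W = {"GTX 690": 300, "GTX 680": 170, "GTX 590": 365}

def required_psu_watts(gpus, system_w=300, max_load_fraction=0.7):
    """Sum the component draw, then derate: a 24/7 PSU should run well
    below its rated capacity, so divide by the target load fraction."""
    total_draw = sum(GPU_DRAW_W[g] for g in gpus) + system_w
    return total_draw, total_draw / max_load_fraction

total, rating = required_psu_watts(["GTX 690"] * 3 + ["GTX 680"])
print(f"estimated draw: {total} W, suggested PSU rating: {rating:.0f} W")
# An estimated draw of 1370 W points at a ~2000 W PSU, as juan says.
```

Swapping in other card lists from the thread (e.g. `["GTX 590"] * 2`) gives the matching per-PSU numbers.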

Of course, go to water cooling if you can; if not, prepare to use fans all over the case. These hosts produce a lot of heat (with rear and front hot-air exhaust), more than normal fans can handle. You will need to study your case to maximize the airflow and heat exhaust = more big fans.

One last thing: you will need a high-end CPU to handle this beast. Go for a top-of-the-line i7, nothing less; big GPUs need a big CPU to feed them.

If all works, you will reach no less than 200K RAC on this host, probably nearing 250K.

I built one like this a few months ago (with another, less expensive MB, of course), but I never really used it. It works, but it is too hot to handle, with too many components pushed over the edge and likely to fail, so I decided to split it into two smaller hosts (with a less expensive i5 CPU; one with 2x690 and the other with 1x690+1x590). And in the end, the 2 heavy hosts actually work faster than the 1 ultra-heavy host.

Good luck, and post a photo if you build one.
____________

Profile Alex Storey
Volunteer tester
Joined: 14 Jun 04
Posts: 536
Credit: 1,649,682
RAC: 330
Greece
Message 1311004 - Posted: 28 Nov 2012, 12:40:02 UTC
Last modified: 28 Nov 2012, 12:44:39 UTC

Tesla Kepler cards: are they worth the price, and for SETI? Any suggestions?


No, not worth the price for SETI.

A lot of people here can explain this far better than I can, but the short answer to why Teslas have a triple-to-quadruple price tag is:

a) They are far better at double precision (which, however, SETI does not use)
b) Most of what you are paying for is NVIDIA support, not the GPU itself. Think expensive warranty.

Edit: Oh and welcome to the message boards!:)

outlaw
Joined: 6 Mar 00
Posts: 43
Credit: 17,063,897
RAC: 1
Canada
Message 1311027 - Posted: 28 Nov 2012, 13:53:58 UTC

The three-690-cards-per-host limitation, is that strictly due to Windows?

Where in Windows is the bottleneck, exactly? Just curious...
____________

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1311032 - Posted: 28 Nov 2012, 14:11:18 UTC - in response to Message 1311027.
Last modified: 28 Nov 2012, 14:32:11 UTC

The three-690-cards-per-host limitation, is that strictly due to Windows?

Where in Windows is the bottleneck, exactly? Just curious...


Don't know why; when I asked, the answer was that no matter that the MB has support for 4 GPUs, the 690 is really 2 GPUs in one, and making it work uses a lot of resources; 4 of them on one host exhausts the Windows resources. Which resources exactly, I don't clearly know. I only know because when I tried to build a supercruncher it did not work, and I asked Nvidia support for help. But at the time Win 8 was not available, so I don't know if that has changed; on Win 7/64 or XP it does not work.

On the other hand, there is a fix available from EVGA (I never saw whether it really works), but it is really complicated and requires a complete rewrite of the GPU BIOS; even if you know how to do that, it is something very dangerous to do with a high-priced GPU. I know someone who tried and needed to send the GPU back to EVGA... for repairs... I never tried; I may be crazy, but not mad enough to play with fire.

That limitation only appears on the 690; you could use, for example, 2x690 + 2x680 on a single host without problems, or even 4x590 if you have the PCIe slots available, a way to feed them (PSU/CPU), and a way to take out the heat.
____________

Profile Tim
Volunteer tester
Joined: 19 May 99
Posts: 199
Credit: 242,584,471
RAC: 218,277
Greece
Message 1311036 - Posted: 28 Nov 2012, 14:32:13 UTC - in response to Message 1311032.
Last modified: 28 Nov 2012, 14:33:47 UTC

The three-690-cards-per-host limitation, is that strictly due to Windows?

Where in Windows is the bottleneck, exactly? Just curious...


Don't know why; when I asked, the answer was that no matter that the MB has support for 4 GPUs, the 690 is really 2 GPUs in one, and making it work uses a lot of resources; 4 of them on one host exhausts the Windows resources. Which resources exactly, I don't clearly know. I only know because when I tried to build a supercruncher it did not work, and I asked Nvidia support for help. But at the time Win 8 was not available, so I don't know if that has changed; on Win 7/64 or XP it does not work.

On the other hand, there is a fix available from EVGA (I never saw whether it really works), but it is really complicated and requires a complete rewrite of the GPU BIOS; even if you know how to do that, it is something very dangerous to do with a high-priced GPU. I know someone who tried and needed to send the GPU back to EVGA... for repairs... I never tried; I may be crazy, but not mad enough to play with fire.

That limitation only appears on the 690; you could use, for example, 2x690 + 2x680 on a single host without problems, or even 4x590 if you have a way to feed them and take out the heat.




I can confirm the power draw and the heat on my 4x590 rig.

I use 2x 1200-watt PSUs: one for the system and 2x590, and one for the other 2x590.

Now, about the limitations... I don't know... Maybe it is a driver issue and not Windows?
My rig works perfectly with the 275.33 driver, but I have problems with the latest drivers, like restarts, blue screens, etc.
____________

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1311050 - Posted: 28 Nov 2012, 15:14:06 UTC - in response to Message 1311036.
Last modified: 28 Nov 2012, 15:27:18 UTC

The three-690-cards-per-host limitation, is that strictly due to Windows?

Where in Windows is the bottleneck, exactly? Just curious...


Don't know why; when I asked, the answer was that no matter that the MB has support for 4 GPUs, the 690 is really 2 GPUs in one, and making it work uses a lot of resources; 4 of them on one host exhausts the Windows resources. Which resources exactly, I don't clearly know. I only know because when I tried to build a supercruncher it did not work, and I asked Nvidia support for help. But at the time Win 8 was not available, so I don't know if that has changed; on Win 7/64 or XP it does not work.

On the other hand, there is a fix available from EVGA (I never saw whether it really works), but it is really complicated and requires a complete rewrite of the GPU BIOS; even if you know how to do that, it is something very dangerous to do with a high-priced GPU. I know someone who tried and needed to send the GPU back to EVGA... for repairs... I never tried; I may be crazy, but not mad enough to play with fire.

That limitation only appears on the 690; you could use, for example, 2x690 + 2x680 on a single host without problems, or even 4x590 if you have a way to feed them and take out the heat.




I can confirm the power draw and the heat on my 4x590 rig.

I use 2x 1200-watt PSUs: one for the system and 2x590, and one for the other 2x590.

Now, about the limitations... I don't know... Maybe it is a driver issue and not Windows?
My rig works perfectly with the 275.33 driver, but I have problems with the latest drivers, like restarts, blue screens, etc.

No, it is not a driver problem; the problem is the way the GPU interacts with Windows itself, something like in the early computer days: when you exhausted the available IRQs, you simply reached the OS's capacity to control any more devices.

Yes, the 590 runs hotter and uses more power (up to 365 W per Nvidia specs), so a big PSU is needed (to drive 2 of them, a 1250 W PSU is a good choice for a 24/7 cruncher).

The 3x690 limitation applies only to the 690; 4x590 works perfectly. The problem is that they are not available anymore (you can buy them only on the secondary market, normally used) and they draw a lot more power than the 690 (about 20-30% more to do the same job). You will feel the difference in your power bill.

Running a super heavy cruncher (one that draws about 1.5-2 kW continuously) is not cheap; you will notice it on the power bill!

(edit) The expected RAC figures I gave assume 24/7 usage on a crunching-only host, optimized apps, etc.
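For a sense of scale, the 24/7 running cost of a 1.5-2 kW host is easy to estimate. The electricity rate below is an assumed placeholder, not a figure from the thread; substitute your local tariff:

```python
# Monthly energy cost of a cruncher that draws 1.5-2 kW around the clock.
# The 0.25/kWh rate is an illustrative assumption, not a quoted price.
def monthly_cost(draw_kw, rate_per_kwh=0.25, hours=24 * 30):
    """Energy used (kWh) times price: draw_kw * hours * rate."""
    return draw_kw * hours * rate_per_kwh

for draw_kw in (1.5, 2.0):
    print(f"{draw_kw} kW -> {monthly_cost(draw_kw):.0f} per month at 0.25/kWh")
```

At that assumed rate, a 1.5 kW host costs 270 per month and a 2 kW host 360, which is why the power bill dominates the discussion below.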
____________

Team kizb
Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1311132 - Posted: 28 Nov 2012, 19:53:38 UTC
Last modified: 28 Nov 2012, 19:56:39 UTC

@Juan, how do you have your 4 580s set up?

Would you recommend 2 680s or 1 690 for the best performance? The current price for either setup is similar.

Looking at eBay, it also looks like 2 590s are possible for the price of a single 690; do you think the 690 is still going to be the best bet?
____________
My Computers:
Blue Offline
Green Offline
Red Offline

Profile Tron
Joined: 16 Aug 09
Posts: 180
Credit: 2,236,055
RAC: 0
United States
Message 1311139 - Posted: 28 Nov 2012, 20:16:46 UTC

The issue is not limited to Windows: it is a CPU/MB capacity issue.
The dual-CPU Asus WS mobos may support all 4 690s as long as both CPUs are installed,
and if there are adequate PCI resources available on the motherboard.

What motherboard is Tim running on this host?

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1311148 - Posted: 28 Nov 2012, 21:02:32 UTC - in response to Message 1311139.

The issue is not limited to Windows: it is a CPU/MB capacity issue.
The dual-CPU Asus WS mobos may support all 4 690s as long as both CPUs are installed,
and if there are adequate PCI resources available on the motherboard.

What motherboard is Tim running on this host?


As I said, there is a way... but not for everyone. Ask Tim and a few other owners of top hosts and you will know what I'm talking about...





____________

Profile Tron
Joined: 16 Aug 09
Posts: 180
Credit: 2,236,055
RAC: 0
United States
Message 1311151 - Posted: 28 Nov 2012, 21:14:59 UTC

Juan wrote:
As I said, there is a way... but not for everyone. Ask Tim and a few other owners of top hosts and you will know what I'm talking about...


I understand what you're saying; it's just that hardware technology is leaping and bounding. Soon, if not already, there will be a motherboard that supports as many as 16 dual-slot GPUs.
What I was saying is that it's not the OS that is the bottleneck; it's purely a hardware capacity problem. Increasing the OS capacity is as simple as an update, compared to the physical limitations.

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1311158 - Posted: 28 Nov 2012, 21:27:41 UTC - in response to Message 1311132.
Last modified: 28 Nov 2012, 21:42:20 UTC

@Juan, 1) how do you have your 4 580s set up?

2) Would you recommend 2 680s or 1 690 for the best performance? The current price for either setup is similar.

3) Looking at eBay, it also looks like 2 590s are possible for the price of a single 690; do you think the 690 is still going to be the best bet?


1 - It's not really 4x580; it is 1x590 + 2x580. BOINC only shows the last GPU on the host.

2 - For best performance, 2x680 is better and easier to handle, but remember the number of slots on your MB is limited: with a 2-slot MB, if you put in 1 690 you could still add a second GPU in the future (if you have the PSU, of course). From my experience: use the best GPU you can buy/support.

3 - Of course not. If you have the PSU and don't care about the power bill, the 590 and the 690 produce almost the same RAC (the 690, with the new Kepler-optimized apps, actually does a little more). But remember the power bill: the 590 needs about 25% more power than the 690 to do the same job and produces a lot more heat; if you live in a high price-per-kWh region like me, the difference pays for the 690 in a few months. From another point of view, production of the 590 stopped about 6 months ago, so it is hard to find brand-new ones.

BTW, I use both cards (590/690). In my experience, if you forget the power bill, the 590 is less expensive but needs more power and produces too much heat; it is difficult to handle all that heat in a 2x590 host even with a lot of fans. On the other side, the 690 costs a lot more but runs cooler, and it is easy to handle the heat in a 2x690 host. Your choice... lower initial cost vs. a lower monthly power bill.
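That trade-off (the 590 drawing roughly 25% more power for the same output) can be turned into a payback estimate. Every number below is a hypothetical placeholder, not a quoted price, so treat the result as an illustration of the method only:

```python
# juan's 590-vs-690 argument as arithmetic: a pricier 690 can pay for
# itself through the energy it saves versus a 590 doing the same work.
# All prices and rates here are hypothetical placeholders.
def payback_months(extra_price, saved_kw, rate_per_kwh, hours_per_month=720):
    """Months until the energy saved covers the extra purchase price."""
    monthly_saving = saved_kw * hours_per_month * rate_per_kwh
    return extra_price / monthly_saving

# e.g. a 690 costing 300 more, saving ~0.09 kW (25% of a 590's ~365 W)
print(f"payback: {payback_months(300, 0.09, 0.30):.1f} months at 0.30/kWh")
```

With cheaper electricity the payback stretches out, which matches juan's caveat that the 690 only wins quickly in high price-per-kWh regions.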

Using more than 2 GPUs on a single host is complicated, really not for beginners. 3 is relatively difficult to handle, but with a bit of experience and some help you could try; 4 is only for experienced users, to attempt only if you really know what you are doing.

@tron, I'm not sure it's only an MB/hardware limitation. When I asked Nvidia a few months ago (that could have changed since), I asked whether changing the MB, etc., could fix the problem, and the answer was the same: only with a new GPU BIOS. That's why I abandoned the idea.

On the other hand, there are underground rumors of a new BOINC development that will virtually merge your hosts as if they were a single cruncher, but don't expect anything in that area in the coming months, maybe in a few years; it's a marvelous idea, but very hard to develop due to the complexity of the job.
____________

Team kizb
Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1311160 - Posted: 28 Nov 2012, 21:40:19 UTC - in response to Message 1311158.

Thanks for the great information. Power isn't cheap in Germany either, so it sounds like a 690 would be the best choice for me.

One of my rigs has 3 295s in it and you're right, it's been a pain to get running stable. They put out a ton of heat. I'm going to give it another try, but at this point I'm thinking it might be better to just sell them off and put the money toward a 690; I bet it would be a lot less trouble overall.
____________
My Computers:
Blue Offline
Green Offline
Red Offline

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1311162 - Posted: 28 Nov 2012, 21:52:14 UTC - in response to Message 1311160.

Thanks for the great information. Power isn't cheap in Germany either, so it sounds like a 690 would be the best choice for me.

One of my rigs has 3 295s in it and you're right, it's been a pain to get running stable. They put out a ton of heat. I'm going to give it another try, but at this point I'm thinking it might be better to just sell them off and put the money toward a 690; I bet it would be a lot less trouble overall.

I don't have any 295s, but I'm quite sure that if you can actually make a 3x295 host work, it would be easy to put 2x690 or even 3x690 to work; you will be impressed by how fast and cool they are, and by the difference in your monthly power bill.

We have a German friend (tpl): http://setiathome.berkeley.edu/show_user.php?userid=171379 - he has a 3x690 host, and you could PM him and ask about the performance, power, and heat it needs/produces, of course in German; he also knows about electricity costs in Germany.
____________

Team kizb
Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1311166 - Posted: 28 Nov 2012, 21:58:56 UTC - in response to Message 1311162.

Looks like tpl is also running a couple of 295 rigs. One has 2 cards but is only showing 3 cores, which is an issue I've had as well with one of my cards.
____________
My Computers:
Blue Offline
Green Offline
Red Offline

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1311168 - Posted: 28 Nov 2012, 22:04:25 UTC - in response to Message 1311166.
Last modified: 28 Nov 2012, 22:05:10 UTC

Looks like tpl is also running a couple of 295 rigs. One has 2 cards but is only showing 3 cores, which is an issue I've had as well with one of my cards.

Go speak with him; he is a very nice guy with a lot of experience. I'm sure you will like him.
____________

Profile dancer42
Volunteer tester
Joined: 2 Jun 02
Posts: 436
Credit: 1,151,306
RAC: 2,819
United States
Message 1311215 - Posted: 29 Nov 2012, 1:07:23 UTC
Last modified: 29 Nov 2012, 1:10:15 UTC

If you were to use an A10-5800K FM2 motherboard, 8 GB of PC-1866 DDR3 memory, a 650-watt power supply, and 2 HD 6870 video cards, plus 1 scrounged case,
for about $560 you would have a 3-GPU system that runs at about 4.7 teraflops.
Don't overclock; that way you won't need to build another.
Save the killer server for controlling the network and being a sweet gaming/business box.
Server-load the boxes; they don't need monitors, keyboards, mice, or hard drives.
I would also spend $30 on a CPU cooler; it will run hot.
For perspective: in 1982 the record of 800 MFLOPS was set by the Cray X-MP.
My current rig runs 4.1 teraflops and cost a whole lot more.
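The bang-for-buck claim above can be put on a common scale, using only the figures quoted in this post (as stated, not independently verified):

```python
# Price-per-gigaflop for the budget build described in this post.
def dollars_per_gflop(price_usd, tflops):
    """Purchase price divided by throughput in GFLOPS (1 TFLOPS = 1000 GFLOPS)."""
    return price_usd / (tflops * 1000.0)

# The $560 / 4.7 TFLOPS A10 + 2x HD 6870 box works out to about $0.12/GFLOP.
print(f"${dollars_per_gflop(560, 4.7):.3f} per GFLOP")
```

The same function applied to a pricier rig of known cost and throughput makes the comparison the poster is gesturing at explicit.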
____________

Profile Tim
Volunteer tester
Joined: 19 May 99
Posts: 199
Credit: 242,584,471
RAC: 218,277
Greece
Message 1311256 - Posted: 29 Nov 2012, 4:41:00 UTC - in response to Message 1311139.
Last modified: 29 Nov 2012, 4:42:21 UTC

The issue is not limited to Windows: it is a CPU/MB capacity issue.
The dual-CPU Asus WS mobos may support all 4 690s as long as both CPUs are installed,
and if there are adequate PCI resources available on the motherboard.

What motherboard is Tim running on this host?


I am using the Asus P6T7 WS SuperComputer.

http://www.asus.com/Motherboards/Intel_Socket_1366/P6T7_WS_SuperComputer/
____________

zoom314Project donor
Joined: 30 Nov 03
Posts: 46304
Credit: 36,691,420
RAC: 5,240
Message 1311295 - Posted: 29 Nov 2012, 8:02:42 UTC

OK, on my 2nd motherboard, a used Asus Rampage III Extreme: the R3E has 4 PCIe x16-length slots, and when all 4 are populated each slot runs at x8; of course it has an i7 940 CPU installed. It's good to know that 4x590 cards will work; 4x690 cards would have been nice, but yeah, they're not inexpensive in the least. I run Windows 7 Pro x64, BOINC 6.10.58 x64, x41g, and BoincTasks 1.43 as the front end to BOINC 6.10.58. I was going to use a Rosewill 1300 W PSU with 108 A for the first 3 GTX 590 cards and an FSP 450 W Booster X5 video-card PSU for the 4th GTX 590 card; all will be water cooled.
____________
My Facebook, War Commander, 2015

juan BFBProject donor
Volunteer tester
Joined: 16 Mar 07
Posts: 5299
Credit: 293,880,641
RAC: 466,064
Brazil
Message 1311307 - Posted: 29 Nov 2012, 9:31:57 UTC - in response to Message 1311256.
Last modified: 29 Nov 2012, 10:20:21 UTC

The issue is not limited to Windows: it is a CPU/MB capacity issue.
The dual-CPU Asus WS mobos may support all 4 690s as long as both CPUs are installed,
and if there are adequate PCI resources available on the motherboard.

What motherboard is Tim running on this host?


I am using the Asus P6T7 WS SuperComputer.

http://www.asus.com/Motherboards/Intel_Socket_1366/P6T7_WS_SuperComputer/


@Tim

I'm interested and curious too: are you actually running 4x690 on that board? Did you have the same problem when you tried to get the 4 boards working in the past? Or does that kind of dual-CPU board actually fix the resources problem as well?

@Bobier

For us "mere mortals", the Asus Rampage III Extreme is one of the best MBs you could use for a cruncher; it is exactly the one I tried to put 4x690 on in the past, so an excellent choice. Maybe things have changed and it is now possible to put 4x690 on it; let's see Tim's answer.

On that MB you should easily manage at least 3x690 + 1x590, and a dual-PSU solution works fine, but with a lot of heat. You need to take some special care with the mechanical fit of the #2 and #3 GPUs if you put 4 to work; there is very little space available, but since you use water cooling that will not be a problem.

I strongly suggest not putting 3x590 (or even 3x690) on one PSU, even a platinum 1350 W PSU: just the 3x590 will draw nearly 1100 W (900 W for 3x690) from the PSU, too close to the edge, and you could expect a total component failure after a few months. 2 GPUs per PSU is a better and more reliable solution for 24/7 usage.

Some people may not agree, but for long-lasting, stable, turn-on-and-forget 24/7 usage, you should only load a PSU to 60-70% of its total capacity (60% is better than 70%). On a heavy cruncher, one of the weakest points is actually the PSU itself, because the performance of its internal components decays over time. We can't do anything to avoid that; even with high-grade solid capacitors, the charging/discharging cycle takes its toll.

PD: If you can, go for the latest beta x41zb builds; they work perfectly on the 690, with quite a few Kepler-targeted options squeezing a lot more juice from the GPU.
____________


Copyright © 2014 University of California