Message boards :
Number crunching :
Quick question about the GPU count
SongBird Joined: 23 Oct 01 Posts: 104 Credit: 164,826,157 RAC: 297

This host has "[8] NVIDIA GeForce GTX 690 (2047MB)". Are those really eight distinct GTX 690 video cards? Or is there some logical multiplication going on? I just can't believe someone could put 8 of those in a single computer :/
jason_gee Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0

4 x 690 cards, 8 GPUs. [Edit:] Well, looking closer, that particular machine has 2 x 590s and 2 x 690s, but the count of 8 GPUs still stands.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
SongBird Joined: 23 Oct 01 Posts: 104 Credit: 164,826,157 RAC: 297

So these are 8 TITANs?! What kind of a motherboard would be used... What kind of power supply... My head is spinning!
jason_gee Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0

> So these are 8 TITANs?! What kind of a motherboard would be used... What kind of power supply... My head is spinning!

One Titan, 1 680 and 3 690s (total 8 GPUs). In this kind of situation you're looking at a server-class motherboard, where the hardware cost of adding GPUs is spread across more hardware (quad-channel memory, for example, probably). Running 8 GPUs effectively in one system is tough to do, but it's probably a sign of what's to come as hardware, firmware, OS and software all improve. [Edit:] usually two PSUs.
Vipin Palazhi Joined: 29 Feb 08 Posts: 286 Credit: 167,386,578 RAC: 0

> 4 x 690 cards, 8 GPUs

Pardon my ignorance, but how do you "look closer" to find the different GPUs installed?
SongBird Joined: 23 Oct 01 Posts: 104 Credit: 164,826,157 RAC: 297

> 4 x 690 cards, 8 GPUs

The same question popped up in my head. I assumed this to be public information, so it was just a matter of clicking here and there until I got to the Stderr output of a random task, where the CUDA devices are enumerated. I would guess this is where he got the info too... It's cool learning new stuff :)
Mike Joined: 17 Feb 01 Posts: 34258 Credit: 79,922,639 RAC: 80

> 4 x 690 cards, 8 GPUs

Go to the host you want to look at. Open one finished GPU task. Stderr output:

<core_client_version>7.2.33</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 8 CUDA device(s):
  Device 1: GeForce GTX TITAN, 4095 MiB, regsPerBlock 65536 computeCap 3.5, multiProcs 14
    pciBusID = 1, pciSlotID = 0
  Device 2: GeForce GTX 680, 4095 MiB, regsPerBlock 65536 computeCap 3.0, multiProcs 8
    pciBusID = 15, pciSlotID = 0
  Device 3: GeForce GTX 690, 2048 MiB, regsPerBlock 65536 computeCap 3.0, multiProcs 8
    pciBusID = 4, pciSlotID = 0
  Device 4: GeForce GTX 690, 2048 MiB, regsPerBlock 65536 computeCap 3.0, multiProcs 8
    pciBusID = 8, pciSlotID = 0
  Device 5: GeForce GTX 690, 2048 MiB, regsPerBlock 65536 computeCap 3.0, multiProcs 8
    pciBusID = 13, pciSlotID = 0
  Device 6: GeForce GTX 690, 2048 MiB, regsPerBlock 65536 computeCap 3.0, multiProcs 8
    pciBusID = 5, pciSlotID = 0
  Device 7: GeForce GTX 690, 2048 MiB, regsPerBlock 65536 computeCap 3.0, multiProcs 8
    pciBusID = 9, pciSlotID = 0
  Device 8: GeForce GTX 690, 2048 MiB, regsPerBlock 65536 computeCap 3.0, multiProcs 8
    pciBusID = 12, pciSlotID = 0

With each crime and every kindness we birth our future.
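The mixed tally ("One Titan, 1 680 and 3 690s") can be read straight off a listing like the one above. The sketch below just parses the model names from those device lines and collapses dual-GPU cards into physical cards; the only outside fact it assumes is that the GTX 690 carries two GPUs per card.

```python
import re
from collections import Counter

# Device lines from the Stderr output above (abbreviated to the model names)
stderr = """\
Device 1: GeForce GTX TITAN, 4095 MiB
Device 2: GeForce GTX 680, 4095 MiB
Device 3: GeForce GTX 690, 2048 MiB
Device 4: GeForce GTX 690, 2048 MiB
Device 5: GeForce GTX 690, 2048 MiB
Device 6: GeForce GTX 690, 2048 MiB
Device 7: GeForce GTX 690, 2048 MiB
Device 8: GeForce GTX 690, 2048 MiB
"""

# Count logical CUDA devices per model
models = Counter(
    re.match(r"Device \d+: (.+?),", line).group(1)
    for line in stderr.splitlines()
)

# The GTX 690 is a dual-GPU card, so two CUDA devices share one physical card
DUAL_GPU_MODELS = {"GeForce GTX 690"}
cards = {
    model: n // 2 if model in DUAL_GPU_MODELS else n
    for model, n in models.items()
}

print(models)  # 1 TITAN, 1 680, and 6 x 690 devices...
print(cards)   # ...but only 3 physical 690 cards: 5 cards, 8 GPUs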
Bernie Vine Joined: 26 May 99 Posts: 9954 Credit: 103,452,613 RAC: 328

> So these are 8 TITANs?! What kind of a motherboard would be used... What kind of power supply... My head is spinning!

From what I understand these are DUAL-GPU cards, so in fact there are only 4 actual cards, but they show up as 4 x 2 = 8. So something like this: https://www.asus.com/News/8F6yRQS9yi9RBCJl
Batter Up Joined: 5 May 99 Posts: 1946 Credit: 24,860,347 RAC: 0

> One Titan, 1 680 and 3 690s (total 8 GPUs).

There is a dual-CPU Socket 2011 board that has seven physical PCIe slots, but it uses Xeon chips, not i7s, and its slots can only handle single-slot-wide cards. I assume the eight Titans are part of the hustle stats most of the top hosts use. The top consumer setup, at this point in time, would be an i7-4960X, four GTX 690s, and an ASUS Extreme IV MB with four PCIe slots, each of which can handle a double-wide GTX 690. This can all be run on one 1300 watt power supply, the most that can be run off a single 120 volt outlet. This machine is that type of setup, runs 24/7 and does not play games with science. http://setiathome.berkeley.edu/show_host_detail.php?hostid=7054503
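The single-outlet claim above roughly checks out with a back-of-the-envelope calculation. The figures below are assumptions for illustration, not from the post: a standard US 15 A household circuit, the common 80% rule of thumb for continuous loads, and ~90% PSU efficiency.

```python
# Rough check of "1300 W is the most that can be run off a single 120 V outlet"
volts = 120.0            # standard US outlet
breaker_amps = 15.0      # common household circuit (assumption)
continuous_factor = 0.8  # rule of thumb: run continuous loads at 80% of the breaker
psu_efficiency = 0.90    # typical modern PSU (assumption)

wall_watts_available = volts * breaker_amps * continuous_factor   # 1440 W at the wall
dc_watts_deliverable = wall_watts_available * psu_efficiency      # ~1296 W to the parts

print(f"{wall_watts_available:.0f} W at the wall -> ~{dc_watts_deliverable:.0f} W DC")
```

So a ~1300 W PSU sits right at the ceiling of what one 120 V outlet can continuously supply, which is consistent with the post.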
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799

In theory 4x690 DOES NOT WORK on Windows XP or 7; any other combination of 4 GPUs works (590 + 3x690, for example) if you have a motherboard with 4 PCIe slots available, the PSUs (normally 2), and a way to handle the heat. Some time ago we tried to run one, and even with help from NVIDIA we couldn't make it work. The 4x690 limitation is related to Windows resources themselves, not to the GPUs. But there is a host that apparently works with 4x690 on SETI. How? I really wish to know how he managed to do that. It runs on Windows 8, so that could be the reason; Windows 8 was not available yet when we tested, but I can't say for sure. This is the host: http://setiathome.berkeley.edu/show_host_detail.php?hostid=7054503 Maybe someone could explain it to us. Anyway, I stopped using such big crunchers. I used a 590 + 3x690 with 1350W + 1000W PSUs and an EVGA X79 Classified MB for some time, but with the 100 WU limit it's hard to keep it working 24/7, so I split the GPUs across different hosts.
jason_gee Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0

> In theory 4x690 DOES NOT WORK on windows XP or 7 [...] The 4x690 GPU limitation is related to the windows resources itself not to the GPU.

With MS changing the Windows display driver infrastructure underneath for 8 & 8.1, and patching Vista and 7 for platform compatibility via Windows Update, all bets are off ;) See the WDDM wiki. The most relevant among the many gradual changes involve sharing between applications on the same host, memory management, and reliability. These are the same factors that increase latencies (slowing our 'old-school' apps down) while making the whole lot 'better'. Later XP drivers are kind of a mix/hybrid of the old XPDM and newer WDDM [...so they have increased latencies over old XPDM, but are far more reliable]. I'm currently exploring options to 'hide' the increased latencies the more sophisticated newer models introduce, intended to allow stuffing more smaller GPUs into a system if so desired, and have raised the X-branch's internal limits to 16 devices in anticipation (though current server-side limits will only believe 8).
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799

I have the 690s and the PSUs, but I don't have the EVGA MB anymore to test whether Windows 8 solves the problem; maybe the user of the 4x690 host could explain his configuration to us. On the other hand, the 100 GPU WU limit is still in place, so "ultra-big crunchers", in my opinion, are not the best option on SETI for now, at least until the limit is raised to 100 WUs per GPU. One thing is important: if you go to the NVIDIA site you will see the 690 is not supported on XP, but I actually have a host with 2x690 + a 670 (5 logical GPUs total) that has been working fine for months with a 1350W PSU. Even so, this host runs empty a lot of the time on SETI.
Batter Up Joined: 5 May 99 Posts: 1946 Credit: 24,860,347 RAC: 0

I have seen most of the top crunchers with 200, or MORE, AP WUs in their cache. I crunch what I am given, and under normal server conditions I never run out of WUs. As for how I got eight GPUs working: I don't know, they just worked, almost plug and play too, just some manual driver installation. I had the same setup working under Win 7; I just went with Win 8.1 this week. Of course all the GPUs work independently, no SLI.
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799

Actually the limit of 100 GPU WUs per host is still valid, so if anyone has more than 100 GPU WUs it means he is rescheduling WUs, something that actually messes a lot with the credit granted per WU and is not recommended. 100 AP WUs hold for a while, but 100 MB WUs only hold for a few hours: running 2 WUs per GPU, with 12 minutes to crunch each WU (for a medium-AR WU), your host will run dry in 75 minutes or less if anything happens to the servers, something very common before the move to the colo. That's why I stopped using my 590 + 3x690 host and keep asking for the limit to be raised.

> As for how I got eight GPUs working, I don't know, they just worked, almost plug and play too, just some manual driver installation. I had the same setup working under Win 7, I just went with Win 8.1 this week. Of course all GPUs work independently, no SLI.

That's interesting; in theory 4x690 doesn't work, but something must have changed in the last 2 years to make them work. Just out of curiosity: what is your MB? PSUs, GPU brand, etc.?
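Juan's 75-minute figure follows directly from the numbers in his post (2 WUs per GPU, 12 minutes per medium-AR MB WU, 8 logical GPUs on a 590 + 3x690 host):

```python
# How long a 100-WU GPU cache lasts on a 590 + 3x690 host (8 logical GPUs)
cache_limit = 100     # per-host GPU WU limit at the time
gpus = 8              # 590 (2 GPUs) + 3 x 690 (6 GPUs)
tasks_per_gpu = 2     # running 2 WUs per GPU
minutes_per_wu = 12   # medium-AR MB task (figure from the post)

concurrent = gpus * tasks_per_gpu                          # 16 WUs in flight
drain_minutes = cache_limit / concurrent * minutes_per_wu  # time until the cache is empty
print(drain_minutes)  # 75.0 -> dry in 75 minutes if the servers go down
```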
petri33 Joined: 6 Jun 02 Posts: 1668 Credit: 623,086,772 RAC: 156

Juan is correct. I have 200 AP or MB, depending on what the servers spit out: 100 for GPU and 100 for CPU.

To overcome Heisenbergs: "You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
Thomas Joined: 9 Dec 11 Posts: 1499 Credit: 1,345,576 RAC: 0

> Go to the host you want to look at.

Thanks Mike! :) This tip will help me...
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799

> Juan is correct.

That's exactly the point: no matter how many GPUs your host has, you can only build a 100 WU GPU cache. That's why I always ask for the cache size to be raised to 100 per GPU, so on multi-GPU hosts the cache will hold for at least 1/4 day if the WUs are MBs, and we won't run dry even in normal outages.
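Under the proposed 100-WUs-per-GPU limit, the GPU count cancels out of the drain-time calculation, which is why the cache would outlast a normal outage on any host. Using the same per-WU runtime from earlier in the thread:

```python
# Same host, but with the proposed limit of 100 WUs *per GPU* instead of per host
per_gpu_limit = 100   # proposed cache size per GPU
tasks_per_gpu = 2     # running 2 WUs per GPU
minutes_per_wu = 12   # medium-AR MB task (figure from the thread)

# Each GPU drains its own 100-WU share, so the number of GPUs drops out
drain_minutes = per_gpu_limit / tasks_per_gpu * minutes_per_wu
drain_days = drain_minutes / (60 * 24)
print(drain_minutes, drain_days)  # 600 minutes, about 0.42 days: over the 1/4 day target
```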
petri33 Joined: 6 Jun 02 Posts: 1668 Credit: 623,086,772 RAC: 156

> Juan is correct.

I second that.
Batter Up Joined: 5 May 99 Posts: 1946 Credit: 24,860,347 RAC: 0

> What is your MB? PSU´s, GPU brand etc?

The GPUs are four GTX 690s (two Kepler chips per card), on an ASUS Rampage IV Extreme MB with a Thermaltake 1350 watt power supply. The PC has an SSD, no CD drive, and four sticks (16 GB) of DDR3 RAM. I get an occasional power warning but no BSOD. Everything is consumer grade running at factory settings.
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799

Thanks for the info. Actually I have a Rampage and the GPUs too, so I will think about trying again if they lift the 100 WU limit in the future. Anyway, it's interesting to see a Thermaltake 1350W PSU driving 4 x 300W GPUs plus the rest of the host; it's surely working over the edge. When I built my 4-GPU host I used 2 PSUs: one 1350W (Thermaltake too) to drive the MB + 2 GPUs, plus one 1000W (Cooler Master) for the other 2 GPUs.
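Juan's "over the edge" remark is easy to see on paper. The sketch below assumes worst-case TDP draw for each 690 and a guessed 200 W for the rest of the system (CPU, board, drives, fans); in practice crunching rarely holds every card at full TDP, which is presumably why the single-PSU build survives with only occasional power warnings.

```python
# Rough worst-case load estimate for 4 x GTX 690 on a single 1350 W PSU
gpu_tdp = 300          # GTX 690 board TDP in watts
gpus = 4
rest_of_system = 200   # CPU, board, drives, fans: a guess, not a measured figure

load = gpu_tdp * gpus + rest_of_system
psu_rating = 1350
print(load, psu_rating, load > psu_rating)  # worst case 1400 W vs a 1350 W PSU
```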
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.