Quick question about the GPU count

Message boards : Number crunching : Quick question about the GPU count

Profile SongBird
Volunteer tester

Joined: 23 Oct 01
Posts: 104
Credit: 164,826,157
RAC: 297
Bulgaria
Message 1459970 - Posted: 2 Jan 2014, 13:14:26 UTC

This host has "[8] NVIDIA GeForce GTX 690 (2047MB)".

Are those really eight distinct GTX 690 video cards, or is there some logical multiplication going on? I just can't believe someone could put 8 of those in a single computer :/
ID: 1459970
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1459972 - Posted: 2 Jan 2014, 13:21:04 UTC - in response to Message 1459970.  
Last modified: 2 Jan 2014, 13:24:16 UTC

4 x 690 cards, 8 GPUs

[Edit:] Well, looking closer, that particular machine actually has 2 x 590s and 2 x 690s, but the count of 8 GPUs still stands.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1459972
Profile SongBird
Volunteer tester

Joined: 23 Oct 01
Posts: 104
Credit: 164,826,157
RAC: 297
Bulgaria
Message 1459974 - Posted: 2 Jan 2014, 13:43:45 UTC - in response to Message 1459972.  
Last modified: 2 Jan 2014, 13:44:48 UTC

So these are 8 TITANs?! What kind of a motherboard would be used... What kind of power supply... My head is spinning!
ID: 1459974
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1459975 - Posted: 2 Jan 2014, 13:51:17 UTC - in response to Message 1459974.  
Last modified: 2 Jan 2014, 14:08:00 UTC

So these are 8 TITANs?! What kind of a motherboard would be used... What kind of power supply... My head is spinning!


One Titan, 1 680 and 3 690s (8 GPUs total). In this kind of situation you're looking at a server-class motherboard, where the hardware cost of adding GPUs is spread a little across more supporting hardware (quad-channel memory, for example). Running 8 GPUs effectively in one system is tough to do, but it's probably a sign of what's to come as hardware, firmware, OS and software all improve.

[Edit:] usually two PSUs.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1459975
Profile Vipin Palazhi
Joined: 29 Feb 08
Posts: 286
Credit: 167,386,578
RAC: 0
India
Message 1459979 - Posted: 2 Jan 2014, 14:08:06 UTC - in response to Message 1459972.  

4 x 690 cards, 8 GPUs

[Edit:] Well, looking closer, that particular machine actually has 2 x 590s and 2 x 690s, but the count of 8 GPUs still stands.


Pardon my ignorance, but how do you "look closer" to find the different GPUs installed?
ID: 1459979
Profile SongBird
Volunteer tester

Joined: 23 Oct 01
Posts: 104
Credit: 164,826,157
RAC: 297
Bulgaria
Message 1459981 - Posted: 2 Jan 2014, 14:28:14 UTC - in response to Message 1459979.  
Last modified: 2 Jan 2014, 14:29:08 UTC

4 x 690 cards, 8 GPUs

[Edit:] Well, looking closer, that particular machine actually has 2 x 590s and 2 x 690s, but the count of 8 GPUs still stands.


Pardon my ignorance, but how do you "look closer" to find the different GPUs installed?

The same question popped up in my head. I assumed this was public information, so it was just a matter of clicking here and there until I got to the Stderr output of a random task, which has the CUDA devices enumerated. I would guess this is where he got the info too...

It's cool learning new stuff :)
ID: 1459981
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34258
Credit: 79,922,639
RAC: 80
Germany
Message 1459982 - Posted: 2 Jan 2014, 14:42:02 UTC - in response to Message 1459979.  

4 x 690 cards, 8 GPUs

[Edit:] Well, looking closer, that particular machine actually has 2 x 590s and 2 x 690s, but the count of 8 GPUs still stands.


Pardon me for the ignorance but how do you "look closer" to find the different GPUs installed?


Go to the host you want to look at.
Open one finished GPU task.

Stderr output

<core_client_version>7.2.33</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 8 CUDA device(s):
  Device 1: GeForce GTX TITAN, 4095 MiB, regsPerBlock 65536
     computeCap 3.5, multiProcs 14 
     pciBusID = 1, pciSlotID = 0
  Device 2: GeForce GTX 680, 4095 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 15, pciSlotID = 0
  Device 3: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 4, pciSlotID = 0
  Device 4: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 8, pciSlotID = 0
  Device 5: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 13, pciSlotID = 0
  Device 6: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 5, pciSlotID = 0
  Device 7: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 9, pciSlotID = 0
  Device 8: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 12, pciSlotID = 0
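For anyone who wants to automate Mike's recipe, here's a rough sketch in Python (my own illustration, not part of any SETI@home tool; the dual-GPU model list is an assumption based on this thread) that parses a Stderr device listing like the one above and infers the physical card count:

```python
import re
from collections import Counter

# Abridged copy of the Stderr device list quoted above.
STDERR = """\
setiathome_CUDA: Found 8 CUDA device(s):
  Device 1: GeForce GTX TITAN, 4095 MiB, regsPerBlock 65536
  Device 2: GeForce GTX 680, 4095 MiB, regsPerBlock 65536
  Device 3: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
  Device 4: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
  Device 5: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
  Device 6: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
  Device 7: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
  Device 8: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
"""

# Dual-GPU boards expose two CUDA devices per physical card.
DUAL_GPU_MODELS = {"GeForce GTX 690", "GeForce GTX 590"}

def count_cards(stderr_txt):
    """Count CUDA devices per model, then fold dual-GPU models into cards."""
    devices = Counter(m.group(1)
                      for m in re.finditer(r"Device \d+: ([^,]+),", stderr_txt))
    cards = {model: n // 2 if model in DUAL_GPU_MODELS else n
             for model, n in devices.items()}
    return devices, cards

devices, cards = count_cards(STDERR)
print(sum(devices.values()), "CUDA devices /", sum(cards.values()), "physical cards")
```

Run against the listing above it reports 8 devices on 5 cards (1 Titan + 1 680 + 3 dual-GPU 690s), matching Jason's tally.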



With each crime and every kindness we birth our future.
ID: 1459982
Profile Bernie Vine
Volunteer moderator
Volunteer tester
Joined: 26 May 99
Posts: 9954
Credit: 103,452,613
RAC: 328
United Kingdom
Message 1460008 - Posted: 2 Jan 2014, 15:53:41 UTC - in response to Message 1459974.  
Last modified: 2 Jan 2014, 15:56:45 UTC

So these are 8 TITANs?! What kind of a motherboard would be used... What kind of power supply... My head is spinning!

From what I understand these are dual-GPU cards, so in fact there are only 4 physical cards, but they show up as 4 x 2 = 8 GPUs.

So something like this

https://www.asus.com/News/8F6yRQS9yi9RBCJl
ID: 1460008
Batter Up
Joined: 5 May 99
Posts: 1946
Credit: 24,860,347
RAC: 0
United States
Message 1460014 - Posted: 2 Jan 2014, 16:27:07 UTC - in response to Message 1459975.  
Last modified: 2 Jan 2014, 16:35:36 UTC

One Titan, 1 680 and 3 690s (total 8 GPUs).
[Edit:] usually two PSUs.

There is a dual-CPU Socket 2011 board that has seven physical PCIe slots, but it uses Xeon chips, not i7, and the slots can only handle single-slot-wide cards. I assume the eight Titans are part of the hustle stats most of the top hosts use.

The top consumer setup, at this point in time, would be an i7-4960X, four GTX 690s, and an ASUS Rampage IV Extreme MB with four PCIe slots, each of which can handle a double-wide GTX 690. This can all be run on one 1300-watt power supply, about the most that can be run off a single 120-volt outlet.
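That single-outlet figure can be sanity-checked with back-of-envelope arithmetic; a sketch assuming a typical US 15 A branch circuit, the usual 80% continuous-load derating, and a plausible PSU efficiency (all assumed values, not measurements):

```python
# What a single 120 V outlet can continuously feed a PSU (assumed values).
outlet_volts = 120
breaker_amps = 15            # common US branch circuit
continuous_derating = 0.80   # rule-of-thumb limit for continuous loads

wall_watts = outlet_volts * breaker_amps * continuous_derating  # 1440 W at the wall
psu_efficiency = 0.87        # plausible 80 PLUS efficiency under load
dc_watts = wall_watts * psu_efficiency

print(f"{wall_watts:.0f} W at the wall, roughly {dc_watts:.0f} W of usable DC output")
```

So a ~1300 W PSU really is about the practical ceiling for one ordinary outlet.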

This machine is that type of setup, runs 24/7 and does not play games with science.
http://setiathome.berkeley.edu/show_host_detail.php?hostid=7054503
ID: 1460014
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1460073 - Posted: 2 Jan 2014, 20:05:37 UTC
Last modified: 2 Jan 2014, 20:13:56 UTC

In theory 4 x 690 DOES NOT WORK on Windows XP or 7; any other combination of 4 GPUs works (590 + 3 x 690, for example) if you have a motherboard with 4 PCIe slots available, the PSU (normally 2) and a way to handle the heat. Some time ago we tried to run it, and even with the help of NVIDIA we couldn't. The 4 x 690 GPU limitation is related to Windows resources themselves, not to the GPU.

But there is a host that apparently works with 4 x 690 on SETI. How? I really wish to know how he managed to do that. It runs on Windows 8, so that could be the reason; when we tested, Windows 8 was not available yet, but I can't say for sure. This is the host:
http://setiathome.berkeley.edu/show_host_detail.php?hostid=7054503 Maybe someone could explain that to us.

Anyway, I stopped using such big crunchers. I used a 590 + 3 x 690 with 1350 + 1000 W PSUs and an EVGA X79 Classified MB for some time, but with the 100-WU limit it's hard to keep it working 24/7, so I split the GPUs across different hosts.
ID: 1460073
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1460079 - Posted: 2 Jan 2014, 20:19:23 UTC - in response to Message 1460073.  
Last modified: 2 Jan 2014, 20:23:58 UTC

In theory 4 x 690 DOES NOT WORK on Windows XP or 7; any other combination works (590 + 3 x 690) if you have a motherboard with 4 PCIe slots available, the PSU (normally 2) and a way to handle the heat. Some time ago we tried to run it, and even with the help of NVIDIA we couldn't. The 4 x 690 GPU limitation is related to Windows resources themselves, not to the GPU.

But there is a host that apparently works with 4 x 690 on SETI. How? I really wish to know how he managed to do that. It runs on Windows 8, so that could be the reason; when we tested, Windows 8 was not available yet, but I can't say for sure. This is the host:
http://setiathome.berkeley.edu/show_host_detail.php?hostid=7054503

Anyway, I stopped using such big crunchers. I used a 590 + 3 x 690 with 1350 + 1000 W PSUs for some time, but with the 100-WU limit it's hard to keep it working 24/7, so I split the GPUs across different hosts.


With MS changing the Windows display driver infrastructure underneath for 8 & 8.1, and patching Vista and 7 for platform compatibility via Windows Update, all bets are off ;).

See the WDDM wiki.
The most relevant of the many gradual changes involve sharing between applications on the same host, memory management, and reliability. These are the same factors that increase latencies (slowing down 'old-school' apps like ours) while making the whole stack 'better'. Later XP drivers are kind of a mix/hybrid of the old XPDM and newer WDDM [...so they have increased latencies over old XPDM, but are far more reliable].

I'm currently exploring options to 'hide' the increased latencies the more sophisticated newer models introduce, intended to allow stuffing more, smaller GPUs into a system if so desired, and I've raised X-branch internal limits to 16 devices in anticipation (though current server-side limits will only believe 8).
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1460079
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1460084 - Posted: 2 Jan 2014, 20:40:09 UTC
Last modified: 2 Jan 2014, 20:41:56 UTC

I have the 690s and the PSUs, but I don't have the EVGA MB anymore to test whether Windows 8 solves the problem; maybe the user of the 4 x 690 could explain his configuration to us.

On the other hand, the 100-GPU-WU limit is still in place, so "ultra-big crunchers", in my opinion, are not the best option on SETI for now, or at least until the limit is raised to at least 100 WUs per GPU.

One thing is important: if you go to the NVIDIA site you will see the 690 is not supported on XP, but I actually have a host with 2 x 690 + 670 (a total of 5 logical GPUs) that has been working fine for months with a 1350 W PSU. Even though this host runs empty a lot of the time on SETI.
ID: 1460084
Batter Up
Joined: 5 May 99
Posts: 1946
Credit: 24,860,347
RAC: 0
United States
Message 1460133 - Posted: 2 Jan 2014, 22:18:41 UTC - in response to Message 1460084.  


On the other hand, the 100-GPU-WU limit is still in place, so "ultra-big crunchers", in my opinion, are not the best option on SETI for now, or at least until the limit is raised to at least 100 WUs per GPU.

I have seen most of the top crunchers with 200 or more AP WUs in their cache. I crunch what I am given, and under normal server conditions I never run out of WUs.

As for how I got eight GPUs working: I don't know, they just worked, almost plug-and-play; just some manual driver installation. I had the same setup working under Win 7; I just went with Win 8.1 this week. Of course all the GPUs work independently, no SLI.
ID: 1460133
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1460258 - Posted: 3 Jan 2014, 10:28:01 UTC - in response to Message 1460133.  
Last modified: 3 Jan 2014, 10:45:02 UTC


On the other hand, the 100-GPU-WU limit is still in place, so "ultra-big crunchers", in my opinion, are not the best option on SETI for now, or at least until the limit is raised to at least 100 WUs per GPU.

I have seen most of the top crunchers with 200 or more AP WUs in their cache. I crunch what I am given, and under normal server conditions I never run out of WUs.

Actually the limit of 100 GPU WUs per host is still in place, so if anyone has more than 100 GPU WUs it means he is rescheduling WUs, something that really messes with the credit granted per WU and is not recommended. 100 APs hold for a while, but 100 MBs only hold for a few hours: running 2 WUs per GPU, with about 12 minutes to crunch each WU (for a medium-AR WU), your host will run dry in 75 minutes or less if anything happens with the servers, something that was very common before the move to the colo. That's why I stopped using my 590 + 3 x 690 host and keep asking for the limit to be raised.
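The 75-minute figure follows directly from the numbers in the post; a quick check (the per-WU time and 2-WUs-per-GPU setting are the rough values quoted above):

```python
# How long a 100-WU cache lasts on an 8-GPU host (figures from the post).
cache_wus = 100
gpus = 8
wus_per_gpu = 2        # running 2 WUs concurrently per GPU
minutes_per_wu = 12.0  # medium-AR MB workunit, roughly

concurrent = gpus * wus_per_gpu                       # 16 WUs in flight
drain_minutes = cache_wus / concurrent * minutes_per_wu
print(f"cache runs dry in about {drain_minutes:.0f} minutes")
```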

As for how I got eight GPUs working: I don't know, they just worked, almost plug-and-play; just some manual driver installation. I had the same setup working under Win 7; I just went with Win 8.1 this week. Of course all the GPUs work independently, no SLI.

That's interesting; in theory 4 x 690 doesn't work, but something must have changed in the last 2 years to make them work. Just curiosity: what is your MB? PSUs, GPU brand, etc.?
ID: 1460258
Profile petri33
Volunteer tester

Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1460271 - Posted: 3 Jan 2014, 12:26:07 UTC

Juan is correct.

I have 200 AP or MB depending on what the servers spit out.
100 for GPU and 100 for CPU.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1460271
Thomas
Volunteer tester

Joined: 9 Dec 11
Posts: 1499
Credit: 1,345,576
RAC: 0
France
Message 1460284 - Posted: 3 Jan 2014, 13:18:33 UTC - in response to Message 1459982.  

Go to the host you want to look at.
Open one finished GPU task.

Stderr output

<core_client_version>7.2.33</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 8 CUDA device(s):
  Device 1: GeForce GTX TITAN, 4095 MiB, regsPerBlock 65536
     computeCap 3.5, multiProcs 14 
     pciBusID = 1, pciSlotID = 0
  Device 2: GeForce GTX 680, 4095 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 15, pciSlotID = 0
  Device 3: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 4, pciSlotID = 0
  Device 4: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 8, pciSlotID = 0
  Device 5: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 13, pciSlotID = 0
  Device 6: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 5, pciSlotID = 0
  Device 7: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 9, pciSlotID = 0
  Device 8: GeForce GTX 690, 2048 MiB, regsPerBlock 65536
     computeCap 3.0, multiProcs 8 
     pciBusID = 12, pciSlotID = 0

Thanks Mike ! :)
This tip will help me...
ID: 1460284
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1460300 - Posted: 3 Jan 2014, 14:22:34 UTC - in response to Message 1460271.  
Last modified: 3 Jan 2014, 14:25:53 UTC

Juan is correct.

I have 200 AP or MB depending on what the servers spit out.
100 for GPU and 100 for CPU.

That's exactly the point: no matter how many GPUs your host has, you can only build a 100-WU GPU cache. That's why I always ask to raise the cache size to 100 per GPU, so that on multi-GPU hosts the cache would hold for at least a quarter of a day if the WUs were MBs, and we wouldn't run dry even in normal outages.
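The per-GPU proposal can be expressed as a small sketch (the function and its defaults are mine, reusing the ~12 min/WU and 2-WUs-per-GPU figures from earlier in the thread):

```python
def cache_hours(limit_wus, gpus, wus_per_gpu=2, minutes_per_wu=12.0):
    """Hours a full cache lasts with every GPU crunching flat out."""
    concurrent = gpus * wus_per_gpu
    return limit_wus / concurrent * minutes_per_wu / 60.0

# Current per-host limit vs a 100-WUs-per-GPU limit, on an 8-GPU host:
print(f"per-host limit of 100: {cache_hours(100, gpus=8):.2f} h")
print(f"per-GPU limit of 100:  {cache_hours(100 * 8, gpus=8):.2f} h")
```

An 8-GPU host would go from roughly an hour and a quarter of buffer to about ten hours, comfortably more than the quarter-day mentioned above.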
ID: 1460300
Profile petri33
Volunteer tester

Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1460319 - Posted: 3 Jan 2014, 16:11:07 UTC - in response to Message 1460300.  

Juan is correct.

I have 200 AP or MB depending on what the servers spit out.
100 for GPU and 100 for CPU.

That's exactly the point: no matter how many GPUs your host has, you can only build a 100-WU GPU cache. That's why I always ask to raise the cache size to 100 per GPU, so that on multi-GPU hosts the cache would hold for at least a quarter of a day if the WUs were MBs, and we wouldn't run dry even in normal outages.


I second that.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1460319
Batter Up
Joined: 5 May 99
Posts: 1946
Credit: 24,860,347
RAC: 0
United States
Message 1460377 - Posted: 3 Jan 2014, 18:55:17 UTC - in response to Message 1460258.  

What is your MB? PSU´s, GPU brand etc?

The GPUs are four GTX 690s (two Kepler chips per card), an ASUS Rampage IV Extreme MB and a Thermaltake 1350-watt power supply.

The PC has an SSD, no optical drive, and four sticks (16 GB) of DDR3 RAM. I get an occasional power warning but no BSOD.

Everything is consumer grade running at factory settings.
ID: 1460377
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1460390 - Posted: 3 Jan 2014, 20:49:59 UTC

Thanks for the info. Actually I have a Rampage and the GPUs too, so I'll think about trying again if they raise the 100-WU limit in the future. Anyway, it's interesting to see a Thermaltake 1350 W PSU driving 4 x 300 W GPUs plus the rest of the host; surely it's working on the edge. When I built my 4-GPU host I used 2 PSUs: one 1350 (Thermaltake too) to drive the MB + 2 GPUs, plus one 1000 (Cooler Master) for the other 2 GPUs.
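A rough load estimate backs up the "on the edge" remark (the TDP and rest-of-system figures are assumptions, not measurements of that host):

```python
# Estimated DC load for four GTX 690s on one 1350 W PSU.
gpu_tdp_w = 300       # GTX 690 rated board power
n_gpus = 4
rest_of_host_w = 250  # CPU, motherboard, drives, fans (a guess)
psu_rating_w = 1350

load_w = gpu_tdp_w * n_gpus + rest_of_host_w
headroom_w = psu_rating_w - load_w
print(f"estimated load {load_w} W, headroom {headroom_w} W")
```

At rated TDP the estimate actually exceeds the PSU rating, though real crunching loads usually sit below TDP, which is presumably why the single-PSU box holds together.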
ID: 1460390