LotzaCores and a GTX 1080 FTW

Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1658
Credit: 371,234,839
RAC: 322,514
United States
Message 1793526 - Posted: 4 Jun 2016, 23:08:21 UTC - in response to Message 1793520.  

Grant, I thought the exact same thing, but I spoke with a gent at Koolance, and he said that the temp difference between the 1st and 2nd card in the loop is like 2-3C tops. It flows better as well. I grilled him on this, because the reservoir has 2 pumps that can be run either in parallel or in series, and he said that in series it would flow much better and remove heat more efficiently. I know what you're thinking, I was in that place too, but as he works for the company that makes it, and has a lot of experience testing these setups, I decided to believe him. So yep, a loop. Check out this link, and see how they have 4 cards piped together? Much neater than mine, but then their cards are exactly the same, so they can get away with doing that; me, not so much. But I'm sure it will work pretty well, and if everything goes as planned, I might actually have it fired up sometime tomorrow. That depends on other things falling in line, but one can hope.

ID: 1793526
Cruncher-American Special Project $75 donor

Joined: 25 Mar 02
Posts: 1465
Credit: 278,897,507
RAC: 154,859
United States
Message 1793572 - Posted: 5 Jun 2016, 1:25:32 UTC

Grant - a Y or T halves the flow to each card, so cooling is cut. Water isn't air. In effect, the temp of the water in a well-flowing loop should be the same throughout. Not so for air! Water has a much higher heat capacity than air, remember.
ID: 1793572
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10189
Credit: 136,301,024
RAC: 87,990
Australia
Message 1793586 - Posted: 5 Jun 2016, 2:25:40 UTC - in response to Message 1793572.  

Water has a much higher heat capacity than air, remember.

Oh yes.
However if a group of heat exchangers in series have very little difference in the temperature of what is being cooled between the first and last in the series, then it indicates either poor efficiency of the heat exchanger, or little to no load.
I suspect it's a case of both for CPU & GPU coolers.


Grant - a Y or T halves the flow to each card, so cooling is cut.

I guess it depends on how the cooling is accomplished.
I'm not a plumber (or a refrigeration mechanic), but the plumbing for some cooling systems I've dealt with in the past consists of main feed & return lines, which are a larger diameter than the lines that split off to the heat exchangers & then back to the main return line.
The main feed & return lines are sized for the total flow rate required for the number of heat exchangers and their feed piping diameters.

Chiller plant = radiator.
Plot-1, 2 etc = water block.


Water isn't air. In effect, the temp of the water in a well-flowing loop should be the same throughout.

In a closed loop it should be very cool going in & very hot coming out, but this would depend on the load. A very light load means very little difference in temperatures; a very high load, a much bigger difference (for very efficient heat exchangers).

From the sounds of things they are just making use of the very high specific heat capacity of the water.
I.e. for all the heat the CPU & GPU produce, it's actually an extremely small thermal load for the water to absorb, so its temperature doesn't change much. More efficient water blocks would transfer more heat, but I expect the efficiency is severely limited by the small contact area on the CPU/GPU and the correspondingly small area in which the water can absorb the heat.
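As a rough sanity check on that (the 200 W card and 1 L/min loop flow are nominal figures of my own, not numbers from this thread): with water's specific heat capacity of about 4186 J/(kg·K), the per-card temperature rise works out to

\Delta T = \frac{Q}{\dot{m}\,c_p} = \frac{200\,\mathrm{W}}{(1/60\,\mathrm{kg/s}) \times 4186\,\mathrm{J/(kg \cdot K)}} \approx 2.9\,\mathrm{K}

which lines up with the 2-3C per-card difference Koolance quoted earlier in the thread.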
Grant
Darwin NT
ID: 1793586
Wiggo "Socialist"
Joined: 24 Jan 00
Posts: 14825
Credit: 191,943,389
RAC: 68,212
Australia
Message 1793591 - Posted: 5 Jun 2016, 3:11:28 UTC - in response to Message 1793586.  

Chiller plant = radiator.
Plot-1, 2 etc = water block.


That setup will not be able to maintain a constant flow to each GPU, resulting in some running much warmer than others. Yes, I know, it looks fine in theory, but it doesn't work in the real world unless you have a separate pump for each GPU feed.

Cheers.
ID: 1793591
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10189
Credit: 136,301,024
RAC: 87,990
Australia
Message 1793595 - Posted: 5 Jun 2016, 3:34:16 UTC - in response to Message 1793591.  

That setup will not be able to maintain a constant flow to each GPU, resulting in some running much warmer than others. Yes, I know, it looks fine in theory, but it doesn't work in the real world unless you have a separate pump for each GPU feed.

Single pump.
What that diagram doesn't show is the valves on each of the feeds so the system can be balanced, nor the change in diameter of the main feed & return piping for long runs (for an example of that, check out the size of the air ducting on very large air conditioning systems: huge at the output of the fan, getting smaller in cross-section as you get further away).

In the case of 2 heat exchangers, only Y pieces would be needed to split & rejoin the main feed & return lines; the use of T pieces would require a valve on the straight-through line.
In the case of multiple T pieces, valves are needed on each of the inputs to the heat exchangers.
Grant
Darwin NT
ID: 1793595
Cactus Bob
Joined: 19 May 99
Posts: 195
Credit: 8,245,684
RAC: 1,868
Canada
Message 1793596 - Posted: 5 Jun 2016, 3:44:09 UTC

Grant replied as I was writing this post. This system will work as long as you equalize the flow resistance to each plot. Changing the diameter of the piping, or changing the length of the piping to each card, would do the trick.

The temperature difference between the input and output is a lot smaller than most people think. Because this system has a radiator, they think of a car engine's input/output temperatures. A car engine is an extreme case of temperature difference; graphics cards are not. Usually the difference is within a few degrees.

Like batteries, they can be connected in series or parallel. As long as your radiator does the job, it should not make much difference. Of course, YMMV.

Bob
Sometimes you are the windshield, sometimes the bug.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SETI@home classic workunits 4,321
SETI@home classic CPU time 22,169 hours
ID: 1793596
Wembley
Volunteer tester
Joined: 16 Sep 09
Posts: 429
Credit: 1,844,293
RAC: 0
United States
Message 1793605 - Posted: 5 Jun 2016, 5:14:26 UTC

If you are still worried about using that radiator in your CPU/GPU cooling loop, you can always put a heat exchanger between the radiator loop and your CPU/GPU loop.

http://www.lytron.com/Heat-Exchangers/Standard/Heat-Exchangers-Liquid-to-Liquid
ID: 1793605
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1658
Credit: 371,234,839
RAC: 322,514
United States
Message 1793609 - Posted: 5 Jun 2016, 5:42:16 UTC - in response to Message 1793605.  

Naa, I think this will work out quite well for me actually; it's larger and more efficient than anything I've seen offered by any watercooling mfg, so I guess the proof will be when I get it up and running. :-)

ID: 1793609
BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3719
Credit: 9,298,377
RAC: 433
Bulgaria
Message 1793783 - Posted: 5 Jun 2016, 22:21:38 UTC - in response to Message 1793345.  
Last modified: 5 Jun 2016, 22:32:16 UTC

for whatever reason, BOINC can't seem to identify more than one video card correctly.

Not really true, especially for GPUs from one vendor.
(i.e. if all are NVIDIA they use the same NVIDIA driver)

Maybe you mean not "can't seem to identify" but "can't seem to use"?
Because if the GPUs are all NVIDIA but different - for BOINC to use all of them you will need:
<use_all_gpus>1</use_all_gpus> in cc_config.xml
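For reference, a minimal cc_config.xml with just that option (it lives in the BOINC data directory) would look like this, assuming no other options are needed:

<cc_config>
 <options>
  <use_all_gpus>1</use_all_gpus>
 </options>
</cc_config>

After saving it, restart BOINC (or use Options > Read config files) so the client picks it up.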

If in doubt - post the start of Event Log (Ctrl+Shift+E)


More info on "use_all_gpus" (by Ageless):
https://setiathome.berkeley.edu/forum_thread.php?id=78800&postid=1756194#1756194

... and later on the same thread about "Dummy Plug" (by me):
https://setiathome.berkeley.edu/forum_thread.php?id=78800&postid=1774759#1774759
 
- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1793783
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 12090
Credit: 123,836,459
RAC: 47,941
United Kingdom
Message 1793784 - Posted: 5 Jun 2016, 22:32:29 UTC

The BOINC client locally on your computer will identify each card separately and list them and their distinct characteristics in the event log at startup.

But the BOINC database on the server will summarise them as multiple copies of the 'best' card, losing the differences.
ID: 1793784
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 1793785 - Posted: 5 Jun 2016, 22:32:47 UTC - in response to Message 1793783.  

for whatever reason, BOINC can't seem to identify more than one video card correctly.

Not really true, especially for GPUs from one vendor.
(i.e. if all are NVIDIA they use the same NVIDIA driver)

I agree with the "not identify" assertion if one modifies it to "not report to certain pages". In my experience and observation, if more than one model of Nvidia card is mounted in a system, the system summary data posted on the website's computers list and computer detail pages will name just one of the models, with a number giving the correct total count of cards.

Somewhere I saw an assertion that if cards of differing CUDA capability levels are present, the one mentioned will have the highest CUDA capability level present. I don't know how it chooses among cards of the same level.

But "not identify" is a bit too broad, as in such mixed systems of Nvidia cards a review of the stderr for a task will show the actual card used, at least over at Einstein.
ID: 1793785
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 12090
Credit: 123,836,459
RAC: 47,941
United Kingdom
Message 1793789 - Posted: 5 Jun 2016, 22:55:16 UTC - in response to Message 1793785.  

Somewhere I saw an assertion that if cards of differing CUDA capability levels are present, the one mentioned will have the highest CUDA capability level present. I don't know how it chooses among cards of the same level.

All cards will be itemised in the Event Log, but 'lower' cards will be marked as (not used) unless the option to use all cards is set in cc_config.xml.

The comparison for NVidia cards depends on:

// factors (decreasing priority):
// - compute capability
// - software version
// - available memory
// - speed
//
// If "loose", ignore FLOPS and tolerate small memory diff
ID: 1793789
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1658
Credit: 371,234,839
RAC: 322,514
United States
Message 1793793 - Posted: 5 Jun 2016, 23:02:30 UTC - in response to Message 1793784.  

The BOINC client locally on your computer will identify each card separately and list them and their distinct characteristics in the event log at startup.

But the BOINC database on the server will summarise them as multiple copies of the 'best' card, losing the differences.

Yes, this. No one else looking at my info on the server summary page can see the entire roster of hardware in the system, just one vid card and one CPU. It probably isn't really important to display it all, otherwise they would have corrected that years ago.

ID: 1793793
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1658
Credit: 371,234,839
RAC: 322,514
United States
Message 1793794 - Posted: 5 Jun 2016, 23:04:24 UTC - in response to Message 1793789.  
Last modified: 5 Jun 2016, 23:07:22 UTC

That had been set; all are working, running 2 tasks each. Now if I could just reserve 3 out of 8 cores for GPU support, I'd be golden.

ID: 1793794
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10189
Credit: 136,301,024
RAC: 87,990
Australia
Message 1793802 - Posted: 5 Jun 2016, 23:21:05 UTC - in response to Message 1793794.  
Last modified: 5 Jun 2016, 23:21:46 UTC

That had been set; all are working, running 2 tasks each. Now if I could just reserve 3 out of 8 cores for GPU support, I'd be golden.


My app_config file:
<app_config>
 <app>
  <name>setiathome_v8</name>
  <gpu_versions>
   <gpu_usage>0.50</gpu_usage>
   <cpu_usage>1.00</cpu_usage>
  </gpu_versions>
 </app>
</app_config>


That runs 2 WUs at a time & reserves 1 CPU core for each WU. Setting CPU usage to 0.50 would reserve 1 core for every 2 WUs.
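To hit the "3 out of 8 cores" target from a few posts up: if the box runs 6 GPU WUs at once (say 3 cards at 2 WUs each - an assumption on my part, the thread doesn't say how many cards), dropping cpu_usage to 0.50 reserves 6 x 0.50 = 3 cores:

<app_config>
 <app>
  <name>setiathome_v8</name>
  <gpu_versions>
   <gpu_usage>0.50</gpu_usage> <!-- 2 WUs per GPU -->
   <cpu_usage>0.50</cpu_usage> <!-- half a core reserved per GPU WU -->
  </gpu_versions>
 </app>
</app_config>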
Grant
Darwin NT
ID: 1793802
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1658
Credit: 371,234,839
RAC: 322,514
United States
Message 1793807 - Posted: 5 Jun 2016, 23:37:14 UTC - in response to Message 1793802.  

So, app_config, not app_info? I've been messing around this whole time in the app_info, changing those settings. Shouldn't I be modifying that one? Ruh roh, raggy... Taking a quick look in both dirs, it appears I don't have that file currently, so I'd have to create it? If I do, what do I set the app_info back to, .04 CPUs again? Thanks!

ID: 1793807
Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 4655
Credit: 293,244,525
RAC: 471,227
United States
Message 1793808 - Posted: 5 Jun 2016, 23:38:29 UTC - in response to Message 1793807.  

App_config will override any setting in app_info.
ID: 1793808
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10189
Credit: 136,301,024
RAC: 87,990
Australia
Message 1793810 - Posted: 5 Jun 2016, 23:51:19 UTC - in response to Message 1793807.  

So, app_config, not app_info?

The problem with app_info is that if you make a mistake you can trash all your work & cache; you also have to modify every GPU entry in it, and you have to exit & restart BOINC for the changes to take effect.

Mess up app_config & things may not work, but you won't lose your cache, you only need to make the one entry, and you don't have to exit & restart BOINC for the changes to take effect; just Options > Read config files.
Grant
Darwin NT
ID: 1793810
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1658
Credit: 371,234,839
RAC: 322,514
United States
Message 1793831 - Posted: 6 Jun 2016, 2:34:56 UTC - in response to Message 1793810.  

Hmm, ok, that is good to know. I usually just use find and replace in Notepad, which makes the changes almost foolproof, though it is still weird that it doesn't seem to have an effect on this computer, regardless of what value I put in there. I will try creating an app_config; I presume it goes in the same place? And I can leave the app_info file as it is, since those settings will be overridden once I've made the other file?

ID: 1793831
Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 4655
Credit: 293,244,525
RAC: 471,227
United States
Message 1793832 - Posted: 6 Jun 2016, 2:38:41 UTC - in response to Message 1793831.  

The app_info provides the information that the apps need to run the work.

The app_config fine-tunes those apps.

So you need both.
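Roughly how the two fit together - a heavily trimmed sketch, with a placeholder executable name (a real app_info.xml carries more entries per app version, e.g. cmdline and plan class):

<app_info>
 <app>
  <name>setiathome_v8</name>
 </app>
 <file_info>
  <name>setiathome_gpu_app.exe</name> <!-- placeholder name -->
  <executable/>
 </file_info>
 <app_version>
  <app_name>setiathome_v8</app_name>
  <version_num>800</version_num>
  <avg_ncpus>0.04</avg_ncpus> <!-- the ".04 CPUs" mentioned above -->
  <coproc>
   <type>CUDA</type>
   <count>0.5</count> <!-- 2 WUs per GPU -->
  </coproc>
  <file_ref>
   <file_name>setiathome_gpu_app.exe</file_name>
   <main_program/>
  </file_ref>
 </app_version>
</app_info>

The avg_ncpus and coproc count here are the values that app_config's cpu_usage and gpu_usage override.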
ID: 1793832