Does SETI@Home use only one thread in my RTX 2060 GPU?

Message boards : Number crunching : Does SETI@Home use only one thread in my RTX 2060 GPU?

Profile George Project Donor
Volunteer tester
Avatar

Send message
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2004142 - Posted: 24 Jul 2019, 23:04:01 UTC

I'm trying to get my head wrapped around how SETI works, and when it chooses to use my CPU vs my GPU. All three of my projects (Milkyway, Einstein, and SETI) show work, but only one of them at a time shows a CPU & GPU task in my project tasks list in BOINC Manager. The CPU's 8 threads all show work being done, either all 8 on one project or any mix of the three. I have installed an EVGA RTX 2060 GPU with 6GB of memory, but the GPU only ever runs one of the three projects at a time. When I look at Nvidia's report of GPU activity, it shows only one of SETI, Milkyway or Einstein running at any moment.

Is this normal? Or is there some way to make SETI, or Milkyway & Einstein & SETI, use more of the GPU? It does show that the tasks are running very fast, sometimes in as little as 3 minutes and seldom more than 12 minutes, but only one at a time.
George

ID: 2004142
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 2004143 - Posted: 24 Jul 2019, 23:13:38 UTC - in response to Message 2004142.  
Last modified: 24 Jul 2019, 23:25:54 UTC

Is this normal?

Yes.
CPUs used to have only 1 core, so they could only run 1 WU (Work Unit) at a time. Then came Hyperthreading, which allowed 1 physical core to also support a virtual core, so 1 CPU core effectively became 2. Then they started putting multiple CPU cores in a single CPU package.
So now you can have CPUs with as many as 64 cores and 128 threads.

Your old i7 has 4 cores with Hyperthreading, so it effectively has 8 cores.
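As an aside (a sketch, not anything BOINC ships): Python's os.cpu_count() reports the same logical-thread count that BOINC schedules against, so a 4-core Hyperthreaded i7 reports 8.

```python
import os

# os.cpu_count() counts logical CPUs (hardware threads), not physical cores,
# so a 4-core i7 with Hyperthreading reports 8.
logical_cpus = os.cpu_count()
print(f"Logical CPUs available for crunching: {logical_cpus}")
```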

GPUs have only a single core and no Hyperthreading, so they run just 1 WU at a time. It is possible to run more than one; however, on anything other than high-end hardware it will actually result in less work being done, not more (and even on higher-end hardware, depending on the application used, running 1 at a time is generally still best).
And as you have noted, while the GPU might only be able to run 1 at a time, it can process the work much faster than a CPU can.

With some command line values, your GPUs output could be much greater than it presently is (and your CPU output improved as well).
Grant
Darwin NT
ID: 2004143
Profile Zalster Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 2004147 - Posted: 24 Jul 2019, 23:23:06 UTC - in response to Message 2004142.  

BOINC is set up to share the resources of your machine, i.e. the CPU and GPU. Depending on how you have set it up, a project may get more time with your resources than the others, but if you haven't changed anything, all 3 projects are supposed to share them equally. That's not how it really happens, though. Projects like Einstein have shorter deadlines than Seti, so when you ask for work, BOINC Manager will look at the deadlines and figure Einstein will need the resources more often than Seti will. As such, it will get more time on your CPU and GPU than Seti. I can't speak to MilkyWay since I don't crunch it. Only when the work sent by Seti is approaching its deadlines will BOINC Manager switch projects and run in Panic mode to try to complete the Seti work units before their deadline.

As far as the GPU and running more than 1 project at a time on it: yes, it is possible, but no, I would not recommend it. Due to the interactions between different projects with either CUDA or OpenCL, you can significantly alter the amount of time it takes to complete a work unit, not to mention stability if you are overclocking the card. As another example, GPUGrid (back when it still had work) ran OpenCL but stressed the GPUs so hard that the cards would actually hit their thermal limits, causing downclocking, and any work on them would become unstable and crash. Not that this will happen with Einstein, Milkyway or Seti; I only point it out to show that different projects can affect the GPUs differently.
ID: 2004147
Profile George Project Donor
Volunteer tester
Avatar

Send message
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2004150 - Posted: 24 Jul 2019, 23:29:31 UTC - in response to Message 2004143.  

Grant said:
With come command line values, your GPUs output could be much greater than it presently is (and your CPU output improved as well).

Grant, if I may, I presume "come" should be "some". In which case I would be more than happy to hear what you have to say.
George

ID: 2004150
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2004152 - Posted: 24 Jul 2019, 23:34:50 UTC

Another problem occurs with gpus: they are not set up to run more than one science application at a time. You can't have a Seti gpu app running on the card along with a MilkyWay gpu app; both apps expect to have sole access to the gpu's resources, and the different apps would be setting up the card with very different parameters regarding memory allocations, tables and cores. I do know you can run an AP task along with an MB task if you have the count set to 0.5, but I'm sure I've read here in the forums that it does not work out so well, with both tasks taking much longer to finish than if either app had sole access to the card's resources. You can, however, run more than one task from the same project on a high-end card, preferably with the same app.

I've had my gpus set at one time to a 0.5 count for Seti and MW. But I never saw a Seti task running on the same card along with a MW task. What I would see is that when a card became idle after finishing two Seti tasks, two MW tasks took up residence.
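The 0.5 count mentioned above is set per application in an app_config.xml in the project folder; a minimal sketch (the app name here follows Seti's convention; adjust it for your project):

```xml
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <!-- 0.5 means each task claims half a GPU, so two run at once -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```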
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2004152
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2004154 - Posted: 24 Jul 2019, 23:38:33 UTC - in response to Message 2004150.  

Grant said:
With come command line values, your GPUs output could be much greater than it presently is (and your CPU output improved as well).

Grant, if I may, I presume "come" should be "some". In which case I would be more than happy to hear what you have to say.

You can put this command line into play in the mb_cmdline_win_x86_SSE3_OpenCL_NV_SoG.txt file. That will work the gpu much harder than the stock general purpose parameter set.

-sbs 1024 -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64

Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2004154
Profile George Project Donor
Volunteer tester
Avatar

Send message
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2004155 - Posted: 24 Jul 2019, 23:45:29 UTC - in response to Message 2004147.  

Zalster said:
GPUGrid (back when it still had work) ran OpenCL but stressed the GPUs so hard that the cards would actually hit their thermal limits, causing downclocking, and any work on them would become unstable and crash. Not that this will happen with Einstein, Milkyway or Seti. I only point it out to show different projects can affect the GPUs.

With CPUID HWMonitor (ver. 1.40.0), my GPU has hit a max temp of only 68 C, though it does show GPU utilization at 100% (when I am not doing anything else); memory, however, is only 11% used.

I understand your concerns about doing more than one at a time in the GPU. But I thought (was hoping?) that with more CUDA cores it just might be able to use more of them.
George

ID: 2004155
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2004157 - Posted: 24 Jul 2019, 23:50:01 UTC - in response to Message 2004155.  
Last modified: 24 Jul 2019, 23:52:14 UTC

I understand your concerns about doing more than one at a time in the GPU. But I thought (was hoping?) that with more CUDA cores it just might be able to use more of them.

The CUDA applications are written to use all of the card's resources all of the time, at 100%. The OpenCL applications? No, not typically in their case.

[Edit] There is also a problem that no OpenCL application can use more than 26% of the available memory on the card. It's just a limit in the OpenCL API that no science application can get around, so memory utilization can be very low with OpenCL apps.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2004157
Profile Zalster Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 2004158 - Posted: 24 Jul 2019, 23:51:36 UTC - in response to Message 2004155.  
Last modified: 25 Jul 2019, 0:11:25 UTC



I understand your concerns about doing more than one at a time in the GPU. But I thought (was hoping?) that with more CUDA cores it just might be able to use more of them.


The other thing to keep in mind is how much of the RAM is actually available for use by OpenCL applications. On Nvidia it's about 27% of the card's total; on Intel and AMD it's anywhere from 46-67%?? It's just how they designed things. Otherwise no one would buy the Quadros....

A high-end GPU can run more than 1 CUDA or OpenCL application at the same time and see slightly faster processing compared with a single work unit, but midrange to low-end cards don't get a benefit from running more than 1 at a time. They can actually get worse times, with (total time / # of work units run) coming out greater than for a single work unit.

Z
ID: 2004158
Profile George Project Donor
Volunteer tester
Avatar

Send message
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2004159 - Posted: 24 Jul 2019, 23:56:29 UTC - in response to Message 2004154.  

Keith Myers said:
Another problem occurs with gpus. They are not set up to run more than one science application at a time. You can't have a Seti gpu app running on the card along with a MilkyWay gpu app. Both apps expect to have sole access to the gpu's resources.

Thank you Keith. From you gentlemen's comments, I'd better leave well enough alone lest I get myself into more trouble and once again need you to bail me out. :*)
George

ID: 2004159
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 2004161 - Posted: 25 Jul 2019, 0:15:35 UTC - in response to Message 2004154.  
Last modified: 25 Jul 2019, 0:17:27 UTC

You can put this command line into play in the mb_cmdline_win_x86_SSE3_OpenCL_NV_SoG.txt file. That will work the gpu much harder than the stock general purpose parameter set.

-sbs 1024 -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64

Running stock, the file name will be
mb_cmdline-8.22_windows_intel__opencl_nvidia_SoG.txt
and it's in your Seti project folder (if installed on the C: drive)
C:\ProgramData\BOINC\projects\setiathome.berkeley.edu


Something else that will help-
In the Seti project folder,
C:\ProgramData\BOINC\projects\setiathome.berkeley.edu

Make a new text file (if not already there) called
app_config.xml
(make sure there is no .txt on the end) using Notepad or another plain-text editor (NOT WordPad or Word),
then copy and paste the following into it and save it.
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>astropulse_v7</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>


In BOINC Manager, in the Advanced View, select Options, then Read config files.
This will allow the settings to take effect. What they do is reserve a CPU core to support the GPU; your system presently shows signs of being over-committed, and reserving a CPU core to support the GPU will fix that issue.


Your CPU WU processing times,
Run time 2 hours 48 min 52 sec
CPU time 2 hours 21 min 17 sec

Almost 30 min difference. Ideally there should be 2 min or less difference between Run time & CPU time.

One of my system's CPU processing times.
Run time 44 min 52 sec
CPU time 44 min 29 sec
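As a quick check of the gap Grant describes (a sketch, using the times quoted above):

```python
def to_seconds(hours, minutes, seconds):
    """Convert a task time given as hours/minutes/seconds to total seconds."""
    return hours * 3600 + minutes * 60 + seconds

# George's CPU task, from its task details page
run_time = to_seconds(2, 48, 52)
cpu_time = to_seconds(2, 21, 17)

# A large gap means the task sat waiting for a CPU: a sign of over-commitment
gap_min = (run_time - cpu_time) / 60
print(f"Run time exceeds CPU time by {gap_min:.1f} minutes")
```

Here the gap works out to nearly half an hour, which matches the "almost 30 min difference" noted above.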
Grant
Darwin NT
ID: 2004161
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2004181 - Posted: 25 Jul 2019, 1:36:36 UTC

Thanks for the command filename correction. I've never run stock, so I have only ever had the Lunatics version's filename.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2004181
Profile George Project Donor
Volunteer tester
Avatar

Send message
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2004836 - Posted: 30 Jul 2019, 0:38:02 UTC - in response to Message 2004161.  

Thanks for chiming in, Grant. Sorry it's taken so long to respond, but I was... ahh, get on with it!
I will try your suggestions and get back to this forum to let you know how it is going after a day or two.
George

ID: 2004836
Profile George Project Donor
Volunteer tester
Avatar

Send message
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2005202 - Posted: 1 Aug 2019, 17:30:49 UTC - in response to Message 2004836.  

Hello Grant, et al.,
Well, it's been two days and I do see an increase in the SETI project, though a small one. The changes I've made have dedicated 1 full CPU thread (as opposed to a fraction of one) to supporting the GPU, taking that thread away from general use. That is, with my 4 cores / 8 threads, when SETI is using my GPU & CPU I now see only 7 threads doing the rest of the calculations for SETI, Milkyway or Einstein, where I used to see all 8 threads in use by any one or all of them on top of the GPU task.
I hope this makes sense; it does to me.
George

ID: 2005202
elec999 Project Donor

Send message
Joined: 24 Nov 02
Posts: 375
Credit: 416,969,548
RAC: 141
Canada
Message 2005222 - Posted: 1 Aug 2019, 19:12:26 UTC - in response to Message 2004154.  

Grant said:
With come command line values, your GPUs output could be much greater than it presently is (and your CPU output improved as well).

Grant, if I may, I presume "come" should be "some". In which case I would be more than happy to hear what you have to say.

You can put this command line into play in the mb_cmdline_win_x86_SSE3_OpenCL_NV_SoG.txt file. That will work the gpu much harder than the stock general purpose parameter set.

-sbs 1024 -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64


Can you use these for other gpus also?
ID: 2005222
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2005223 - Posted: 1 Aug 2019, 19:48:53 UTC - in response to Message 2005222.  

Grant said:
With come command line values, your GPUs output could be much greater than it presently is (and your CPU output improved as well).

Grant, if I may, I presume "come" should be "some". In which case I would be more than happy to hear what you have to say.

You can put this command line into play in the mb_cmdline_win_x86_SSE3_OpenCL_NV_SoG.txt file. That will work the gpu much harder than the stock general purpose parameter set.

-sbs 1024 -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64


Can you use these for other gpus also?

It depends on what you mean by "other gpus". Different brands? Different models? Those tunings are for higher-end Nvidia hardware. Tunings are slightly different for high-end AMD hardware since the architecture isn't the same, and they would be different again for low-end gpu hardware or for low-powered systems with few cpu cores.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2005223
elec999 Project Donor

Send message
Joined: 24 Nov 02
Posts: 375
Credit: 416,969,548
RAC: 141
Canada
Message 2005348 - Posted: 2 Aug 2019, 16:22:31 UTC

I added this to my nvidia 2060 system. Can anyone confirm if it works?
ID: 2005348
Profile Zalster Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 2005350 - Posted: 2 Aug 2019, 16:45:47 UTC - in response to Message 2005348.  

I added this to my nvidia 2060 system. Can anyone confirm if it works?


The only machine I see with a 2060 in it isn't running SoG; I see it running the cuda special app, which would not use the command line above. Is there another machine you have with a 2060 in it running OpenCL instead of cuda?
ID: 2005350
Profile George Project Donor
Volunteer tester
Avatar

Send message
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2005413 - Posted: 3 Aug 2019, 0:55:20 UTC - in response to Message 2005350.  

I'm going to stick my neck out and show how ignorant I am.

1) What does SoG stand for?

2) Where does one find out the information about whether one is running the cuda special or OpenCl?

3) Am I running the cuda special or OpenCl?

4) Would it make a difference to change this, and of course how do I do it?

See, I'm not afraid to ask the big questions (at least in my eyes). I'm just crazy enough to want to try and learn more about SETI and others.
George

ID: 2005413
Profile Zalster Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 2005417 - Posted: 3 Aug 2019, 1:13:46 UTC - in response to Message 2005413.  
Last modified: 3 Aug 2019, 1:14:18 UTC

I'm going to stick my neck out and show how ignorant I am.

1) What does SoG stand for?


It stands for Signals on GPU. Technically its full name is OpenCL SoG; it's just the name Raistmer gave it when he made the application. It basically means almost all the processing of the work unit is done on the GPU, with a minimal amount of CPU work at the very end.

2) Where does one find out the information about whether one is running the cuda special or OpenCl?


It's been so long since I looked at BOINC Manager that I forget if it lists cuda or SoG. I'm guessing it will tell you under Tasks in the Manager in the Advanced View; you might have to widen the column to see the full name of the work unit. For those of us who check others' computers, we look at the Stderr report of a completed work unit, which tells us whether it was run with Cuda or OpenCL.

3) Am I running the cuda special or OpenCl?


You are running the OpenCL SoG.

4) Would it make a difference to change this, and of course how do I do it?


Yes, it would make a significant difference in the time to complete a work unit. Unfortunately, it is not an easy thing to convert a computer from Windows to Linux in order to make use of the Cuda Special App. Besides, there may be things you need to do on your computer with Windows that you might not be able to figure out how to do on Linux. Since most users of the Cuda Special App are not using their "daily driver" machine, it doesn't make much of a difference, since the machine is only crunching (exception to every rule... Keith). It's a steep learning curve and there are several pitfalls if you don't regularly use Linux. I have to reference Keith several times when I build mine; something, or several somethings, always seems to pop up that I can't resolve. Sometimes I don't resolve them, as Keith will attest. So to answer you directly: I don't recommend converting to Linux to run the Special Cuda App if you don't have sufficient knowledge in that area.

See, I'm not afraid to ask the big questions (at least in my eyes). I'm just crazy enough to want to try and learn more about SETI and others.


Nothing wrong with asking questions. It's how we learn.


Z
ID: 2005417


 
©2024 University of California

SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.