Does SETI@Home use only one thread in my RTX 2060 GPU?

Message boards : Number crunching : Does SETI@Home use only one thread in my RTX 2060 GPU?

Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 2005553 - Posted: 3 Aug 2019, 20:27:45 UTC - in response to Message 2005552.  

but at the end of the day the same amount of work would get done.

say you have 2 GPUs and 2 projects. set resource share to 50% on each project. Sometimes you'll be crunching one project on both cards, and sometimes it'll be split. and sometimes it'll be the other project (depending on WU availability from each project of course)

does it really matter if one GPU is segregated to only one project? In my opinion, no. Actually it seems to carry the caveat that if one project were to run out of work, and you had one card excluded from the other project, it would sit idle and do nothing. I don't think that's the best use of resources.


In theory you are right. In reality, no. Einstein has shorter deadlines than Seti; I don't know how Milkyway sets theirs. What would end up happening is Einstein would seize control of his cards and run almost exclusively its own work units. It would not switch back to Seti until the Seti work units entered the Panic Mode phase of not completing by the deadline. Once it cleared Panic Mode, it would shift back to Einstein. He's better off just running Seti on one card and Einstein/Milkyway on the other.
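The "Panic Mode" mentioned here is BOINC's earliest-deadline-first (EDF) fallback: when a task looks like it cannot finish before its deadline, it preempts normal resource-share scheduling. A much-simplified sketch of the idea (illustrative only, not BOINC's actual scheduler; the task fields are made up for the example):

```python
from datetime import datetime, timedelta

def pick_next_task(tasks, now):
    """Simplified sketch of BOINC's 'panic mode' (earliest-deadline-first).
    Each task is a dict with 'project', 'deadline', and 'remaining'
    (estimated compute time left). Real BOINC is far more involved."""
    # A task is 'at risk' if its remaining compute time doesn't fit
    # before its deadline.
    at_risk = [t for t in tasks if now + t["remaining"] > t["deadline"]]
    if at_risk:
        # Panic mode: run the endangered task with the earliest deadline.
        return min(at_risk, key=lambda t: t["deadline"])
    return None  # otherwise fall back to normal resource-share scheduling

now = datetime(2019, 8, 3)
tasks = [
    {"project": "Seti", "deadline": now + timedelta(days=50),
     "remaining": timedelta(hours=2)},
    {"project": "Einstein", "deadline": now + timedelta(days=1),
     "remaining": timedelta(days=2)},   # cannot finish in time
]
print(pick_next_task(tasks, now)["project"])  # → Einstein
```

This is why short-deadline Einstein work tends to win the GPUs: its tasks hit the "at risk" condition long before Seti's 50-day-deadline tasks do.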
ID: 2005553
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2005554 - Posted: 3 Aug 2019, 20:33:25 UTC - in response to Message 2005553.  

couldn't you then just shift priority over to SETI like 75/25 (or whatever the ratio needed to be) to even it out for roughly equal run time?
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2005554
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2005557 - Posted: 3 Aug 2019, 21:00:38 UTC

If projects all obeyed the BOINC rules and all projects had the same recent estimated credit values, only then would each project get 50% of the work over time. BOINC decides when to run a project based on REC. REC is determined by the credit awarded to a host. If all credit awarded were based on CreditNew, then it would work. But Einstein does not award credit based on FLOPS, so this will never work.
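REC (recent estimated credit) decays exponentially with a configurable half-life while newly awarded credit is added in, and projects with lower REC relative to their resource share get scheduling priority. A toy model of the decay (an illustrative approximation, not the actual client source):

```python
import math

def rec_update(rec, new_credit, dt_days, half_life_days=10.0):
    """Illustrative (not the exact BOINC source) exponential-decay update:
    old credit decays with the configured half-life, recent credit adds on."""
    decay = math.exp(-math.log(2) * dt_days / half_life_days)
    return rec * decay + new_credit

# Two projects with equal resource share: the one with the LOWER
# recent-estimated-credit gets scheduling priority.
seti, einstein = 1000.0, 1000.0
einstein = rec_update(einstein, new_credit=5000.0, dt_days=1.0)  # generous credit grant
seti = rec_update(seti, new_credit=200.0, dt_days=1.0)
print(seti < einstein)  # → True: Seti would be favored next
```

This is the mechanism Keith's point rests on: if one project awards far more credit per unit of actual FLOPS than CreditNew would, its REC stays inflated and the 50/50 balance the resource shares ask for never materializes.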
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2005557
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2005559 - Posted: 3 Aug 2019, 21:05:02 UTC - in response to Message 2005557.  

If projects all obeyed the BOINC rules and all projects had the same recent estimated credit values, only then would each project get 50% of the work over time. BOINC decides when to run a project based on REC. REC is determined by the credit awarded to a host. If all credit awarded were based on CreditNew, then it would work. But Einstein does not award credit based on FLOPS, so this will never work.


I see you run several projects on several of your systems. Are you using GPU exclusions to assign one GPU exclusively to one project, or are you splitting resource share? I figured it was resource share, based on the amount of points being awarded across the different projects every day.
ID: 2005559
Profile George Project Donor
Volunteer tester
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2005560 - Posted: 3 Aug 2019, 21:09:29 UTC - in response to Message 2005553.  

Zalster wrote:
In theory you are right. In reality, no. Einstein has shorter deadlines than Seti; I don't know how Milkyway sets theirs. What would end up happening is Einstein would seize control of his cards and run almost exclusively its own work units. It would not switch back to Seti until the Seti work units entered the Panic Mode phase of not completing by the deadline. Once it cleared Panic Mode, it would shift back to Einstein. He's better off just running Seti on one card and Einstein/Milkyway on the other.

As ignorant as I may seem, I tend to agree with Z. Case in point, my little old computer is running three projects right now as we speak (type?). I have set SETI preferences to 50%, and both Milkyway and Einstein to 25% each. They have been that way for weeks at least. The end result is I will have Milkyway or Einstein running on all threads and occasionally have SETI running on one to three threads, usually it picks the "1 CPU + 1 Nvidia GPU". The other 7 threads are split between Milkyway and Einstein, or either all one or all the other.
George

ID: 2005560
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2005561 - Posted: 3 Aug 2019, 21:11:24 UTC - in response to Message 2005559.  

Yes, I run all projects on all cards except for GPUGrid. I had to exclude the Turing cards from that project when I got the first one, since no compatible app has been released yet. I use resource share for Seti, MW and GPUGrid. Resource share on Einstein is pointless. No matter what you set, a one-to-a-million Einstein/Seti ratio for example, Einstein will always dump more work on you than you can complete before commandeering the system with EDF tasks. So the only way I have been able to run more than one project on a host with Einstein in the mix is to get a download of work and then set NNT. Then abort whatever work is in excess of around 120 tasks, and when I get down to half a dozen, unset NNT and get another load of work. Rinse and repeat.
ID: 2005561
Profile George Project Donor
Volunteer tester
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2005564 - Posted: 3 Aug 2019, 21:18:11 UTC - in response to Message 2005561.  

That's a thought I hadn't used in a while. I may try doing NNT for a while and see what happens.
George

ID: 2005564
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2005565 - Posted: 3 Aug 2019, 21:18:11 UTC - in response to Message 2005560.  

As ignorant as I may seem, I tend to agree with Z. Case in point, my little old computer is running three projects right now as we speak (type?). I have set SETI preferences to 50%, and both Milkyway and Einstein to 25% each. They have been that way for weeks at least. The end result is I will have Milkyway or Einstein running on all threads and occasionally have SETI running on one to three threads, usually it picks the "1 CPU + 1 Nvidia GPU". The other 7 threads are split between Milkyway and Einstein, or either all one or all the other.


This all ties in with Z's comment about deadlines. Seti has a relatively long average deadline for all tasks compared to MW or Einstein. This also ties into how REC is calculated. So it will appear that a host will do other projects' work long before getting around to working on Seti work. Hasn't everyone noticed that after a long outage at Seti, where you crunched nothing but backup projects, once you have work again at Seti the host will exclusively process Seti work, in opposition to what George is describing on his current host? This is the REC mechanism "balancing the books", even though the other projects have earlier deadlines.
ID: 2005565
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2005566 - Posted: 3 Aug 2019, 21:23:00 UTC - in response to Message 2005564.  

That's a thought I hadn't used in a while. I may try doing NNT for a while and see what happens.

Based on your attached projects and their accumulated credits, I think you may be a candidate to use NNT the way I describe. It takes manual intervention, and it's contrary to what BOINC espouses as an automatic system, but it works.
The only other way I have been able to restrict Einstein from overloading the systems is to set the BOINC disk share resource unnaturally low. Then it will not send work because BOINC says there is not enough space. But if you run other projects, this hamstrings them too.
ID: 2005566
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2005568 - Posted: 3 Aug 2019, 21:31:57 UTC

Another tip. Change the default <rec_half_life_days>10.000000</rec_half_life_days> to <rec_half_life_days>1.000000</rec_half_life_days> in cc_config.xml, down in the <options> section. Or even go as low as half or a quarter of a day. This changes the averaging of REC to react faster to each project's daily credit totals in deciding which project gets to run.
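For reference, the BOINC client-configuration docs place <rec_half_life_days> among the <options> of cc_config.xml; a minimal file carrying just this override might look like:

```xml
<cc_config>
  <options>
    <!-- shorten REC averaging from the default 10 days to 1 day -->
    <rec_half_life_days>1.000000</rec_half_life_days>
  </options>
</cc_config>
```

After editing, restart the client, or tell it to re-read its config files from the manager, so the change takes effect.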
ID: 2005568
Profile George Project Donor
Volunteer tester
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2005569 - Posted: 3 Aug 2019, 21:33:39 UTC - in response to Message 2005566.  

Well Keith, I am going to do the NNT for Einstein and Milkyway and leave SETI at its present settings. Then I'll watch the tasks on both until they get low, and in the meantime I'll see if SETI can recover some more processing time.
George

ID: 2005569
Profile George Project Donor
Volunteer tester
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2005570 - Posted: 3 Aug 2019, 21:37:20 UTC - in response to Message 2005568.  

Keith wrote:
Another tip. Change the default <rec_half_life_days>10.000000</rec_half_life_days> to <rec_half_life_days>1.000000</rec_half_life_days> in cc_config.xml, down in the <options> section. Or even go as low as half or a quarter of a day. This changes the averaging of REC to react faster to each project's daily credit totals in deciding which project gets to run.

That's something I never knew before. I'll look into it also.
George

ID: 2005570
Profile George Project Donor
Volunteer tester
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2005573 - Posted: 3 Aug 2019, 22:05:12 UTC - in response to Message 2005570.  

Just an FYI, I did just check on tasks for all three projects.
SETI has 25 OpenCL Nvidia SoG tasks in progress. Average time from "sent" to "deadline" is 50 days.
Milkyway has 215 OpenCL Nvidia SoG tasks in progress. Average time from "sent" to "deadline" is 12 days.
Einstein has 12 "normal"(?) tasks in progress. Average time from "sent" to "deadline" is 14 days. Note: most of the Einstein tasks run over 1 1/2 days, and they are presently using all 8 threads.
George

ID: 2005573
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2005582 - Posted: 3 Aug 2019, 22:45:22 UTC - in response to Message 2005573.  

FYI, this is what the client configuration pages show for definition of <rec_half_life_days>X</rec_half_life_days>.
<rec_half_life_days>X</rec_half_life_days>
A project's scheduling priority is determined by its estimated credit in the last X days. Default is 10; set it larger if you run long high-priority jobs.


I can't remember if the Einstein cpu tasks can be restricted with the mt plan class in the app_config. I have never run any project's cpu applications other than Seti's. This is an example of how mt is used in an app_config; the square brackets denote optional elements. You can restrict the number of threads a multi-threaded application uses. If the project doesn't have a setting on its web pages for restricting the application's thread use, you can still control the app through the app_config, if the app allows it. I know that Zalster restricts the GPUGrid cpu app to use only 4 cores for each task instead of eight.

<app_config>
   [<app>
      <name>Application_Name</name>
      <max_concurrent>1</max_concurrent>
      [<report_results_immediately/>]
      [<fraction_done_exact/>]
      <gpu_versions>
         <gpu_usage>.5</gpu_usage>
         <cpu_usage>.4</cpu_usage>
      </gpu_versions>
   </app>]
   ...
   [<app_version>
      <app_name>Application_Name</app_name>
      [<plan_class>mt</plan_class>]
      [<avg_ncpus>x</avg_ncpus>]
      [<ngpus>x</ngpus>]
      [<cmdline>--nthreads 7</cmdline>]
   </app_version>]
   ...
   [<project_max_concurrent>N</project_max_concurrent>]
   [<report_results_immediately/>]
</app_config>
ID: 2005582
Profile George Project Donor
Volunteer tester
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2006096 - Posted: 7 Aug 2019, 2:33:33 UTC - in response to Message 2005582.  

In C:\ProgramData\BOINC\projects\einstein.phys.uwm.edu there is no app_config.xml file. If I wanted to place a file in there, could I do so, or would it conflict with the SETI file?

Secondly, if you are using Ubuntu, would this be why you have <gpu_usage> and <cpu_usage> set to less than 1?

Keith Myers wrote:
<app_config>
   [<app>
      <name>Application_Name</name>
      <max_concurrent>1</max_concurrent>
      [<report_results_immediately/>]
      [<fraction_done_exact/>]
      <gpu_versions>
         <gpu_usage>.5</gpu_usage>
         <cpu_usage>.4</cpu_usage>
      </gpu_versions>
   </app>]
   ...
   [<app_version>
      <app_name>Application_Name</app_name>
      [<plan_class>mt</plan_class>]
      [<avg_ncpus>x</avg_ncpus>]
      [<ngpus>x</ngpus>]
      [<cmdline>--nthreads 7</cmdline>]
   </app_version>]
   ...
   [<project_max_concurrent>N</project_max_concurrent>]
   [<report_results_immediately/>]
</app_config>

George

ID: 2006096
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2006100 - Posted: 7 Aug 2019, 3:19:15 UTC

Each project's app_config applies only to that project, since it lives in that project's directory, so there are no conflicts with any other app_config file. Einstein has controls for running multiple tasks per card in the website project preferences, so you can simply configure two tasks per card there. No need to add it to an app_config.

The parts you quoted were not from any specific website. Those were just examples provided by the BOINC reference pages.
https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration
ID: 2006100
Profile George Project Donor
Volunteer tester
Joined: 23 Oct 17
Posts: 222
Credit: 2,597,521
RAC: 13
United States
Message 2006102 - Posted: 7 Aug 2019, 3:45:06 UTC - in response to Message 2006100.  
Last modified: 7 Aug 2019, 4:22:42 UTC

If I only have SETI@Home in my <app_config> file, and I do not have any such listings for Milkyway or Einstein in that file, and there is no <app_config> file in either Milkyway or Einstein, how does Einstein know to use 1 CPU thread and the GPU at the same time? Milkyway does not use a CPU thread when it uses the GPU. BOINC Manager shows it as using "0.978 CPU + 1 Nvidia GPU", and it still allows my 8 threads to be used when the GPU is being used. Could I add an <app> section and change Milkyway to read:

<app>
<name>milkywayathome</name>
<gpu_versions>
<gpu_usage>1.0</gpu_usage>
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>

...and (hopefully) have Milkyway use the same resources as SETI & Einstein?

My <app_config> file:
<app_config>
<app>
<name>setiathome_v8</name>
<gpu_versions>
<gpu_usage>1.0</gpu_usage>
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>
<app>
<name>astropulse_v7</name>
<gpu_versions>
<gpu_usage>1.0</gpu_usage>
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>
</app_config>

George

ID: 2006102
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2006113 - Posted: 7 Aug 2019, 6:49:09 UTC

Each project specifies the resource use for each of its apps, unless you override those stock resources with a specification in either an app_info or app_config file.

Of course you can use your own resource specification in an app_config for MilkyWay@home. You need to use the project name and the app name as specified in the project's client_state file.

For instance this is my app_config for Milkyway.

<app_config>
<app>
  <name>milkyway</name>
        <gpu_versions>
            <gpu_usage>1</gpu_usage>
            <cpu_usage>1</cpu_usage>
        </gpu_versions>
  <max_concurrent>2</max_concurrent>
</app>
</app_config>

ID: 2006113
Profile tullio
Volunteer tester

Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 2006118 - Posted: 7 Aug 2019, 8:19:17 UTC
Last modified: 7 Aug 2019, 8:20:23 UTC

I am running Milkyway@home through Science United. I am not a registered user at Milkyway@home and cannot modify the stock conditions. It runs GPU tasks as 0.949 CPU and one nVidia GPU, finishing in a few minutes. My GPU board on the PC dedicated to Science United is a GTX 1050 Ti. On the same PC, Asteroids@home GPU tasks run as 0.01 CPU and 1 nVidia GPU (CUDA 55, while Milkyway@home uses OpenCL). They take much longer, on the order of one hour.
Tullio
ID: 2006118
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2006158 - Posted: 7 Aug 2019, 14:53:44 UTC

You mean that Science United doesn't use the BOINC platform? If it does, then you should be able to use an app_config file. The only way it wouldn't work is if Science United provided a different client than BOINC.
ID: 2006158


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.