1 core / GPU task (especially SOG) vs. 0.50 or 0.25 etc

Message boards : Number crunching : 1 core / GPU task (especially SOG) vs. 0.50 or 0.25 etc
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1869524 - Posted: 26 May 2017, 13:38:33 UTC

The read me text files for the SOG gpu programs seem to often advocate dedicating a full-time core to each task running on a GPU.

Does anyone have a real-world comparison of how much it speeds up the individual tasks, and what it does to the total production of a computer?

The reason I am wondering is that for each CPU core dedicated to a GPU task, you lose that much CPU-level production. So the speed-up would need to be worth at least that much production for dedicating a CPU core to a GPU task to make sense.
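That trade-off is easy to put rough numbers on. Here is a sketch of the break-even arithmetic — all the task times below are made-up placeholders, not measurements from this thread, so substitute your own:

```python
# Back-of-the-envelope break-even check for reserving a CPU core for a GPU task.
# All numbers are hypothetical placeholders -- plug in your own task times.

CPU_TASK_HOURS = 5.5        # run time of one CPU WU on the core you'd give up
GPU_HOURS_SHARED = 1.50     # GPU WU time with no reserved core (assumed)
GPU_HOURS_RESERVED = 1.10   # GPU WU time with a reserved core (assumed)

# Daily throughput of one GPU in each configuration.
gpu_per_day_shared = 24 / GPU_HOURS_SHARED
gpu_per_day_reserved = 24 / GPU_HOURS_RESERVED

# Reserving the core forfeits the CPU WUs that core would have produced.
cpu_tasks_lost = 24 / CPU_TASK_HOURS
gpu_tasks_gained = gpu_per_day_reserved - gpu_per_day_shared

print(f"GPU tasks gained per day: {gpu_tasks_gained:.2f}")
print(f"CPU tasks lost per day:   {cpu_tasks_lost:.2f}")
print("worth it" if gpu_tasks_gained > cpu_tasks_lost else "not worth it")
```

Strictly speaking, CPU and GPU WUs don't earn equal credit, so a fuller comparison would weight each side by average credit per task rather than task counts.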

I have been examining the GPU(s) as reported by GPU-Z and cannot "see" a load difference between when I dedicate a CPU core to each GPU task and when I run 0.25 or 0.50 CPUs per task.

That is why I am asking about this.

Thank you,
Tom
A proud member of the OFA (Old Farts Association).
ID: 1869524
Profile Brent Norman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1869527 - Posted: 26 May 2017, 13:59:38 UTC - in response to Message 1869524.  
Last modified: 26 May 2017, 14:13:37 UTC

For my 750Ti I was running this with 1 reserved core (notice the HighPriority)
-sbs 256 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64 -hp

Yes, it helps the performance of the card. For example, your Xeon is running over 4 h per CPU task, so a gain of 1 GPU task per 4 h is all that's needed to break even - you will easily see that.

My reasoning: it's only 1 core/thread, and the 750Ti will only use 40-70% of it on average (it's a small card; bigger cards WILL use 100%). The rest can be used by the system, along with the other 7 threads, since CPU tasks run at low priority.

You could run all 8 with the GPU at high priority, and I don't think you would see much difference. But 1 free is best.

EDIT: I see you are using a different command line, of which there are many, but I would throw a (-hp) on it.
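For reference, on an anonymous-platform (e.g. Lunatics) install, switches like these can be carried in the <cmdline> element of the matching app_version entry in app_info.xml; with the stock app they usually go in the mb_cmdline*.txt file for that build in the project directory. A rough sketch only — the app name and plan class below are assumptions to be checked against your own app_info.xml:

```xml
<app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_nvidia_SoG</plan_class>
    <avg_ncpus>1.0</avg_ncpus>  <!-- reserve a full CPU thread per GPU WU -->
    <max_ncpus>1.0</max_ncpus>
    <cmdline>-sbs 256 -spike_fft_thresh 4096 -hp</cmdline>
    <coproc>
        <type>NVIDIA</type>
        <count>1</count>  <!-- 1 WU per GPU -->
    </coproc>
    <!-- file_ref entries omitted -->
</app_version>
```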
ID: 1869527
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1869546 - Posted: 26 May 2017, 14:56:48 UTC - in response to Message 1869527.  

For my 750Ti I was running this with 1 reserved core (notice the HighPriority)
-sbs 256 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64 -hp


I am not clear on one thing: how many tasks are you running on your 750 (e.g. 1 or 2)? I can see "1 reserved core" meaning either 1 GPU task per core or 2 GPU tasks per core - i.e., possibly tying up 2 CPU cores, one per GPU task?

Right now I am not running anything in the command line, as I have just switched to Lunatics to try to speed up my CPU processing (it appeared I was averaging 5.5 hours per CPU WU, and I remember running much faster previously).

Without a command line, GPU-Z reports considerably more RAM being used.

Thank you, I will be copying the above command line for one of the next iterations of my experiments.

Tom Miller
A proud member of the OFA (Old Farts Association).
ID: 1869546
Profile Brent Norman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1869551 - Posted: 26 May 2017, 15:05:19 UTC - in response to Message 1869546.  

I was running 1 task; 2 was a very slight improvement, but then the extra task and extra core become more of an issue.
ID: 1869551
Cruncher-American Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor

Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1869557 - Posted: 26 May 2017, 15:16:58 UTC
Last modified: 26 May 2017, 15:21:32 UTC

Using -use_sleep, I find (running 3 per GTX 1080 or 3 per GTX 980 on my crunchers) that Arecibo WUs use about 20% of a CPU and GBT WUs about 30% of a CPU (per BoincTasks). So I set the CPU usage in app_config.xml to 0.5 for v8; this allows me to use more of my CPU threads for v8 WUs.

NB: having also noticed that Astropulse WUs use just about a full CPU, I set them to use 1 CPU when running them on my GPUs.
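A setup like that would look roughly like this in an app_config.xml placed in the SETI@home project directory. This is only a sketch — the short app names (setiathome_v8, astropulse_v7 here) should be verified against client_state.xml:

```xml
<app_config>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage> <!-- 3 WUs per GPU -->
            <cpu_usage>0.5</cpu_usage>  <!-- budget half a CPU thread each -->
        </gpu_versions>
    </app>
    <app>
        <name>astropulse_v7</name>
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage>
            <cpu_usage>1.0</cpu_usage>  <!-- AP uses close to a full CPU -->
        </gpu_versions>
    </app>
</app_config>
```

After editing, the file is re-read via Options → Read config files in BOINC Manager (or a client restart).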
ID: 1869557
Profile Darrell
Volunteer tester
Joined: 14 Mar 03
Posts: 267
Credit: 1,418,681
RAC: 0
United States
Message 1869617 - Posted: 26 May 2017, 20:34:30 UTC
Last modified: 26 May 2017, 20:35:02 UTC

By default Boinc is going to use 1 core or 1 thread for every task. So on my six-core processor, having the computing preference set to 84% gives me either 4 CPU tasks and 1 GPU task (for those projects where I run one per GPU) or 3 CPU tasks and 2 GPU tasks (for those projects where I run two per GPU). Process Explorer shows the CPU tasks each using 16.5% of the CPU, i.e. 1 full core or thread.

A GPU task only uses a full core or thread when it starts up and when it is finishing. While a GPU task is running it normally uses much less than the full core or thread, but BOINC by default has no way of using the extra time on that core; it is turned over to the operating system. Now, when you use cpu_usage or avg_ncpus in the xml files, BOINC can throttle the task to use only 0.25, 0.5 or 0.75 of that core or thread, but again BOINC by default has no way to use the extra space on the core.

In theory (I have not tried it in practice) you could, by using cpu_lock, have two GPU tasks running on the same core or thread, but you would be throttling the tasks and they would fight for full use of the core when starting or stopping, most likely resulting in longer run-times.
... and still I fear, and still I dare not laugh at the Mad Man!

Queen - The Prophet's Song
ID: 1869617
Profile Darrell
Volunteer tester
Joined: 14 Mar 03
Posts: 267
Credit: 1,418,681
RAC: 0
United States
Message 1869621 - Posted: 26 May 2017, 21:00:31 UTC
Last modified: 26 May 2017, 21:01:06 UTC

https://1drv.ms/i/s!ArIvftV8roEagVWysMkhcYv66BoC

Here you see four cores being used by CPU tasks (Burp, being multi-threaded, is using 2 cores) and one GPU task. Notice how little of the 1 core assigned to it Collatz is using; it makes you think you could run more than one at a time, but no, you can't. For some reason, when running two tasks at a time on the GPU, if one task is Collatz, then the other project's task gets throttled down to the same small percentage of a core, causing it to have very long run-times. NOTE: Due to the way the BOINC scheduler is programmed, when attached to two or more projects that use the GPU, very, very, very seldom will BOINC give you two tasks from the same project if you run, like I do, with a cache setting of 0.
... and still I fear, and still I dare not laugh at the Mad Man!

Queen - The Prophet's Song
ID: 1869621
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13739
Credit: 208,696,464
RAC: 304
Australia
Message 1869629 - Posted: 26 May 2017, 21:30:28 UTC - in response to Message 1869617.  

By default Boinc is going to use 1 core or 1 thread for every task.

Not for GPU work; that depends on the settings for the application being used. For the CUDA applications, 0.04 CPU threads were reserved per GPU WU, but they would use as much CPU time as they actually needed; however, they would release that CPU time to applications with higher priority as those needed it.
By reserving a core, and bumping up the priority of the application, you can make your GPU more productive, at the cost of those CPU resources not being available to other applications if they need them.
Grant
Darwin NT
ID: 1869629
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13739
Credit: 208,696,464
RAC: 304
Australia
Message 1869632 - Posted: 26 May 2017, 21:33:33 UTC - in response to Message 1869527.  

For my 750Ti I was running this with 1 reserved core (notice the HighPriority)
-sbs 256 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64 -hp


I ran with -sbs 1024 -period_iterations_num 1 -high_perf on my GTX 750Tis on my dedicated cruncher and also used app_config.xml to reserve a CPU thread for each WU, 1 WU per GPU.
Gave a very nice output boost.
Grant
Darwin NT
ID: 1869632
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34258
Credit: 79,922,639
RAC: 80
Germany
Message 1869645 - Posted: 26 May 2017, 22:09:14 UTC

Especially the first two values make the difference, but this doesn't work on all 750 Tis.
Some don't like the screen lag that might happen.


With each crime and every kindness we birth our future.
ID: 1869645
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13739
Credit: 208,696,464
RAC: 304
Australia
Message 1869651 - Posted: 26 May 2017, 22:25:13 UTC - in response to Message 1869645.  

Especially the first two values make the difference, but this doesn't work on all 750 Tis.
Some don't like the screen lag that might happen.

Yep.
Best for dedicated crunchers.
Grant
Darwin NT
ID: 1869651
Profile Darrell
Volunteer tester
Joined: 14 Mar 03
Posts: 267
Credit: 1,418,681
RAC: 0
United States
Message 1869664 - Posted: 26 May 2017, 23:03:56 UTC - in response to Message 1869629.  
Last modified: 26 May 2017, 23:10:57 UTC

By default Boinc is going to use 1 core or 1 thread for every task.

Not for GPU work; that depends on the settings for the application being used. For the CUDA applications, 0.04 CPU threads were reserved per GPU WU, but they would use as much CPU time as they actually needed; however, they would release that CPU time to applications with higher priority as those needed it.
By reserving a core, and bumping up the priority of the application, you can make your GPU more productive, at the cost of those CPU resources not being available to other applications if they need them.


I can not speak for how Boinc runs with NVidia GPUs, having never owned one. But the following screen-capture of Process Explorer's GPU graph shows a Milkyway task and a Seti task running on my RX480. Each task was using a single CPU core, driving the GPU usage to about 95%. Then there is a short dip where the Milkyway task ended and an Einstein task started running alongside the Seti task. The final part of the graph just shows how much Einstein is using the GPU after the Seti task ended. No other GPU task was started, as Boinc was getting ready to switch to a Collatz task. Collatz is set to run only one task at a time.

https://1drv.ms/i/s!ArIvftV8roEagVZTcWtznFgY9pHg

Addendum: here are a few lines from the log when this was happening:

5/26/2017 5:40:14 PM | Milkyway@Home | [coproc] ATI instance 0; 0.500000 pending for de_modfit_19_3s_146_bundle5_ModfitConstraintsWithDisk_fixed_2_1494357557_1398613_1
5/26/2017 5:40:14 PM | SETI@home Beta Test | [coproc] ATI instance 0; 0.500000 pending for blc13_2bit_guppi_57824_83347_HIP22715_0052.4987.409.24.54.4.vlar_1
5/26/2017 5:40:14 PM | Milkyway@Home | [coproc] ATI instance 0: confirming 0.500000 instance for de_modfit_19_3s_146_bundle5_ModfitConstraintsWithDisk_fixed_2_1494357557_1398613_1
5/26/2017 5:40:14 PM | SETI@home Beta Test | [coproc] ATI instance 0: confirming 0.500000 instance for blc13_2bit_guppi_57824_83347_HIP22715_0052.4987.409.24.54.4.vlar_1
5/26/2017 5:40:14 PM | Einstein@Home | [coproc] Insufficient ATI for LATeah0030L_500.0_0_0.0_758020_0: need 0.500000
5/26/2017 5:40:14 PM | climateprediction.net | [css] running wah2_eu50r_mcmk_20171_3_528_010931969_1 ( )
5/26/2017 5:40:14 PM | rosetta@home | [css] running foldit2_2003796_0008_fold_SAVE_ALL_OUT_485206_1133_0 ( )
5/26/2017 5:40:14 PM | NFS@Home | [css] running C195_148_98_40224_0 ( )
5/26/2017 5:40:14 PM | SETI@home Beta Test | [css] running blc13_2bit_guppi_57824_83347_HIP22715_0052.4987.409.24.54.4.vlar_1 (1 CPU + 0.5 AMD/ATI GPUs)
5/26/2017 5:40:14 PM | Milkyway@Home | [css] running de_modfit_19_3s_146_bundle5_ModfitConstraintsWithDisk_fixed_2_1494357557_1398613_1 (1 CPU + 0.5 AMD/ATI GPUs)
5/26/2017 5:40:26 PM | Milkyway@Home | Computation for task de_modfit_19_3s_146_bundle5_ModfitConstraintsWithDisk_fixed_2_1494357557_1398613_1 finished
5/26/2017 5:40:26 PM | SETI@home Beta Test | [coproc] ATI instance 0; 0.500000 pending for blc13_2bit_guppi_57824_83347_HIP22715_0052.4987.409.24.54.4.vlar_1
5/26/2017 5:40:26 PM | SETI@home Beta Test | [coproc] ATI instance 0: confirming 0.500000 instance for blc13_2bit_guppi_57824_83347_HIP22715_0052.4987.409.24.54.4.vlar_1
5/26/2017 5:40:26 PM | Einstein@Home | [coproc] Assigning 0.500000 of ATI instance 0 to LATeah0030L_500.0_0_0.0_758020_0
5/26/2017 5:40:26 PM | climateprediction.net | [css] running wah2_eu50r_mcmk_20171_3_528_010931969_1 ( )
5/26/2017 5:40:26 PM | rosetta@home | [css] running foldit2_2003796_0008_fold_SAVE_ALL_OUT_485206_1133_0 ( )
5/26/2017 5:40:26 PM | NFS@Home | [css] running C195_148_98_40224_0 ( )
5/26/2017 5:40:26 PM | SETI@home Beta Test | [css] running blc13_2bit_guppi_57824_83347_HIP22715_0052.4987.409.24.54.4.vlar_1 (1 CPU + 0.5 AMD/ATI GPUs)
5/26/2017 5:40:26 PM | Einstein@Home | Starting task LATeah0030L_500.0_0_0.0_758020_0
5/26/2017 5:40:26 PM | Einstein@Home | [css] running LATeah0030L_500.0_0_0.0_758020_0 (1 CPU + 0.5 AMD/ATI GPUs)
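For anyone wanting the same [coproc] and [css] lines in their own Event Log: they come from log flags that can be enabled in cc_config.xml (a sketch; flag names as per the BOINC client configuration documentation):

```xml
<cc_config>
    <log_flags>
        <coproc_debug>1</coproc_debug>         <!-- [coproc] GPU-instance assignment lines -->
        <cpu_sched_status>1</cpu_sched_status> <!-- [css] currently-running task lines -->
    </log_flags>
</cc_config>
```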
... and still I fear, and still I dare not laugh at the Mad Man!

Queen - The Prophet's Song
ID: 1869664
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13739
Credit: 208,696,464
RAC: 304
Australia
Message 1869685 - Posted: 27 May 2017, 0:00:57 UTC - in response to Message 1869664.  

By default Boinc is going to use 1 core or 1 thread for every task.

Not for GPU work; that depends on the settings for the application being used. For the CUDA applications, 0.04 CPU threads were reserved per GPU WU, but they would use as much CPU time as they actually needed; however, they would release that CPU time to applications with higher priority as those needed it.
By reserving a core, and bumping up the priority of the application, you can make your GPU more productive, at the cost of those CPU resources not being available to other applications if they need them.


I can not speak for how Boinc runs with NVidia GPUs, having never owned one. But the following screen-capture of Process Explorer's GPU graph shows a Milkyway task and a Seti task running on my RX480.

You need to read what is posted.
It has nothing to do with BOINC. BOINC doesn't run the GPU or CPU, BOINC doesn't use x cores or threads for whatever is running.
CPU use for GPU WUs is dependent on the application being used.
Grant
Darwin NT
ID: 1869685
Profile Darrell
Volunteer tester
Joined: 14 Mar 03
Posts: 267
Credit: 1,418,681
RAC: 0
United States
Message 1869688 - Posted: 27 May 2017, 0:11:21 UTC

https://1drv.ms/i/s!ArIvftV8roEagVcenS9L5n1w06gt

This is a pretty CPU graph from Process Explorer; unfortunately it doesn't break down very well what is running on each core. The red on the graph is the sum of the kernel-mode processes; the green is the sum of the kernel-mode and user-mode processes running on each core of the CPU. Off to the side, on the Process Explorer main screen, you can see three CPU tasks running, each using 1 thread or 16.5% of the CPU. A Seti Beta task and an Einstein task were running on the GPU, each using about 4.5% of the core it was assigned to. These are the current command-line options that I am using:

-v 1 -pref_wg_size 256 -perf_wg_num_per_cu 256 -instances_per_device 2 -total_GPU_instances 2 -no_cpu_lock -high_prec_timer -no_use_sleep -sbs 1280 -tune 1 1 1 256 -spike_fft_thresh 2048 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 2048 -oclfft_tune_bn 64 -oclfft_tune_cw 32

After clearing some unused stuff off the HDD, pruning some dead wood out of the registry, and trying the latest driver from AMD, the system was a little unstable and started getting driver resets. So after clearing everything out of the command line, I started adding the options back in one at a time: the -high_perf setting was causing the resets. I see that I have forgotten to put the -hp setting back; the setting in the config xml should cause the tasks to run at high priority, but for some reason it isn't, so I will have to investigate.

P.S. Apologies to Tom for hogging your thread, I'll go back to my own.
ID: 1869688
Profile Darrell
Volunteer tester
Joined: 14 Mar 03
Posts: 267
Credit: 1,418,681
RAC: 0
United States
Message 1869692 - Posted: 27 May 2017, 0:54:36 UTC - in response to Message 1869685.  


You need to read what is posted.
It has nothing to do with BOINC. BOINC doesn't run the GPU or CPU, BOINC doesn't use x cores or threads for whatever is running.
CPU use for GPU WUs is dependent on the application being used.


Dear Mr. Grant, all I can say in response is that you apparently have never looked at the BOINC source code, so asking you to go to GitHub and check it out would be of no use. But you can go to the Boinc Wiki and read the client configuration options. All of the settings in BoincManager, in the web-site preferences, and in the cc_config, app_config, and app_info xml files are for telling the BOINC client how to control and run tasks. The options in the Seti command line are those put in by Raistmer for fine-tuning how his Seti apps run while under the control of the BOINC client.
ID: 1869692
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13739
Credit: 208,696,464
RAC: 304
Australia
Message 1869697 - Posted: 27 May 2017, 1:22:19 UTC - in response to Message 1869692.  
Last modified: 27 May 2017, 1:22:33 UTC

But you can go to the Boinc Wiki and read the client configuration options.

Notice those words there?
client configuration options
Notice that one particular word?
client, not BOINC.

All of the settings in BoincManager, in the web site preferences, in the cc_config, app_config, and app_info xml files are for telling the BOINC client how to control and run tasks.

Once again you need to read what is there, not what you think is there.
app_config, and app_info xml
Notice anything about those names?
App, Application. The programme that actually does the work. Both app_config.xml and app_info.xml are files that must be made either by hand or by an installer, both for a particular application. They are not installed when you install BOINC. cc_config.xml is another file that is not installed with BOINC; it is created manually, or by an installer when installing a particular application.
Yes, you can use BOINC to pass configuration settings to the applications, but it is the applications that do the work and use (or don't use) the reserved resources.

And the website preferences, or local preferences are all about BOINC, even the project ones.
BOINC is the manager, it allows people to run multiple projects, by managing a cache and resource share.

It is the applications and their settings that determine whether 1 or 0.04 threads will be reserved for a GPU WU. Not BOINC, as you keep (incorrectly) stating.
By default Boinc is going to use 1 core or 1 thread for every task.

By default, BOINC does nothing of the sort.
Only if you manually, or an installer automatically, add or edit the appropriate files will BOINC supply any configuration values to the application.
Grant
Darwin NT
ID: 1869697
Profile Darrell
Volunteer tester
Joined: 14 Mar 03
Posts: 267
Credit: 1,418,681
RAC: 0
United States
Message 1869740 - Posted: 27 May 2017, 8:55:15 UTC

Ok, again I apologize to Tom for using his thread. I was going to send this as a PM to Grant, but then decided that this could make a good teaching example (I hope). So the "client" in the Wiki and the "BOINC" I refer to are one and the same: the name of a program on Windows called "Boinc.exe"; "client" is just more technical. The "Boinc.exe" file is made up of approximately 112 C++ header and code files, which Visual Studio compiles into machine-code snippets and links together to form an executable file.

The Windows operating system has three parts: the Data Configuration (which contains the Registry and configuration files); the Shell, which provides interface tools to the user, runs the programs the user selects, and hands any hardware requests to the third part; and the Windows Kernel. The Windows Kernel has two parts, the Executive Services and the HAL (Hardware Abstraction Layer, i.e. drivers). The Executive Services are programs or "Services" which move requests from other services or the shell via the HAL to the devices that make up a computer (see Device Manager in the Control Panel).

Now, the drivers and services that make up the Kernel are loaded when Windows boots up, and the Shell and user auto-start programs are loaded when the user logs onto the system. So this first picture of Process Explorer shows, in the pink highlighted section, the programs and services loaded when Windows starts. The programs that the shell auto-starts when I log on, or that I choose to run, are in the bottom lavender highlight.

Pic 1: https://1drv.ms/i/s!ArIvftV8roEagVk6rVrs3IVQ0isR

If you install BOINC as a service, loaded when the system starts, its executable and the project apps it runs will be in the pink highlighted section. Now, here in picture two, are just the Shell programs from when I log on, in larger detail:

Pic 2: https://1drv.ms/i/s!ArIvftV8roEagVofXFNudsOCRZxX

The sections of this picture are: the name of the executable running; the percentage of the entire CPU the executable is using; the number of private bytes and working bytes the program is using; the PID of the program; a descriptive name of the program; and finally the percentage of the GPU's processor the program may be using. The first five executable files running under explorer.exe are loaded when I log on (controlled by the system configuration program under Administrative Tools). The bottom three executable files are programs that, when installed, set a Registry entry to run when I log on.

After the first five auto-starts, the picture shows that I started MSI Afterburner, which started the RivaTuner Statistics Server. Then I started Process Explorer, and then HWiNFO64, a hardware monitoring program.

Next we get to the heart of the confusion with Grant. Process Explorer shows that I started Boincmanager.exe (BOINC Manager for Windows), and since it could not find a Boinc client running, it started Boinc.exe (the BOINC client). That is why, when you start the Manager in the advanced view, all of the tabs are empty: Boinc has not provided any information yet. Boinc, upon starting, reads its cc_config xml file for the setting flags you wish it to use in controlling how it works; next it reads the client_state xml file, which gives Boinc the information about all of the projects you are attached to, what files are located in the project directories it creates, plus a lot more information, the order of which can be seen in the Boinc Event Log. In the start-up lines of the log you'll find the list of any project app_config or app_info xml files created by you; Boinc finds them and uses the settings in those xmls to control the running of the app.
There are a couple of Cosmology apps that run via Oracle VirtualBox; when they first ran, they requested that Boinc use all six cores of my CPU, and Boinc complied and started shutting down other running apps. It almost crashed the system before I could get Boinc to shut down. I then went to the Cosmology website forums and found an app_config xml that would tell Boinc to let the app use only a certain number of cores; I chose 1.

So let me say it again: Boinc reads the app_config or the app_info xml and uses the information contained in them to control the running of the app; the app never sees those xmls and has no clue that they even exist. There is one app that reads an xml created by Boinc, and that is the MooWrapper app. It looks for and tries to read the coproc_info xml created by Boinc. I have found the place in the code where Boinc generates this xml; I just have not found the setting or flag that causes it to be recreated, because the changes I made by editing it with Notepad have caused every Moo task to error out since (it was late, and I forgot to make a backup).

So, going back to Tom's very first question of why he sees no change in load as shown in GPU-Z when he throttles the app's CPU usage: it is because GPU-Z just reports the percentage load of the GPU's processor, and when the app's CPU use was throttled by Boinc, the remaining core usage was still enough to feed the GPU. I suspect, though, that if he were to look closely at the GPU and CPU run-times reported back to the website for those throttled tasks, he would find that they had increased. Well, it is getting late and I need to get some sleep, so I'll end with those words that all students love to hear: "Class dismissed".
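The Cosmology fix described above is typically an app_config.xml along these lines. A sketch only — the app name (camb_boinc2docker) and plan class (vbox64_mt) are assumptions that should be checked against client_state.xml:

```xml
<app_config>
    <app_version>
        <app_name>camb_boinc2docker</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>1</avg_ncpus>  <!-- cap the VM app at a single core -->
    </app_version>
</app_config>
```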
ID: 1869740
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13739
Credit: 208,696,464
RAC: 304
Australia
Message 1869743 - Posted: 27 May 2017, 9:38:44 UTC - in response to Message 1869740.  

Boinc reads the app_config or the app_info xml and uses the information contained in them to control the running of the app

This is the heart of Darrell's confusion.

His claim
By default Boinc is going to use 1 core or 1 thread for every task.

Isn't true.
In a stock installation there are no app_config.xml or app_info.xml files. You install BOINC, attach to your choice of projects, and the applications download and run using whatever their default values might be for CPU & GPU use.
You can alter how the application uses resources by using command lines, or by using configuration files for BOINC, but by default the CPU usage for GPU work is due entirely to the application's default values.
Grant
Darwin NT
ID: 1869743
Cruncher-American Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor

Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1869753 - Posted: 27 May 2017, 11:28:45 UTC

Hey guys: can you just agree to disagree? Doesn't look/sound like you are going to come to a common conclusion here.
ID: 1869753
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1869767 - Posted: 27 May 2017, 13:06:02 UTC - in response to Message 1869753.  

Hey guys: can you just agree to disagree? Doesn't look/sound like you are going to come to a common conclusion here.



Or at least create another thread and disagree there. This is clearly not the topic of this thread.

I am happy with the discussion, although no one has offered "hard numbers" on the faster processing available for SOG when using a dedicated CPU per GPU task vs. sharing a CPU with a CPU task (e.g. 1.0 GPU / 0.5 CPU).
What has been offered has given me a few ideas on how I might tweak my GTX 750 Tis under both the Lunatics distribution and stock SETI@home.

I want to thank everyone who posted on that specific topic.

Tom Miller
A proud member of the OFA (Old Farts Association).
ID: 1869767