Message boards : Number crunching : Question on Optimizing Win 10 system with 4 different Nvidia-based graphics cards
Freewill | Joined: 19 May 99 | Posts: 766 | Credit: 354,398,348 | RAC: 11,693
Hi All, I would appreciate any suggestions on optimizing my dedicated SETI box. I am running stock SETI and BOINC on an i7-5820K with 4 graphics cards: GTX 1070 Ti, GTX 1070, GTX 1060 3 GB, and GTX 980. Each GPU task is taking 0.425 CPUs. I am also running 6 CPU SETI jobs, and total system CPU load is ~90%. I am not overclocking any of the GPUs or the CPU.

I am using this cc_config.xml:

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>

and this mb_cmdline-8.22_windows_intel__opencl_nvidia_SoG.txt:

-hp -high_perf -instances_per_device 2 -pref_wg_num_per_cu 2 -sbs 1024 -period_iterations_num 1 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64

The GPU cards don't seem to be working overly hard. Is there any value in trying to run more than 1 job per GPU? If so, how do I configure that? Any other optimizations anyone would suggest?

Many thanks!
Roger
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
The limiting factor is your 1060. I believe 1 task at a time is the preferred setting for that card. If you were to take it out, you could run 2 or maybe 3 work units per card. If you do that, cutting down the number of CPU work units would benefit you, as GPUs crunch much faster than CPUs and you would do more work per hour.
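[Editor's note: if physically pulling the 1060 isn't convenient, BOINC's cc_config.xml can also exclude a single device from one project. A sketch, assuming the 1060 enumerates as device 2 — the actual device numbers are listed in the Event Log at client startup, so match <device_num> to that list:]

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <!-- Keep the GTX 1060 out of SETI@home work.
         device_num 2 is an assumption; check BOINC's startup messages -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>2</device_num>
    </exclude_gpu>
  </options>
</cc_config>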
Freewill | Joined: 19 May 99 | Posts: 766 | Credit: 354,398,348 | RAC: 11,693
Zalster wrote:
> The limiting factor is your 1060 GPU. I believe 1 at a time is the preferred setting for that. If you were to take that one out, then you could run 2 or maybe 3 work units per card. If you were to do that, then cutting down the number of CPU work units would benefit you as GPUs crunch much faster than CPU and you would do more work per hour.

Thanks, Zalster. What would I need to do to get 2 work units at a time for each GPU? Are you saying that each card running 2 WU would also need 2 CPU threads?
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
You could get 2 per card by using an app_config.xml and telling BOINC to run 2 work units per card. The applications currently in use don't need a full CPU per work unit, but you also don't want to use all of your cores: the system needs at least 1 for itself, and you should always leave 1 more just for GPU support. So at a minimum you should leave 1 core free. If you run 2 per card and see 100% CPU usage, then decrease the number of CPU work units running.

How are you limiting your system to 6 CPU cores?

Edit... Here's a sample app_config.xml:

<app_config>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_nvidia_SoG</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline>-no_sleep -sbs 1024 -hp -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer</cmdline>
  </app_version>
  <app_version>
    <app_name>astropulse_v7</app_name>
    <plan_class>opencl_nvidia_100</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline>-unroll 28 -oclFFT_plan 256 16 256 -ffa_block 12288 -ffa_block_fetch 6144 -tune 1 64 4 1 -tune 2 64 4 1 -hp</cmdline>
  </app_version>
</app_config>

Might need to clean up the command lines some.
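[Editor's note: BOINC only reads app_config.xml from inside the project's own folder, not the main data directory. A sketch of deploying it on a default Windows install — the path below is an assumption; check your actual BOINC data directory if you changed it at install time:]

```shell
rem Copy the file into the SETI@home project folder (default Windows data dir)
copy app_config.xml "C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\"

rem Ask the running client to re-read config files without a restart
rem (equivalently, in BOINC Manager: Options -> Read config files)
boinccmd --read_cc_config
```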
Freewill | Joined: 19 May 99 | Posts: 766 | Credit: 354,398,348 | RAC: 11,693
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
I'm assuming you are referring to the setting where it says "use at most XXXX%". I'm not a fan of that. Better to use a <project_max_concurrent> setting in the app_config.xml and control how many work units are running; I've always believed that people have the wrong understanding of how that percentage setting actually works. Instead, put the restriction in the app_config and set 100% in the computing preferences. Like this:

<app_config>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_nvidia_SoG</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline>-no_sleep -sbs 1024 -hp -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer</cmdline>
  </app_version>
  <app_version>
    <app_name>astropulse_v7</app_name>
    <plan_class>opencl_nvidia_100</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline>-unroll 28 -oclFFT_plan 256 16 256 -ffa_block 12288 -ffa_block_fetch 6144 -tune 1 64 4 1 -tune 2 64 4 1 -hp</cmdline>
  </app_version>
  <project_max_concurrent>10</project_max_concurrent>
</app_config>

This would allow 6 GPU work units (2 per card, assuming you removed the 1060) and 4 CPU work units. That should leave 2 threads free.
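[Editor's note: the thread-budget arithmetic behind that last paragraph can be checked in a few lines. The numbers are the ones assumed in this thread — 12 logical threads on the i7-5820K, three remaining GPUs after pulling the 1060, and avg_ncpus of 1 per GPU task:]

```python
# Thread budget for the suggested mix (assumed figures from the thread)
LOGICAL_THREADS = 12      # i7-5820K: 6 cores / 12 threads
gpus = 3                  # 1070 Ti, 1070, 980 (1060 removed)
tasks_per_gpu = 2         # ngpus = 0.5 in app_config.xml
cpus_per_gpu_task = 1     # avg_ncpus = 1
cpu_tasks = 4             # CPU-only work units

gpu_tasks = gpus * tasks_per_gpu                      # 6 GPU work units
busy = gpu_tasks * cpus_per_gpu_task + cpu_tasks      # 10 = project_max_concurrent
free = LOGICAL_THREADS - busy                         # 2 threads left for the OS

print(gpu_tasks, busy, free)  # -> 6 10 2
```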
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.