Message boards : Number crunching : Advice on system optimization needed.
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

I recently started running SETI@home again. Last time around I loaded a special app to take advantage of the video card. I spent a little bit of time searching around, and am having a hard time finding the instructions I used last time to make it work. I think you can just click on my username to see the details, but here is some info from the computer details page:

CPU type: GenuineIntel Intel(R) Xeon(R) CPU E5-2687W v2 @ 3.40GHz [Family 6 Model 62 Stepping 4]
Number of processors: 32
Coprocessors: NVIDIA GeForce RTX 2080 (4095MB), driver: 430.40, OpenCL: 1.2
Virtualization: None
Operating System: Linux Mint 19.2 Tina [5.0.0-23-generic | libc 2.27 (Ubuntu GLIBC 2.27-3ubuntu1)]
BOINC version: 7.9.3
Memory: 157.35 GB
Cache: 25600 KB
Swap space: 2 GB
Total disk space: 915.89 GB
Free disk space: 704.71 GB
Measured floating point speed: 3.82 billion ops/sec
Measured integer speed: 111.25 billion ops/sec
Average upload rate: 171.61 KB/sec
Average download rate: 1323.29 KB/sec
Average turnaround time: 0.15 days

Thanks for any advice. Hoping to get the most out of this system.

Eric Claussen
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873

You are thinking of the old Lunatics installer, but that is for Windows only. Since you are running Linux and a current Nvidia card, I would HIGHLY recommend installing the All-in-One package. The package contains BOINC 7.14.2 and the Linux special app for Nvidia cards. The BOINC installation lives in your /home folder with permissions owned by you instead of boinc:boinc, which makes it a lot easier to manage.

The package is self-contained. It has the full BOINC installation and preconfigured Multiband and Astropulse applications, with the requisite app_info already written for their use. All that is required is to download the package and install p7zip decompression support for the Archive Manager. Then simply unpack the package to somewhere in /home, navigate to the Boinc folder, and double-click the boincmgr file; BOINC is up and running, waiting for you to join the Seti project. Log in with your Seti credentials and you are up and running the special app. Pretty easy.

The AIO installer is located here: http://www.arkayn.us/lunatics/BOINC.7z

I would first remove your distro BOINC installation so there are no remnants, reboot, and then install the AIO package. The special app is 5X faster than any stock application.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
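For anyone who prefers doing those steps from a terminal, a rough sketch follows. The archive name comes from the link above; the unpacked folder name (~/BOINC here) and the distro package names are assumptions about a typical Ubuntu/Mint setup, so adjust to what the archive actually contains.

# Remove the distro BOINC packages first so there are no remnants, then reboot
sudo apt remove --purge boinc-client boinc-manager

# Install 7z support (unpacking through the Archive Manager works just as well)
sudo apt install p7zip-full

# Download and unpack the All-in-One package somewhere in /home
cd ~
wget http://www.arkayn.us/lunatics/BOINC.7z
7z x BOINC.7z

# Start the bundled BOINC Manager from the unpacked folder
cd ~/BOINC
./boincmgr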
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

Ahh. Thanks for the link. I'm going to let my current work units finish computing and then give it a try.

Thanks
Eric Claussen
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873

Not that familiar with Mint. Run Ubuntu. So I don't know if the Archive Manager in Mint understands the .7z compression format. Easy to remedy:

sudo apt install pzip

There are good instructions in the package, but they aren't really needed as the configuration just works. Only missing dependencies can hang you up, but TBar (the package maintainer) did a pretty thorough job of making it compatible with Ubuntu/Debian-based releases from 12.04 to 18.04.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

Thanks for the tip. Mint has support for .7z out of the box.

Eric
Zalster Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242

Isn't it sudo apt install p7zip ?
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873

Yes. Typo. Goofed.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Wiggo Joined: 24 Jan 00 Posts: 36817 Credit: 261,360,520 RAC: 489

isn't it ?

It doesn't matter, as it's included in a Mint 19.x install. ;-)

Cheers.
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

I switched over this morning before work. Completed the first GPU WU in about 4 mins. Faster than yesterday but still seems slow. I'll mess with it when I get home.

Eric
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873

Yes, something is not correct. The output of stderr.txt looks normal but the run times are very abnormal for a Turing card.

There is another app included in the package. The default one is the CUDA90 app, but the CUDA101 application is also included, ready to go after a simple Find and Replace in app_info.xml.

You have 32 CPUs, so you could dedicate some to supporting the GPU threads. You do that by adding the -nobs parameter to the command line statement in app_info:

<cmdline>-nobs</cmdline>

That will speed up production. The lack of CPU support is the only thing that I can think of that could be slowing down the GPU run times. The GPU application needs some CPU support to feed the GPU threads.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
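To make those two changes concrete, a minimal sketch of the lines that change in the CUDA <app_version> block of app_info.xml; everything else in the block stays exactly as the package wrote it, and the cuda101 file names are whatever the AIO actually ships under:

<plan_class>cuda101</plan_class>   <!-- was cuda90; the matching file_info/file_ref names change to the cuda101 binary too -->
<cmdline>-nobs</cmdline>           <!-- was empty; -nobs dedicates a full CPU thread to keeping the GPU fed -->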
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799

The lack of CPU support is the only thing that I can think of that could be slowing down the GPU run times.

Long shot: did he run more than 1 WU at a time? A common mistake from the Windows users. With the Linux special sauce you only run 1 WU at a time.
Ian&Steve C. Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640

Yes, something is not correct. The output of stderr.txt looks normal but the run times are very abnormal for a Turing card. There is another app included in the package. The default one is the CUDA90 app, but the CUDA101 application is also included, ready to go after a simple Find and Replace in app_info.xml.

This. And this:

The lack of CPU support is the only thing that I can think of that could be slowing down the GPU run times.

Make sure that you aren't starving the GPU app by running the CPU at 100% while it is also trying to support the GPU. Set your CPU use to something like 80-90%, and add -nobs as Keith mentioned.

Can you get an nvidia-smi output from the terminal while it is running?

Seti@Home classic workunits: 29,492 CPU time: 134,419 hours
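If it helps to capture that while a task is actually running, a continuously refreshing view works with the standard tools (nothing here is specific to the special app):

watch -n 1 nvidia-smi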
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

Thanks for all the suggestions. I will change it around as soon as I get home, within the next hour or so.
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

OK, so I tried the easiest thing first. Reduced CPU usage and time from 100% to 95%. Turning out 1 minute work units with CUDA90. What times should I be expecting?

Thanks
Eric
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799

OK, so I tried the easiest thing first. Reduced CPU usage and time from 100% to 95%. Turning out 1 minute work units with CUDA90.

Still too long; on my 2070 a WU is crunched in about 1 minute. Drop the CPU usage to 80% just to see if it changes. And can you post your app_config.html file?
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.40       Driver Version: 430.40       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 2080    Off  | 00000000:03:00.0  On |                  N/A |
| 76%   83C    P2   198W / 225W |   2565MiB /  7981MiB |     82%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1641      G   /usr/lib/xorg/Xorg                           536MiB |
|    0      2205      G   cinnamon                                      77MiB |
|    0      2503      G   ...quest-channel-token=5828664825319891002   538MiB |
|    0      8882      C   ...x41p_V0.98b1_x86_64-pc-linux-gnu_cuda90  1343MiB |
+-----------------------------------------------------------------------------+
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873

OK, so I tried the easiest thing first. Reduced CPU usage and time from 100% to 95%. Turning out 1 minute work units with CUDA90.

An RTX 2080 should be doing tasks in 40 seconds. Mine do.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873

nvidia-smi

83°C is too hot. The card is downclocking. Check the Nvidia X Server Settings PowerMizer tab for what the clocks are. You need better case cooling, or turn up the fans on the card to 100%.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
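If the stock fan curve can't keep up, manual fan control from the terminal is one option. This is a sketch only, assuming a single GPU at index 0 with an X session running on it; the Coolbits step rewrites the X config and needs the X session restarted before the fan attribute becomes writable:

# Enable manual fan control (Coolbits), then restart the X session
sudo nvidia-xconfig --enable-all-gpus --cool-bits=4

# Pin the fan on GPU 0 to 100%
nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=100"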
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

Where is the app_config.html file? I couldn't find it. I do have an app_config.h and an app_config.cpp, but neither one looks like it has the info we might be looking for. I have app_info.xml, which looks more useful. Here it is:

<app_info>
  <app>
    <name>setiathome_v8</name>
  </app>
  <file_info>
    <name>setiathome_x41p_V0.98b1_x86_64-pc-linux-gnu_cuda90</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <platform>x86_64-pc-linux-gnu</platform>
    <version_num>801</version_num>
    <plan_class>cuda90</plan_class>
    <cmdline></cmdline>
    <coproc>
      <type>NVIDIA</type>
      <count>1</count>
    </coproc>
    <avg_ncpus>0.1</avg_ncpus>
    <max_ncpus>0.1</max_ncpus>
    <file_ref>
      <file_name>setiathome_x41p_V0.98b1_x86_64-pc-linux-gnu_cuda90</file_name>
      <main_program/>
    </file_ref>
  </app_version>
  <app>
    <name>astropulse_v7</name>
  </app>
  <file_info>
    <name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</name>
    <executable/>
  </file_info>
  <file_info>
    <name>AstroPulse_Kernels_r2751.cl</name>
  </file_info>
  <file_info>
    <name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</name>
  </file_info>
  <app_version>
    <app_name>astropulse_v7</app_name>
    <platform>x86_64-pc-linux-gnu</platform>
    <version_num>708</version_num>
    <plan_class>opencl_nvidia_100</plan_class>
    <coproc>
      <type>NVIDIA</type>
      <count>1</count>
    </coproc>
    <avg_ncpus>0.1</avg_ncpus>
    <max_ncpus>0.1</max_ncpus>
    <file_ref>
      <file_name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>AstroPulse_Kernels_r2751.cl</file_name>
    </file_ref>
    <file_ref>
      <file_name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</file_name>
      <open_name>ap_cmdline.txt</open_name>
    </file_ref>
  </app_version>
  <app>
    <name>setiathome_v8</name>
  </app>
  <file_info>
    <name>MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <platform>x86_64-pc-linux-gnu</platform>
    <version_num>800</version_num>
    <file_ref>
      <file_name>MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu</file_name>
      <main_program/>
    </file_ref>
  </app_version>
  <app>
    <name>astropulse_v7</name>
  </app>
  <file_info>
    <name>ap_7.05r2728_sse3_linux64</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>astropulse_v7</app_name>
    <version_num>704</version_num>
    <platform>x86_64-pc-linux-gnu</platform>
    <plan_class></plan_class>
    <file_ref>
      <file_name>ap_7.05r2728_sse3_linux64</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>

Eric
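For reference, the two edits Keith suggested map onto this file as a couple of one-liners. This is a sketch only; it assumes the CUDA 10.1 binary in the package follows the same naming pattern with a cuda101 suffix, and BOINC should be stopped before editing:

cd ~/BOINC            # wherever the AIO package was unpacked
cp app_info.xml app_info.xml.bak

# Use the CUDA 10.1 build instead of the CUDA 9.0 one (plan_class and file names)
sed -i 's/cuda90/cuda101/g' app_info.xml

# Add -nobs to the empty command line so a CPU thread keeps the GPU fed
sed -i 's#<cmdline></cmdline>#<cmdline>-nobs</cmdline>#' app_info.xml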
Eric Claussen Joined: 31 Jan 00 Posts: 22 Credit: 2,319,283 RAC: 0

I think it's just too warm in this room. It would cost quite a bit to air-condition it.

Eric