Setting up Linux to crunch CUDA90 and above for Windows users

Profile Keith Myers (Special Project $250 donor, Volunteer tester)
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2001051 - Posted: 4 Jul 2019, 21:28:32 UTC - in response to Message 2001049.  
Last modified: 4 Jul 2019, 21:30:19 UTC

Need those startup lines of the Event Log. I see you have OpenCL detected now.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2001051
Profile Keith Myers
Message 2001053 - Posted: 4 Jul 2019, 21:31:33 UTC - in response to Message 2001049.  

Shut BOINC down, then restart it and you'll get those 30 startup lines required, but so far your rig with the GT710 is showing no signs of the OpenCL component being installed.

Cheers.

If he hadn't restarted BOINC it wouldn't show, because BOINC hadn't detected the OpenCL drivers yet. He has them installed, as proven by the clinfo output in his earlier post. He needs to restart BOINC. I should have explicitly told him that.


Just restarted, received this "SETI@home: Notice from server
Your app_info.xml file doesn't have a usable version of SETI@home v8."

EDIT: Does it matter where I got BOINC from? I used this link http://www.arkayn.us/lunatics/BOINC.7z

No. That is the only place to get the AIO installer. We need to see the first 30 lines of the BOINC startup to see what BOINC is complaining about.
ID: 2001053
Profile ThePHX264

Joined: 29 May 19
Posts: 86
Credit: 6,688,090
RAC: 32
United States
Message 2001055 - Posted: 4 Jul 2019, 21:34:52 UTC

Thu 04 Jul 2019 04:32:38 PM CDT | | Starting BOINC client version 7.14.2 for x86_64-pc-linux-gnu
Thu 04 Jul 2019 04:32:38 PM CDT | | log flags: file_xfer, sched_ops, task, sched_op_debug
Thu 04 Jul 2019 04:32:38 PM CDT | | Libraries: libcurl/7.58.0 GnuTLS/3.5.18 zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3
Thu 04 Jul 2019 04:32:38 PM CDT | | Data directory: /home/thephx/Desktop/BOINC
Thu 04 Jul 2019 04:32:38 PM CDT | | CUDA: NVIDIA GPU 0: GeForce GT 710 (driver version 390.11, CUDA version 9.1, compute capability 3.5, 2001MB, 1762MB available, 366 GFLOPS peak)
Thu 04 Jul 2019 04:32:38 PM CDT | | OpenCL: NVIDIA GPU 0: GeForce GT 710 (driver version 390.116, device version OpenCL 1.2 CUDA, 2001MB, 1762MB available, 366 GFLOPS peak)
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | Found app_info.xml; using anonymous platform
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | File referenced in app_info.xml does not exist: libcudart.so.6.0
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | File referenced in app_info.xml does not exist: libcufft.so.6.0
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | [error] State file error: missing application file libcudart.so.6.0
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | File referenced in app_info.xml does not exist: MBv8_8.22r3711_sse41_x86_64-pc-linux-gnu
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | [error] State file error: missing application file MBv8_8.22r3711_sse41_x86_64-pc-linux-gnu
Thu 04 Jul 2019 04:32:38 PM CDT | | [libc detection] gathered: 2.27, Ubuntu GLIBC 2.27-3ubuntu1
Thu 04 Jul 2019 04:32:38 PM CDT | | Host name: FX8350
Thu 04 Jul 2019 04:32:38 PM CDT | | Processor: 8 AuthenticAMD AMD FX(tm)-8350 Eight-Core Processor [Family 21 Model 2 Stepping 0]
Thu 04 Jul 2019 04:32:38 PM CDT | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
Thu 04 Jul 2019 04:32:38 PM CDT | | OS: Linux Ubuntu: Ubuntu 18.04.2 LTS [4.18.0-25-generic|libc 2.27 (Ubuntu GLIBC 2.27-3ubuntu1)]
Thu 04 Jul 2019 04:32:38 PM CDT | | Memory: 7.77 GB physical, 2.00 GB virtual
Thu 04 Jul 2019 04:32:38 PM CDT | | Disk: 219.06 GB total, 197.84 GB free
Thu 04 Jul 2019 04:32:38 PM CDT | | Local time is UTC -5 hours
Thu 04 Jul 2019 04:32:38 PM CDT | | Config: use all coprocessors
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | URL http://setiathome.berkeley.edu/; Computer ID 8742274; resource share 100
Thu 04 Jul 2019 04:32:38 PM CDT | | No general preferences found - using defaults
Thu 04 Jul 2019 04:32:38 PM CDT | | Preferences:
Thu 04 Jul 2019 04:32:38 PM CDT | | max memory usage when active: 3979.53 MB
Thu 04 Jul 2019 04:32:38 PM CDT | | max memory usage when idle: 7163.16 MB
Thu 04 Jul 2019 04:32:38 PM CDT | | max disk usage: 197.15 GB
Thu 04 Jul 2019 04:32:38 PM CDT | | don't use GPU while active
Thu 04 Jul 2019 04:32:38 PM CDT | | suspend work if non-BOINC CPU load exceeds 25%
Thu 04 Jul 2019 04:32:38 PM CDT | | (to change preferences, visit a project web site or select Preferences in the Manager)
Thu 04 Jul 2019 04:32:38 PM CDT | | Setting up project and slot directories
Thu 04 Jul 2019 04:32:38 PM CDT | | Checking active tasks
Thu 04 Jul 2019 04:32:38 PM CDT | | Setting up GUI RPC socket
Thu 04 Jul 2019 04:32:38 PM CDT | | Checking presence of 9 project files
Thu 04 Jul 2019 04:32:38 PM CDT | | Suspending GPU computation - computer is in use
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | [sched_op] Starting scheduler request
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | Sending scheduler request: To fetch work.
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | [sched_op] CPU work request: 414720.00 seconds; 8.00 devices
Thu 04 Jul 2019 04:32:38 PM CDT | SETI@home | [sched_op] NVIDIA GPU work request: 51840.00 seconds; 1.00 devices
Thu 04 Jul 2019 04:32:39 PM CDT | SETI@home | Scheduler request completed: got 0 new tasks
Thu 04 Jul 2019 04:32:39 PM CDT | SETI@home | [sched_op] Server version 709
Thu 04 Jul 2019 04:32:39 PM CDT | SETI@home | Not sending work - last request too recent: 213 sec
Thu 04 Jul 2019 04:32:39 PM CDT | SETI@home | Project requested delay of 303 seconds
Thu 04 Jul 2019 04:32:39 PM CDT | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu 04 Jul 2019 04:32:39 PM CDT | SETI@home | [sched_op] Reason: requested by project
Thu 04 Jul 2019 04:32:48 PM CDT | | Suspending computation - user request



I guess I grabbed the wrong MB version, and I can't locate libcudart.so.6.0 and libcufft.so.6.0.
ID: 2001055
Profile Keith Myers
Message 2001057 - Posted: 4 Jul 2019, 21:37:18 UTC - in response to Message 2001049.  
Last modified: 4 Jul 2019, 22:03:45 UTC

I see the problem: I goofed in the app_info header and dropped the beginning <app_info>. This is the corrected app_info. I also corrected the cpu app name for MB found in the AIO installer.

<app_info>
  <app>
     <name>setiathome_v8</name>
  </app>
    <file_info>
      <name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda60</name>
      <executable/>
    </file_info>
    <file_info>
      <name>libcudart.so.6.0</name>
    </file_info>
    <file_info>
      <name>libcufft.so.6.0</name>
    </file_info>
    <app_version>
      <app_name>setiathome_v8</app_name>
      <platform>x86_64-pc-linux-gnu</platform>
      <version_num>802</version_num>
      <plan_class>cuda60</plan_class>
      <cmdline>-unroll 6 -nobs</cmdline>
      <coproc>
        <type>NVIDIA</type>
        <count>1</count>
      </coproc>
      <avg_ncpus>0.1</avg_ncpus>
      <max_ncpus>0.1</max_ncpus>
      <file_ref>
          <file_name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda60</file_name>
          <main_program/>
      </file_ref>
      <file_ref>
          <file_name>libcudart.so.6.0</file_name>
      </file_ref>
      <file_ref>
          <file_name>libcufft.so.6.0</file_name>
      </file_ref>
    </app_version>
  <app>
     <name>astropulse_v7</name>
  </app>
     <file_info>
       <name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</name>
        <executable/>
     </file_info>
     <file_info>
       <name>AstroPulse_Kernels_r2751.cl</name>
     </file_info>
     <file_info>
       <name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</name>
     </file_info>
    <app_version>
      <app_name>astropulse_v7</app_name>
      <platform>x86_64-pc-linux-gnu</platform>
      <version_num>708</version_num>
      <plan_class>opencl_nvidia_100</plan_class>
      <coproc>
        <type>NVIDIA</type>
        <count>1</count>
      </coproc>
      <avg_ncpus>0.1</avg_ncpus>
      <max_ncpus>0.1</max_ncpus>
      <file_ref>
         <file_name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</file_name>
          <main_program/>
      </file_ref>
      <file_ref>
         <file_name>AstroPulse_Kernels_r2751.cl</file_name>
      </file_ref>
      <file_ref>
         <file_name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</file_name>
         <open_name>ap_cmdline.txt</open_name>
      </file_ref>
    </app_version>
   <app>
      <name>setiathome_v8</name>
   </app>
      <file_info>
         <name>MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu</name>
         <executable/>
      </file_info>
     <app_version>
     <app_name>setiathome_v8</app_name>
     <platform>x86_64-pc-linux-gnu</platform>
     <version_num>800</version_num>   
      <file_ref>
        <file_name>MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu</file_name>
        <main_program/>
      </file_ref>
    </app_version>
   <app>
      <name>astropulse_v7</name>
   </app>
     <file_info>
       <name>ap_7.05r2728_sse3_linux64</name>
        <executable/>
     </file_info>
    <app_version>
       <app_name>astropulse_v7</app_name>
       <version_num>704</version_num>
       <platform>x86_64-pc-linux-gnu</platform>
       <plan_class></plan_class>
       <file_ref>
         <file_name>ap_7.05r2728_sse3_linux64</file_name>
          <main_program/>
       </file_ref>
    </app_version>
</app_info>


Please copy again and update the app_info.
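Once the files are back, a quick way to double-check is to compare the app_info against what is actually on disk. This is just a sketch of my own (`check_app_info` is a hypothetical helper, not part of any installer); run it from inside the setiathome.berkeley.edu project folder:

```shell
# Hypothetical helper: flag files that app_info.xml references
# but that are not present in the current directory.
check_app_info() {
    # pull every <file_name>... entry out of the xml, then test each one
    grep -o '<file_name>[^<]*' "${1:-app_info.xml}" | sed 's/<file_name>//' |
        sort -u | while read -r f; do
            [ -e "$f" ] || echo "MISSING: $f"
        done
}
# Example: cd ~/Desktop/BOINC/projects/setiathome.berkeley.edu && check_app_info
```

Anything it prints as MISSING is exactly what BOINC will complain about at the next startup.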
ID: 2001057
Profile ThePHX264
Message 2001060 - Posted: 4 Jul 2019, 21:44:26 UTC
Last modified: 4 Jul 2019, 21:44:51 UTC

Haha, I actually noticed that when I was copying and pasting before, I made sure to keep <app_info> at the beginning.

The MB version that I grabbed says MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu vs the MBv8_8.22r3711_sse41_x86_64-pc-linux-gnu that you have in your app_info. I assume I just need to replace it with the name that has "intel", since that appears to be the only difference? Also, where do I get libcudart and libcufft? They are not in the project folder like everything else.
ID: 2001060
Profile Keith Myers
Message 2001061 - Posted: 4 Jul 2019, 21:44:46 UTC
Last modified: 4 Jul 2019, 21:56:15 UTC

Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO.

You need the CUDA60 gpu app from the CUDA60 package along with the cuda library files copied into the directory.
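With BOINC stopped, that copy step for the CUDA60 files could look something like this (the cpu apps from the AIO get copied the same way). `restore_cuda60` is a hypothetical helper and the example paths are assumptions; substitute wherever you actually unpacked the CUDA60 archive:

```shell
# Hypothetical helper -- src is wherever you unpacked the CUDA60 archive,
# proj is the SETI@home project directory from the Event Log above.
restore_cuda60() {
    src=$1 proj=$2
    cp "$src"/setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda60 \
       "$src"/libcudart.so.6.0 "$src"/libcufft.so.6.0 "$proj"/
    # only the app itself needs the execute bit; the .so libraries do not
    chmod +x "$proj"/setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda60
}
# Example (assumed paths):
# restore_cuda60 ~/Downloads/CUDA60 ~/Desktop/BOINC/projects/setiathome.berkeley.edu
```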
ID: 2001061
Profile Keith Myers
Message 2001062 - Posted: 4 Jul 2019, 21:49:19 UTC - in response to Message 2001060.  

Haha, I actually noticed that when I was copying and pasting before, I made sure to keep <app_info> at the beginning.

The MB version that I grabbed says MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu vs the MBv8_8.22r3711_sse41_x86_64-pc-linux-gnu that you have in your app_info. I assume I just need to replace it with the name that has "intel", since that appears to be the only difference? Also, where do I get libcudart and libcufft? They are not in the project folder like everything else.

Yes, that is correct. I forgot that I had switched my MB cpu app over to the AMD variety. Just match the name of the actual executable to the filename referenced in the app_info.
Make sure every file referenced in the app_info is actually in the directory, and make sure they all have the execute permission set. Your previous Event Log post shows most of the apps are missing now.
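One way to set that flag from a terminal. This is a sketch (`make_apps_executable` is a hypothetical helper); the file names in the example are the ones from the app_info above:

```shell
# Hypothetical helper: set the execute bit on each app that exists,
# quietly skipping anything not present.
make_apps_executable() {
    for app in "$@"; do
        if [ -e "$app" ]; then
            chmod +x "$app"
        fi
    done
}
# Example, from inside the project folder:
# make_apps_executable setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda60 \
#     astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100 \
#     MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu \
#     ap_7.05r2728_sse3_linux64
# ls -l   # the apps should then show an x in the permission bits, e.g. -rwxr-xr-x
```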
ID: 2001062
Profile ThePHX264
Message 2001064 - Posted: 4 Jul 2019, 21:50:27 UTC - in response to Message 2001061.  

Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO.

You need the CUDA60 gpu app from the CUDA60 package along with the cuda library files copied into the directory.


Haha, that is a smart app! I noticed that earlier..."where did that go? Could have sworn I put that file in here..."

I believe the only things I am missing now are libcudart and libcufft; they aren't in the project folder. Do they need to be?
ID: 2001064
Profile Keith Myers
Message 2001065 - Posted: 4 Jul 2019, 21:53:19 UTC - in response to Message 2001064.  
Last modified: 4 Jul 2019, 21:53:41 UTC

Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO.

You need the CUDA60 gpu app from the CUDA60 package along with the cuda library files copied into the directory.


Haha, that is a smart app! I noticed that earlier..."where did that go? Could have sworn I put that file in here..."

I believe the only things I am missing now are libcudart and libcufft; they aren't in the project folder. Do they need to be?

Yes, they need to be in the project folder along with the executables. Go back to where you unpacked the CUDA60 archive and grab them from there.
ID: 2001065
Profile ThePHX264
Message 2001066 - Posted: 4 Jul 2019, 21:54:56 UTC - in response to Message 2001065.  

Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO.

You need the CUDA60 gpu app from the CUDA60 package along with the cuda library files copied into the directory.


Haha, that is a smart app! I noticed that earlier..."where did that go? Could have sworn I put that file in here..."

I believe the only things I am missing now are libcudart and libcufft; they aren't in the project folder. Do they need to be?

Yes, they need to be in the project folder along with the executables. Go back to where you unpacked the CUDA60 archive and grab them from there.


Hahaha, wow, I absolutely overlooked those two files. Let's try this once more... give me a minute. Need to swig this beer and hit save.
ID: 2001066
Profile ThePHX264
Message 2001067 - Posted: 4 Jul 2019, 21:55:37 UTC

Thu 04 Jul 2019 04:55:09 PM CDT | | Starting BOINC client version 7.14.2 for x86_64-pc-linux-gnu
Thu 04 Jul 2019 04:55:09 PM CDT | | log flags: file_xfer, sched_ops, task, sched_op_debug
Thu 04 Jul 2019 04:55:09 PM CDT | | Libraries: libcurl/7.58.0 GnuTLS/3.5.18 zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3
Thu 04 Jul 2019 04:55:09 PM CDT | | Data directory: /home/thephx/Desktop/BOINC
Thu 04 Jul 2019 04:55:09 PM CDT | | CUDA: NVIDIA GPU 0: GeForce GT 710 (driver version 390.11, CUDA version 9.1, compute capability 3.5, 2001MB, 1752MB available, 366 GFLOPS peak)
Thu 04 Jul 2019 04:55:09 PM CDT | | OpenCL: NVIDIA GPU 0: GeForce GT 710 (driver version 390.116, device version OpenCL 1.2 CUDA, 2001MB, 1752MB available, 366 GFLOPS peak)
Thu 04 Jul 2019 04:55:09 PM CDT | SETI@home | Found app_info.xml; using anonymous platform
Thu 04 Jul 2019 04:55:10 PM CDT | | [libc detection] gathered: 2.27, Ubuntu GLIBC 2.27-3ubuntu1
Thu 04 Jul 2019 04:55:10 PM CDT | | Host name: FX8350
Thu 04 Jul 2019 04:55:10 PM CDT | | Processor: 8 AuthenticAMD AMD FX(tm)-8350 Eight-Core Processor [Family 21 Model 2 Stepping 0]
Thu 04 Jul 2019 04:55:10 PM CDT | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
Thu 04 Jul 2019 04:55:10 PM CDT | | OS: Linux Ubuntu: Ubuntu 18.04.2 LTS [4.18.0-25-generic|libc 2.27 (Ubuntu GLIBC 2.27-3ubuntu1)]
Thu 04 Jul 2019 04:55:10 PM CDT | | Memory: 7.77 GB physical, 2.00 GB virtual
Thu 04 Jul 2019 04:55:10 PM CDT | | Disk: 219.06 GB total, 197.80 GB free
Thu 04 Jul 2019 04:55:10 PM CDT | | Local time is UTC -5 hours
Thu 04 Jul 2019 04:55:10 PM CDT | | Config: use all coprocessors
Thu 04 Jul 2019 04:55:10 PM CDT | SETI@home | URL http://setiathome.berkeley.edu/; Computer ID 8742274; resource share 100
Thu 04 Jul 2019 04:55:10 PM CDT | | No general preferences found - using defaults
Thu 04 Jul 2019 04:55:10 PM CDT | | Preferences:
Thu 04 Jul 2019 04:55:10 PM CDT | | max memory usage when active: 3979.53 MB
Thu 04 Jul 2019 04:55:10 PM CDT | | max memory usage when idle: 7163.16 MB
Thu 04 Jul 2019 04:55:10 PM CDT | | max disk usage: 197.15 GB
Thu 04 Jul 2019 04:55:10 PM CDT | | don't use GPU while active
Thu 04 Jul 2019 04:55:10 PM CDT | | suspend work if non-BOINC CPU load exceeds 25%
Thu 04 Jul 2019 04:55:10 PM CDT | | (to change preferences, visit a project web site or select Preferences in the Manager)
Thu 04 Jul 2019 04:55:10 PM CDT | | Setting up project and slot directories
Thu 04 Jul 2019 04:55:10 PM CDT | | Checking active tasks
Thu 04 Jul 2019 04:55:10 PM CDT | | Setting up GUI RPC socket
Thu 04 Jul 2019 04:55:10 PM CDT | | Checking presence of 12 project files
Thu 04 Jul 2019 04:55:10 PM CDT | | Suspending GPU computation - computer is in use
Thu 04 Jul 2019 04:55:10 PM CDT | SETI@home | [sched_op] Starting scheduler request
Thu 04 Jul 2019 04:55:10 PM CDT | SETI@home | Sending scheduler request: To fetch work.
Thu 04 Jul 2019 04:55:10 PM CDT | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Thu 04 Jul 2019 04:55:10 PM CDT | SETI@home | [sched_op] CPU work request: 414720.00 seconds; 8.00 devices
Thu 04 Jul 2019 04:55:10 PM CDT | SETI@home | [sched_op] NVIDIA GPU work request: 51840.00 seconds; 1.00 devices
Thu 04 Jul 2019 04:55:12 PM CDT | SETI@home | Scheduler request completed: got 53 new tasks
Thu 04 Jul 2019 04:55:12 PM CDT | SETI@home | [sched_op] Server version 709
Thu 04 Jul 2019 04:55:12 PM CDT | SETI@home | Project requested delay of 303 seconds
Thu 04 Jul 2019 04:55:12 PM CDT | SETI@home | [sched_op] estimated total CPU task duration: 423390 seconds
Thu 04 Jul 2019 04:55:12 PM CDT | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 0 seconds
Thu 04 Jul 2019 04:55:12 PM CDT | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu 04 Jul 2019 04:55:12 PM CDT | SETI@home | [sched_op] Reason: requested by project
Thu 04 Jul 2019 04:55:14 PM CDT | SETI@home | Started download of blc43_2bit_guppi_58543_64115_HIP32806_0015.18986.818.21.44.6.vlar
Thu 04 Jul 2019 04:55:14 PM CDT | SETI@home | Started download of blc63_2bit_guppi_58543_64115_HIP32806_0015.18996.0.21.44.80.vlar
Thu 04 Jul 2019 04:55:17 PM CDT | SETI@home | Finished download of blc43_2bit_guppi_58543_64115_HIP32806_0015.18986.818.21.44.6.vlar
Thu 04 Jul 2019 04:55:17 PM CDT | SETI@home | Finished download of blc63_2bit_guppi_58543_64115_HIP32806_0015.18996.0.21.44.80.vlar
Thu 04 Jul 2019 04:55:17 PM CDT | SETI@home | Started download of blc63_2bit_guppi_58543_63791_HIP33142_0014.17151.818.22.45.197.vlar
Thu 04 Jul 2019 04:55:17 PM CDT | SETI@home | Started download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18903.409.21.44.130.vlar
Thu 04 Jul 2019 04:55:17 PM CDT | SETI@home | Starting task blc63_2bit_guppi_58543_64115_HIP32806_0015.18996.0.21.44.80.vlar_1
Thu 04 Jul 2019 04:55:17 PM CDT | SETI@home | Starting task blc43_2bit_guppi_58543_64115_HIP32806_0015.18986.818.21.44.6.vlar_0
Thu 04 Jul 2019 04:55:18 PM CDT | | Suspending computation - user request
Thu 04 Jul 2019 04:55:19 PM CDT | SETI@home | Finished download of blc63_2bit_guppi_58543_63791_HIP33142_0014.17151.818.22.45.197.vlar
Thu 04 Jul 2019 04:55:19 PM CDT | SETI@home | Finished download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18903.409.21.44.130.vlar
Thu 04 Jul 2019 04:55:19 PM CDT | SETI@home | Started download of blc43_2bit_guppi_58543_63791_HIP33142_0014.18957.818.22.45.13.vlar
Thu 04 Jul 2019 04:55:19 PM CDT | SETI@home | Started download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18903.409.21.44.188.vlar
Thu 04 Jul 2019 04:55:21 PM CDT | SETI@home | Finished download of blc43_2bit_guppi_58543_63791_HIP33142_0014.18957.818.22.45.13.vlar
Thu 04 Jul 2019 04:55:21 PM CDT | SETI@home | Finished download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18903.409.21.44.188.vlar
Thu 04 Jul 2019 04:55:21 PM CDT | SETI@home | Started download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18903.409.21.44.214.vlar
Thu 04 Jul 2019 04:55:21 PM CDT | SETI@home | Started download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18968.0.22.45.109.vlar
Thu 04 Jul 2019 04:55:23 PM CDT | SETI@home | Finished download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18903.409.21.44.214.vlar
Thu 04 Jul 2019 04:55:23 PM CDT | SETI@home | Finished download of blc63_2bit_guppi_58543_63145_HIP33142_0012.18968.0.22.45.109.vlar
Thu 04 Jul 2019 04:55:23 PM CDT | SETI@home | Started download of blc43_2bit_guppi_58543_63145_HIP33142_0012.18915.409.21.44.163.vlar
Thu 04 Jul 2019 04:55:23 PM CDT | SETI@home | Started download of blc63_2bit_guppi_58543_63791_HIP33142_0014.17151.818.22.45.190.vlar
ID: 2001067
Profile Keith Myers
Message 2001068 - Posted: 4 Jul 2019, 22:00:51 UTC

OK, that is finally looking correct. I am so sorry for missing something so simple in the copy and paste. Everything is normal and found in your Event Log startup. Now just need to get some gpu work to prove out the CUDA60 app. Hopefully it won't be long before the scheduler decides to send you some. I see the gpu request for seconds of work already. It just could be the normal scheduler filling the cpu cache first before filling the gpu cache.
ID: 2001068
Profile ThePHX264
Message 2001070 - Posted: 4 Jul 2019, 22:05:53 UTC - in response to Message 2001068.  

OK, that is finally looking correct. I am so sorry for missing something so simple in the copy and paste. Everything is normal and found in your Event Log startup. Now just need to get some gpu work to prove out the CUDA60 app. Hopefully it won't be long before the scheduler decides to send you some. I see the gpu request for seconds of work already. It just could be the normal scheduler filling the cpu cache first before filling the gpu cache.


Thank you SO much for your help! I am going to put Ubuntu on another computer, but I will be using the AMD gpu that I replaced in this computer. Going to make https://setiathome.berkeley.edu/show_host_detail.php?hostid=8730309 "CHRIS" a linux machine. I would NOT use the special app, correct? I still have the info you pasted from the other site regarding the gpu drivers; going to give it a shot. An actual gpu should help the little weak machine...
ID: 2001070
Profile Keith Myers
Message 2001073 - Posted: 4 Jul 2019, 22:19:20 UTC - in response to Message 2001070.  

OK, that is finally looking correct. I am so sorry for missing something so simple in the copy and paste. Everything is normal and found in your Event Log startup. Now just need to get some gpu work to prove out the CUDA60 app. Hopefully it won't be long before the scheduler decides to send you some. I see the gpu request for seconds of work already. It just could be the normal scheduler filling the cpu cache first before filling the gpu cache.


Thank you SO much for your help! I am going to put Ubuntu on another computer, but I will be using the AMD gpu that I replaced in this computer. Going to make https://setiathome.berkeley.edu/show_host_detail.php?hostid=8730309 "CHRIS" a linux machine. I would NOT use the special app, correct? I still have the info you pasted from the other site regarding the gpu drivers; going to give it a shot. An actual gpu should help the little weak machine...

Yes, unfortunately all the "special" apps require relatively recent Nvidia hardware. The AMD gpu will have to make do with the stock SoG gpu application.
ID: 2001073
Profile Keith Myers
Message 2001079 - Posted: 4 Jul 2019, 22:37:05 UTC

I'll check back in with you tomorrow. The 24 hour penalty box should have expired by then, and you should start receiving CUDA60 gpu tasks. I am really curious how the old zi3v code works on early Kepler gpus like the GT 710; I can't see how the performance could be any worse than the SoG application. If you are using the cmdline I stuck into the app_info, you have an unroll value of 6. The docs say that is the likely maximum, so you might have to play with it; you might find that reducing it to 4 or 2 processes faster. I also stuck in the -nobs parameter, which speeds up gpu tasks by dedicating a full cpu core to supporting the gpu thread. You have 8 cpu cores in the FX, so I would limit the max cpu usage to 80%, put a max_concurrent statement in an app_config for the project, or set up the app_config to reserve a cpu core for each gpu task:

<app_config>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>cuda60</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>1</ngpus>
    <cmdline></cmdline>
  </app_version>
</app_config>
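If you do experiment with the unroll value, note that it is set on the <cmdline> line already in your app_info, not in the app_config. Purely as an illustration (not a recommendation), dropping it to 4 would read:

```xml
<cmdline>-unroll 4 -nobs</cmdline>
```

BOINC only reads app_info.xml at startup, so restart the client after editing it.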


Your GT 710 has 192 CUDA cores and I have a Nvidia Jetson Nano SBC running a Maxwell gpu that only has 128 CUDA cores. It also is running the old zi3v code branch that CyborgSam and I have wrangled into working on the ARM64 platform. This is it here:
https://setiathome.berkeley.edu/results.php?hostid=8707387&offset=0&show_names=0&state=4&appid=

So I would think your GT 710 to be at least as fast considering it has a much more powerful cpu shoveling data into and out of the gpu and is probably clocked much higher than the SoC gpu in my Nano.
ID: 2001079
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2001095 - Posted: 4 Jul 2019, 23:16:16 UTC - in response to Message 2000884.  

I'm not seeing any gpu tasks yet, just cpu tasks. You have two gpu tasks in your cache but haven't crunched them yet. Are you deliberately holding those off for some reason?
I'd like to see what the CUDA60 zi3v application looks like on your GT 710 card.


. . It ran quite well on my GT730, but that is CC=3.5 and a bit more powerful than the 710 :(

Stephen

ID: 2001095
Profile ThePHX264
Message 2001099 - Posted: 4 Jul 2019, 23:22:27 UTC - in response to Message 2001079.  
Last modified: 4 Jul 2019, 23:25:45 UTC

I'll check back in with you tomorrow. The 24 hour penalty box should have expired by then, and you should start receiving CUDA60 gpu tasks. I am really curious how the old zi3v code works on early Kepler gpus like the GT 710; I can't see how the performance could be any worse than the SoG application. If you are using the cmdline I stuck into the app_info, you have an unroll value of 6. The docs say that is the likely maximum, so you might have to play with it; you might find that reducing it to 4 or 2 processes faster. I also stuck in the -nobs parameter, which speeds up gpu tasks by dedicating a full cpu core to supporting the gpu thread. You have 8 cpu cores in the FX, so I would limit the max cpu usage to 80%, put a max_concurrent statement in an app_config for the project, or set up the app_config to reserve a cpu core for each gpu task:

<app_config>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>cuda60</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>1</ngpus>
    <cmdline></cmdline>
  </app_version>
</app_config>
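If you'd rather cap the total load than reserve cores, the project-wide max_concurrent option mentioned above would look something like this. This is only a sketch: <project_max_concurrent> requires a reasonably recent BOINC client, and 7 is an illustrative value for the 8-core FX, not a tuned one.

```xml
<app_config>
  <!-- Run at most 7 SETI@home tasks at once, leaving one FX core
       free to feed the gpu. The value 7 is illustrative only. -->
  <project_max_concurrent>7</project_max_concurrent>
</app_config>
```

The file goes in the project directory (the same place as app_info.xml) and is re-read with Options > Read config files or a client restart.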


Your GT 710 has 192 CUDA cores and I have a Nvidia Jetson Nano SBC running a Maxwell gpu that only has 128 CUDA cores. It also is running the old zi3v code branch that CyborgSam and I have wrangled into working on the ARM64 platform. This is it here:
https://setiathome.berkeley.edu/results.php?hostid=8707387&offset=0&show_names=0&state=4&appid=

So I would expect your GT 710 to be at least as fast, considering it has a much more powerful cpu shoveling data into and out of the gpu and is probably clocked much higher than the SoC gpu in my Nano.


Just started the ribs for the 4th, but I will try the things you mentioned. Haha, I was about to buy a 1060... but then I remembered the mobo only supports PCIe 2.0. This was one of the "better" cards I could find at Micro Center. https://www.msi.com/Graphics-card/GT-710-2GD3H-LP.html Nice and cheap... and thankfully they still had some variety to choose from that worked with PCIe 2.0! Only bad thing... this one is passively cooled. I made sure to upgrade the case fans. Hopefully that's all I need to do. I'll find out when the temps rise from GPU tasks.
ID: 2001099 · Report as offensive     Reply Quote
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2001101 - Posted: 4 Jul 2019, 23:26:18 UTC - in response to Message 2000895.  

Right now I am running the beta SOG app on this host https://setiathome.berkeley.edu/show_host_detail.php?hostid=8702456. It has a GT 720 and a GT 730 (both CC = 3.5) and they don't seem very fast with the SOG app. As soon as I clear the cache I will run the CUDA60 -zi3v app and see if there is any improvement.

One card has only 1GB of RAM, which adds a wrinkle according to the readme file.

My ultimate goal for this host is to get a GTX 1050 Ti Mini on the cheap. Hopefully that will be soon but you never know...


. . On my C2D machine I updated from a GT730 to the GTX1050ti and it is well worth the effort :)

. . For the record, run times on the 730 with SoG r3557 were around 45 to 50 mins. With Cuda60 they were about 27 mins (with Cuda50 they were about 33 to 37 mins).

. . Have fun ...

Stephen

:)
ID: 2001101 · Report as offensive     Reply Quote
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2001103 - Posted: 4 Jul 2019, 23:28:52 UTC - in response to Message 2000896.  

Yes, the beta r3602 app doesn't seem much faster than the stock r3584 8.22 Linux SoG app. I have a hunch the zi3v app will be much faster even if you are forced to run with unroll 1 or 2 because of only 1GB of memory.

Really curious about that CUDA60 zi3v app in relation to the stock SoG app for Linux clients. Think that might be another great reason to persuade hosts with entry-level cards over from Windows.


. . Well the GT730 only has 2 CUs so unroll 2 is it ... I don't know about the 720.
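For anyone tuning this at home: with the special app, the unroll and -nobs switches are passed on the application command line, i.e. inside the <cmdline> element of the matching <app_version> block in app_info.xml. A minimal sketch (flag spellings per the app's readme; double-check against your copy before relying on them):

```xml
<!-- Inside the cuda60 <app_version> block of app_info.xml.
     -unroll 2 suits low-memory / 2-CU cards; -nobs dedicates
     a full cpu core to feeding the gpu. -->
<cmdline>-nobs -unroll 2</cmdline>
```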

Stephen

. .
ID: 2001103 · Report as offensive     Reply Quote
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2001108 - Posted: 4 Jul 2019, 23:52:25 UTC - in response to Message 2001095.  

I'm not seeing any gpu tasks yet. Just cpu tasks. You have two gpu tasks in your cache but haven't crunched them yet. Are you deliberately holding those off for some reason?
I'd like to see what the CUDA60 zi3v application looks like on your GT 710 card.


. . It ran quite well on my GT730 but that is CC=3.5. And a bit more powerful than the 710 :(

Stephen

? ?

Don't see it in your hosts. What kind of typical times did you get?

He should be good to go on his GT 710 also as it has the GK208b chip just like your GT 730. It is really just a GT 730 with some shaders disabled.
From Wikipedia: "CC of 3.5 on GK110/GK208 GPUs"
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2001108 · Report as offensive     Reply Quote


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.