Setting up Linux to crunch CUDA90 and above for Windows users

Keith Myers (Special Project $250 donor, Volunteer tester, United States)
Joined: 29 Apr 01 · Posts: 11744 · Credit: 1,160,866,277 · RAC: 4,249
Message 1951382 - Posted: 22 Aug 2018, 18:45:06 UTC

Yes, this should be doable. Look at xorg.conf and see whether it has two Screen definitions, and find which Screen the monitor is attached to. If its Device identifier is the nvidia one, change it to the intel one. Pay attention to the BusID for each Device and Screen and make sure they match. Use lspci to verify the BusID of each graphics device, or:
sudo lshw -c video
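For illustration, a trimmed xorg.conf along those lines might look like the sketch below. The BusID values are hypothetical examples; substitute whatever lspci reports on your own system, and keep a backup of the working file first.

```
Section "Device"
    Identifier "intel"
    Driver     "modesetting"      # or "intel"; driver choice depends on your distro
    BusID      "PCI:0:2:0"        # integrated GPU's bus ID, as reported by lspci
EndSection

Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"        # discrete card; left headless for crunching
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "intel"            # the monitor-driving Screen points at the Intel Device
EndSection
```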

Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1951382

Tom M (Volunteer tester)
Joined: 28 Nov 02 · Posts: 4936 · Credit: 276,046,078 · RAC: 1,048
Message 1951381 - Posted: 22 Aug 2018, 18:41:09 UTC - in response to Message 1951377.


I got out of this terrible mess, but I am again at square one, crunching and displaying through NVIDIA.
Before I make another catastrophic attempt, can anyone suggest a way around this corner?

Thank you very much in advance!

Sleepy


That is a VERY good question. I don't have the ability to run my Intel integrated GPU at the same time as my discrete card, so that issue has not come up for me.

I have been running gpu tasks on cards that are also displaying.

I offer three possible workarounds:
1) Ignore the issue and keep on computing.
2) Set the GPU preferences to suspend whenever the computer is "active." This basically means any time you are at the computer.
3) Set SETI to suspend once non-BOINC activity rises above XX%. This basically means it will suspend SETI any time you are doing something that takes a significant amount of CPU time.

Depending on exactly what else you are trying to use the computer for while doing SETI processing, #2 or #3 might be a workable compromise.
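For what it's worth, options 2 and 3 correspond to BOINC's computing preferences; expressed as a global_prefs_override.xml sketch (element names as used by recent BOINC clients; the percentages here are illustrative, adjust to taste), they might look like:

```xml
<global_preferences>
  <!-- Option 2: no GPU work while the computer is in use -->
  <run_gpu_if_user_active>0</run_gpu_if_user_active>
  <idle_time_to_run>3</idle_time_to_run>          <!-- minutes idle before work resumes -->
  <!-- Option 3: suspend when non-BOINC CPU usage exceeds 25% -->
  <suspend_cpu_usage>25</suspend_cpu_usage>
</global_preferences>
```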

I have been getting very good production even when sharing the gpu with non-seti tasks.

HTH,
Tom
"I owe", "I owe", "Its off to work I go" (from a bumper sticker on a smallish Mercedes Benz)
(on the back of a Semi Tractor) "If you can read this bumper sticker, I've LOST MY TRAILER!"
ID: 1951381

Sleepy (Volunteer tester, Italy)
Joined: 21 May 99 · Posts: 214 · Credit: 98,947,784 · RAC: 64,326
Message 1951377 - Posted: 22 Aug 2018, 18:16:57 UTC

Dear all,
I was already on the road to unchaining myself from Microsoft, and the latest apps by Petri (thank you, thank you, thank you!) gave me the final push.

Therefore I moved to Kubuntu 18.04 and installed the latest applications (but not the CUDA 9.2 toolkit; I am downloading the big files for that now).
I am running the 0.97 10x0 application under the 396.51 NVIDIA driver as of now, using the 9.0 .so libraries.

Throughput has increased a lot, and so far I have not trashed too many WUs during my attempts. Invalids have not increased. So far so good.

But I have a problem, probably common to many others:

In my system I also have the graphic processor of my CPU.
Not that I want to use it for SETI; that has long been discussed and discouraged.
But under Windows I could easily crunch with my 1060 and drive the display with the embedded Intel GPU.

This way I could push the 1060 hard without compromising the normal use of my PC.
Now it seems the X display is also generated by the 1060 (though physically the monitor is still connected to the old Intel output! I cannot believe this; it is very weird and probably also very wrong).

I tried on Monday to solve the problem; the details are not important, but I made a terrible mess.
Intel was stuck at low resolution (it could be raised with some command-line work, though only until the next boot), and the 1060 was no longer recognized by BOINC.

I got out of this terrible mess, but I am again at square one, crunching and displaying through NVIDIA.
Before I make another catastrophic attempt, can anyone suggest a way around this corner?

Thank you very much in advance!

Sleepy
ID: 1951377

Tom M (Volunteer tester)
Joined: 28 Nov 02 · Posts: 4936 · Credit: 276,046,078 · RAC: 1,048
Message 1951375 - Posted: 22 Aug 2018, 17:58:07 UTC - in response to Message 1951368.


Hi Stephen,
I suggest you use -nobs when you have spare cores. You can forget the -pfp flag. You can use the -pfb 32 flag, and -pfl values like 64 for 1080-class cards and 512 for the GTX 750.
There is a flag, -pfe (no parameter value for that one), that you can try. It may give a boost, but it will almost certainly misbehave on noise bombs; your inconclusive and invalid counts will rise.

Do not leave the -pfe flag in; just test it to see if it helps with speed.

Petri


I am running CUDA90, and for some reason -nobs doesn't do the same thing as the app_config.xml file does for reserving a full core per GPU.

<app_info>
  <app>
     <name>setiathome_v8</name>
  </app>
    <file_info>
      <name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda90</name>
      <executable/>
    </file_info>
    <file_info>
      <name>libcudart.so.9.0</name>
    </file_info>
    <file_info>
      <name>libcufft.so.9.0</name>
    </file_info>
    <app_version>
      <app_name>setiathome_v8</app_name>
      <platform>x86_64-pc-linux-gnu</platform>
      <version_num>801</version_num>
      <plan_class>cuda90</plan_class>
     <cmdline> -nobs -pfb 32 -pfl 512</cmdline>
      <coproc>
        <type>NVIDIA</type>
        <count>1</count>
      </coproc>
      <avg_ncpus>0.1</avg_ncpus>
      <max_ncpus>0.1</max_ncpus>
      <file_ref>
         <file_name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda90</file_name>
          <main_program/>
      </file_ref>
      <file_ref>
         <file_name>libcudart.so.9.0</file_name>
      </file_ref>
      <file_ref>
         <file_name>libcufft.so.9.0</file_name>
      </file_ref>
    </app_version>
  <app>
     <name>astropulse_v7</name>
  </app>
     <file_info>
       <name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</name>
        <executable/>
     </file_info>
     <file_info>
       <name>AstroPulse_Kernels_r2751.cl</name>
     </file_info>
     <file_info>
       <name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</name>
     </file_info>
    <app_version>
      <app_name>astropulse_v7</app_name>
      <platform>x86_64-pc-linux-gnu</platform>
      <version_num>708</version_num>
      <plan_class>opencl_nvidia_100</plan_class>
      <coproc>
        <type>NVIDIA</type>
        <count>1</count>
      </coproc>
      <avg_ncpus>0.1</avg_ncpus>
      <max_ncpus>0.1</max_ncpus>
      <file_ref>
         <file_name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</file_name>
          <main_program/>
      </file_ref>
      <file_ref>
         <file_name>AstroPulse_Kernels_r2751.cl</file_name>
      </file_ref>
      <file_ref>
         <file_name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</file_name>
         <open_name>ap_cmdline.txt</open_name>
      </file_ref>
    </app_version>
   <app>
      <name>setiathome_v8</name>
   </app>
      <file_info>
         <name>MBv8_8.05r3345_avx_linux64</name>
         <executable/>
      </file_info>
     <app_version>
     <app_name>setiathome_v8</app_name>
     <platform>x86_64-pc-linux-gnu</platform>
     <version_num>800</version_num>   
      <file_ref>
        <file_name>MBv8_8.05r3345_avx_linux64</file_name>
        <main_program/>
      </file_ref>
    </app_version>
   <app>
      <name>astropulse_v7</name>
   </app>
     <file_info>
       <name>ap_7.05r2728_sse3_linux64</name>
        <executable/>
     </file_info>
    <app_version>
       <app_name>astropulse_v7</app_name>
       <version_num>704</version_num>
       <platform>x86_64-pc-linux-gnu</platform>
       <plan_class></plan_class>
       <file_ref>
         <file_name>ap_7.05r2728_sse3_linux64</file_name>
          <main_program/>
       </file_ref>
    </app_version>
</app_info>


Yes, this includes the AVX attempt, which appears to be running SSE4.1 instead, but that is not the question. The question is: what am I doing wrong with CUDA90 and the "-nobs" command?
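For comparison, the app_config.xml approach Tom refers to reserves the core through the BOINC client's scheduler rather than through the app itself; a minimal sketch (app name and plan class taken from the app_info above, values illustrative) could be:

```xml
<app_config>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>cuda90</plan_class>
    <avg_ncpus>1.0</avg_ncpus>  <!-- budget a full CPU core per GPU task -->
    <ngpus>1.0</ngpus>          <!-- one GPU per task -->
  </app_version>
</app_config>
```

Note the difference in mechanism: -nobs makes the app itself spin a CPU core waiting on the GPU, while avg_ncpus only tells the client how many cores to budget when scheduling tasks; the two are complementary, not equivalent.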
"I owe", "I owe", "Its off to work I go" (from a bumper sticker on a smallish Mercedes Benz)
(on the back of a Semi Tractor) "If you can read this bumper sticker, I've LOST MY TRAILER!"
ID: 1951375 · Report as offensive     Reply Quote
Profile petri33
Volunteer tester

Send message
Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 354
Finland
Message 1951368 - Posted: 22 Aug 2018, 16:37:54 UTC - in response to Message 1951311.  

OK, try this in Ubuntu 14.04 (And Others);
Linux_Maxwell+v0.97_Special
As with all the CUDA 9.x apps, you need a Compute Capability (CC) 5.0 GPU with at least 2 GB of VRAM.
If you have a GTX 960 2GB, it will be very close to running out of VRAM at unroll 8. If possible, connect the monitor to a different GPU.
It should work with kernel 3.13 and above, and CC 5.0 GPUs and above. Yes, it will work with Pascal and any forthcoming GPUs, as well as Maxwell.
Same download as CUDA 9.0, app_info.xml and other Apps are included as well as the AMD CPU App 3711.


. . Well colour me tickled pink.

. . I am most impressed. On the GTX 1050 Ti, run times have dropped: Arecibo normals from 4.6 mins to 3.1 mins; the various GBT tasks from 4.9-5.5 mins to 3.2-3.8 mins (except the slow Blc14s we have had lately, which take a massive 4.1 mins); and Arecibo VLARs from 9.6 mins to 6.2 mins. Throughput comes to about 1.475 times what it used to be. Definitely a worthwhile development. Well done, guys; many thanks to TBar and Petri. A genius of an app there, Petri.

. . On the GTX 970s, well... just about everything takes about 2 mins. I haven't seen a VLAR go through yet, and that will be the fly in the ointment, but everything else is 1.9 to 2.1 mins :)

. . I have read that there is no advantage in using the -nobs parameter, but what about the -pfp 32 setting?

Stephen

:)


Hi Stephen,
I suggest you use -nobs when you have spare cores. You can forget the -pfp flag. You can use the -pfb 32 flag, and -pfl values like 64 for 1080-class cards and 512 for the GTX 750.
There is a flag, -pfe (no parameter value for that one), that you can try. It may give a boost, but it will almost certainly misbehave on noise bombs; your inconclusive and invalid counts will rise.

Do not leave the -pfe flag in; just test it to see if it helps with speed.

Petri
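Put together as an app_info cmdline, Petri's suggestions might read like this (the values are his starting points, meant to be tested rather than taken as canon):

```
<cmdline>-nobs -pfb 32 -pfl 64</cmdline>    (1080-class cards; try -pfl 512 for a GTX 750)
```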
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1951368

Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester, Canada)
Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 1,893
Message 1951350 - Posted: 22 Aug 2018, 14:50:13 UTC - in response to Message 1951345.

I have been wondering for a while whether the baseline (suggested) command-line parameters from the SoG readme would make a difference to my processing. I have just inserted "<cmdline>-sbs 192 -spike_fft_thresh 2048 -tune 1 64 1 4</cmdline>" into my app_info.xml file on one machine.
The first thing I noticed is that the CPU load varies much more; it has briefly hit 5%. Stay tuned for more stupid experiments...
This is with 1 CPU core dedicated to the GPU.

I am also successfully using the app_config.xml file to control how much CPU I dedicate to my GPU(s).

Tom
Your stderr file shows this:
<core_client_version>7.4.44</core_client_version>
<![CDATA[
<stderr_txt>
bad arg: -sbs
bad arg: 192
bad arg: -spike_fft_thresh
bad arg: 2048
bad arg: -tune
bad arg: 1
bad arg: 64
bad arg: 1
bad arg: 4
setiathome_CUDA: Found 1 CUDA device(s):
  Device 1: GeForce GTX 750 Ti, 2000 MiB, regsPerBlock 65536
     computeCap 5.0, multiProcs 5 
     pciBusID = 1, pciSlotID = 0
In cudaAcc_initializeDevice(): Boinc passed DevPref 1
setiathome_CUDA: CUDA Device 1 specified, checking...
   Device 1: GeForce GTX 750 Ti is okay
SETI@home using CUDA accelerated device GeForce GTX 750 Ti
Unroll autotune 5. Overriding Pulse find periods per launch. Parameter -pfp set to 5

setiathome v8 enhanced x41p_zi3v, Cuda 9.00 special
Modifications done by petri33, compiled by TBar
PETRI, we need a special error code for this type of input.
ID: 1951350

Mike (Special Project $75 donor, Volunteer tester, Germany)
Joined: 17 Feb 01 · Posts: 32172 · Credit: 79,922,639 · RAC: 181
Message 1951349 - Posted: 22 Aug 2018, 14:44:13 UTC

Doesn't work at all.
Those are OpenCL values for a CUDA app.
With each crime and every kindness we birth our future.
ID: 1951349

Tom M (Volunteer tester)
Joined: 28 Nov 02 · Posts: 4936 · Credit: 276,046,078 · RAC: 1,048
Message 1951345 - Posted: 22 Aug 2018, 14:10:49 UTC

I have been wondering for a while whether the baseline (suggested) command-line parameters from the SoG readme would make a difference to my processing. I have just inserted "<cmdline>-sbs 192 -spike_fft_thresh 2048 -tune 1 64 1 4</cmdline>" into my app_info.xml file on one machine.
The first thing I noticed is that the CPU load varies much more; it has briefly hit 5%. Stay tuned for more stupid experiments...
This is with 1 CPU core dedicated to the GPU.

I am also successfully using the app_config.xml file to control how much CPU I dedicate to my GPU(s).

Tom
"I owe", "I owe", "Its off to work I go" (from a bumper sticker on a smallish Mercedes Benz)
(on the back of a Semi Tractor) "If you can read this bumper sticker, I've LOST MY TRAILER!"
ID: 1951345 · Report as offensive     Reply Quote
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 4936
Credit: 276,046,078
RAC: 1,048
Message 1951343 - Posted: 22 Aug 2018, 14:04:37 UTC

I was wondering if I could find the equivalent of GPU-Z for Linux.

Here are a couple of ideas: https://askubuntu.com/questions/5417/how-to-get-the-gpu-info

Running nvidia-settings from a terminal brings up a nice GUI display, including the GPU load. It looks like it will display it for each card.
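For a terminal-only alternative, nvidia-smi (shipped with the proprietary driver) can poll the same numbers. The query flags below exist in recent drivers; since live output depends on the hardware present, the parsing step is demonstrated against a canned sample line so it runs anywhere:

```shell
# Live view, refreshed every 5 seconds (uncomment on a machine with the NVIDIA driver):
# nvidia-smi --query-gpu=name,utilization.gpu,memory.used --format=csv,noheader -l 5

# Parse one sample line of that CSV output to pull out the GPU load:
sample='GeForce GTX 1050 Ti, 97 %, 1843 MiB'
util=$(printf '%s\n' "$sample" | awk -F', ' '{print $2}')
echo "GPU load: $util"
```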

Tom
"I owe", "I owe", "Its off to work I go" (from a bumper sticker on a smallish Mercedes Benz)
(on the back of a Semi Tractor) "If you can read this bumper sticker, I've LOST MY TRAILER!"
ID: 1951343 · Report as offensive     Reply Quote
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 1951311 - Posted: 22 Aug 2018, 6:12:18 UTC - in response to Message 1951062.  

OK, try this in Ubuntu 14.04 (And Others);
Linux_Maxwell+v0.97_Special
As with all the CUDA 9.x apps, you need a Compute Capability (CC) 5.0 GPU with at least 2 GB of VRAM.
If you have a GTX 960 2GB, it will be very close to running out of VRAM at unroll 8. If possible, connect the monitor to a different GPU.
It should work with kernel 3.13 and above, and CC 5.0 GPUs and above. Yes, it will work with Pascal and any forthcoming GPUs, as well as Maxwell.
Same download as CUDA 9.0, app_info.xml and other Apps are included as well as the AMD CPU App 3711.


. . Well colour me tickled pink.

. . I am most impressed. On the GTX 1050 Ti, run times have dropped: Arecibo normals from 4.6 mins to 3.1 mins; the various GBT tasks from 4.9-5.5 mins to 3.2-3.8 mins (except the slow Blc14s we have had lately, which take a massive 4.1 mins); and Arecibo VLARs from 9.6 mins to 6.2 mins. Throughput comes to about 1.475 times what it used to be. Definitely a worthwhile development. Well done, guys; many thanks to TBar and Petri. A genius of an app there, Petri.

. . On the GTX 970s, well... just about everything takes about 2 mins. I haven't seen a VLAR go through yet, and that will be the fly in the ointment, but everything else is 1.9 to 2.1 mins :)

. . I have read that there is no advantage in using the -nobs parameter, but what about the -pfp 32 setting?

Stephen

:)
ID: 1951311

Keith Myers (Special Project $250 donor, Volunteer tester, United States)
Joined: 29 Apr 01 · Posts: 11744 · Credit: 1,160,866,277 · RAC: 4,249
Message 1951263 - Posted: 22 Aug 2018, 2:25:20 UTC - in response to Message 1951261.

Not so simple on Linux though. On Windows, the environment is a known factor and all support structures are assumed to be in place. So, yes, the project sends you the application that works on 100% of Windows computers.

On Linux, there is no standardized environment and so there are too many variables that affect the supporting software that the application needs. I am familiar with the woes that Linux users have over at GPUGrid.net in trying to get the standard Linux app working that the servers send out. If the gcc package isn't installed, the application doesn't run. The users come to the forums asking why everything works fine on their Windows computers so why doesn't the project work on the Linux computers.

Since the special app needs a minimum Nvidia driver level, the servers would have to probe a host system for the compatible environment. If all the servers look for is the driver version, the installation will fail since the Linux driver ships with separate packages for the base graphics drivers, the CUDA drivers and the OpenCL drivers. Any one or all missing will cause the application to fail.

If the special app source code is somehow ported over to Windows, then the project should be able to automatically send the special application to a host and have an almost 100% chance of it working on first startup.
ID: 1951263

mmonnin (Volunteer tester, United States)
Joined: 8 Jun 17 · Posts: 58 · Credit: 10,176,849 · RAC: 0
Message 1951261 - Posted: 22 Aug 2018, 2:09:50 UTC - in response to Message 1951244.

Yes, these apps are beta versions and should only be run by beta testers who are familiar with their development, their flaws, and how to install and test them properly. They are not for general public release, and certainly not ready for Main. Unless the apps get tested for a year and pass approval by the SETI administrators, AND someone comes up with an automatic installer like the Lunatics installer for the SoG app, I don't see these in general release. The installation has to be bulletproof and "idiot-proof," at the level of the general computer user who knows how to use a computer but has no clue how it works (and cares less), yet wants to do a scientific search for E.T. on the desktop. We are a long way from that day.


BOINC would do the 'install' just like for any other app on any project: select SETI in the dropdown in BOINC Manager and you're done. Since it would come from the project, the app_info would not be required, and we'd all download the executable just like the current SETI-provided app. Just 2 files are required now. SETI could even put the Lunatics options right there in Project Preferences.

Some projects have multiple versions, and PCs download the plan-class versions they can support based on the CPU info. If one app runs a bit faster, then more tasks for that app will download. Asteroids on CPUs, for example, has multiple SSE and AVX versions. Depending on the CPU architecture's implementation, one may run faster than the other even if the CPU supports the 'fastest'. The same could be done here.

Tweaking with the command lines is up to the user of course.
ID: 1951261

Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester, Australia)
Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426
Message 1951255 - Posted: 22 Aug 2018, 1:47:33 UTC - in response to Message 1951155.
Last modified: 22 Aug 2018, 1:55:45 UTC

Sorry to hear your misfortune Petri. I think we all have done "fat-finger" goof-ups before.


. . and for me just as recently ...

. . but life goes on, do you not have a copy of your old client_state.xml? TBar says that is all you need to restore the old host id. ...

. . Just read your later message, good news ...

Stephen

:(
ID: 1951255

Keith Myers (Special Project $250 donor, Volunteer tester, United States)
Joined: 29 Apr 01 · Posts: 11744 · Credit: 1,160,866,277 · RAC: 4,249
Message 1951244 - Posted: 22 Aug 2018, 1:06:22 UTC

Yes, these apps are beta versions and should only be run by beta testers who are familiar with their development, their flaws, and how to install and test them properly. They are not for general public release, and certainly not ready for Main. Unless the apps get tested for a year and pass approval by the SETI administrators, AND someone comes up with an automatic installer like the Lunatics installer for the SoG app, I don't see these in general release. The installation has to be bulletproof and "idiot-proof," at the level of the general computer user who knows how to use a computer but has no clue how it works (and cares less), yet wants to do a scientific search for E.T. on the desktop. We are a long way from that day.
ID: 1951244

mmonnin (Volunteer tester, United States)
Joined: 8 Jun 17 · Posts: 58 · Credit: 10,176,849 · RAC: 0
Message 1951231 - Posted: 22 Aug 2018, 0:31:07 UTC
Last modified: 22 Aug 2018, 0:31:24 UTC

Thanks for the updated apps. It's a shame these aren't the default. Even a few invalids is worth the tremendous output increase.
ID: 1951231

Keith Myers (Special Project $250 donor, Volunteer tester, United States)
Joined: 29 Apr 01 · Posts: 11744 · Credit: 1,160,866,277 · RAC: 4,249
Message 1951194 - Posted: 21 Aug 2018, 22:54:27 UTC - in response to Message 1951192.

Petri posted a direct link to both versions in an earlier post in this thread. Those links are from TBar's compilation. I think the Sierra designation might be for Pascal. You would have to read the docs in the file.
ID: 1951194

mmonnin (Volunteer tester, United States)
Joined: 8 Jun 17 · Posts: 58 · Credit: 10,176,849 · RAC: 0
Message 1951192 - Posted: 21 Aug 2018, 22:51:55 UTC - in response to Message 1951183.
Last modified: 21 Aug 2018, 22:55:59 UTC

Be careful with the different 0.97 applications. There is an exclusive Pascal compiled version. And there is another version compiled for Maxwell cards.


There is? Both cards work with the above app and are much faster.

This site only has Maxwell 0.97 and Sierra 0.97:
http://www.arkayn.us/lunatics/

Edit: Ugh. As someone who doesn't run SETI all the time, this is what makes it so frustrating: info is all over the place. Someone else posted a link to a separate place in another thread.
https://setiathome.berkeley.edu/forum_thread.php?id=83246&postid=1950636#1950636

Or info is hidden somewhere in a 600-page thread.
ID: 1951192

Keith Myers (Special Project $250 donor, Volunteer tester, United States)
Joined: 29 Apr 01 · Posts: 11744 · Credit: 1,160,866,277 · RAC: 4,249
Message 1951183 - Posted: 21 Aug 2018, 22:41:12 UTC - in response to Message 1951178.

Be careful with the different 0.97 applications. There is an exclusive Pascal compiled version. And there is another version compiled for Maxwell cards.
ID: 1951183

mmonnin (Volunteer tester, United States)
Joined: 8 Jun 17 · Posts: 58 · Credit: 10,176,849 · RAC: 0
Message 1951178 - Posted: 21 Aug 2018, 22:29:55 UTC - in response to Message 1951139.

I would estimate that it is 50% faster overall than zi3v app.


Wow, dang. I renamed it to the cuda90 file name and dropped it in. Now I just need some tasks on the 1070 and 1070 Ti. The 970 on another PC has tasks, though.
ID: 1951178

petri33 (Volunteer tester, Finland)
Joined: 6 Jun 02 · Posts: 1668 · Credit: 623,086,772 · RAC: 354
Message 1951169 - Posted: 21 Aug 2018, 22:15:31 UTC - in response to Message 1951155.

Sorry to hear your misfortune Petri. I think we all have done "fat-finger" goof-ups before.


In the end, my fat-fingeredness did not cause any major harm. The BOINC and SETI system recognized my machine, and I'm back with the same host ID.

All I lost was 600 WUs and the credit history on the statistics tab. Luckily the history is on WOW and FreeDC.

No real damage.

:)
ID: 1951169

©2020 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.