Setting up Linux to crunch CUDA90 and above for Windows users

Message boards : Number crunching : Setting up Linux to crunch CUDA90 and above for Windows users



Profile Keith Myers · Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01 · Posts: 11,776 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1951438 - Posted: 22 Aug 2018, 23:20:31 UTC - in response to Message 1951412.

The default -pfl is 64 for the application; that is tuned for 1070s and 1080s and is probably a bit too aggressive for a 1060. Try -pfl 512. You should also not run -nobs, to reduce the impact on CPU core usage. Those two changes should reduce the stuttering.
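In app_info.xml terms, the suggestion above amounts to editing the <cmdline> element of the cuda90 app_version (a sketch only; -pfb 32 is a value suggested elsewhere in this thread, and your surrounding file will differ):

```xml
<!-- inside the cuda90 <app_version> block of app_info.xml -->
<!-- note: no -nobs, per the advice above -->
<cmdline>-pfb 32 -pfl 512</cmdline>
```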
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1951438
Sleepy
Volunteer tester
Joined: 21 May 99 · Posts: 214 · Credit: 98,947,784 · RAC: 28,360 · Italy
Message 1951412 - Posted: 22 Aug 2018, 21:42:43 UTC - in response to Message 1951386.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.51                 Driver Version: 396.51                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:01:00.0 Off |                  N/A |
| 49%   66C    P2    92W / 120W |   2022MiB /  3019MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+


Dear Brent,
you are right, then: nvidia-smi reports my 1060's display as not active.

And Xorg.conf reports only one GPU, the Intel.

I should then get the signal from the Intel GPU, as the physical connection implies.

Nevertheless, when the nVidia GPU is crunching, I experience strong lags and video stuttering, which I was never experiencing under Win7.
I am keeping on average 4 CPU off Seti, depending on CPU temperature. Therefore, I should have enough CPU reserve to cope with anything.
By snoozing GPU crunching everything runs again as normal.

Now that I have updated the iGPU driver from ppa:oibaf/graphics-drivers, I will check whether this helps. But I can only test it locally; a VNC connection stutters by definition.

At least there are no strange data transfers from one GPU to the other (an unreasonable worry, I admit).

I will probably need to adjust some settings. I went straight with the defaults, since, as usual, nobody was talking about tweaking the settings for best performance/usability.

Thank you for your insights.

Sleepy
ID: 1951412
Profile Brent Norman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 1 Dec 99 · Posts: 2,786 · Credit: 685,657,289 · RAC: 835 · Canada
Message 1951386 - Posted: 22 Aug 2018, 19:07:06 UTC

'nvidia-smi' (in the terminal) will also tell you if the 1060 GPU is active:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.51                 Driver Version: 396.51                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            On   | 00000000:02:00.0 Off |                  N/A |
| 83%   83C    P2   267W / 300W |   3632MiB / 12196MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    On   | 00000000:03:00.0  On |                  N/A |
| 57%   53C    P2   155W / 217W |   3063MiB /  8111MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 1080    On   | 00000000:04:00.0 Off |                  N/A |
| 57%   49C    P2   143W / 217W |   2814MiB /  8119MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
Notice that my Card #1 has the display ON, or active.
ID: 1951386
Profile Brent Norman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 1 Dec 99 · Posts: 2,786 · Credit: 685,657,289 · RAC: 835 · Canada
Message 1951383 - Posted: 22 Aug 2018, 18:49:20 UTC - in response to Message 1951377.

If your monitor is connected to the motherboard and not the 1060, you are using the iGPU in the CPU as your graphics device.
No ifs, ands, or buts. There is no such thing as routing (or whatever) from the 1060 to the motherboard port.

You have probably been there already, but you want the 'NVIDIA X Server Settings' screen. Just search for it in your Menu.

You don't need the CUDA 9.2 toolkit, only the 396 driver, and you're good to go with the latest apps :)
ID: 1951383
Profile Keith Myers · Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01 · Posts: 11,776 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1951382 - Posted: 22 Aug 2018, 18:45:06 UTC

Yes, this should be doable. Look at xorg.conf and see whether it has two screen definitions, and find which screen has the monitor attached. If the identifier is nvidia, change it to intel. Pay attention to the BusID of each device and screen and make sure they match. Use lspci to verify the BusID of each graphics device, or:
sudo lshw -c video
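For illustration, the intel-driven sections of xorg.conf might look like the sketch below (the BusID value is hypothetical; take yours from lspci, and note that Xorg expects the bus number in decimal while lspci prints it in hexadecimal):

```
Section "Device"
    Identifier "intel"
    Driver     "modesetting"    # the generic KMS driver; "intel" also works on many setups
    BusID      "PCI:0:2:0"      # hypothetical iGPU address; verify with lspci
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "intel"          # attach the display screen to the iGPU
EndSection
```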

Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1951382
Profile Tom M
Volunteer tester
Joined: 28 Nov 02 · Posts: 4,973 · Credit: 276,046,078 · RAC: 462
Message 1951381 - Posted: 22 Aug 2018, 18:41:09 UTC - in response to Message 1951377.


I got out of this terrible mess, but I am back at square one, crunching and displaying through nVidia.
Before I make another catastrophic attempt, can anyone suggest a way around this corner?

Thank you very much in advance!

Sleepy


That is a VERY good question. I don't have the ability to run my Intel internal GPU at the same time as my discrete card, so the issue has not come up here.

I have been running gpu tasks on cards that are also displaying.

I offer three possible workarounds:
1) Ignore the issue and keep on computing.
2) Set the GPU preferences to suspend whenever the computer is "active." This basically means any time you are on the computer.
3) Set SETI so it suspends when the system gets busy with other things above XX%. This basically means it will suspend SETI any time you are doing something that takes a significant amount of CPU time.

Depending on exactly what else you are using the computer for while crunching SETI, #2 or #3 might be a working compromise.
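Workarounds 2 and 3 correspond to BOINC's computing preferences, which can also be set per host in global_prefs_override.xml; a minimal sketch (the 25% threshold is an arbitrary example, not a recommendation):

```xml
<!-- global_prefs_override.xml in the BOINC data directory -->
<global_preferences>
    <!-- workaround 2: pause GPU work while someone is using the machine -->
    <run_gpu_if_user_active>0</run_gpu_if_user_active>
    <!-- workaround 3: suspend BOINC when non-BOINC CPU usage exceeds 25% -->
    <suspend_cpu_usage>25</suspend_cpu_usage>
</global_preferences>
```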

I have been getting very good production even when sharing the gpu with non-seti tasks.

HTH,
Tom
A proud member of the OFA (Old Farts Association).
A candidate for membership in the WWA (Walking Wounded Association).
ID: 1951381
Sleepy
Volunteer tester
Joined: 21 May 99 · Posts: 214 · Credit: 98,947,784 · RAC: 28,360 · Italy
Message 1951377 - Posted: 22 Aug 2018, 18:16:57 UTC

Dear all,
I was already on the road to unchaining myself from Microsoft, and the latest apps by Petri (thank you, thank you, thank you!) gave me the final push.

So I moved to Kubuntu 18.04 and installed the latest applications (but not the CUDA 9.2 toolkit; I am downloading the big files for that now).
I am currently running the 0.97 10x0 application under the 396.51 nVidia driver, using the 9.0 .so libraries.

Throughput has increased a lot, and so far I have not trashed too many WUs during my attempts. Invalids have not increased. So far so good.

But I have a problem, probably common to many others:

My system also has the graphics processor embedded in my CPU.
Not that I want to use it for SETI; that has long been discussed and deprecated.
But under Windows I could easily crunch with my 1060 and drive the display with the embedded Intel GPU.

This way I could push the 1060 hard without compromising the normal use of my PC.
Now it seems the X display is also generated by the 1060 (though physically the monitor is still connected to the old Intel output! I can hardly believe this; it is very weird and probably also very wrong).

On Monday I tried to solve the problem (the details are not important), but I made a terrible mess.
Intel was stuck at low resolution (it could be raised with a few command lines, though only until the next boot), and the 1060 was no longer recognized by BOINC.

I got out of this terrible mess, but I am back at square one, crunching and displaying through nVidia.
Before I make another catastrophic attempt, can anyone suggest a way around this corner?

Thank you very much in advance!

Sleepy
ID: 1951377
Profile Tom M
Volunteer tester
Joined: 28 Nov 02 · Posts: 4,973 · Credit: 276,046,078 · RAC: 462
Message 1951375 - Posted: 22 Aug 2018, 17:58:07 UTC - in response to Message 1951368.


Hi Stephen,
I suggest you use -nobs when you have spare cores. You can forget the -pfp flag. You can use the -pfb 32 flag, with -pfl around 64 for a 1080 and up and 512 for the GTX 750.
There is a flag, -pfe (no parameter value), that you can try. It may give a boost, but it will almost certainly mess up on noise bombs; your inconclusive and invalid counts will rise.

Do not use the -pfe flag; just test it to see if it helps with speed.

Petri


I am running CUDA90, and for some reason -nobs doesn't do the same thing as the app_config.xml file does for reserving a full core per GPU.

<app_info>
  <app>
     <name>setiathome_v8</name>
  </app>
    <file_info>
      <name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda90</name>
      <executable/>
    </file_info>
    <file_info>
      <name>libcudart.so.9.0</name>
    </file_info>
    <file_info>
      <name>libcufft.so.9.0</name>
    </file_info>
    <app_version>
      <app_name>setiathome_v8</app_name>
      <platform>x86_64-pc-linux-gnu</platform>
      <version_num>801</version_num>
      <plan_class>cuda90</plan_class>
     <cmdline>-nobs -pfb 32 -pfl 512</cmdline>
      <coproc>
        <type>NVIDIA</type>
        <count>1</count>
      </coproc>
      <avg_ncpus>0.1</avg_ncpus>
      <max_ncpus>0.1</max_ncpus>
      <file_ref>
         <file_name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda90</file_name>
          <main_program/>
      </file_ref>
      <file_ref>
         <file_name>libcudart.so.9.0</file_name>
      </file_ref>
      <file_ref>
         <file_name>libcufft.so.9.0</file_name>
      </file_ref>
    </app_version>
  <app>
     <name>astropulse_v7</name>
  </app>
     <file_info>
       <name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</name>
        <executable/>
     </file_info>
     <file_info>
       <name>AstroPulse_Kernels_r2751.cl</name>
     </file_info>
     <file_info>
       <name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</name>
     </file_info>
    <app_version>
      <app_name>astropulse_v7</app_name>
      <platform>x86_64-pc-linux-gnu</platform>
      <version_num>708</version_num>
      <plan_class>opencl_nvidia_100</plan_class>
      <coproc>
        <type>NVIDIA</type>
        <count>1</count>
      </coproc>
      <avg_ncpus>0.1</avg_ncpus>
      <max_ncpus>0.1</max_ncpus>
      <file_ref>
         <file_name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</file_name>
          <main_program/>
      </file_ref>
      <file_ref>
         <file_name>AstroPulse_Kernels_r2751.cl</file_name>
      </file_ref>
      <file_ref>
         <file_name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</file_name>
         <open_name>ap_cmdline.txt</open_name>
      </file_ref>
    </app_version>
   <app>
      <name>setiathome_v8</name>
   </app>
      <file_info>
         <name>MBv8_8.05r3345_avx_linux64</name>
         <executable/>
      </file_info>
     <app_version>
     <app_name>setiathome_v8</app_name>
     <platform>x86_64-pc-linux-gnu</platform>
     <version_num>800</version_num>   
      <file_ref>
        <file_name>MBv8_8.05r3345_avx_linux64</file_name>
        <main_program/>
      </file_ref>
    </app_version>
   <app>
      <name>astropulse_v7</name>
   </app>
     <file_info>
       <name>ap_7.05r2728_sse3_linux64</name>
        <executable/>
     </file_info>
    <app_version>
       <app_name>astropulse_v7</app_name>
       <version_num>704</version_num>
       <platform>x86_64-pc-linux-gnu</platform>
       <plan_class></plan_class>
       <file_ref>
         <file_name>ap_7.05r2728_sse3_linux64</file_name>
          <main_program/>
       </file_ref>
    </app_version>
</app_info>


Yes, this includes the AVX attempt, which appears to be running SSE4.1 instead, but that is not the question. The question is: what am I doing wrong with CUDA90 and the "-nobs" command?
A proud member of the OFA (Old Farts Association).
A candidate for membership in the WWA (Walking Wounded Association).
ID: 1951375
Profile petri33
Volunteer tester
Joined: 6 Jun 02 · Posts: 1,668 · Credit: 623,086,772 · RAC: 156 · Finland
Message 1951368 - Posted: 22 Aug 2018, 16:37:54 UTC - in response to Message 1951311.

OK, try this in Ubuntu 14.04 (And Others);
Linux_Maxwell+v0.97_Special
As with all the CUDA 9.x Apps, you need a Compute Capability 5.0 GPU with at least 2 GB of vRAM.
If you have a GTX 960 2GB it will be very close to running out of vRAM at Unroll 8. If possible, connect the Monitor to a different GPU.
It should work with Kernel 3.13 and above, and CC 5.0 GPUs and above. Yes, it will work with Pascal and any forthcoming GPUs as well as Maxwell.
Same download as CUDA 9.0, app_info.xml and other Apps are included as well as the AMD CPU App 3711.


. . Well colour me tickled pink.

. . I am most impressed. On the GTX 1050 Ti, run times have dropped: Arecibo normals from 4.6 min to 3.1 min; the various GBT tasks from 4.9-5.5 min down to 3.2-3.8 min (except the slow Blc14s we have had lately, which take a massive 4.1 min); and Arecibo VLARs from 9.6 min down to 6.2 min. Throughput is now about 1.475 times what it used to be. Definitely a worthwhile development. Well done guys, many thanks to TBar and Petri. A genius of an app there, Petri.

. . On the GTX 970s, well ..... just about everything takes about 2 mins. I haven't seen a VLAR go through yet, but that will be the fly in the ointment. Everything else is 1.9 to 2.1 mins :)

. . I have read that there is no advantage in using the -nobs parameter, but what about the -pfp 32 setting?

Stephen

??

Stephen

:)


Hi Stephen,
I suggest you use -nobs when you have spare cores. You can forget the -pfp flag. You can use the -pfb 32 flag, with -pfl around 64 for a 1080 and up and 512 for the GTX 750.
There is a flag, -pfe (no parameter value), that you can try. It may give a boost, but it will almost certainly mess up on noise bombs; your inconclusive and invalid counts will rise.

Do not use the -pfe flag; just test it to see if it helps with speed.

Petri
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1951368
Profile Brent Norman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 1 Dec 99 · Posts: 2,786 · Credit: 685,657,289 · RAC: 835 · Canada
Message 1951350 - Posted: 22 Aug 2018, 14:50:13 UTC - in response to Message 1951345.

I have been wondering for a while whether the baseline (suggested) command-line parameters from the SoG readme would make a difference on my processing. I have just inserted "<cmdline>-sbs 192 -spike_fft_thresh 2048 -tune 1 64 1 4</cmdline>" into my app_info.xml file on one machine.
The first thing I noticed is that the CPU load is varying much more; it has briefly hit 5%. Stay tuned for more stupid experiments...
This is with 1 CPU dedicated to the GPU.

I am also successfully using the app_config.xml file to control how much cpu I dedicate to my gpu(s).

Tom
Your stderr file shows this:
<core_client_version>7.4.44</core_client_version>
<![CDATA[
<stderr_txt>
bad arg: -sbs
bad arg: 192
bad arg: -spike_fft_thresh
bad arg: 2048
bad arg: -tune
bad arg: 1
bad arg: 64
bad arg: 1
bad arg: 4
setiathome_CUDA: Found 1 CUDA device(s):
  Device 1: GeForce GTX 750 Ti, 2000 MiB, regsPerBlock 65536
     computeCap 5.0, multiProcs 5 
     pciBusID = 1, pciSlotID = 0
In cudaAcc_initializeDevice(): Boinc passed DevPref 1
setiathome_CUDA: CUDA Device 1 specified, checking...
   Device 1: GeForce GTX 750 Ti is okay
SETI@home using CUDA accelerated device GeForce GTX 750 Ti
Unroll autotune 5. Overriding Pulse find periods per launch. Parameter -pfp set to 5

setiathome v8 enhanced x41p_zi3v, Cuda 9.00 special
Modifications done by petri33, compiled by TBar
PETRI, we need a special error code for this type of input.
ID: 1951350
Profile Mike · Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01 · Posts: 32,210 · Credit: 79,922,639 · RAC: 80 · Germany
Message 1951349 - Posted: 22 Aug 2018, 14:44:13 UTC

Doesn't work at all.
Those are OpenCL values for a CUDA app.
With each crime and every kindness we birth our future.
ID: 1951349
Profile Tom M
Volunteer tester
Joined: 28 Nov 02 · Posts: 4,973 · Credit: 276,046,078 · RAC: 462
Message 1951345 - Posted: 22 Aug 2018, 14:10:49 UTC

I have been wondering for a while whether the baseline (suggested) command-line parameters from the SoG readme would make a difference on my processing. I have just inserted "<cmdline>-sbs 192 -spike_fft_thresh 2048 -tune 1 64 1 4</cmdline>" into my app_info.xml file on one machine.
The first thing I noticed is that the CPU load is varying much more; it has briefly hit 5%. Stay tuned for more stupid experiments...
This is with 1 CPU dedicated to the GPU.

I am also successfully using the app_config.xml file to control how much cpu I dedicate to my gpu(s).
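For anyone wanting to try the same thing, an app_config.xml that budgets a full core per GPU task might look like this sketch (app name as used in this thread; the 1.0 values are examples, not recommendations):

```xml
<!-- app_config.xml in the project directory -->
<app_config>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>  <!-- one task per GPU -->
            <cpu_usage>1.0</cpu_usage>  <!-- reserve a full core per GPU task -->
        </gpu_versions>
    </app>
</app_config>
```

BOINC re-reads this via Options → Read config files in the Manager, without restarting the client.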

Tom
A proud member of the OFA (Old Farts Association).
A candidate for membership in the WWA (Walking Wounded Association).
ID: 1951345
Profile Tom M
Volunteer tester
Joined: 28 Nov 02 · Posts: 4,973 · Credit: 276,046,078 · RAC: 462
Message 1951343 - Posted: 22 Aug 2018, 14:04:37 UTC

I was wondering if I could find an equivalent of GPU-Z for Linux.

Here are a couple of ideas: https://askubuntu.com/questions/5417/how-to-get-the-gpu-info

The terminal command nvidia-settings
brings up a nice GUI display, including the GPU load. It looks like it will display it for each card.

Tom
A proud member of the OFA (Old Farts Association).
A candidate for membership in the WWA (Walking Wounded Association).
ID: 1951343
Stephen "Heretic" · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12 · Posts: 5,398 · Credit: 192,787,363 · RAC: 628 · Australia
Message 1951311 - Posted: 22 Aug 2018, 6:12:18 UTC - in response to Message 1951062.

OK, try this in Ubuntu 14.04 (And Others);
Linux_Maxwell+v0.97_Special
As with all the CUDA 9.x Apps, you need a Compute Capability 5.0 GPU with at least 2 GB of vRAM.
If you have a GTX 960 2GB it will be very close to running out of vRAM at Unroll 8. If possible, connect the Monitor to a different GPU.
It should work with Kernel 3.13 and above, and CC 5.0 GPUs and above. Yes, it will work with Pascal and any forthcoming GPUs as well as Maxwell.
Same download as CUDA 9.0, app_info.xml and other Apps are included as well as the AMD CPU App 3711.


. . Well colour me tickled pink.

. . I am most impressed. On the GTX 1050 Ti, run times have dropped: Arecibo normals from 4.6 min to 3.1 min; the various GBT tasks from 4.9-5.5 min down to 3.2-3.8 min (except the slow Blc14s we have had lately, which take a massive 4.1 min); and Arecibo VLARs from 9.6 min down to 6.2 min. Throughput is now about 1.475 times what it used to be. Definitely a worthwhile development. Well done guys, many thanks to TBar and Petri. A genius of an app there, Petri.

. . On the GTX 970s, well ..... just about everything takes about 2 mins. I haven't seen a VLAR go through yet, but that will be the fly in the ointment. Everything else is 1.9 to 2.1 mins :)

. . I have read that there is no advantage in using the -nobs parameter, but what about the -pfp 32 setting?

Stephen

??

Stephen

:)
ID: 1951311
Profile Keith Myers · Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01 · Posts: 11,776 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1951263 - Posted: 22 Aug 2018, 2:25:20 UTC - in response to Message 1951261.

It's not so simple on Linux, though. On Windows, the environment is a known factor and all support structures can be assumed to be in place; so yes, the project sends you an application that works on 100% of Windows computers.

On Linux, there is no standardized environment and so there are too many variables that affect the supporting software that the application needs. I am familiar with the woes that Linux users have over at GPUGrid.net in trying to get the standard Linux app working that the servers send out. If the gcc package isn't installed, the application doesn't run. The users come to the forums asking why everything works fine on their Windows computers so why doesn't the project work on the Linux computers.

Since the special app needs a minimum Nvidia driver level, the servers would have to probe a host system for a compatible environment. If all the servers look for is the driver version, the installation can still fail, since the Linux driver ships as separate packages for the base graphics driver, the CUDA driver, and the OpenCL driver; any one of them missing will cause the application to fail.

If the special app source code is somehow ported over to Windows, then the project should be able to automatically send the special application to a host and have an almost 100% chance of it working on first startup.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1951263
mmonnin
Volunteer tester
Joined: 8 Jun 17 · Posts: 58 · Credit: 10,176,849 · RAC: 0 · United States
Message 1951261 - Posted: 22 Aug 2018, 2:09:50 UTC - in response to Message 1951244.

Yes, these apps are beta versions and should only be run by beta testers who are familiar with their development, their flaws, and how to install and test them properly. They are not for general public release, and certainly not ready for Main. Unless the apps get tested for a year and pass approval by the SETI administrators, AND someone comes up with an automatic installer like the Lunatics installer for the SoG app, I don't see these in general release. The installation has to be bulletproof and "idiot-proof": suited to the general computer user who knows how to use a computer but has no clue how it works, and cares even less, yet wants to do a scientific search for E.T. on the desktop. We are a long way from that day.


BOINC would do the 'install' just like any other app on any project: select SETI in the dropdown in BOINC Manager and you're done. Since it would come from the project, app_info.xml would not be required, and we'd all download the executable just like the current SETI-provided app. Only two files are required now. SETI could even put the Lunatics options right there in Project Preferences.

Some projects have multiple versions, and PCs download the plan-class versions they can support based on the CPU info. If one app runs a bit faster, then more tasks for that app will download. Asteroids on CPUs, for example, has multiple SSE and AVX versions; depending on the CPU architecture's implementation, one may run faster than the other even if the CPU supports the 'fastest'. The same can be done here.

Tweaking the command lines is up to the user, of course.
ID: 1951261
Stephen "Heretic" · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12 · Posts: 5,398 · Credit: 192,787,363 · RAC: 628 · Australia
Message 1951255 - Posted: 22 Aug 2018, 1:47:33 UTC - in response to Message 1951155.
Last modified: 22 Aug 2018, 1:55:45 UTC

Sorry to hear of your misfortune, Petri. I think we have all done "fat-finger" goof-ups before.


. . and for me just as recently ...

. . but life goes on. Do you not have a copy of your old client_state.xml? TBar says that is all you need to restore the old host ID. ...

. . Just read your later message, good news ...

Stephen

:(
ID: 1951255
Profile Keith Myers · Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01 · Posts: 11,776 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1951244 - Posted: 22 Aug 2018, 1:06:22 UTC

Yes, these apps are beta versions and should only be run by beta testers who are familiar with their development, their flaws, and how to install and test them properly. They are not for general public release, and certainly not ready for Main. Unless the apps get tested for a year and pass approval by the SETI administrators, AND someone comes up with an automatic installer like the Lunatics installer for the SoG app, I don't see these in general release. The installation has to be bulletproof and "idiot-proof": suited to the general computer user who knows how to use a computer but has no clue how it works, and cares even less, yet wants to do a scientific search for E.T. on the desktop. We are a long way from that day.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1951244
mmonnin
Volunteer tester
Joined: 8 Jun 17 · Posts: 58 · Credit: 10,176,849 · RAC: 0 · United States
Message 1951231 - Posted: 22 Aug 2018, 0:31:07 UTC
Last modified: 22 Aug 2018, 0:31:24 UTC

Thanks for the updated apps. It's a shame these aren't the default; even a few invalids is worth the tremendous output increase.
ID: 1951231
Profile Keith Myers · Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01 · Posts: 11,776 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1951194 - Posted: 21 Aug 2018, 22:54:27 UTC - in response to Message 1951192.

Petri posted direct links to both versions earlier in this thread; those links are to TBar's compilation. I think the Sierra designation might be for Pascal. You would have to read the docs in the file.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1951194


 
©2020 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.