Linux boinc version 7.6.31 can't run optimized applications

Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1818659 - Posted: 21 Sep 2016, 16:52:24 UTC
Last modified: 21 Sep 2016, 17:09:54 UTC

Hi Grant, that helps, thank you!
I guess what I was trying to say is that 3 weeks of work is still too short a time frame to get a good estimate of what my RAC will be down the road. What makes me think that is that the systems running for 3 weeks have about as many pending WUs as valid ones; i.e., if those pending ones were validated today, my total credit would nearly double, and so would my RAC.

Thank you also for the clarifications on comparing similar WUs!


Edit: never mind, it doesn't look like the system keeps a history of all valid WUs; they disappear after a while. That clarifies things a bit :D
ID: 1818659
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820140 - Posted: 27 Sep 2016, 6:24:13 UTC - in response to Message 1816848.  

It looks to be working well. You might want to change a few settings for your card. In the docs folder, open the file ReadMe_MultiBeam_OpenCL.txt and scroll to near the bottom. There is a section named "command line switches" which lists the different settings. Some of them are already in the mb_cmdline-opencl_ati5_nocal.txt file but can be raised a little for your card. I'm not sure which of the settings in the readme are best for Linux, but you can change the existing settings without any trouble. Try changing them to:
-sbs 256 -oclfft_tune_gr 256 -oclfft_tune_wg 256 -high_perf -period_iterations_num 16

The settings take effect when the next task is started; you can suspend a running task to force another to start. If there are any problems with the settings, they will usually show up when the task starts. If you try any of the additional settings from the readme, it might be best to suspend all the other non-running tasks in case a new setting fails, and then add new settings one at a time. If there is a problem with a particular setting, just remove it and try the next one.



Hi again TBar,

Looking at my numbers again, specifically these 2 machines:
1. 4770K / GTX 1080 / Windows 10: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8081080
2. 3770K / R9 290X / Ubuntu 16.04: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8092813

I am trying to understand how the GTX 1080 is getting owned by the R9. Clearly it's not the CPU (the 4770K should perform better than the 3770K). Or maybe it's the OS, or the Linux machine is running better-optimized apps?

The Windows machine is running the latest Lunatics installer (which is very simple to install and which I probably didn't mess up).

Or maybe the settings/app for Linux you gave me are just that much better than those in the Lunatics installer?

Let me know if you see something obvious.

Thanks
ID: 1820140
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1820147 - Posted: 27 Sep 2016, 7:38:38 UTC - in response to Message 1820140.  
Last modified: 27 Sep 2016, 7:50:15 UTC

I am trying to understand how the GTX1080 is getting owned by the R9.

The CUDA application you are running on the GTX 1080 isn't nearly as fast as the present OpenCL application being run on the R9.
There is an OpenCL application suitable for the GTX 1080: the SoG application, available with the current Lunatics beta v4 installer.


If you were to run the SoG application on the GTX 1080 with a command line similar to the one below, you should see much improved run times. The CPU usage of the SoG application has been reduced significantly; however, if you use the command line below, I suggest reserving 1 CPU core for each WU being run. I'm personally happy with the throughput running just 1 WU at a time.
-tt 1500 -hp -period_iterations_num 3 -sbs 768 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64


If this is a dedicated cruncher you can change
-period_iterations_num 3
to
-period_iterations_num 1

If you use it as your daily system then 10 would be a better value.


EDIT- your other Nvidia/Windows systems would also benefit from using the SoG application, although with somewhat less aggressive command line settings for the GTX 650s.
Grant
Darwin NT
ID: 1820147
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820161 - Posted: 27 Sep 2016, 11:24:21 UTC - in response to Message 1820140.  

It looks to be working well. You might want to change a few settings for your card. In the docs folder, open the file ReadMe_MultiBeam_OpenCL.txt and scroll to near the bottom. There is a section named "command line switches" which lists the different settings. Some of them are already in the mb_cmdline-opencl_ati5_nocal.txt file but can be raised a little for your card. I'm not sure which of the settings in the readme are best for Linux, but you can change the existing settings without any trouble. Try changing them to:
-sbs 256 -oclfft_tune_gr 256 -oclfft_tune_wg 256 -high_perf -period_iterations_num 16

The settings take effect when the next task is started; you can suspend a running task to force another to start. If there are any problems with the settings, they will usually show up when the task starts. If you try any of the additional settings from the readme, it might be best to suspend all the other non-running tasks in case a new setting fails, and then add new settings one at a time. If there is a problem with a particular setting, just remove it and try the next one.



Hi again TBar,

Looking at my numbers again, specifically these 2 machines:
1. 4770K / GTX 1080 / Windows 10: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8081080
2. 3770K / R9 290X / Ubuntu 16.04: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8092813

I am trying to understand how the GTX 1080 is getting owned by the R9. Clearly it's not the CPU (the 4770K should perform better than the 3770K). Or maybe it's the OS, or the Linux machine is running better-optimized apps?

The Windows machine is running the latest Lunatics installer (which is very simple to install and which I probably didn't mess up).

Or maybe the settings/app for Linux you gave me are just that much better than those in the Lunatics installer?

Let me know if you see something obvious.

Thanks

As already noted, the 1080 is running an App that does poorly on the new BLC tasks. Currently the older CUDA Apps should only be used on the low-end GPUs such as the 730/720/etc. The higher-end GPUs will do better with the OpenCL App or the new 'Special' CUDA Apps, both of which are still being developed. Your R9 could do better still; it appears you are still running the 'default' settings, which are designed for an HD 7750. The R9 will do better with the settings in my last post. It would be much faster with a lower -period_iterations_num, but numbers lower than around 16 could cause usability problems. After applying the new settings on the R9 you might consider running multiple tasks. Since you are running Linux it should work, as it works fine on the Mac Pros. This Host is running basically the same App as your R9 with 3 tasks at a time, Computer 6105482...it seems to be working well, and it is the highest-ranking Mac. If you do try multiple tasks, you should observe the machine closely for a few days to make sure it doesn't have the same problems the Windows R9 Hosts have when running multiple tasks.
Give it a try; you can always go back to the previous settings if it doesn't help.
ID: 1820161
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820242 - Posted: 28 Sep 2016, 19:11:43 UTC
Last modified: 28 Sep 2016, 19:52:39 UTC

Thanks guys,
I will try the SoG beta 4 on the 3 nvidia machines.
I replaced the 980 with an RX480 for some testing (it's a test machine), but I'll probably put it back, or even try using both GPUs if that's possible.

I need to look up how to set a command line. I think that's done through an external file referenced in the app_info file, but I'm not sure.


edit:

Starting with my W10 workstation with the R9 295X2, because it feels like with 2 GPUs this machine should do better.

The GPU section of app_info.xml points to a command-line txt file:

<name>mb_cmdline_win_x86_SSE2_OpenCL_ATi_HD5.txt</name>


That file is completely empty.

I'm copying in the settings you mentioned a few days ago, then I'll restart the client and see what happens next:

-sbs 256 -oclfft_tune_gr 256 -oclfft_tune_wg 256 -high_perf -period_iterations_num 16


So that's done for the 295X2, and also for the 1080, but that one ran out of work during the maintenance, so I have to wait for it to download some work.
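For the Linux boxes, populating the empty cmdline file can be scripted from a shell; this is just an illustrative sketch assuming the file sits in the current (project) directory under the name from my app_info (on Windows I simply paste the line into the file with a text editor):

```shell
# Write the suggested switches into the (previously empty) cmdline file,
# then restart the BOINC client so the next started task picks them up.
CMDFILE="mb_cmdline_win_x86_SSE2_OpenCL_ATi_HD5.txt"
echo '-sbs 256 -oclfft_tune_gr 256 -oclfft_tune_wg 256 -high_perf -period_iterations_num 16' > "$CMDFILE"
cat "$CMDFILE"
```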
ID: 1820242
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820280 - Posted: 28 Sep 2016, 20:38:02 UTC
Last modified: 28 Sep 2016, 20:55:01 UTC

Looks like the additional settings for the 295X2 are working well. Went from a 26 min average to 15 min per GPU; that's pretty good! Thanks guys, let's see if I can do the same for the 1080.

Assuming my 14K RAC is a good average, coming from 12 CPUs crunching WUs in 2 hours on average and 2 GPUs crunching in 25 min... going to 12 CPUs and 2 GPUs crunching in 15 min should increase my RAC by about 16%, and I should end up above 16K.

I'm still super far from the 40K RAC that the guy with the Mac is doing. I think his GPUs are a little better, but there's a 3x ratio; not sure what I'm doing wrong. Sure, this machine isn't fully dedicated to SETI, but over a week the crunching uptime is probably around 80%.

He's running OpenCL 1.2, where my 295X2 is OpenCL 2.0. Could this explain the difference?
ID: 1820280
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820320 - Posted: 28 Sep 2016, 22:42:20 UTC

I've copied the same settings I put on the 295X2 to the RX480s, one on Linux and one on Windows. So far they seem to perform similarly, at roughly 40 min per unit.
I was hoping for a little better out of these nice little/cheap cards. That being said, it could be that the settings I used on the 295X2 are not ideal for the 480s.
ID: 1820320
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820339 - Posted: 28 Sep 2016, 23:41:49 UTC - in response to Message 1820320.  

Well, the cmdline switch -high_perf only works with the newer Apps. It will not work with the App currently sent by the SETI Server; you'll need to install the App from the Lunatics Beta installer to use that cmdline in Windows. Fortunately, it doesn't appear to be causing any trouble, but you might want to remove it from those older Apps. I haven't been impressed with the Ellesmere's performance; you'll probably not see any better from it no matter what settings you use.

As for the Hawaii, those are good cards. The problem is you can't run multiple tasks with them in Windows. This is what happens in Windows if you try multiple tasks with a Hawaii: Validation inconclusive (258) · Valid (359) · Invalid (77) · Error (6)...not good. The biggest gain on the 40k Mac came when he started running multiple tasks. Obviously, you don't have the Windows problem with OSX, and I've seen similar AMD GPUs run multiple tasks in Ubuntu without any trouble. I'd suggest trying 2 tasks on the Linux Hawaii. If that works, I'd move at least one other Hawaii to Linux and see how that goes with a dual-card machine. It's fairly simple, just change one setting in the app_info.xml file. Change <count>1</count> to <count>0.5</count>, so that part of the file reads:
      <plan_class>opencl_ati5_nocal</plan_class>
        <coproc>
          <type>ATI</type>
          <count>0.5</count>
        </coproc>

You might want to add -instances_per_device 2 to the mb_cmdline-opencl_ati5_nocal.txt file; I'm not sure that does much in Linux, though. So the new line in mb_cmdline-opencl_ati5_nocal.txt would be:
-sbs 256 -oclfft_tune_gr 256 -oclfft_tune_wg 256 -high_perf -instances_per_device 2 -period_iterations_num 16

See how that works.
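For context, here is a minimal sketch of where that <coproc> section sits inside an <app_version> entry of app_info.xml. The <app_name> and surrounding elements are placeholders for whatever your installer wrote; only the <count> value is the change being suggested:

```xml
<app_version>
  <app_name>setiathome_v8</app_name>
  <plan_class>opencl_ati5_nocal</plan_class>
  <coproc>
    <type>ATI</type>
    <!-- 0.5 GPUs per task: the client will schedule two tasks per GPU -->
    <count>0.5</count>
  </coproc>
</app_version>
```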
ID: 1820339
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820343 - Posted: 29 Sep 2016, 0:35:19 UTC
Last modified: 29 Sep 2016, 1:19:45 UTC

Thanks,
All the apps I am running now on all machines are from Lunatics, either the beta or those downloaded from the main Lunatics download site.

I did see a nice improvement on my workstation (the W10 / R9 295X2) after adding the line I mentioned in one of my latest posts.

Sadly I can't get that machine under Linux because it's my main workstation and I need Windows for most of my stuff. My GPU is a single card with 2x Hawaii chips on it.

I will be parting with the other R9 290X machine.

Pretty sad that Windows can't run multiple instances...

My other Linux machine uses an RX 480, so I may try to run 2 instances as you suggested.

I also added a GTX 980 Ti to the Windows system running the 1080; still no work downloaded, but I am anxious to see how the command line you guys provided will help, especially with the added 980...

edit: regarding the version of OpenCL, does it have any impact on performance? Like I said, I've got machines running 2.0 and some others 1.2... hmm
ID: 1820343
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820383 - Posted: 29 Sep 2016, 3:51:19 UTC - in response to Message 1820320.  
Last modified: 29 Sep 2016, 4:04:01 UTC

I've copied the same settings I put on the 295X2 to the RX480s, one on Linux and one on Windows. So far they seem to perform similarly, at roughly 40 min per unit.
I was hoping for a little better out of these nice little/cheap cards. That being said, it could be that the settings I used on the 295X2 are not ideal for the 480s.

I'm not very familiar with the RX480s; however, you appear to have two different versions. The ones in Windows are showing 36 Compute Units, while the one in Linux shows 14 CUs. A quick look shows a 480 should have 36, while the lower-end card has 16; I don't even see a card with 14 CUs. Which cards do you have? Also, which driver are you using on the Linux machine? The clock rate seems low as well.
ID: 1820383
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820397 - Posted: 29 Sep 2016, 6:11:35 UTC
Last modified: 29 Sep 2016, 6:23:36 UTC

All 3 are the same, the only difference being that one is an ES I got early on... while the 2 others are production boards.

I am using the AMDGPU driver from the AMD website. It says the card is supported, but maybe not...

You would think the ES is not on par with production boards, but as it turns out, the ES board is this one: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8089460


Where do you see the number of CUs?

I tried putting 0.5 instead of 1 on the Linux/RX machine. Could that be why you only see 14 CUs?
ID: 1820397
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820426 - Posted: 29 Sep 2016, 7:13:33 UTC - in response to Message 1820397.  

The CU count is listed in the results, along with the clock rate. So far every 480 Linux machine I've seen is running Ubuntu 16.04 with Kernel 4.4. They all have it listed as 14 CUs and 555 MHz, including this one:
Max compute units: 14
Max clock frequency: 555Mhz
https://setiathome.berkeley.edu/result.php?resultid=5154323104

Looking around, there appear to have been early problems with Kernel 4.4; maybe it wasn't fixed correctly. Something is definitely wrong: the Windows machines have it displayed correctly and have much better performance, and it should report the same CU count and clock rate no matter which settings you use. I'd try Kernel 4.2, or maybe even install Ubuntu 14.04.4, which should have Kernel 4.2, and see how it works there. There is an AMD 480 driver for 14.04.4; maybe it will work correctly: AMDGPU-Pro Driver Version 16.30 for Ubuntu 14.04.4

Here's another with 16.04 & Kernel 4.4: https://setiathome.berkeley.edu/result.php?resultid=5186363210
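If you have a pile of saved stderr outputs, pulling those two numbers out can be automated; this is just an illustrative script (not anything shipped with the apps), using the exact "Max compute units" / "Max clock frequency" lines the OpenCL app prints:

```python
import re

# Stderr excerpt as the OpenCL app reports it (values from the result linked above).
stderr_text = """\
Max compute units: 14
Max clock frequency: 555Mhz
"""

def parse_gpu_info(text):
    """Pull the reported CU count and clock rate (MHz) out of a task's stderr."""
    cu = re.search(r"Max compute units:\s*(\d+)", text)
    clock = re.search(r"Max clock frequency:\s*(\d+)\s*Mhz", text, re.IGNORECASE)
    return (int(cu.group(1)) if cu else None,
            int(clock.group(1)) if clock else None)

cus, mhz = parse_gpu_info(stderr_text)
print(cus, mhz)  # a healthy 480 should report 36 CUs, not 14
```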
ID: 1820426
BilBg
Volunteer tester

Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1820430 - Posted: 29 Sep 2016, 7:17:52 UTC - in response to Message 1820397.  

Where do you see the number of CU?

Apps report them in stderr:
http://setiathome.berkeley.edu/result.php?resultid=5183842312
Max compute units: 44
Number of compute units: 44

http://setiathome.berkeley.edu/result.php?resultid=5185565729
Max compute units: 14
Number of compute units: 14
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1820430
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820549 - Posted: 29 Sep 2016, 15:08:33 UTC - in response to Message 1820397.  
Last modified: 29 Sep 2016, 15:32:10 UTC

All 3 are the same, the only difference being that one is an ES I got early on... while the 2 others are production boards.

I am using the AMDGPU driver from the AMD website. It says the card is supported, but maybe not...

You would think the ES is not on par with production boards, but as it turns out, the ES board is this one: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8089460


Where do you see the number of CUs?

I tried putting 0.5 instead of 1 on the Linux/RX machine. Could that be why you only see 14 CUs?

Your Hawaii with Kernel 4.4 and the new driver is also showing the wrong Compute Unit count and clock rate. This is a Hawaii running Kernel 4.2:
Max compute units: 40
Max clock frequency: 1000Mhz
https://setiathome.berkeley.edu/result.php?resultid=5187039488
It is also running an older driver.

Usually running the latest OS with the latest driver is fraught with such perils.
Of course, I don't have that problem with my older ATI cards running the older OS & driver.

I received a new hard drive a few days ago to replace the SSD that died in the nVidia Host. I was going to install it along with Ubuntu 14.04.5, but now I think I'll download 14.04.4 and install that instead, just in case a new AMD card finds its way onto that machine in the future.
ID: 1820549
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820584 - Posted: 29 Sep 2016, 17:21:47 UTC
Last modified: 29 Sep 2016, 18:21:34 UTC

Thanks again for the ton of good info on CUs.
So in summary, Linux should give me better throughput than Windows on high-end AMD GPUs, but I need to run an older kernel/driver.
I'll make sure my 16.04 LTS is up to date before I downgrade...

Kind of a tough situation, because on Linux you can't necessarily find drivers for older kernels. Looks like AMD's got a driver for 14.04, so that may work out.

Not sure about nvidia on Linux, I haven't done that in a long time.


Right now my 1080/980 Ti are in the same Windows 10 machine, and it's been a day and it has not been able to download any GPU work. The CPUs have been running and uploading completed units just fine, so I don't know what's going on. Maybe BOINC doesn't like 2 different GPUs in the same machine?
edit: as it turns out, there is still no work available...

9/29/2016 11:11:50 AM | SETI@home | Sending scheduler request: To fetch work.
9/29/2016 11:11:50 AM | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
9/29/2016 11:11:51 AM | SETI@home | Scheduler request completed: got 0 new tasks
9/29/2016 11:11:51 AM | SETI@home | No tasks sent
9/29/2016 11:11:51 AM | SETI@home | No tasks are available for AstroPulse v7
9/29/2016 11:11:51 AM | SETI@home | No tasks are available for SETI@home v8
9/29/2016 11:11:51 AM | SETI@home | This computer has finished a daily quota of 2 tasks
9/29/2016 11:11:51 AM | SETI@home | This computer has reached a limit on tasks in progress
ID: 1820584
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820600 - Posted: 29 Sep 2016, 18:30:05 UTC - in response to Message 1820584.  

Actually, you should have about the same throughput as long as the same settings are used. The difference is that you can't run multiple instances on some AMD GPUs in Windows due to the Windows driver. I've been having problems with the ATI/AMD cards in Ubuntu since 14.04. There is STILL a bug in the downloaded versions of Ubuntu 14.04.x; generally, if you want to use more than one GPU you need to apply this 'fix': http://askubuntu.com/questions/453902/problem-in-setting-up-amd-dual-graphics-trinity-radeon-hd-7660g-and-thames-ra/477006#477006 (I use Option a). I would download and install the version of 14.04.4 mentioned in the AMD driver notes, http://mirror.pnl.gov/releases/14.04.4/

Your Windows machine isn't having much luck either. I see a machine with just a GTX 1080, NVIDIA GeForce GTX 1080 (4095MB) driver: 372.90 OpenCL: 1.2. Apparently BOINC doesn't see the 980, and all the GPU tasks have been ending with Exit status 201 (0xc9) EXIT_MISSING_COPROC.
I don't know about that one. You might try reinstalling the nVidia driver and making sure you have a monitor attached to the 980. You'll need a Windows person after that ;)
ID: 1820600
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820606 - Posted: 29 Sep 2016, 18:56:53 UTC - in response to Message 1820600.  

I was thinking maybe it's because no work is available. Not sure what happened with the bunch of aborted units, though. I'll wait until some work is finally downloaded before taking the 980 Ti out. In that case I would try using the 980 Ti in the Ubuntu machine and move the RX480 to the other system already running an RX480. These haven't been doing so badly, actually...

Regarding the multiple instances: you did say the guy on OSX with 2 workstation GPUs did around 40K RAC largely because he was running multiple instances. So it seems that doing so really pushes the throughput beyond what can be done with a single instance. Unless I'm missing a point.

Oh, and I'll try putting a monitor on that 980 and see what happens.
ID: 1820606
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820607 - Posted: 29 Sep 2016, 19:19:11 UTC - in response to Message 1820606.  
Last modified: 29 Sep 2016, 19:37:34 UTC

9/29/2016 11:11:51 AM | SETI@home | This computer has finished a daily quota of 2 tasks
9/29/2016 11:11:51 AM | SETI@home | This computer has reached a limit on tasks in progress

That shows you have reached today's limit; you won't be sent any more tasks until after midnight. That's probably because all the GPU tasks that were sent immediately failed, since BOINC can't use the GPU. This is usually due to a driver problem, so you would first start by reinstalling the driver from nVidia. As it stands now, any new tasks downloaded will immediately fail as the others have been failing. Apparently you are using Anonymous platform on that host, so possibly there is a problem with your app_info.xml file. If you renamed or deleted the app_info.xml, you would revert back to 'stock' and the Server would probably send more tasks immediately. First reinstall the driver, though, just in case that's the problem.

Also, copy and paste the first 30 lines of the Event Log after restarting BOINC; that will show more details so people have an idea of what is happening.

So it seems that doing so really pushes the throughput beyond what can be done with a single instance. Unless I'm missing a point.

Yes, on the higher-end GPUs you can produce more work by running more instances, as long as the App has been designed to do so. Currently all Apps except the 'Special' CUDA App, and certain AMD cards in Windows, can run multiple instances. With 36 Compute Units, the 480 should produce more work running multiple instances, except maybe in Windows. I don't know how a 480 works in Windows running multiple instances; I haven't seen anyone mention it.
ID: 1820607
Zytra

Joined: 29 Aug 16
Posts: 36
Credit: 58,532,935
RAC: 0
United States
Message 1820619 - Posted: 29 Sep 2016, 19:52:46 UTC - in response to Message 1820607.  
Last modified: 29 Sep 2016, 20:04:55 UTC

Thank you again.

1. I will try the RXs under Windows with multiple instances and see how that goes. I've got one machine with 2 now, and one machine with one.

2. I haven't changed anything on the 1080 machine other than adding the 980. When I started having problems I installed the latest drivers. One thing I did change was adding the command line you guys suggested. So it's either that or the GTX 980.

For now I will remove the command line stuff and see how it goes after midnight... The GTX 980 Ti is no longer in that machine, so it's back to where it was, and it should start working fine again after midnight, I guess. For reference, here's what I added to the command line:

-tt 1500 -hp -period_iterations_num 3 -sbs 768 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64


3. FYI, I added the same command line to the GTX 650s and no problem there; BOINC did ask me to update the driver, and after I did that it started working again. No visible improvement, or maybe a little, but nothing major; it could be that the settings are too aggressive...

4. I've installed the GTX 980 Ti in the Ubuntu 16.04 machine to try my luck with it. Right now I only have apps for the CPU, as I can't seem to find an optimized app for GTX 980s under Linux; at least nothing is available on the Lunatics website... I know it exists: http://setiathome.berkeley.edu/show_host_detail.php?hostid=7985986
BOINC detects CUDA, so it should work, but I still have an app_info.xml running for optimized CPU apps.
ID: 1820619
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1820628 - Posted: 29 Sep 2016, 20:32:40 UTC - in response to Message 1820619.  

It could have been the cmdline. That's why I keep mine simple and short ;)
I've had one cmdline entry try to trash my entire cache. Now, when trying something new, I suspend all non-running tasks before applying the new setting.

There are Linux CUDA Apps around, I've got a few. But they are still being worked on, as seen in this post: https://setiathome.berkeley.edu/forum_thread.php?id=80158&postid=1820610#1820610
The latest version seems to work fine, except it hates those immediate Overflows. The last time there was an Overflow flood my machines ended up with a couple hundred Inconclusive results. They are still working those off.
ID: 1820628
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.