New Mac GPU Apps at BETA...

AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2019837 - Posted: 20 Nov 2019, 23:29:13 UTC - in response to Message 2019641.  
Last modified: 20 Nov 2019, 23:38:54 UTC

OK, how do I stop BOINC from downloading the old 8.20 app version? Whatever that thing is doing, I've just started aborting those work units; I can see why it was killed. Regardless, if the client keeps downloading the old app, how do we test the new one?

Edit: Any suggestions for the mb_cmdline-8.23-opencl_ati5_SoG_mac.txt file for two Frontier Edition cards? Here's what I'm using for my OpenCL_ati5_mac settings:
-pref_wg_size 32 -hp -high_perf -period_iterations_num 1 -sbs 1280 -spike_fft_thresh 4096 -tune 1 1 1 32 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 64 -oclfft_tune_ls 512 -oclfft_tune_bn 1024 -oclfft_tune_cw 64
ID: 2019837 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2019878 - Posted: 21 Nov 2019, 5:55:07 UTC - in response to Message 2019837.  

From what I've seen, the old r3556 SoG app is working very well. The only problems I see are also problems with the non-SoG app, i.e. they must be something with the machine. The SoG app is much more demanding than the non-SoG version, so you will not be able to use the same configuration when running multiple instances. I compared the command line I use against the one you posted, and mine is faster on my 570. Command lines can cause problems if they are not compatible; I suggest using the leanest one possible.
-sbs 224 -oclfft_tune_gr 256 -oclfft_tune_wg 256 -spike_fft_thresh 2048 -pref_wg_num_per_cu 6 -period_iterations_num 1
Try that command line while running only three instances and see how it works.
The only way to stop the server from sending different apps is to switch to the Anonymous platform using the files at Crunchers Anonymous; the instructions are in the posts there.
ID: 2019878 · Report as offensive
Profile CyborgSam

Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 2019939 - Posted: 21 Nov 2019, 19:08:13 UTC - in response to Message 2019633.  
Last modified: 21 Nov 2019, 19:10:34 UTC

Another Question is why does My AMD 570 run so much faster than Sam's 570?

This Mac Pro is my main computer, so even though I run BOINC during the day, it has to compete with two active displays. While I get my beauty rest, BOINC can hog all it wants.

I haven't dedicated any cores to the GPU tasks, that was one of my questions for the beta test.
ID: 2019939 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2019973 - Posted: 21 Nov 2019, 21:26:50 UTC - in response to Message 2019939.  

I haven't dedicated any cores to the GPU tasks, that was one of my questions for the beta test.
Why not? I just checked: every ReadMe for the OpenCL and CUDA apps says you should free at least one CPU core for the GPU. It doesn't matter whether it's Beta or Main, OpenCL or CUDA; if you are running a GPU, you should free a CPU core (or cores). Depending on your configuration, freeing one may not be enough.
Here at Crunchers Anonymous: You should free at least One CPU core when running the GPU Apps.
From the setiathome-8.22-opencl_ati5_mac_darwin_README_OPENCL:
For best performance it is important to free 2 CPU cores running multiple instances. Freeing at least 1 CPU core is necessity to get enough GPU usage.
From the CUDA README_x41p_V0.98.txt:
You should reserve CPU usage for the GPU App by changing the "Use at most ___ % of the CPUs" to at least 99% in the BOINC - Computing preferences. For multiple GPUs you need to reserve more CPU usage. The App will run poorly without Free CPU time available.

Not freeing a core seems to be common throughout SETI, even though it's in all the ReadMe files.
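For a rough feel for that preference, here is a sketch only; it assumes BOINC simply truncates the product, and the 8-core host is just an example:
ncpus=8    # example host
pct=99     # "Use at most 99% of the CPUs"
echo $(( ncpus * pct / 100 ))   # prints 7, i.e. one core is left free for the GPU app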
ID: 2019973 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22190
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2019982 - Posted: 21 Nov 2019, 22:18:30 UTC

to at least 99% in the BOINC

This line should read:
to at most 99% in the BOINC



"At least 99" indicates that the value has to be 99 or greater, so by the logic as given 99.5 would be a correct value; but what is actually meant is that a value such as 98.5 would be correct.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 2019982 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2019983 - Posted: 21 Nov 2019, 22:35:37 UTC - in response to Message 2019973.  

Not freeing a core seems to be common throughout SETI, even though it's in all the ReadMe files.
You need to read the specific ReadMe for your app and operating system, carefully.

In general, CUDA apps do not require a whole CPU core in support, unless one of the synchronisation options like SWAN SYNC or -nobs is in operation for your specific application.
ID: 2019983 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2019984 - Posted: 21 Nov 2019, 22:59:41 UTC - in response to Message 2019983.  

In general, CUDA apps do not require a whole CPU core in support, unless one of the synchronisation options like SWAN SYNC or -nobs is in operation for your specific application.
In my experience, the only way to free a CPU core at BOINC startup is to reserve at least one full CPU core. To free one core at startup you can either set the BOINC preference to 99% of the CPUs or use an app_config to reserve one core; using the BOINC preferences is easier for most people. Any other configuration will fail to reserve a core at startup, because fractions of a CPU will not free a whole core at BOINC startup on machines with just one GPU. SETI would probably see a noticeable gain in production if the BOINC default were changed from 100% CPU to 99% CPU, since most machines have at least one GPU, and without a free CPU core that GPU will run poorly. I'd wager 90% of the GPUs at SETI are running poorly because the BOINC preference is left at its 100% CPU default.
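As a back-of-the-envelope sketch of that rounding behaviour (an assumption about how BOINC counts reserved cores, not taken from its source; 0.2 is just an example fractional reservation):
awk 'BEGIN { printf "1 GPU x 0.2 CPU reserved -> %d whole core(s) freed\n", int(1 * 0.2) }'
awk 'BEGIN { printf "1 GPU x 1.0 CPU reserved -> %d whole core(s) freed\n", int(1 * 1.0) }'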
ID: 2019984 · Report as offensive
AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2019992 - Posted: 21 Nov 2019, 23:21:27 UTC - in response to Message 2019984.  
Last modified: 21 Nov 2019, 23:29:46 UTC

Forgive me if I'm not reading this correctly, but I thought that is precisely why we have the <cpu_usage> tag in the app_config.xml file. Am I not reading this right? For instance, should the recommendation to save 2 cores for GPU processing on my 6-core i7 be set as .333333? I mean, yeah, there are likely a LOT of people who haven't read any of the readme files, because most people won't even read the help menu when they have problems understanding a program.
ID: 2019992 · Report as offensive
Profile CyborgSam

Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 2020002 - Posted: 21 Nov 2019, 23:50:21 UTC - in response to Message 2019984.  
Last modified: 22 Nov 2019, 0:40:49 UTC

Is this correct for app_config.xml to ensure each GPU gets a CPU? My memory needs jogging...

<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>1.0</gpu_usage>
			<cpu_usage>1.0</cpu_usage>
		</gpu_versions>
	</app>
	<app>
		<name>astropulse_v7</name>
		<gpu_versions>
			<gpu_usage>0.5</gpu_usage>
			<cpu_usage>1.0</cpu_usage>
		</gpu_versions>
	</app>
</app_config>

ID: 2020002 · Report as offensive
AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2020018 - Posted: 22 Nov 2019, 0:33:34 UTC - in response to Message 2019878.  
Last modified: 22 Nov 2019, 0:48:26 UTC

The only way to stop the server from sending different apps is to switch to the Anonymous platform using the files at Crunchers Anonymous; the instructions are in the posts there.


That may be true, but I've already migrated to Catalina, so that's no longer an option unless I want to dig through someone else's code and update it. With school, I have no time for that. The next resort is just to delete it.

$ cat /Library/LaunchDaemons/CleanSeti.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>EnvironmentVariables</key>
<dict/>
<key>Label</key>
<string>CleanSeti</string>
<key>ProgramArguments</key>
<array>
<string>/Library/Scripts/CleanSeti.sh</string>
</array>
<key>StandardErrorPath</key>
<string>/tmp/CleanSeti.err</string>
<key>StandardOutPath</key>
<string>/tmp/CleanSeti.out</string>
<key>StartInterval</key>
<integer>60</integer>
</dict>
</plist>

$ sudo cat /Library/Scripts/CleanSeti.sh
#!/bin/bash

cd /Library/Application\ Support/BOINC\ Data/projects/setiathome.berkeley.edu
/bin/rm *SoG*

I'll put in the test condition when I'm not lazy.
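For whenever the laziness passes, the test condition might look something like this (just a sketch; same path assumed):

#!/bin/bash
# CleanSeti.sh with a guard: only call rm when at least one *SoG* file exists.
dir="/Library/Application Support/BOINC Data/projects/setiathome.berkeley.edu"
cd "$dir" || exit 1
shopt -s nullglob          # an empty glob expands to nothing instead of the literal pattern
files=( *SoG* )
if (( ${#files[@]} > 0 )); then
    /bin/rm -- "${files[@]}"
fi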

Makes me miss the old Unix days... crontab -e
0-59 * * * * /bin/rm /Library/Application\ Support/BOINC\ Data/projects/setiathome.berkeley.edu/*SoG*
ID: 2020018 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2020019 - Posted: 22 Nov 2019, 0:40:43 UTC - in response to Message 2020002.  

Yes, that will reserve one CPU core for each GPU instance, even though the Mac AMD app doesn't use anywhere near one CPU core per GPU instance. So if you have, say, 4 GPUs, that setting will reserve 4 cores even though you really only need around 2 cores for 4 AMD GPUs. The problem is that most users don't have an app_config.xml. Instead they rely on the built-in setting, which only reserves a fraction of a core per instance, and with just one GPU a fractional reservation results in no part of a core actually being freed, which means their GPU will run poorly. You can also reserve CPU cores in the BOINC preferences without having to create and place an app_config.xml.

You can also have different settings for each plan class, which helps if you want to convince the server to send you tasks for just one app. Running 2 instances will lower your APR, and the server sends most tasks to the plan class with the highest APR. This setting will convince the server to send you tasks for the new 8.23 app, because the other plan classes will have a much lower APR:
<app_config>
   <app>
      <name>setiathome_v8</name>
   </app>
   <app_version>
       <app_name>setiathome_v8</app_name>
       <plan_class>cuda42_mac</plan_class>
       <cmdline>-poll</cmdline>
       <avg_ncpus>1</avg_ncpus>
       <ngpus>0.5</ngpus>
   </app_version>
   <app_version>
       <app_name>setiathome_v8</app_name>
       <plan_class>opencl_nvidia_mac_old</plan_class>
       <cmdline></cmdline>
       <avg_ncpus>1</avg_ncpus>
       <ngpus>0.5</ngpus>
   </app_version>
   <app_version>
       <app_name>setiathome_v8</app_name>
       <plan_class>opencl_ati_mac</plan_class>
       <cmdline></cmdline>
       <avg_ncpus>1</avg_ncpus>
       <ngpus>0.5</ngpus>
   </app_version>
   <app_version>
       <app_name>setiathome_v8</app_name>
       <plan_class>opencl_ati5_mac</plan_class>
       <cmdline></cmdline>
       <avg_ncpus>1</avg_ncpus>
       <ngpus>0.5</ngpus>
   </app_version>
   <app_version>
       <app_name>setiathome_v8</app_name>
       <plan_class>opencl_ati5_SoG_mac</plan_class>
       <cmdline></cmdline>
       <avg_ncpus>1</avg_ncpus>
       <ngpus>1</ngpus>
   </app_version>
 </app_config>
This will only work if the APR numbers actually track your APR, though. On my machines the APR numbers hang and don't give an accurate value. I had to remove the CUDA driver on my machine to get the server to send OpenCL tasks to the NV GPU; just as with the last host, my CUDA APR is hung and the server was only sending CUDA tasks to my NV GPU.
ID: 2020019 · Report as offensive
AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2020022 - Posted: 22 Nov 2019, 0:58:26 UTC - in response to Message 2020019.  
Last modified: 22 Nov 2019, 1:02:09 UTC

You can also have different settings for each plan class, which helps if you want to convince the server to send you tasks for just one app.


I think I like this solution. If I want to short-circuit a plan class like SoG, I can feed it a cmdline such as -version to kill the app prematurely. Plus I get finer-grained control over my apps.

Thanks for the added info regarding the AMD GPUs. I may lower my CPU reservation and test that out some.
ID: 2020022 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2020081 - Posted: 22 Nov 2019, 15:00:40 UTC

Well, that answers that.
2. CUDA 10.2 Release Notes
CUDA 10.2 (Toolkit and NVIDIA driver) is the last release to support macOS for developing and running CUDA applications. Support for macOS will not be available starting with the next release of CUDA.
It's dead Jim...
ID: 2020081 · Report as offensive
Ian&Steve C.

Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2020085 - Posted: 22 Nov 2019, 15:49:59 UTC - in response to Message 2020081.  
Last modified: 22 Nov 2019, 16:06:22 UTC

You'll still be able to use the app on the Mac with the "old" drivers and any GPU released as of now.

You just won't be able to use new, yet-to-be-released GPUs that will require the new drivers.

But maybe in 5-10 years, when the software/hardware is just too old and infeasible to use any more, it'll truly die.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2020085 · Report as offensive
AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2020115 - Posted: 22 Nov 2019, 18:02:14 UTC - in response to Message 2020085.  

You'll still be able to use the app on the Mac with the "old" drivers and any GPU released as of now.

You just won't be able to use new, yet-to-be-released GPUs that will require the new drivers.

But maybe in 5-10 years, when the software/hardware is just too old and infeasible to use any more, it'll truly die.


All of this is on the chopping block today. Apple has already announced a push away from OpenCL in general, in favor of its in-house Metal language. Support will remain for a time, but OpenCL is already slated for deprecation.
ID: 2020115 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2021223 - Posted: 30 Nov 2019, 22:31:28 UTC - in response to Message 2020081.  

Well, that answers that.
2. CUDA 10.2 Release Notes
CUDA 10.2 (Toolkit and NVIDIA driver) is the last release to support macOS for developing and running CUDA applications. Support for macOS will not be available starting with the next release of CUDA.
It's dead Jim...
At least the last Mac toolkit still works as well as the older ones; it may even be a couple of seconds faster than 10.1: https://setiathome.berkeley.edu/result.php?resultid=8282449499. The only problem is that the last Security Update for High Sierra totally borked the AMD driver on a Mac Pro 3,1. Before, it was just semi-borked: you could boot with the RX 570 and get a desktop, there just wasn't any acceleration. Now you can't even boot into High Sierra if the 570 is installed...wonderful. The 570 still works fine in Sierra, but if you want to boot into High Sierra you have to remove the card. Funny, the ancient HD 4670 still works fine in High Sierra on the 3,1.
ID: 2021223 · Report as offensive
Chris Adamek
Volunteer tester

Joined: 15 May 99
Posts: 251
Credit: 434,772,072
RAC: 236
United States
Message 2023342 - Posted: 16 Dec 2019, 14:27:40 UTC

Ran the betas for a couple of hours yesterday. GPU apps worked fine. All the CPU apps errored out at completion. Running Catalina 10.15.2 on a 2013 Mac Pro.

https://setiweb.ssl.berkeley.edu/beta/results.php?hostid=85165
ID: 2023342 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2023346 - Posted: 16 Dec 2019, 14:39:08 UTC - in response to Message 2023342.  

Hardly 'at completion' - they never got started:

<stderr_txt>
Process creation (../../projects/setiweb.ssl.berkeley.edu_beta/setiathome_8.06_i686-apple-darwin__osx_13) failed: Bad CPU type in executable (errno = -1)
</stderr_txt>
ID: 2023346 · Report as offensive
Chris Adamek
Volunteer tester

Joined: 15 May 99
Posts: 251
Credit: 434,772,072
RAC: 236
United States
Message 2023348 - Posted: 16 Dec 2019, 14:41:56 UTC - in response to Message 2023346.  

Good point, I dunno what it was doing for 2.5 hours then...lol

Thanks!
ID: 2023348 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2023357 - Posted: 16 Dec 2019, 15:12:04 UTC - in response to Message 2023348.  

If you look at the results you'll see that some of the version 8.06 CPU tasks are working...some aren't. It seems to be a problem with the newer version; the older 8.05 version mostly works. When I couldn't get the optimized CPU apps to work with Catalina, I tried the SETI-boinc build and quickly found that the code calls for a file that doesn't exist on a Mac. The file has to do with the part of the code that tests the CPU type, which is probably why the app can't figure out what CPU it's running on. I dunno; I don't build the SETI-boinc CPU apps and don't have a clue who does. Meanwhile, the OpenCL GPU apps are working well.
ID: 2023357 · Report as offensive