Qs about GPU utilization

Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992180 - Posted: 1 May 2019, 18:44:40 UTC

Does BOINC load up the GPU with tasks and let them run in the GPU's memory? Can the host's hardware significantly impact the GPU's performance for BOINC?

I have a 2009 Mac Pro with dual 6-core Xeon X5690s running Apple's Mojave. I have a similar-generation Super Micro dual-Xeon server mobo that inherited the Mac Pro's original 4-core X5670s; it runs Linux Mint. Both have PCIe 2.0 x16 slots. Both have 32GB RAM and SSDs.

Currently the Mac Pro has an MSI Radeon RX 560 4GB while the Super Micro has the Mac Pro's old NVIDIA GT120.

I'm getting an MSI Radeon RX 570 8GB, which will go in the Mac Pro.

Would I be better off leaving the RX 560 in the Mac Pro or moving it to the Super Micro?

What makes this a difficult decision is that the Super Micro appears to be under-performing across all BOINC projects (some projects don't use the GPU). Its cores run at 100% and it is very well cooled. The Mac Pro's SETI average credit is roughly 2,200; the Super Micro's is 450. On BOINCstats the Mac Pro accumulates roughly 9x the credit of the Super Micro.

TIA,
Sam
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1992181 - Posted: 1 May 2019, 19:01:35 UTC - in response to Message 1992180.  

By modern standards, that NVIDIA GT120 counts as prehistoric. To my surprise, it is actually returning valid work - it's just young enough, by about six months - but it's taking over 5 hours to complete each task. The best modern GPUs can complete the tasks in under a minute.

It will also be very inefficient in terms of energy wasted. GPU computing has developed by leaps and bounds in the 10 years since that card was manufactured, and come down in price. Your best bet would be to donate that card to a museum and buy a modern one.
Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992189 - Posted: 1 May 2019, 19:36:33 UTC - in response to Message 1992181.  
Last modified: 1 May 2019, 19:39:31 UTC

Richard->

It's better than nothing - barely... My main question is whether the RX 560 will work better as a second GPU in my Mac Pro or in the Super Micro mobo. What I don't know is how BOINC/SETI utilizes the GPUs, and what constraints apply.

Sam
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1992192 - Posted: 1 May 2019, 20:03:17 UTC - in response to Message 1992189.  
Last modified: 1 May 2019, 20:11:57 UTC

Hmmm, a terrible waste of a perfectly good PCIe Mac. If you plan on keeping the 560s it would probably be best to keep them in the Mac. That would leave the Linux machine free for a New Maxwell or higher nVidia GPU so you can run the CUDA Special App in Linux. You can also run the CUDA Special App in a Mac Pro, but not with Mojave, and not with an AMD card. IMO it really isn't worth running those CPUs and burning all that power. A cheap nVidia GTX 750 Ti would burn much less power and produce much more work. Take a look at my 2008 Mac Pro running an AMD 570 and two low end NV cards, even the 30 watt 750 Ti puts the 150 watt 570 to shame, https://setiathome.berkeley.edu/results.php?hostid=8097309&offset=480

If you insist on running the CPUs, there is a much better App for the older CPUs. The 'stock' SETI CPU App doesn't work well on them, but you'd have to run Anonymous platform to use the better CPU App, as I'm doing. The 560s run about the same as my old ATI 6850s, and you can see I'm not running those anymore. The good news is the 560s burn less power than a 6850 even though the run-times are similar. There really isn't much hope for the AMD cards; at SETI the nVidia cards run circles around them. You could try the same configure line I'm running on the 570, which should help the 560s out a little:
-sbs 256 -oclfft_tune_wg 256 -tune 1 64 1 4 -spike_fft_thresh 2048 -pref_wg_num_per_cu 6 -period_iterations_num 2
Hopefully you know about having to reset the file permissions every time you change something in the setiathome.berkeley.edu folder? If not, there is a script you can run in the Terminal that makes life much easier: Tools for Mac OS X
Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992197 - Posted: 1 May 2019, 20:47:59 UTC - in response to Message 1992192.  
Last modified: 1 May 2019, 20:48:17 UTC

TBar->

Thanks for the wealth of info. I have some constraints that prevent me from optimizing my Mac Pros for SETI or other BOINC projects. The first is that one of them has to run Mojave. The second is budget; to quote Buddy Guy: "I'm so broke I can't even spend the night"... I live in Seattle, where electric rates are relatively low, so recouping the cost of new, more efficient equipment takes longer.

I needed a Metal-capable GPU for Mojave and the RX 560 was the lowest-priced one that people verified worked well in Mojave. The RX 570 just came up on sale for less than a 560, so I grabbed one. Again: Metal-capable.

The server board has no constraints, I bought it for BOINC. My other projects are Einstein@home, Rosetta@home, and World Community Grid.

So my main dilemma is whether to leave the RX 560 in the Mac Pro or use it in the server board.

If the 560 stays in the Mac Pro I'll try to pick up a cheap used graphics card for the server board at the computer scrappers. Any recommendations for good older cards that current Windows users & gamers wouldn't want?

Thanks again,
Sam
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1992205 - Posted: 1 May 2019, 21:29:22 UTC - in response to Message 1992189.  

It's better than nothing - barely

Given its power requirements, and the amount of work it can do each hour, you would actually be better off with nothing IMHO.
Grant
Darwin NT
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1992209 - Posted: 1 May 2019, 21:38:48 UTC - in response to Message 1992197.  
Last modified: 1 May 2019, 21:43:00 UTC

My first choice would be an eBay GTX 1060. I have quite a few of them; the current price is around $125 for the EVGAs and around $100 for the Gigabyte GTX 1060 Windforce 3GB. I have both and there is little difference, except the Gigabyte one has a backplate which makes it a tight fit for more than 1 card. Both of them use less than 100 watts crunching SETI and make good use of those watts with the CUDA Special App. Stay away from the 9 series; they are slower and use more power, especially the 970. If a 1060 is too much, I'd recommend a GTX 950 on the low end. The 1070s are nice, but still twice as much as a 1060 without being twice as fast. I have a few 1070s in a Hack - BTW, those 150,372,024 credits mean that will be a 'special' Mac in a few months. You can see how the CPU App works on my machine, SSE4.1 v3711.
Profile TimeLord04
Volunteer tester
Joined: 9 Mar 06
Posts: 21140
Credit: 33,933,039
RAC: 23
United States
Message 1992328 - Posted: 2 May 2019, 21:08:20 UTC

DON'T forget!!! IF you stay with Mojave, you CANNOT run these NVIDIA Cards! This is because there are NO NVIDIA Web Drivers for Mojave, and CUDA Driver 418.105 (which DOES NOT work in Mojave) is REQUIRED to run TBar's and Petri's CUDA90 App for Mac - in High Sierra.

I would recommend installing High Sierra 10.13.6 (I have this on my Mid-2010 Mac Pro 5,1 and on my Hackintosh) and upgrading to the latest Security Patch (17G6030). Then install the appropriate NVIDIA Web Driver, shut down the System, install whichever NVIDIA Card you wish to use (the 10x0, 9x0, 8x0, and 750 Ti Cards are ALL Metal capable BUT DON'T run in Mojave), restart the System, and install CUDA Driver 418.105. Then reinstall BOINC.
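
If you want to double-check that everything actually loaded after all that installing, here's a quick Terminal sanity check (just a sketch - the exact kext names can vary between Driver releases):
kextstat | grep -i nvda    # NVIDIA Web Driver kexts (e.g. NVDAStartupWeb)
kextstat | grep -i cuda    # com.nvidia.CUDA should appear once the CUDA Driver is loaded
system_profiler SPDisplaysDataType    # lists the GPUs macOS actually sees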

IF you choose to run TBar's CUDA90 App, at that point someone will be able to assist you in getting it installed and configured for whatever NVIDIA Card you put into the System.

Right now, my Mac Pro is shut down due to CA's HIGH and EXORBITANT Electricity Rates. I'm only Crunching on the Hackintosh - iMac 18,3 Profile, with NVIDIA Driver 387.10.10.10.40.124 and CUDA Driver 418.105.

ALSO, you need to look up PSU and MOBO Specs for Power Consumption on that 2009! I found that my 2010 CAN support 300W Draw from the two 6-Pin Connectors on the MOBO including the PCI-e Draw of the GPU(s).

My 2010 Mac Pro has one Mac GTX-970 4GB GPU, and a Secondary EVGA GTX-1050 2GB GPU. The 970 takes up BOTH 6-Pin Ports on the MOBO. The 1050 ONLY pulls from the PCI-e Slot. AND, I agree with TBar, get a GTX-1060 Card. I have to have the older 970 because my Second SSD Drive has Sierra 10.12.1 with the old ADOBE CS-6 Suite. That combination DOES NOT recognize the 10x0 GPU Line. My Primary High Sierra 10.13.6 SSD makes use of both GPUs utilizing TBar's CUDA90 App.

I hope this information helps, and I'm sure that TBar can clarify any questions you may have on this.


TL
TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join Calm Chaos
Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992387 - Posted: 3 May 2019, 4:44:45 UTC - in response to Message 1992328.  

TimeLord04: Thanks for the advice. I'd put any NVIDIA I get in my old dual-Xeon server mobo running Linux Mint, not in the Mac Pros.

Sadly I'm aware of the Mojave/NVIDIA debacle. My dual-CPU Mac Pro is not dedicated to BOINC and has to run Mojave, so for now it's stuck with ATI/AMD. The single-CPU Mac Pro could run an earlier macOS, but I already had the Metal-capable Radeon RX 560 for it, so I'll leave it on Mojave for now.

I just shoe-horned the new Radeon RX 570 into my dual-CPU Mac Pro alongside the RX 560. The 570 encroaches on the remaining slot's space; luckily the USB-3 board I have just squeezed in. I had to move my boot drive from an OWC Accelsior PCI card to a sled using a NewerTech AdaptaDrive, and I gave up an eSATA card. The 570 uses 150W, the 560 uses 90W; the limit for the 2009 is the same as the 2010, 300W. It is running hot in the expansion slots though, so I use Macs Fan Control to dial up the fans to keep things cooler.
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1992422 - Posted: 3 May 2019, 12:14:13 UTC - in response to Message 1992387.  
Last modified: 3 May 2019, 12:38:28 UTC

Quite a bit of difference between my 570 and your 570. A large part is probably because you aren't running a configure line in the mb_cmdline file. It will speed up the times considerably, and the first part of the Stderr output will look like this:
<stderr_txt>
Running on device number: 0
Maximum single buffer size set to:1024MB
oclFFT global radix override set to:256
oclFFT max WG size override set to:256
SpikeFind FFT size threshold override set to:2048
Preferred workgroups number per compute unit set to 6.
Number of period iterations for PulseFind set to 2
OpenCL platform detected: Apple.....

Look in /Library/Application Support/BOINC Data/projects/setiathome.berkeley.edu for a file that starts with mb_cmdline_mac_OpenCL... and open it with TextEdit; it should be blank.
Then copy the line below and paste it into the first line of the file. Save the file - do not use Save As:
-sbs 1024 -oclfft_tune_gr 256 -oclfft_tune_wg 256 -spike_fft_thresh 2048 -pref_wg_num_per_cu 6 -period_iterations_num 2
Then change the owner of the file back to boinc_master, either using the script or chown in the Terminal. I have copied the Mac_SA_Secure.sh script to my Application Support folder and just run that in the Terminal to change the file permissions:
cd /Library/Application\ Support/BOINC\ Data
sudo sh /Library/Application\ Support/Mac_SA_Secure.sh
Also, reinstalling BOINC will change the owner back to boinc_master, but the Terminal is much faster. If you used the Terminal you don't have to do anything further; the next time the App starts it will use the new configure line, and run much faster.
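
If you'd rather chown the file directly, something like this should do it (assuming the default BOINC Data location - the wildcard just saves retyping the long file name):
sudo chown boinc_master /Library/Application\ Support/BOINC\ Data/projects/setiathome.berkeley.edu/mb_cmdline*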

My current setup has the GTX 950 in the top x4 slot. Since the 950 requires a six-pin, I have used a SATA-to-six-pin power adapter and use the power from one of the removed HDDs. The middle 750 Ti doesn't need an external connector. The AMD 570 requires an eight-pin for a 150 watt card? It will do just fine running on one 75 watt cable with a six-pin to eight-pin adapter. That leaves a free 75 watt cable for the next time I want to replace the 750 Ti with a stronger card. I have used Macs Fan Control for a very long time.
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1992457 - Posted: 3 May 2019, 17:46:25 UTC - in response to Message 1992197.  


If the 560 stays in the Mac Pro I'll try and pick up a cheap used graphics card for the server board at the computer scrappers. Any recommendations for good older boards that current Windows users & gamers wouldn't want?

Thanks again,
Sam

If you can't afford a used GTX 1060 3GB, then I know that used GTX 750 Ti's run well with both Windows and Linux (special sauce).

Tom
A proud member of the OFA (Old Farts Association).
Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992479 - Posted: 3 May 2019, 20:03:19 UTC - in response to Message 1992422.  

TBar->

Thanks again! BTW: I'm a Mac guru, very familiar with the command line, et al. I just don't know squat about BOINC's innards so I'm very appreciative of help.

The RX 570 requires 8-pin power, so I bought an 8-pin to dual 6-pin adapter. This worked because the RX 560 only uses the slot's power. The 570 says it's using 16 lanes, the 560 8. My gut feeling is that faster PCIe speeds don't radically impact BOINC's throughput. Is this correct?

FWIW: both my Mac Pros run headless; I use DVI dummy plugs so the displays don't default to a 1990s CRT size.

I modified the mb_ file with sudo nano mb_... so I didn't have to mess with permission changes. Since it's a Mac it was trivial to paste the line into Terminal. What's the easiest way to see the difference between my old and new setup?

The Mac Pro is hovering at about 400W according to its power supply. I'll stick a Kill-A-Watt on it to see what the actual draw is. It's heating the room even more with the RX 570... Now that spring is arriving in Seattle I will be wasting energy, though heating the room with a Mac Pro is no worse than baseboard heating...
Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992481 - Posted: 3 May 2019, 20:06:21 UTC - in response to Message 1992457.  

Tom->

Thanks. I might splurge one last time and get a used GTX 1060. The dual-Xeon mobo's BOINCstats credit has been lower than I thought it should be. Worst case, if the dual-Xeon isn't worth the electricity, I'll put the GTX 1060 in my single-CPU Mac Pro and run an older macOS that supports NVIDIA.

Sam
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1992490 - Posted: 3 May 2019, 20:52:36 UTC - in response to Message 1992479.  

I modified the mb_ file with sudo nano mb_... so I didn't have to mess with permission changes. Since it's a Mac it was trivial to paste the line into Terminal. What's the easiest way to see the difference between my old and new setup?
You would see it first in the BOINC Data/slots/X/stderr.txt file; once the task is finished the results are printed to the BOINC Data/client_state.xml file. Once reported, the results appear on the Host web page here: https://setiathome.berkeley.edu/results.php?hostid=8699856&offset=20. Looking at the last result it appears the change didn't take effect - it's not showing up in the Stderr output. You should have only one file that begins with mb_cmdline_mac... If you have more, make sure it's the one that has ATI in the name.
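
If you want to watch a running task from the Terminal, something like this works (the slot number changes from task to task, hence the wildcard, and you may need sudo depending on how your permissions are set):
tail -f /Library/Application\ Support/BOINC\ Data/slots/*/stderr.txt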

Looking at your Macs with the very old GPUs, it should be apparent the old GPUs are slower than the CPUs, when they work. The best you can do for those machines is to disable GPU computing and just use the Optimized CPU App. There are a few optimized CPU Apps in this thread, https://arkayn.us/forum/index.php?topic=191.msg4368#msg4368, just make sure to choose the correct one. If there isn't a GPU App in the app_info.xml, the server will not send any GPU tasks and the GPU will effectively be 'disabled'. So, by 'installing' only the CPU App package the GPU will be disabled.
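
For illustration, a CPU-only app_info.xml skeleton looks roughly like this (the file name here is just a placeholder - use whatever binary name the CPU App package actually ships with):
<app_info>
    <app>
        <name>setiathome_v8</name>
    </app>
    <file_info>
        <name>setiathome_CPU_app</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <version_num>800</version_num>
        <file_ref>
            <file_name>setiathome_CPU_app</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
With no GPU <app_version> entries in there, the scheduler has nothing to send GPU work for.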
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1992497 - Posted: 3 May 2019, 21:45:54 UTC

Well, I see a couple results now that show the change. However, there doesn't appear to be any noticeable improvement. That shouldn't be, unless you don't have a couple of CPU cores reserved for the GPUs. I don't see any SETI CPU tasks finishing - are you running the CPUs at all? If you haven't reserved any CPU for the GPUs, try setting 'Options/Computing preferences/Computing' to 'Use at most 90% of the CPUs'. If that isn't the problem, I'm not sure what else could be. Looks as though someone else is having the same problem. My machine is only running 2 CPU tasks with 3 GPU tasks, and shows a CPU usage of around 60% total.
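
If I remember right the client simply truncates that percentage, so on a 24-core box the math works out like this:
24 cores x 90% = 21.6 -> 21 cores for CPU tasks, leaving 3 free
24 cores x 96% = 23.04 -> 23 cores for CPU tasks, leaving only 1 free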
Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992502 - Posted: 3 May 2019, 23:26:42 UTC - in response to Message 1992497.  

On the dual-CPU Mac Pro I changed the prefs to 92% so 2 of the 24 cores would be free, but there were only 23 cores total in use. I set the percentage to 96, which should have left only 1 CPU free, yet 22 are in full use plus two for the GPUs.

On the single-CPU Mac Pro the math is weirder. I set the % to 92, which should have left 1 of the 12 cores for the GPU, but it left 2. I kept increasing the % up to 99 and it was still only using 10. So I tried setting it to 100% again and now there are 11 CPU-only tasks and one CPU/GPU task. Huh? On both Mac Pros I was using my global settings from BOINCstats BAM!; perhaps that made a difference when I localized them.

I'll have to wait for tasks in the other projects to switch back to SETI to see if this worked correctly. I upped SETI's resource share to get more data.

I may retire the old Mac laptops and one of the minis. The MacBook Pros were ready for recycling; I hacked their cases so I could sit huge heat sinks on top of their heat pipes, and they run cool at 100% load. They might not be worth the small amount of electricity they consume, but it was a fun project and I had no great expectations. Now that it's spring and my home's heat is off, the Macs' heat is a liability, not an alternative.

On the plus side, I just set up another Raspberry Pi and installed BOINC. The RPis only run SETI since none of the other projects I'm on support ARM. They have other purposes; I did not buy them for BOINC.
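
(For the curious: getting BOINC onto Raspbian is just the packaged install - something like
sudo apt update
sudo apt install boinc-client boinc-manager
and then attach to SETI as usual.)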
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1992504 - Posted: 3 May 2019, 23:49:07 UTC

ATM I wouldn't worry about how many cores are being used; what you should be looking at now is your task completion times. ;-)

Cheers.
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1992515 - Posted: 4 May 2019, 0:34:05 UTC - in response to Message 1992502.  

...But there were only 23 cores total in use....
Yes, I had a feeling it had to be a lack of CPU resources causing the GPUs to run so poorly. I think people have found that with the high core count CPUs they have to reserve more CPU than with the lower core counts. There is still a large difference between your 570 and my 570. A good test would be to keep lowering your CPU task count and observing your GPU times. I think you'll find that when the total CPU usage drops below ~90% the GPUs will run much better. I use my real Mac for other non-SETI tasks, so I keep the CPU usage down to around 60-70% total, and you can see how well my 570 is running. It's just a little slower than my 30 watt 750 Ti ;-)
Profile CyborgSam
Joined: 28 Apr 99
Posts: 63
Credit: 4,541,759
RAC: 5
United States
Message 1992546 - Posted: 4 May 2019, 6:18:07 UTC - in response to Message 1992515.  

Should I need just a single free CPU core for each GPU? So if other tasks use 22 of the 24, I should be OK, correct?

I've set the usage down to 90% for the night; let's see what happens.
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1992581 - Posted: 4 May 2019, 13:45:13 UTC - in response to Message 1992546.  

Some people say they need to reserve more to have the GPUs run normally. Right now it appears you are only running 1 GPU; I don't see any 570 results.