Linux CUDA 'Special' App finally available, featuring Low CPU use

Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use
rob smith
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 23000
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1867082 - Posted: 13 May 2017, 7:42:07 UTC

I did, in the very early days, and the results were disastrous: errors and highly extended run times. More recent versions of the application may be better behaved, but given that the application is designed to use "all" the compute cores on any given GPU, and as much memory as it needs, I can't see running two tasks per GPU being of any real advantage.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1867082
Profile tazzduke
Volunteer tester

Joined: 15 Sep 07
Posts: 190
Credit: 28,269,068
RAC: 5
Australia
Message 1867091 - Posted: 13 May 2017, 9:49:24 UTC - in response to Message 1867082.  

Greetings

Agreed
ID: 1867091
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1867136 - Posted: 13 May 2017, 16:53:22 UTC

Read the README_x41p_zi3t2b.txt file in docs for best use...
2) Run one task per GPU.
3) Covers GPUs with 2 GB of vRAM or less, and how a 2 GB GPU may not be able to use unroll 8. My 960 in my Mac runs out of memory at unroll 8 if attached to a monitor.

Anyone with a 2GB 960 may want to look at nvidia-smi and see how close they are to running out of memory when running just 1 task.
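For anyone who wants to script that check, a small sketch is below. The nvidia-smi query flags are real; the mem_report helper and the sample figures are purely illustrative:

```shell
# Parse "used, total" MiB pairs (one line per GPU) and report headroom.
# mem_report is a hypothetical helper; the nvidia-smi flags below are real.
mem_report() {
  while IFS=', ' read -r used total; do
    echo "GPU: ${used} MiB of ${total} MiB ($(( used * 100 / total ))%)"
  done
}

# On a real host, feed it nvidia-smi's CSV query output:
#   nvidia-smi --query-gpu=memory.used,memory.total \
#              --format=csv,noheader,nounits | mem_report

# Illustrative sample: a 2 GB card close to its limit.
echo "1875, 2048" | mem_report   # -> GPU: 1875 MiB of 2048 MiB (91%)
```

Anything consistently above ~90% when running one task suggests there is no headroom for a second.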
Lastly, if there were any advantage whatsoever to running 2 tasks, I can assure you Petri would be running 2 tasks...he isn't.

On another note, it appears the downloads at Crunchers Anonymous are broken. I have no idea how long it will take to fix that.
ID: 1867136
Profile scocam
Joined: 28 Feb 17
Posts: 27
Credit: 15,120,999
RAC: 0
United States
Message 1867155 - Posted: 13 May 2017, 20:19:55 UTC - in response to Message 1867136.  
Last modified: 13 May 2017, 20:20:27 UTC

Lastly, if there were any advantage whatsoever to running 2 tasks, I can assure you Petri would be running 2 tasks...he isn't.


Very true. Perhaps I'll run a single wu/gpu this week and take another look at my results. Thanks!


scocam
ID: 1867155
Profile tazzduke
Volunteer tester

Joined: 15 Sep 07
Posts: 190
Credit: 28,269,068
RAC: 5
Australia
Message 1867394 - Posted: 15 May 2017, 3:54:53 UTC

Greetings all

Am running the latest version of the special app and have noticed a low number of inconclusives over two cards going hard at it for the last two weeks. 26 is my number.

Regards
ID: 1867394
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1867401 - Posted: 15 May 2017, 5:07:59 UTC - in response to Message 1867155.  

Lastly, if there were any advantage whatsoever to running 2 tasks, I can assure you Petri would be running 2 tasks...he isn't.


Very true. Perhaps I'll run a single wu/gpu this week and take another look at my results. Thanks!


scocam


. . Hi Scocam,

. . I noticed today that you are already at number 6 on the hit parade with a bullet. After what, less than 2 weeks? I doubt that you will make number one, or even number 2, but you are still on the rise. :) Good going.

Stephen

:)
ID: 1867401
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1867403 - Posted: 15 May 2017, 6:03:12 UTC - in response to Message 1867401.  
Last modified: 15 May 2017, 6:18:39 UTC

It's not difficult to see where the host will settle out. Just compare the daily totals to the other hosts; they even provide a nifty graph to make it easy:

[daily-credit graphs for scocam's and Brent's hosts]

It certainly appears the four 1070s are producing more than the mixed breed currently at #2.

Looking at recent times, there could be a slight advantage to running two tasks if you are running with Blocking Sync enabled. Since Petri doesn't use Blocking Sync, there may not be any advantage in his case. The times are very close and probably wouldn't matter to most people; and running two tasks hasn't been tested, so it could result in additional inconclusives, errors, and invalids. You would also need to make sure there was enough vRAM available: the higher the unroll setting, the more vRAM is needed. Running nvidia-smi in the terminal will print how much memory is being used. You can see the same in the NVIDIA X Server Settings app.
ID: 1867403
Profile Brent Norman
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1867405 - Posted: 15 May 2017, 6:41:09 UTC - in response to Message 1867403.  

I knew as soon as scocam fired his rig up it would do better than mine.
Even just looking at CU/SMI, he has 60 and I have 56, so he has about 7% more.

I made it to #2, but that is because that Windoze box lost a GPU, now at 6.
ID: 1867405
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 14022
Credit: 208,696,464
RAC: 304
Australia
Message 1867406 - Posted: 15 May 2017, 6:42:50 UTC - in response to Message 1867403.  

Looking at recent times, it could be there is a slight advantage to running Two tasks If you are running with the Blocking Sync enabled.

Did you mention in a previous post that -poll no longer works?
When running CUDA50 with CPU cores to spare, using -poll gave a huge boost to output with either 1 or 2 WUs running at a time. The less time the GPU spent waiting on the CPU, the more work it could do.
Grant
Darwin NT
ID: 1867406
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 14022
Credit: 208,696,464
RAC: 304
Australia
Message 1867407 - Posted: 15 May 2017, 6:48:41 UTC - in response to Message 1867405.  
Last modified: 15 May 2017, 6:50:25 UTC

I knew as soon as scocam fired his rig up it would do better than mine.
Even just looking at CU/SMI, he has 60 and I have 56, or 7% more.

I made it to #2, but that is because that Windoze box lost a GPU, now at 6.

Time to replace the GTX 980 & GTX 1080s with TitanXps... (if only their price wasn't so ridiculous).

GTX 1080 Ti
CUDA cores 3584
Boost 1582 MHz
Memory bandwidth 484 GB/s

TitanXp
CUDA Cores 3840
Boost 1582 MHz
Memory bandwidth 547.7 GB/s
Grant
Darwin NT
ID: 1867407
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1867434 - Posted: 15 May 2017, 14:55:22 UTC - in response to Message 1867406.  

Looking at recent times, it could be there is a slight advantage to running Two tasks If you are running with the Blocking Sync enabled.

Did you mention in a previous post that -poll no longer works?
When running CUDA50 with CPU cores to spare, using -poll gave a huge boost to output with either 1 or 2 WUs running at a time. The less time the GPU spent waiting on the CPU, the more work it could do.
About the only thing the old CUDA app has in common with the new one is that they both work poorly with the latest Mac nVidia drivers. Try comparing the times on these two 1070 machines. One is using a full CPU core, the other is using very little. The times are so close they could be caused by minute differences in the machines;
https://setiathome.berkeley.edu/results.php?hostid=7843077&offset=280
https://setiathome.berkeley.edu/results.php?hostid=8257416&offset=820
The major difference is one will light up your CPU & the power meter, the other won't.
ID: 1867434
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1867443 - Posted: 15 May 2017, 16:27:34 UTC

Any news on checking Jason's proposal regarding the race condition?
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1867443
Profile scocam
Joined: 28 Feb 17
Posts: 27
Credit: 15,120,999
RAC: 0
United States
Message 1867494 - Posted: 15 May 2017, 19:32:13 UTC
Last modified: 15 May 2017, 19:35:12 UTC

I was quite surprised when I just took a look at the numbers. I didn't expect this machine to do this well so quickly. It is a beast of a machine so I knew it would be competitive but I had no expectation of the potential until reading everyone's comments regarding rank. My goal was to make it to the first page of Hosts (top 20) but I never expected it to get within the top 20 in a week and be #6 soon after that. Perhaps I built too much machine? I was planning on adding 3 more 1070s once prices dropped but I think I'm fairly content with the performance for now. Temps are cool, bugs have been worked out and it's running like a top (knock on wood). I can't tell you how much I appreciate all the support on these forums. So much knowledge here.

As of this morning, I'm running a single wu/gpu and plan to do so until Sunday. It may be difficult to compare graphs since it's such a new build but we'll see where it shakes out.

One thing that I've been wondering is how I can obtain additional tasks. I've noticed that many of the top hosts have hundreds or thousands more "In Progress" tasks than I can ever seem to get (I max out at 500 tasks). Is there some type of trick I'm missing? I'll run out of all tasks, GPU and CPU, within hours of the Tuesday maintenance window.

Whoops! Today is going to be a slow day for this machine. I just now realized that I forgot to re-enable "Allow new tasks" while doing some ghost maintenance this morning, so I've been crunching nothing but CPU WUs for the past few hours.

scocam
ID: 1867494
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1867529 - Posted: 15 May 2017, 23:16:43 UTC - in response to Message 1867494.  

My goal was to make it to the first page of Hosts (top 20) but I never expected it to get within the top 20 in a week and be #6 soon after that. Perhaps I built too much machine? I was planning on adding 3 more 1070s once prices dropped but I think I'm fairly content with the performance for now. Temps are cool, bugs have been worked out and it's running like a top (knock on wood). I can't tell you how much I appreciate all the support on these forums. So much knowledge here.

It's a good thing you didn't or you would push poor Petri out of his comfortable chair :)


One thing that I've been wondering is how I can obtain additional tasks. I've noticed that many of the top hosts have hundreds or thousands more "In Progress" tasks than I can ever seem to get (I max out at 500 tasks). Is there some type of trick I'm missing? I'll run out of all tasks, GPU and CPU, within hours of the Tuesday maintenance window.

There is a system limit: 100 tasks for your CPU and 100 more for each GPU, so with your four GPUs 100 + 400 gives your limit of 500. Those with "in progress" tasks numbering in the thousands are either using special tricks or have delinquent machines with lots of ghosted tasks.
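That arithmetic can be sketched in a couple of lines of shell. The 100-per-device figures come from the post above; the task_limit name is just illustrative:

```shell
# Per-host task cap as described above: 100 for the CPU plus 100 per GPU
# (figures from the post; task_limit is a hypothetical helper name).
task_limit() {
  gpus=$1
  echo $(( 100 + 100 * gpus ))
}

task_limit 4   # a four-GPU host -> 500
task_limit 0   # a CPU-only host -> 100
```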

Whoops! Today is going to be a slow day for this machine. I just now realized that I forgot to re-enable "Allow new tasks" while doing some ghost maintenance this morning, so I've been crunching nothing but CPU WUs for the past few hours.

scocam


Been there and done that, and quite recently too :(

Stephen

:)
ID: 1867529
Profile Jeff Buck
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1867546 - Posted: 16 May 2017, 1:31:11 UTC - in response to Message 1867494.  

One thing that I've been wondering is how I can obtain additional tasks. I've noticed that many of the top hosts have hundreds or thousands more "In Progress" tasks than I can ever seem to get (I max out at 500 tasks). Is there some type of trick I'm missing? I'll run out of all tasks, GPU and CPU, within hours of the Tuesday maintenance window.
There's one approach to the problem with the Tuesday outages draining work buffers that I don't think has really been discussed much. That's simply to make use of a dual-boot setup, with what is essentially two hosts sharing the same hardware. When the primary host runs dry (or gets close to it), just boot over to the other one and, assuming a sufficient work buffer was in place before the outage, keep on crunching. The credits and RAC obviously get assigned to the second host, but more S@h work gets squeezed out of the same physical box.

Last Tuesday was the first outage where I was running Linux and a Special App on one of my boxes, and the first one where I ran through nearly all 400 of its GPU tasks. When it got near the end, I just switched back to Windows for the remaining hours, since I had maintained a limited work buffer there. I expect to do the same tomorrow, with two boxes this time.

I imagine such an approach could also work with a VM setup, but I have no experience in that arena.
ID: 1867546
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1867550 - Posted: 16 May 2017, 1:51:25 UTC - in response to Message 1867529.  


There is a system limit: 100 tasks for your CPU and 100 more for each GPU, so with your four GPUs 100 + 400 gives your limit of 500. Those with "in progress" tasks numbering in the thousands are either using special tricks or have delinquent machines with lots of ghosted tasks.

Whoops! Today is going to be a slow day for this machine. I just now realized that I forgot to re-enable "Allow new tasks" while doing some ghost maintenance this morning, so I've been crunching nothing but CPU WUs for the past few hours.

scocam


Been there and done that, and quite recently too :(

Stephen

:)

With the ongoing poor release of BLC tasks, I have been having a field day simply using Mr. Kevvy's rescheduler to stock up on GPU tasks for Numbskull. The rescheduler is working quite well without making new ghosts, unlike my attempts at using the CPU2GPU script. I wish the same task mix was available for the Windows 7 machines. I am at 500 or so total tasks right now, so 200 above my normal allotment. Does anybody know what the maximum number of tasks allowed by the project is?
SETI@home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1867550
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1867557 - Posted: 16 May 2017, 2:34:14 UTC - in response to Message 1867546.  

One thing that I've been wondering is how I can obtain additional tasks. I've noticed that many of the top hosts have hundreds or thousands more "In Progress" tasks than I can ever seem to get (I max out at 500 tasks). Is there some type of trick I'm missing? I'll run out of all tasks, GPU and CPU, within hours of the Tuesday maintenance window.
There's one approach to the problem with the Tuesday outages draining work buffers that I don't think has really been discussed much. That's simply to make use of a dual-boot setup, with what is essentially two hosts sharing the same hardware. When the primary host runs dry (or gets close to it), just boot over to the other one and, assuming a sufficient work buffer was in place before the outage, keep on crunching. The credits and RAC obviously get assigned to the second host, but more S@h work gets squeezed out of the same physical box.

Last Tuesday was the first outage where I was running Linux and a Special App on one of my boxes, and the first one where I ran through nearly all 400 of its GPU tasks. When it got near the end, I just switched back to Windows for the remaining hours, since I had maintained a limited work buffer there. I expect to do the same tomorrow, with two boxes this time.

I imagine such an approach could also work with a VM setup, but I have no experience in that arena.

Or just run another instance of BOINC. It is quite a bit less work.
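For reference, a second client instance can be launched along these lines. The --allow_multiple_clients, --gui_rpc_port, and --dir flags are real boinc-client options; the directory and port here are just example values:

```shell
# Example path and port only; the flags themselves are real boinc-client options.
SECOND_DIR="$HOME/boinc2"      # hypothetical second data directory
mkdir -p "$SECOND_DIR"

# A second client needs its own data directory and a distinct GUI RPC port.
CMD="boinc --allow_multiple_clients --gui_rpc_port 31417 --dir $SECOND_DIR"
echo "$CMD"
# Launch it in the background when the primary host runs dry:
#   $CMD &
```

Attach the second data directory to the project once, then it can buffer its own work for outages.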
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1867557
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1867573 - Posted: 16 May 2017, 4:16:51 UTC - in response to Message 1867546.  


Last Tuesday was the first outage where I was running Linux and a Special App on one of my boxes, and the first one where I ran through nearly all 400 of its GPU tasks. When it got near the end, I just switched back to Windows for the remaining hours, since I had maintained a limited work buffer there. I expect to do the same tomorrow, with two boxes this time.


. . That's one way to address the issue. I have enough trouble keeping track of what I am doing when running just one host per machine; I don't think I could cope with 6 hosts ... :(

Stephen

:)
ID: 1867573
Profile Brent Norman
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1867576 - Posted: 16 May 2017, 4:23:16 UTC - in response to Message 1867573.  

It's not that hard, really. I have a bootable USB stick that I use for extra tasks. I can load it with 300 tasks from one box and, if I run out, run it in whatever box I want, either by booting from it or simply by pointing at its data folder from the current OS:

/usr/bin/boinc --check_all_logins --redirectio --dir /media/brent/9e33424a-b63a-43ff-886e-0a660ec3fbac/var/lib/boinc-client &
ID: 1867576
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1867580 - Posted: 16 May 2017, 4:33:08 UTC - in response to Message 1867550.  


With the ongoing poor release of any BLC tasks, I have been having a field day simply using Mr. Kevvy's rescheduler to stock up on GPU tasks for Numbskull. The rescheduler is working quite well without making new ghosts, unlike my attempts of using the CPU2GPU script. I wish the same task mix was available for the Windows 7 machines. I am at 500 or so total tasks right now, so 200 above my normal allotment. Does anybody know what the maximum tasks allowed by the project is?


. . Hi Keith,

. . I had a look at the CPU2GPU script and could not understand what half of it was about, so I put that on the backburner. It would be wonderful if Mr Kevvy managed to rejig his app to do the things he postulated, but right now my CPU queue is overflowing with Guppis, so there is nothing there for it to work on :(. If he gets a round tuit then we can have a party :)

. . I also use Stubbles' script, which has been serving me well on the original i5, but the last two times I used it on this i5 it has ghosted hundreds of jobs. :( I am now wading through the chore of getting them all back, 20 at a time. It would be so nice if, when there are over a hundred ghosts, it upped that limit to say 50; then it would only take me half a dozen runs. But as it is, I will be doing that process until the middle of next week. Needless to say, I have stopped using Stubbles' script until I have this cleaned up and get a chance to work out what the problem is.

. . On the subject of an absolute limit, Stubbles always maintained that you could NOT get more than 1000 WUs, but since there are delinquent hosts out there with several times that number (most of which, I am guessing, are ghosted WUs) I am not sure that is correct. But apart from Petri and Scocam, there are not many who would need more than that per machine to survive even the current 12-hour-plus outages.

Stephen

..
ID: 1867580
©2026 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.