No more guppi's=vlars on the gpu please

Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65757
Credit: 55,293,173
RAC: 49
United States
Message 1793468 - Posted: 4 Jun 2016, 20:05:18 UTC
Last modified: 4 Jun 2016, 20:09:33 UTC

I don't currently have the money for a newer video card, but I'd at least like to choose. It's My power bill; I pay for it, not anybody else.

I'd love to have a 1080 or a 1070 or even a 970 that is Corsair HG10 N970 compatible, but I have more important fish to fry for the moment.

Right now all I have is a PNY LC 580 at My disposal.

I was limited by failing hardware as to what project I could run; that is no longer the case. I have an Asus Rampage IV Extreme, and I recently picked up a 4820K CPU.

If it means goodbye, then that says no one wants Me around, except on their terms.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1793468 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22216
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1793473 - Posted: 4 Jun 2016, 20:21:48 UTC

Vic, I don't see anyone other than you saying you need a new GPU. What you do need to do is install the correct drivers and apply the guidance given by Raistmer and others as to how to get your current GPU running smoothly.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1793473 · Report as offensive
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65757
Credit: 55,293,173
RAC: 49
United States
Message 1793480 - Posted: 4 Jun 2016, 20:34:24 UTC - in response to Message 1793473.  

Vic, I don't see anyone other than you saying you need a new GPU. What you do need to do is install the correct drivers and apply the guidance given by Raistmer and others as to how to get your current GPU running smoothly.

I'm using driver 353.06 on Windows 7 Pro x64; that's good enough for My card. At full load the card runs at about 65C, idle at about 35C, better than some. It's an older card: normal wus take about 22 mins, guppis about 35-45 mins.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1793480 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22216
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1793482 - Posted: 4 Jun 2016, 20:40:13 UTC

Even on my 980s, guppi run times are more than double the run time of a more normal task. It's all to do with the way the Nvidia GPUs do the sort of maths needed to resolve data with such small angular changes. I know some are working on possible solutions, but none of those are actually returning acceptable rates of valid results (yet).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1793482 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13744
Credit: 208,696,464
RAC: 304
Australia
Message 1793527 - Posted: 4 Jun 2016, 23:08:23 UTC - in response to Message 1793480.  

normal wus take about 22 mins, guppis about 35-45 mins.

And the problem is?
Grant
Darwin NT
ID: 1793527 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13744
Credit: 208,696,464
RAC: 304
Australia
Message 1793531 - Posted: 4 Jun 2016, 23:11:40 UTC - in response to Message 1793292.  

I thought it was agreed that vlars would not be run on the gpu???

As I posted in the other thread when you asked this question/made this statement: that was for MB VLARs, and MB VLARs still aren't released to GPUs.

MB VLARs caused significant system usability issues, so that's why they don't go to GPUs. Guppie VLARs don't cause such issues, so they do.
Grant
Darwin NT
ID: 1793531 · Report as offensive
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65757
Credit: 55,293,173
RAC: 49
United States
Message 1793545 - Posted: 4 Jun 2016, 23:53:25 UTC - in response to Message 1793527.  

normal wus take about 22 mins, guppis about 35-45 mins.

And the problem is?

I could say, not enough heat or not fast enough, you would probably reject both.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1793545 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13744
Credit: 208,696,464
RAC: 304
Australia
Message 1793550 - Posted: 5 Jun 2016, 0:08:50 UTC - in response to Message 1793545.  

normal wus take about 22 mins, guppis about 35-45 mins.

And the problem is?

I could say, not enough heat or not fast enough, you would probably reject both.

Of course.
The idea is to try to find signs of extraterrestrial life, not to provide people with a heat source.
As to taking longer than other WUs, so what? Seti on BOINC took longer to process than the original Seti WUs. v7 took longer to process than v6, v8 takes longer than v7. I expect v9 will take longer than v8.
Over time optimised versions will be released that can do the work in less time, but until then what we have is what we have.
Taking longer than other WUs isn't an issue.
Grant
Darwin NT
ID: 1793550 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793564 - Posted: 5 Jun 2016, 0:42:47 UTC - in response to Message 1793346.  

The problem is this isn't just "RAC-obsession". VLARs were kept off the NVidia GPUs for a long time because they can cause the machine to stutter or hang, or the work unit to crash. For example, my wife's machine started generating piles of errors, as every time she watched any sort of streaming video (YouTube, Facebook, etc.) the VLAR work unit in progress would crash.

Even if the volunteer chooses not to accept them, their machine(s) will still get them on their CPUs, where they will run just fine and cause no issues (for some, even faster than non-VLARs) rather than 2.5x or more slower, at least while there are some non-VLARs. (I keep reading that they are going away, but I haven't seen any indication when; the indication from the team was that it was going to remain about half and half.) If the more efficient GPUs stick to the non-VLAR work, this frees up more CPUs out there for the VLARs. Even on my own machine I see this, with the GPUs choking on a stack of GUPPIs while the CPUs have all the regular non-VLAR MB work they should be getting; random scheduling is inefficient. It's possible that the net result of allowing them on NVidia GPUs is hardly any improvement in compute capacity, just an increase in grief and people leaving the project... yes, some have left over it.

I'm still puzzled that the code to keep the GUPPIs away was already in place and worked just fine for months(?), but apparently when it was put back on Beta with a setting in the project prefs. to allow or deny them, the result was no GPU work at all. Something must have not been done the same, because the code was already in place and working. I hope it is retried, so this issue can finally be put to rest.



. . Your message gave me a sense of deja vu, and I agree with you 100%. It makes no sense to me to send work that will run wonderfully on GPUs out as CPU tasks, and then issue work that runs well on CPUs but should be kept off GPUs out as GPU tasks, when it could instead be issued as CPU work, replacing the preferred GPU tasks there. Horses for courses please! Send the work where it will be best processed.

. . The only good I can see coming from running Guppis on GPUs is that someone along the way may resolve the conflicts that cause them to perform so badly.
ID: 1793564 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793565 - Posted: 5 Jun 2016, 0:49:35 UTC - in response to Message 1793357.  

So hopefully I can add some params to the read.me for the next installer before final release, if Richard is O.K. with it.

No problems with that. With any luck, I'll do another slice of installer work this afternoon and tomorrow (but must shop first).

My main irritation with guppi_VLAR on GPUs is how slowly they run - and in general, the anti-lag tuning options come at the expense of runtime or CPU availability. I'd prefer to see VLARs sent preferentially to CPUs, because that way we actually get through the recordings quicker and find whatever there is to be found. Looking at the CPU-only cruncher beside me, 10 of the 18 tasks currently scheduled are non-VLAR.

At the moment, I am running guppi VLAR on GPU. If the option arrives to disallow that, I'll be in something of a dilemma - to switch or not to switch - unless I can compensate by upping the proportion of VLARs on CPU as well.


. . Now a magic app that would turn a GPU VLAR WU into a CPU WU and a CPU based nonVLAR WU into a GPU WU would go down well :)
ID: 1793565 · Report as offensive
Profile Mr. Kevvy (Crowdfunding Project Donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester
Joined: 15 May 99
Posts: 3776
Credit: 1,114,826,392
RAC: 3,319
Canada
Message 1793566 - Posted: 5 Jun 2016, 1:03:31 UTC - in response to Message 1793565.  
Last modified: 5 Jun 2016, 1:06:14 UTC

. . Now a magic app that would turn a GPU VLAR WU into a CPU WU and a CPU based nonVLAR WU into a GPU WU would go down well :)


Thank you for the below. :^)

I thought about writing this as a practice app, but quickly realized that with it, all the existing MB work units would be shuffled to the GPU and completed quickly, while new work would still be randomly assigned as half-and-half GUPPI VLAR. Soon the machine would have nothing but GUPPI VLAR work units, and so would be in the same situation, unless the owner started aborting them. And this even assumes that the validators allow a CPU-assigned work unit to be completed by a GPU or vice versa. That I don't know. Raistmer mentioned a rescheduling app, so I think he does...
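For illustration, here is a toy sketch of that dynamic (all numbers invented; this is not SETI@home or BOINC code): if a hypothetical rescheduler keeps sending non-VLAR MB work to the GPU, which clears it far faster than the CPU clears GUPPI VLARs, and replacement work keeps arriving roughly half-and-half, the cache soon ends up almost entirely GUPPI VLAR.

```python
# Toy simulation (not SETI@home/BOINC code) of a CPU<->GPU rescheduler.
# Assumed, made-up rates: the GPU finishes 6 non-VLAR MB tasks per step,
# the CPU finishes 1 GUPPI VLAR per step, new work arrives ~50/50.
import random

random.seed(0)

CACHE_SIZE = 100        # tasks the host tries to keep on hand
GPU_MB_PER_STEP = 6     # assumed GPU throughput on non-VLAR MB tasks
CPU_GUPPI_PER_STEP = 1  # assumed CPU throughput on GUPPI VLAR tasks

cache = {"mb": CACHE_SIZE // 2, "guppi": CACHE_SIZE // 2}

for step in range(1, 25):
    # rescheduler policy: GPU only eats MB work, CPU only eats GUPPI VLAR
    cache["mb"] -= min(cache["mb"], GPU_MB_PER_STEP)
    cache["guppi"] -= min(cache["guppi"], CPU_GUPPI_PER_STEP)

    # top the cache back up; the scheduler hands out new work roughly 50/50
    refill = CACHE_SIZE - cache["mb"] - cache["guppi"]
    for _ in range(refill):
        cache["guppi" if random.random() < 0.5 else "mb"] += 1

    if step % 6 == 0:
        print(f"step {step:2d}: {cache['guppi'] / CACHE_SIZE:.0%} of the cache is GUPPI VLAR")
```

Under these assumed rates the GUPPI share of the cache climbs from 50% to well over 90% within a couple of dozen steps, which is the "same situation" described above.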
ID: 1793566 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793567 - Posted: 5 Jun 2016, 1:07:50 UTC - in response to Message 1793360.  


My main irritation with guppi_VLAR on GPUs is how slowly they run - and in general, the anti-lag tuning options come at the expense of runtime or CPU availability. I'd prefer to see VLARs sent preferentially to CPUs, because that way we actually get through the recordings quicker and find whatever there is to be found. Looking at the CPU-only cruncher beside me, 10 of the 18 tasks currently scheduled are non-VLAR.

Then re-read my last group mail, and maybe you could devise an answer other than "something bigger needed".



. . Perhaps you could steer me in the right direction. Running SoG I turned -use_sleep on and was pleased with the result: it freed up a CPU core to return to crunching. The downside, though, is that runtimes lengthened, from 12 mins to 20 mins for non-VLARs and from nearly 30 mins to over 50 mins for VLARs.

. . I monitor the machine's operation (temps and processor usage) and noticed that the GPU was being heavily under-utilised, running at about 30% and very sporadically. So I made a reasonable adjustment by running multiple GPU WUs, and with nonVLAR tasks this worked like a dream: I am now running triples with about 100% GPU utilisation, and runtimes went from 20 mins to 36 mins. This is marginally better than my expectations (the GPU is a GTX 950). However, Guppi WUs defied my reasoning: instead of combining into a 100% GPU workload they seem to be playing tag team with the GPU, still running at only 30% utilisation, and runtimes went from about 52 mins to three and a half hours. I was hoping for about 1 hour 15 mins and expecting less than 1.5 hours. Is there a setting that I can tweak to persuade the Guppi WUs to truly run concurrently and behave as the nonVLAR WUs do?
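For readers wondering how "running triples" is usually set up: with stock BOINC this is normally done via an app_config.xml in the project directory, with gpu_usage set to 1/N (the -use_sleep switch itself lives in the SoG command-line file, not here; anonymous-platform/app_info.xml installs set the GPU count in app_info.xml instead). The sketch below just writes such a file; the app name, cpu_usage value and paths are assumptions to check against your own installation, not project-supplied settings.

```python
# Hedged sketch only: writes a BOINC-style app_config.xml asking for
# three tasks per GPU (gpu_usage = 1/3).  The app name and project
# directory are assumptions -- verify both against your own install.
from pathlib import Path

TASKS_PER_GPU = 3               # the "triples" discussed above
APP_NAME = "setiathome_v8"      # assumed app name; check your client logs

app_config = f"""<app_config>
  <app>
    <name>{APP_NAME}</name>
    <gpu_versions>
      <gpu_usage>{1 / TASKS_PER_GPU:.2f}</gpu_usage>
      <cpu_usage>0.10</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
"""

# assumed default Windows project directory; adjust for your system
project_dir = Path(r"C:\ProgramData\BOINC\projects\setiathome.berkeley.edu")

print(app_config)                                 # review the XML first
print("target:", project_dir / "app_config.xml")  # assumed location
# (project_dir / "app_config.xml").write_text(app_config)  # uncomment to write,
# then tell the BOINC client to re-read its config files (or restart it).
```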
ID: 1793567 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13744
Credit: 208,696,464
RAC: 304
Australia
Message 1793568 - Posted: 5 Jun 2016, 1:11:47 UTC - in response to Message 1793567.  

Is there a setting that I can tweak to persuade the Guppi WUs to truly run concurrently and behave as the nonVLAR WUs do?

Nope.
It's the application that determines how the Guppies run, not just the application's settings.
Improved applications are being worked on (eg SoG), but there's still a lot of work to be done.
Grant
Darwin NT
ID: 1793568 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793571 - Posted: 5 Jun 2016, 1:23:53 UTC - in response to Message 1793367.  

I just wanted what the thread title says, nothing more, please.

Guppi's/vlar's on the cpu, not on the gpu.

Thanks.

Yes, but I, and I suspect others, are fine with them, so there has to be a way to "select" them or not. I personally do not want a blanket ban.



. . A method of selection would be the optimal path.
ID: 1793571 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793575 - Posted: 5 Jun 2016, 1:43:08 UTC - in response to Message 1793382.  

Remember you can always use the SETI Preferences to set computer locations, Default, Home, Work, School.

You can use those locations to set one computer to use CPU only, CPU + GPU, or GPU only ... and other computers to do something else.


. . That does not provide a general solution to the issue. For a contributor with a single host it would prevent unwanted VLAR work reaching the GPU, but it would block all other GPU work as well. It would only be of use for a multi-host setup where one host has a very weak CPU but a more powerful GPU, and another has no GPU (or a weak one) but multiple CPUs.
ID: 1793575 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793578 - Posted: 5 Jun 2016, 1:50:56 UTC - in response to Message 1793419.  

I'm firmly in the "If they are sent to my crunchers I'll let them crunch" camp. I would rather one of mine didn't get so many guppi, but it does, so it carries on crunching.

Indeed. The original reason for not sending VLARs to GPUs was that, at the time, they could cause a host to lock up or crash.
I don't see the argument that they now run less efficiently on GPUs as valid. If we are going that route we might as well disallow NV GPUs from downloading AP tasks, since they are less efficient than Radeon GPUs at processing those.

Having options from the project to tailor which apps and what kind of work run on specific hardware would be nice. Something similar to what Collatz or PrimeGrid have would probably be ideal, but require a lot of work.

Out of curiosity do we know how iGPUs handle these tasks?


. . On the first part ... No! APs run very nicely on my Nvidia cards thank you very much :)

. . On the second part I agree.

. . On the third part, I disabled GPU crunching on this machine long before the Guppi flood. It was killing the productivity of this unit something awful. But maybe someone is across this?
ID: 1793578 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793579 - Posted: 5 Jun 2016, 1:57:08 UTC - in response to Message 1793468.  

I don't currently have the money for a newer video card, but I'd at least like to choose. It's My power bill; I pay for it, not anybody else.

I'd love to have a 1080 or a 1070 or even a 970 that is Corsair HG10 N970 compatible, but I have more important fish to fry for the moment.

Right now all I have is a PNY LC 580 at My disposal.

I was limited by failing hardware as to what project I could run; that is no longer the case. I have an Asus Rampage IV Extreme, and I recently picked up a 4820K CPU.

If it means goodbye, then that says no one wants Me around, except on their terms.



. . Aaaahh, a pair of 1080's ... <sigh>

. . But I agree completely: you are paying the power bill for your rigs, and that gives you the right to choose. And until/unless there is a system option/selection to tailor your workload, the dreaded Abort button is your only means of exercising that choice. And none of my comment should be taken out of context.
ID: 1793579 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793583 - Posted: 5 Jun 2016, 2:16:57 UTC - in response to Message 1793568.  

Is there a setting that I can tweak to persuade the Guppi WUs to truly run concurrently and behave as the nonVLAR WUs do?

Nope.
It's the application that determines how the Guppies run, not just the application's settings.
Improved applications are being worked on (eg SoG), but there's still a lot of work to be done.



. . Yep, that's why I am running SoG. I was hoping for improvements in Guppi processing. But it seems strange to me, that the one application behaves so contrarily in dealing with the two different types of WU. With nonVLAR it combines them and fully utilises the GPU, but with Guppis it does almost the opposite. It totally defeats the purpose of the exercise.
ID: 1793583 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13744
Credit: 208,696,464
RAC: 304
Australia
Message 1793589 - Posted: 5 Jun 2016, 2:42:37 UTC - in response to Message 1793583.  

. . Yep, that's why I am running SoG. I was hoping for improvements in Guppi processing. But it seems strange to me, that the one application behaves so contrarily in dealing with the two different types of WU. With nonVLAR it combines them and fully utilises the GPU, but with Guppis it does almost the opposite. It totally defeats the purpose of the exercise.

It is no different for MB units.
Running more than 1 WU produces the best throughput for most hardware. Running more than 2 WUs reduces throughput. It might be better for some WUs, but it's so much worse for others that throughput drops away. Maximum GPU load, or maximum throughput- it's up to you to choose.

If you choose to run so many WUs that throughput falls off, then that's your choice.
You can set it to give the best throughput, or you can set it so the GPU load is maxed out. They haven't been the same thing in the past, and they're not likely to be in the future. GPU loads that never drop below 95% have pretty much always resulted in less throughput; if the load spikes to 99% occasionally but is usually around 85% (or even 75%), then you'll do the most work per day.
As mentioned, the application is still in its early stages of development; how you use it is up to you. Max out the GPU load or do the most work, it's your choice.
Grant
Darwin NT
ID: 1793589 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1793602 - Posted: 5 Jun 2016, 4:50:23 UTC - in response to Message 1793589.  
Last modified: 5 Jun 2016, 4:57:03 UTC

. . Yep, that's why I am running SoG. I was hoping for improvements in Guppi processing. But it seems strange to me, that the one application behaves so contrarily in dealing with the two different types of WU. With nonVLAR it combines them and fully utilises the GPU, but with Guppis it does almost the opposite. It totally defeats the purpose of the exercise.

It is no different for MB units.
Running more than 1 WU produces the best throughput for most hardware. Running more than 2 WUs reduces throughput. It might be better for some WUs, but it's so much worse for others that throughput drops away. Maximum GPU load, or maximum throughput- it's up to you to choose.

If you choose to run so many WUs that throughput falls off, then that's your choice.
You can set it to give the best throughput, or you can set it so the GPU load is maxed out. They haven't been the same thing in the past, and they're not likely to be in the future. GPU loads that never drop below 95% have pretty much always resulted in less throughput; if the load spikes to 99% occasionally but is usually around 85% (or even 75%), then you'll do the most work per day.
As mentioned, the application is still in its early stages of development; how you use it is up to you. Max out the GPU load or do the most work, it's your choice.



. . Sorry, but you have missed the gist of what I said. Running SoG (with sleep ON), both the nonVLARs and the Guppis use very little of the GPU processing time, each running at about 30%. When you run multiples, the nonVLARs multithread and combine to increase the GPU workload, and as it happens threesies give the maximum (best) throughput: runtimes singly were 20 mins, doubles 28 to 32 mins, trebles (threesies) 35 to 37 mins. Any more and you will choke the GPU and lose productivity, as the usage is already in the 90% range doing 3. BUT the Guppis will not multithread (or so it seems): they do not combine but seem to timeshare the GPU, each running its bit one at a time, so the GPU workload does NOT increase one little bit and the runtimes just blow out to blue blazes. Singly they were 52 mins; I didn't test them as doubles, but as triples they take 3 hours 15 mins to 3 hours 35 mins. And the whole time the GPU load stays at 30% :(

. . These guppies are very contrary critters.

. . That is why I asked if there was a way to make them multithread on the GPU, because if they combined and worked truly concurrently then I would expect runtimes to be around the 95 to 100 min mark. That way they would not cripple the nonVLAR tasks they may be sharing the GPU with, but they really do not like to play nice with others.
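A quick back-of-the-envelope check of the figures quoted above (simple arithmetic on the posted runtimes, nothing more) shows why triples pay off for non-VLARs but not for guppis:

```python
# Throughput from the runtimes quoted above: tasks/hour = concurrent * 60 / minutes.
def tasks_per_hour(concurrent: int, runtime_min: float) -> float:
    return concurrent * 60.0 / runtime_min

cases = [
    ("non-VLAR, single", 1, 20),
    ("non-VLAR, triple", 3, 36),             # ~35-37 min quoted
    ("guppi VLAR, single", 1, 52),
    ("guppi VLAR, triple", 3, 3 * 60 + 25),  # ~3 h 15 m to 3 h 35 m quoted
]

for label, n, runtime in cases:
    print(f"{label:18s}: {tasks_per_hour(n, runtime):.2f} tasks/hour")

# Roughly: triples lift non-VLAR throughput from ~3.0 to ~5.0 tasks/hour,
# while guppi VLARs drop from ~1.15 to ~0.88 -- consistent with the guppis
# timesharing the GPU rather than genuinely overlapping.
```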
ID: 1793602 · Report as offensive