GBT ('guppi') .vlar tasks will be send to GPUs, what you think about this?

petri33 · Volunteer tester
Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1786806 - Posted: 11 May 2016, 19:21:56 UTC - in response to Message 1786804.  
Last modified: 11 May 2016, 19:25:37 UTC


"Ja, men tider dom var uppladdat är inte samma. Dom var något interlaced."
Yes, but the times they were uploaded are not the same. They were somewhat interlaced. (Not running the whole time together, and they may have had other kinds of tasks running at the same time.)

In this case it doesn't matter much at all. I've run hundreds of them on BETA, 3 at a time, when I only had GBT VLARs onboard, and I know that the times are 28-30 minutes each.

Just wait until there are plenty of them available here, and you'll see that those are the standard times my 980 runs GBT VLARs with SoG, when run 3 at a time.


OK, jag ska vänta på det.
OK, I'll wait.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
jason_gee · Volunteer developer · Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1786819 - Posted: 11 May 2016, 20:57:38 UTC
Last modified: 11 May 2016, 20:58:29 UTC

Yeah, resurrected the 780 in the Mac Pro; while I snoozed, running 2-up with baseline code looks like an hour-ish per pair, so 28-30 mins each.

Looks to me, if it continues that way, we might all see task times equalising on VRAM speed (different apps/cards with similar VRAM running similar tasks at similar rates).

@Mark, yeah, 3 might be saturating with these kinds of tasks (if settings don't help much).
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
Zalster · Special Project $250 donor · Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1786820 - Posted: 11 May 2016, 21:02:51 UTC - in response to Message 1786819.  

If you choose to run 3, make sure to use command lines and include -use_sleep to try and help.

I don't use it as I have a large enough CPU and don't max it out.

Runs at 75%.
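
For anyone new to these switches: with the SoG builds, the command line normally lives either in app_info.xml's <cmdline> entry or in the mb_cmdline*.txt file next to the app (the exact file name varies by build). A minimal sketch, with option names from the SoG readme and values that are purely illustrative starting points:

    -use_sleep -sbs 192 -period_iterations_num 50

-use_sleep makes the app sleep instead of spin-waiting on the GPU (less CPU burned, at the cost of some GPU idle time), -sbs sets the maximum single buffer size in MB, and -period_iterations_num splits the long PulseFind kernels into more, shorter launches.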
jason_gee · Volunteer developer · Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1786829 - Posted: 11 May 2016, 21:32:48 UTC
Last modified: 11 May 2016, 21:34:20 UTC

Looks like my 980 host (better settings, but a slow CPU) does 2-up in ~34 mins while relatively idle (~17 mins each effective), longer when watching Twitch streams or YouTube vids on the second monitor. No particularly noticeable screen lag, which surprises me.

Firing the 680 Linux host back up to compare.

Ignoring the credits for now, since they look scary, lol.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
petri33 · Volunteer tester
Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1786854 - Posted: 11 May 2016, 23:34:04 UTC

I've got a feeling ....

The credit will turn up now, since all kinds of hosts, and especially the standard NV app, are doing guppis too.

That is a feeling, but I hope I'll wake up to a better world, if not tomorrow then in three weeks.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ace_quaker
Joined: 12 Oct 01
Posts: 17
Credit: 33,678,474
RAC: 1
United States
Message 1786858 - Posted: 12 May 2016, 0:00:00 UTC

I came here to figure out why my quad-780 system's video response was so slow and power usage down so much. (Creditscrew down too, but with any change at all from this project that's a given.)

But look! It's another poorly optimized application thrown into the wild WAY before it's ready! You should merge SETI@home and SETI@home Beta, because there is obviously no functional separation between the two.
tullio · Volunteer tester
Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 1786874 - Posted: 12 May 2016, 0:46:14 UTC
Last modified: 12 May 2016, 0:50:50 UTC

On SETI Beta I saw that guppi .vlars crunched with atiapu_SoG on my Linux box with an AMD HD 7770 take twice as long as those crunched with ati5_cal132, and get double the credits.
Tullio
Lionel
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1786890 - Posted: 12 May 2016, 3:10:10 UTC - in response to Message 1786874.  

I'm seeing times around 1 hour for processing of guppi VLARs, running 2 per card across 2 cards (4 GPU guppies at a time). It seems to be similar regardless of card: 780 Ti, 680 Classified, or 580.

I have all the boxes set to 0.25 CPU per GPU WU, so that I end up with a single core feeding the GPUs and the rest of the cores free for CPU WUs.

HT is turned off.

It has worked well to date, but does not seem to be working at the moment, given some of the other comments around processing times.

I was wondering whether setting core usage to 0.50 CPU per GPU WU might improve things a little by giving the systems a bit more headroom.
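
For reference, that per-task CPU reservation is the sort of thing set in BOINC's app_config.xml (or in <avg_ncpus> for anonymous-platform app_info.xml setups). A minimal sketch of the app_config.xml route - the app name setiathome_v8 is assumed, and the values match the 2-per-card, quarter-core setup described above:

    <app_config>
      <app>
        <name>setiathome_v8</name>
        <gpu_versions>
          <gpu_usage>0.5</gpu_usage>    <!-- 2 tasks per GPU -->
          <cpu_usage>0.25</cpu_usage>   <!-- raise to 0.5 to reserve more CPU per task -->
        </gpu_versions>
      </app>
    </app_config>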
jason_gee · Volunteer developer · Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1786924 - Posted: 12 May 2016, 6:52:48 UTC - in response to Message 1786890.  
Last modified: 12 May 2016, 7:32:09 UTC

Both Raistmer's OpenCL SoG builds and Petri's optimisations seem to be saying that throwing some CPU at the VLARs helps. Provided you reserve enough CPU cores, you can achieve something along those lines with the older generic baseline CUDA builds by setting the -poll command-line option via app_info.xml.
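
A minimal sketch of what that looks like in app_info.xml - version number and plan class here are placeholders for whatever build is actually installed:

    <app_version>
        <app_name>setiathome_v8</app_name>
        <version_num>800</version_num>
        <plan_class>cuda50</plan_class>
        <cmdline>-poll</cmdline>
        ...
    </app_version>

Note that -poll dedicates a full CPU core per GPU task to busy-wait polling, so only set it if you have the cores spare.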

Still working out what's going on here, but yeah, it seems the long pulsefinding is what's holding things up on recent cards with generic builds, and it will tend to make cards more equal because of its dependence on VRAM speed (as opposed to core clock).

That just means different sets of optimisations/apps will prove better for different kinds of tasks. It complicates the short-term figuring-out phase, but will probably lead to more powerful applications.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
Grant (SSSF) · Volunteer tester
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1786938 - Posted: 12 May 2016, 7:48:30 UTC - in response to Message 1786890.  
Last modified: 12 May 2016, 8:08:22 UTC

> I'm seeing times around 1 hour for processing of guppi VLARs, running 2 per card across 2 cards (4 GPU guppies at a time). It seems to be similar regardless of card: 780 Ti, 680 Classified, or 580.
>
> I have all the boxes set to 0.25 CPU per GPU WU, so that I end up with a single core feeding the GPUs and the rest of the cores free for CPU WUs.
>
> HT is turned off.
>
> It has worked well to date, but does not seem to be working at the moment, given some of the other comments around processing times.
>
> I was wondering whether setting core usage to 0.50 CPU per GPU WU might improve things a little by giving the systems a bit more headroom.


My GTX 750 Tis appear to be doing the guppi VLARs in approx. 1 hr 10 min on 2 different systems with 2 different settings.

i7 system, Win10 64-bit: 2 WUs at a time, 1 CPU core per WU using the -poll option (general-use system, 2 video cards).
C2D system, Win Vista 32-bit: 2 WUs at a time, processpriority=high (dedicated cruncher, now with only 1 video card).

I've tried the various other mbcuda.cfg settings, but found that they had little if any effect.
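
For anyone wanting to poke at the same knobs, they live in the mbcuda.cfg file beside the app. A rough sketch with the commonly mentioned settings - values are illustrative only, and the readme shipped with the build documents the allowed ranges:

    [mbcuda]
    ; process priority for the app: belownormal / normal / abovenormal / high
    processpriority = abovenormal
    ; pulsefind: thread blocks per multiprocessor (more = busier GPU, more display lag)
    pfblockspersm = 8
    ; pulsefind: periods per kernel launch (more = fewer, longer kernel launches)
    pfperiodsperlaunch = 200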


It's interesting to watch GPU-Z while the guppi VLARs are running. As others have noted, power use drops off when running the VLARs. Interestingly, that coincides with 99% GPU load; when the load drops back to more normal 85-95% levels, power usage increases again.
So during those extended periods of 99% GPU load, the GPU is busy but not actually doing much work.
Grant
Darwin NT
jason_gee · Volunteer developer · Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1786944 - Posted: 12 May 2016, 8:20:51 UTC - in response to Message 1786938.  
Last modified: 12 May 2016, 8:30:32 UTC

> I've tried the various other mbcuda.cfg settings, but found that they had little if any effect.


Those pulsefinding settings relate to filling the computational cores on large devices, so it's as expected that they have no effect when the VRAM bus (the bottleneck) is saturated instead.

You can probably confirm this by first lowering the core clock (expect little effect), putting it back, and then lowering the VRAM clock to see whether the change is more significant.
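
On cards and drivers that expose application clocks, nvidia-smi can run that experiment from the command line; a rough sketch with illustrative clock values - most consumer cards will need a vendor tool (NVIDIA Inspector, MSI Afterburner) instead:

    nvidia-smi -q -d CLOCK        # note the stock graphics and memory clocks
    nvidia-smi -ac 3505,540       # <memory,graphics> MHz: lower the core clock first
    nvidia-smi -ac 1753,1126      # restore the core, then halve the memory clock
    nvidia-smi -rac               # reset application clocks when finished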

What's needed to relieve the memory-subsystem load for the new data's character is applications better suited to this search. (The full picture of which direction to head isn't clear yet, though a combination of supplied optimised code + some CPU use + configuration will likely be the result.)
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
Grant (SSSF) · Volunteer tester
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1786952 - Posted: 12 May 2016, 8:57:47 UTC - in response to Message 1786944.  
Last modified: 12 May 2016, 8:58:45 UTC

> What's needed to relieve the memory-subsystem load for the new data's character is applications better suited to this search. (The full picture of which direction to head isn't clear yet, though a combination of supplied optimised code + some CPU use + configuration will likely be the result.)

Why is it that the GPU's ability to process work is so heavily dependent on the CPU?

Bus mastering and the like have been around for decades.
I would have thought that the CUDA/OpenCL application would execute on the GPU. The only times the CPU would be required would be when starting a new WU (get the WU, load it up, and off it goes on the GPU) and once crunching has completed (transfer the result file to the BOINC manager to upload and report it, then get the next WU).
While crunching, the GPU would need to send out progress reports (for the progress bar and estimated completion time), but I would have thought those could be asynchronous, with no need to wait for a response from the CPU. The only input from the CPU during crunching would be to check whether crunching should pause (Snooze GPU), whether another project should start to honour resource shares (ideally that could wait till the WU is done, given the short runtimes of SETI work), or whether to exit (BOINC or the system shutting down).

There would be the usual communication traffic between the CPU & GPU for display purposes, but why the significant need for CPU resources for something being executed on the GPU?
Grant
Darwin NT
jason_gee · Volunteer developer · Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1786959 - Posted: 12 May 2016, 9:12:06 UTC - in response to Message 1786952.  
Last modified: 12 May 2016, 9:14:00 UTC

Primarily it's an increase over time (since ye old XPDM days) in the latencies associated with feeding the GPUs, mostly due to memory-model unification, i.e. virtualisation and added complexity.

These are the considerations driving DirectX 12 and Vulkan (ever since AMD's Close to Metal and Mantle, bypassing bloated DirectX and OpenGL, became a thing).

Add to that more or less static CPU/host performance over the last 5 years, while GPU computational capacity has increased by orders of magnitude... So what was once very little CPU now looks like a lot :D The CPU portion, reducing results that cannot be so easily parallelised, stays put while the GPU portion shrinks.

Shifting to new (reduced) synchronisation techniques is the next stage, and Raistmer's SoG builds give some hints as to what can help there. Here, though, we start to move into territory there aren't any books for (yet :) )
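
To make the synchronisation tradeoff concrete, here's a sketch in plain CUDA runtime terms (not SETI code): how the host chooses to wait on the GPU is exactly where the CPU cost of a "GPU task" comes from.

    /* sync_styles.cu - the host's waiting style sets the CPU cost of GPU work. */
    #include <cuda_runtime.h>

    __global__ void busywork(float *x) { x[threadIdx.x] *= 2.0f; }

    int main(void)
    {
        /* Spin-waiting gives the lowest wake-up latency, but the host thread
           burns a full CPU core while the kernel runs:
           cudaSetDeviceFlags(cudaDeviceScheduleSpin);                         */

        /* Blocking sync sleeps the host thread until the GPU signals
           completion - near-zero CPU use, extra latency on every wake-up.    */
        cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

        float *d;
        cudaMalloc(&d, 32 * sizeof(float));
        busywork<<<1, 32>>>(d);
        cudaDeviceSynchronize();   /* waits according to the flag chosen above */
        cudaFree(d);
        return 0;
    }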
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
Bruce · Volunteer tester
Joined: 15 Mar 02
Posts: 123
Credit: 124,955,234
RAC: 11
United States
Message 1786962 - Posted: 12 May 2016, 9:17:02 UTC

Looking over my current valid guppi VLARs, and this is just a rough eyeball estimate, the run times seem to be about 555 seconds (9 minutes 15 seconds). Some are slower, some are faster. Just a rough average guesstimate.

I run a single WU at a time with the 3430 SoG app and an optimized tune.

Feel free to check out my results: My v8 Tasks.
Bruce
Bill G · Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1786995 - Posted: 12 May 2016, 13:16:57 UTC - in response to Message 1786938.  
Last modified: 12 May 2016, 13:18:15 UTC

> It's interesting to watch GPU-Z while the guppi VLARs are running. As others have noted, power use drops off when running the VLARs. Interestingly, that coincides with 99% GPU load; when the load drops back to more normal 85-95% levels, power usage increases again.
> So during those extended periods of 99% GPU load, the GPU is busy but not actually doing much work.

I am running 3 per card on my 750 Tis. I am seeing similar times to others, but my GPU memory usage is constantly changing between 30 and 90% (in the past it was steady at about 85%). My GPU Meter on the desktop shows this, as does GPU-Z. I see mixed units running, both guppi and non-guppi. I was wondering if this is normal or has some meaning. GPU usage shows 85-95% most of the time.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
Raistmer · Volunteer developer · Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1787007 - Posted: 12 May 2016, 13:35:55 UTC - in response to Message 1786874.  

> On SETI Beta I saw that guppi .vlars crunched with atiapu_SoG on my Linux box with an AMD HD 7770 take twice as long as those crunched with ati5_cal132, and get double the credits.
> Tullio

And the link to your report about that on beta is?...
If someone thinks we have the manpower to look at ALL results... well, putting it very politely, you are wrong.
Beta testers (as opposed to beta hosts) are assumed to be sentient beings actively participating in the project, not just mentioning peculiarities "by the way" in a chat thread.

To make it plainly clear: in the PulseFind area, SoG performance should be exactly equal to non-SoG. Taking into account that a VLAR is almost all PulseFind, SoG and non-SoG times should be close. Anything else is worth reporting.

That's what beta is about.
Raistmer · Volunteer developer · Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1787011 - Posted: 12 May 2016, 13:43:56 UTC - in response to Message 1786938.  


> It's interesting to watch GPU-Z while the guppi VLARs are running. As others have noted, power use drops off when running the VLARs. Interestingly, that coincides with 99% GPU load; when the load drops back to more normal 85-95% levels, power usage increases again.
> So during those extended periods of 99% GPU load, the GPU is busy but not actually doing much work.

Yep, one more reminder to think about what a particular metric actually tells you, not what one thinks it should tell. Just as with RAC as a host-performance metric, the GPU load metric is only loosely related to processing performance.
Higher load just means more time spent in kernel processing.
Nothing more. What the kernel does, and how effectively it does it, is not reflected in the GPU usage metric. So GPU usage relates to performance only "all other conditions being equal". VLAR tasks imply very long kernels in PulseFind; that's why the sluggishness occurs, and naturally it results in a higher "busy with kernel"/"idle" time ratio. I suspect even the number of CUs loaded is not reflected in that metric...
Raistmer · Volunteer developer · Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1787013 - Posted: 12 May 2016, 13:53:56 UTC - in response to Message 1786952.  


> Why is it that the GPU's ability to process work is so heavily dependent on the CPU?

In very short: because there are parts of the control code that have to execute serially, and currently they execute on the CPU.
One should realize that the GPU is a distinctly different device from the CPU. Most modern CPUs have no more than 32(?) threads; even a low-end GPU has thousands. Go figure.
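
That's Amdahl's law at work. A worked example with an assumed, purely illustrative 90% parallel fraction:

    speedup(N) = 1 / ((1 - p) + p / N)
    p = 0.9, N -> infinity  =>  speedup <= 1 / (1 - 0.9) = 10x

Even with thousands of GPU threads, the serial slice caps the whole task.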

All these "it should" and "it shouldn't" make me boil... :P
And then, when CPU<=>GPU communication really is decreased, "my BOINC doesn't tick" complaints appear...
jason_gee · Volunteer developer · Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1787016 - Posted: 12 May 2016, 14:04:05 UTC - in response to Message 1787011.  
Last modified: 12 May 2016, 14:04:32 UTC

> I suspect even the number of CUs loaded is not reflected in that metric...


Correct. For CUDA devices it's time-slice occupation on the first core (SMX), so it's pretty misleading.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
Raistmer · Volunteer developer · Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1787017 - Posted: 12 May 2016, 14:05:32 UTC - in response to Message 1786995.  


> I am running 3 per card on my 750 Tis. I am seeing similar times to others, but my GPU memory usage is constantly changing between 30 and 90% (in the past it was steady at about 85%). My GPU Meter on the desktop shows this, as does GPU-Z. I see mixed units running, both guppi and non-guppi. I was wondering if this is normal or has some meaning. GPU usage shows 85-95% most of the time.

Depends on what this particular metric actually measures. If it measures uncached accesses to the device's global memory domain, then it's roughly equivalent to "cache misses" in the CPU world, and that, apparently, is a bad thing. If it measures just memory accesses, without distinguishing cache hits from misses... then it's a bad thing too, but for another reason, and quite a non-fixable one: it would mean the low- or non-compute-intensive parts of the code prevail. In other words, too few arithmetic operations per memory load. Just the very thing that makes an app memory-constrained.

Unlike Gaussian fitting, where quite a high computational load per sample is implied, PulseFind is very similar to AstroPulse's algorithms. Maybe that's why AMD owners fare differently from NV ones with VLAR and AP - a better memory subsystem (in older generations at least).
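
To put rough numbers on "too few arithmetic operations per memory load": a GTX 750 Ti, for example, is specced at roughly 1,300 GFLOP/s single precision against about 86 GB/s of memory bandwidth, so the break-even arithmetic intensity is approximately

    1300 GFLOP/s / 86 GB/s ≈ 15 FLOP per byte ≈ 60 FLOP per 4-byte sample

and a kernel doing only a handful of operations per sample, as PulseFind largely does, sits deep in memory-bound territory.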