CPU vs. GPU


Qax
Volunteer tester
Send message
Joined: 5 Dec 07
Posts: 19
Credit: 2,563,187
RAC: 49
United States
Message 1462370 - Posted: 9 Jan 2014, 23:48:24 UTC

Hello.

I run 15 BOINC projects. I am building another box, and I have decided to run two projects on it: the GPU project I am ranked lowest in for credits (this one) and the CPU project I am ranked lowest in for credits (World Community Grid). Because I am trying to contribute more to these two projects, I want the new box's GPU working *only* on SETI GPU WUs, and I will have another computer that I'd prefer to work only on SETI CPU WUs (though that is flexible). The new box's CPU I want working purely on the project that has no GPU application. I see there are three applications SETI is running. Could someone enlighten me as to how to achieve this? I know it will involve something like a "work" and "home" venue setup; what I need to know is which of these crunches which types of WUs:

SETI@home Enhanced
SETI@home v7
AstroPulse v6

Thanks muchly!

OzzFan
Volunteer tester
Avatar
Send message
Joined: 9 Apr 02
Posts: 13625
Credit: 31,002,371
RAC: 21,036
United States
Message 1462392 - Posted: 10 Jan 2014, 0:23:32 UTC - in response to Message 1462370.

Yes. Create a special venue (e.g. Work or Home) to put this dedicated computer into. Then go into your project preferences for that venue and make sure to uncheck "Use CPU".

All three types of workunits (they're not projects) have GPU-equivalent executables, so there's no need to favour one over the other. However, it is worth pointing out that SETI@home Enhanced is going to be deprecated very soon, so the only two left will be SETI@home v7 and AstroPulse v6. Again, both can be crunched on the GPU, so you can leave them both enabled.
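For the opposite case - the box you want crunching SETI on the CPU only - there is also a client-side option. A minimal cc_config.xml sketch, assuming a BOINC 7.x client (older clients ignore <exclude_gpu>) and the standard SETI@home project URL:

    <cc_config>
      <options>
        <!-- Keep this project off the GPU(s) on this machine only.
             Omit <type> to exclude all GPU vendors, not just NVIDIA. -->
        <exclude_gpu>
          <url>http://setiathome.berkeley.edu/</url>
          <type>NVIDIA</type>
        </exclude_gpu>
      </options>
    </cc_config>

After editing, use Advanced -> Read config files in BOINC Manager (or restart the client) to pick it up.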

Qax
Volunteer tester
Send message
Joined: 5 Dec 07
Posts: 19
Credit: 2,563,187
RAC: 49
United States
Message 1466710 - Posted: 20 Jan 2014, 18:02:13 UTC - in response to Message 1462392.

Follow up to this. . .

Is it even worth it for me to crunch CPU WUs if I can crunch GPU ones? Are there some WUs that *need* to be processed by a CPU, or are those WUs essentially only made for people without GPUs? Basically, are these resources better spent going to a CPU-only project (and might they take away from what SETI gets on the GPU)? Is there honestly a reason for me to crunch CPU WUs for SETI if I can also crunch GPU ones?

Richard Haselgrove (Project donor)
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8631
Credit: 51,471,645
RAC: 49,185
United Kingdom
Message 1466714 - Posted: 20 Jan 2014, 18:15:31 UTC - in response to Message 1466710.
Last modified: 20 Jan 2014, 18:17:12 UTC

Follow up to this. . .

Is it even worth it for me to crunch CPU WUs if I can crunch GPU ones? Are there some WUs that *need* to be processed by a CPU, or are those WUs essentially only made for people without GPUs? Basically, are these resources better spent going to a CPU-only project (and might they take away from what SETI gets on the GPU)? Is there honestly a reason for me to crunch CPU WUs for SETI if I can also crunch GPU ones?

Yes, it is.

You have an NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing.

These tasks are the so-called VLARs (marked as such in the task name): it stands for 'Very Low Angle Range'. That comes about because they are recorded when the Arecibo telescope is looking at a single point in the sky for an extended period. Arguably, if that point source is a star with planets, the extended recordings might be the best bet for achieving success in the project's search for ET. So yes, please run VLARs, and for that, you need to enable crunching on your CPUs.
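To make the naming convention concrete, a toy sketch in Python - the task names below are invented, but real VLAR tasks carry the marker in the name, visible in BOINC Manager or via boinccmd --get_tasks:

    # Toy sketch: VLAR tasks are marked as such in the task name.
    # These names are invented for illustration; real ones come from
    # BOINC Manager or `boinccmd --get_tasks`.
    tasks = [
        "22no13ab.31018.12207.438086664205.12.229_1",
        "22no13ab.31018.12207.438086664205.12.231.vlar_0",
    ]
    vlar_tasks = [name for name in tasks if "vlar" in name]
    print(vlar_tasks)  # only the second name, ending in 'vlar_0', matches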

bill
Send message
Joined: 16 Jun 99
Posts: 861
Credit: 23,953,874
RAC: 14,336
United States
Message 1466790 - Posted: 20 Jan 2014, 21:21:11 UTC - in response to Message 1466714.

Not everybody has a problem running VLARs on their Nvidia cards.

Richard Haselgrove (Project donor)
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8631
Credit: 51,471,645
RAC: 49,185
United Kingdom
Message 1466797 - Posted: 20 Jan 2014, 21:42:05 UTC - in response to Message 1466790.

Not everybody has a problem running VLARs on their Nvidia cards.

But enough do, and enough complaints were received (including people saying they would stop running the project if their computers kept running so sluggishly) that the decision was taken to cut off the supply.

Also, given the value placed on credits around here, the low rate of return (same credits, longer processing time, equals lower RAC) leads some people to feel it's not worth it.
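To put numbers on that (the figures are invented, purely to show the shape of the problem):

    # Invented numbers: both task types pay the same credit,
    # but a VLAR takes much longer on an NVIDIA card.
    credit = 80.0        # credits per task (same for both)
    normal_hours = 1.0   # hypothetical mid-AR runtime
    vlar_hours = 3.0     # hypothetical VLAR runtime on the same card

    print(credit / normal_hours)  # 80.0 credits/hour
    print(credit / vlar_hours)    # ~26.7 credits/hour, hence a lower RAC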

bill
Send message
Joined: 16 Jun 99
Posts: 861
Credit: 23,953,874
RAC: 14,336
United States
Message 1466805 - Posted: 20 Jan 2014, 22:05:59 UTC - in response to Message 1466797.

Not everybody has a problem running VLARs on their Nvidia cards.

But enough do,


yep

and enough complaints were received (including people saying they would stop running the project if their computers kept running so sluggishly) that the decision was taken to cut off the supply.


People threaten to leave unless they get their way all the time. Personally, I doubt enough people would have left to notice. Still, it was enough of a problem that the project admins made the correct call.

It's just not accurate to say, or even imply, that everybody had problems.

Also, given the value placed on credits around here, the low rate of return (same credits, longer processing time, equals lower RAC) leads some people to feel it's not worth it.


Well, not everybody worships at the credit altar.

Richard Haselgrove (Project donor)
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8631
Credit: 51,471,645
RAC: 49,185
United Kingdom
Message 1466807 - Posted: 20 Jan 2014, 22:15:46 UTC - in response to Message 1466805.

It's just not accurate to say, or even imply, that everybody had problems.

Er, I'm not sure I did that? I just gave some background to explain the decision not to send them out as a matter of course.

Mike (Project donor)
Volunteer tester
Avatar
Send message
Joined: 17 Feb 01
Posts: 24485
Credit: 33,820,917
RAC: 24,425
Germany
Message 1466811 - Posted: 20 Jan 2014, 22:57:54 UTC

Considering that CUDA is much closer to the hardware than OpenCL is, I would imagine it should be possible to configure the CUDA apps so that VLARs can be processed on NVIDIA cards as well.
It would be like processing only zero-blanked AstroPulse units on GPUs.
Even so, time and credits are not valid arguments in this case, because this project is about science, nothing else.

Just my point of view, though.
____________

Lionel
Send message
Joined: 25 Mar 00
Posts: 576
Credit: 235,322,536
RAC: 223,158
Australia
Message 1466817 - Posted: 20 Jan 2014, 23:12:40 UTC

Qax

I feel some of the commentary above may have missed the point regarding your first post at the top of this thread.

If you wish to increase your credits for SETI (as you implied above), process SETI on the GPU only. Look at processing MB and AP WUs. MB WUs give around 60%-70% of the credit that AP WUs give, so it is best to process AP WUs ahead of MB WUs. Availability of AP WUs is periodic: there are stretches when they are available and stretches when they are not. Also, because AP credit is higher, AP WUs are in greater demand when they are available.

Processing SETI MB on the CPU is not worth the effort due to the low credit received. You are better off assigning the CPU to a project that does not have a GPU application (as you planned above).

cheers
____________

bill
Send message
Joined: 16 Jun 99
Posts: 861
Credit: 23,953,874
RAC: 14,336
United States
Message 1466859 - Posted: 21 Jan 2014, 3:18:35 UTC - in response to Message 1466807.

"You have a NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so, that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" and "all" in referring to Nvidia cards there.

William (Project donor)
Volunteer tester
Avatar
Send message
Joined: 14 Feb 13
Posts: 1610
Credit: 9,469,907
RAC: 44
Message 1466938 - Posted: 21 Jan 2014, 10:00:32 UTC - in response to Message 1466859.
Last modified: 21 Jan 2014, 10:04:05 UTC

"You have a NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so, that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" and "all" in referring to Nvidia cards there.

All cards struggle - the crunching time is disproportionately high.
But only 'some' notice a difference in day-to-day running. I've known rigs that became virtually unusable when the GPU got a VLAR, and I've known rigs where you'd only notice a slightly increased display response time, if at all.

So, just because YOU don't notice anything doesn't mean it's not there. You're in an absolute minority there. And mind you, it's not even card-specific; it depends on the whole system makeup.

Edit: and btw the reason why Richard is advocating putting a bit of CPU on the project is that somebody needs to mop up the VLARs that don't go to NV.
____________
A person who won't read has no advantage over one who can't read. (Mark Twain)

Raistmer
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 16 Jun 01
Posts: 3490
Credit: 47,528,593
RAC: 36,811
Russia
Message 1467028 - Posted: 21 Jan 2014, 21:32:24 UTC

IMHO, the optimal solution currently would be a small checkbox in the project preferences: "Process VLAR on GPU". Some projects, like the prime-number-finding one, have a dozen such checkboxes for their many algorithm flavours. We already distinguish VLAR from "usual" AR, so such a checkbox should be technically possible. It should be "opt in" instead of the usual "opt out" approach, so anyone who wants to try VLAR on their own GPU is able to do so. Quite simple and user-friendly.
Why not do this?...
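To sketch the idea (purely hypothetical: BOINC stores per-project preferences as an XML fragment, but no <process_vlar_on_gpu> element exists in SETI@home today):

    <!-- Hypothetical fragment illustrating the suggested opt-in;
         SETI@home has no such preference at present. -->
    <project_specific>
        <process_vlar_on_gpu>1</process_vlar_on_gpu>
    </project_specific>

The scheduler would then send VLARs to an NVIDIA GPU only when a host's preferences carry that flag.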
____________

Wiggo
Avatar
Send message
Joined: 24 Jan 00
Posts: 7323
Credit: 96,734,423
RAC: 67,904
Australia
Message 1467035 - Posted: 21 Jan 2014, 21:42:05 UTC - in response to Message 1467028.

IMHO, the optimal solution currently would be a small checkbox in the project preferences: "Process VLAR on GPU". Some projects, like the prime-number-finding one, have a dozen such checkboxes for their many algorithm flavours. We already distinguish VLAR from "usual" AR, so such a checkbox should be technically possible. It should be "opt in" instead of the usual "opt out" approach, so anyone who wants to try VLAR on their own GPU is able to do so. Quite simple and user-friendly.
Why not do this?...

+1

Cheers.

HAL9000
Volunteer tester
Avatar
Send message
Joined: 11 Sep 99
Posts: 4428
Credit: 118,731,024
RAC: 137,723
United States
Message 1467044 - Posted: 21 Jan 2014, 21:53:01 UTC - in response to Message 1467028.

IMHO, the optimal solution currently would be a small checkbox in the project preferences: "Process VLAR on GPU". Some projects, like the prime-number-finding one, have a dozen such checkboxes for their many algorithm flavours. We already distinguish VLAR from "usual" AR, so such a checkbox should be technically possible. It should be "opt in" instead of the usual "opt out" approach, so anyone who wants to try VLAR on their own GPU is able to do so. Quite simple and user-friendly.
Why not do this?...

I have said a few times that I wish our project preferences were more like PrimeGrid's: able to select CPU, NVIDIA, or OpenCL for each type of data separately. They have also added "CPU SSE3 (normal), CPU SSE2 (slower), & CPU AVX (faster)" under CPU for some types now, so advanced users can simply select the correct type for their system. Something along those lines for the CUDA and OpenCL versions of the default apps would be a good idea here.
____________
SETI@home classic workunits: 93,865 CPU time: 863,447 hours

Join the BP6/VP6 User Group today!

bill
Send message
Joined: 16 Jun 99
Posts: 861
Credit: 23,953,874
RAC: 14,336
United States
Message 1467061 - Posted: 21 Jan 2014, 22:41:43 UTC - in response to Message 1466938.

"You have a NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so, that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" and "all" in referring to Nvidia cards there.

All cards struggle - the crunching time is disproportionately high.
But only 'some' notice a difference in day-to-day running. I've known rigs that became virtually unusable when the GPU got a VLAR, and I've known rigs where you'd only notice a slightly increased display response time, if at all.

So, just because YOU don't notice anything doesn't mean it's not there. You're in an absolute minority there. And mind you, it's not even card-specific; it depends on the whole system makeup.


If I notice no lag, then there's no lag to worry about. While my cards may be a minority, they can still crunch VLARs with no noticeable lag and no errors. That means any statement that says or implies that all Nvidia cards can't process VLARs is incorrect.

Edit: and btw the reason why Richard is advocating putting a bit of CPU on the project is that somebody needs to mop up the VLARs that don't go to NV.


Yes, I got that. It has nothing to do with what I said. And just a thought: doing VLARs on my GPU would free up CPU compute capacity for other projects that have no ability to run on a GPU. And I'm not the only one who can run VLARs on their GPU with no problems, although we are in the minority.

Richard Haselgrove (Project donor)
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8631
Credit: 51,471,645
RAC: 49,185
United Kingdom
Message 1467076 - Posted: 21 Jan 2014, 23:19:40 UTC - in response to Message 1467061.

Yes, I got that. It has nothing to do with what I said. And just a thought: doing VLARs on my GPU would free up CPU compute capacity for other projects that have no ability to run on a GPU. And I'm not the only one who can run VLARs on their GPU with no problems, although we are in the minority.

I'd agree with that. And I'd also agree with Raistmer's suggestion a few posts back that users should (in an ideal world) be given an opt-in preference allowing them to run VLARs on NVidia (and Intel) GPUs if their particular circumstances make it a viable option.

But until we reach that happy nirvana, there are two flies in the ointment.

1) Any extra options require two things to happen. Some human being has to write web code to add the extra option controls to the preferences page, and some human being has to write scheduler code to read and act on those preferences. PrimeGrid may have spare human beings they could lend us, but the last I heard they were in short supply here.

2) For the time being, VLAR tasks are being issued by the scheduler to CPUs and ATI GPUs. The servers keep track of that information, and assign credit appropriately when the task is returned. You've stated that you, personally, don't crunch for credit (bravo! nor do I), but I do feel that everyone - including the 90% 'silent majority' who never post here - deserves a fair and accurate credit calculation if they want it. It's been asserted that 'rescheduled' tasks - tasks not computed by the compute resource they were allocated to - get awarded distorted credits: not just for the person doing the rescheduling, but for their wingmates too.

Until our CreditNew hounds (hint, hint) achieve the implementation phase of their quest, it sounds as if (I haven't tested the assertion for myself) people who have their own idea of how the project should be run - and run it that way, unilaterally - may be interfering with the job satisfaction of a second group of users. If you say you are not the only one doing this, then I suggest you invite the others in your group to come here and discuss the pros and cons of their actions with the rest of us. I'm not saying there's a 'right' and a 'wrong' here: just constraints and consequences.

jason_gee
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 24 Nov 06
Posts: 5051
Credit: 73,829,732
RAC: 12,805
Australia
Message 1467078 - Posted: 21 Jan 2014, 23:27:42 UTC - in response to Message 1467076.

... Until our CreditNew hounds (hint, hint)...
The CreditNew hounds are currently in special training, so as not to become well bruised dinner. One of us is hiding under the sofa periodically, and the other is busy weeing on every shrub it can find. Which is which, I often wonder.
____________
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
Charles Darwin

Claggy (Project donor)
Volunteer tester
Send message
Joined: 5 Jul 99
Posts: 4139
Credit: 33,506,085
RAC: 23,553
United Kingdom
Message 1467082 - Posted: 21 Jan 2014, 23:33:11 UTC - in response to Message 1467076.

2) For the time being, VLAR tasks are being issued by the scheduler to CPUs and ATI GPUs.

VLARs haven't been issued to ATI GPUs in a while; only CPUs get them at present.

Claggy

Batter Up (Project donor)
Avatar
Send message
Joined: 5 May 99
Posts: 1946
Credit: 24,858,651
RAC: 0
United States
Message 1467116 - Posted: 22 Jan 2014, 1:36:53 UTC - in response to Message 1467078.

One of us is hiding under the sofa periodically, and the other is busy weeing on every shrub it can find.

Bring me a shrubbery; ni, ni.
____________
