CPU vs. GPU

Message boards : Number crunching : CPU vs. GPU
Qax
Volunteer tester
Joined: 5 Dec 07
Posts: 19
Credit: 2,974,646
RAC: 0
United States
Message 1462370 - Posted: 9 Jan 2014, 23:48:24 UTC

Hello.

I run 15 BOINC projects. I am building another box and have decided to run two projects on it: the GPU project I am "ranked" lowest in for credits (this one), and the CPU project I am "ranked" lowest in for credits (World Community Grid). Because I am trying to contribute more to these projects, I want that machine's GPU working *only* on SETI GPU WUs, while its CPU goes purely to the project that does not have GPU enabled. I will also have another computer that I'd prefer work only on SETI CPU WUs (but that is flexible). Anyway - I see there are three projects SETI is running. Maybe someone could enlighten me as to how to achieve what I am trying to do. I know it's going to involve something like a "work" and "home" setup. What I need to know is which of these crunches which types of WUs:

SETI@home Enhanced
SETI@home v7
AstroPulse v6

Thanks muchly!
ID: 1462370
OzzFan Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1462392 - Posted: 10 Jan 2014, 0:23:32 UTC - in response to Message 1462370.  

Yes: create a separate venue (e.g. Work or Home) and put this dedicated computer into it. Then go into your project preferences for that venue and make sure to uncheck Use CPU.

All three types of workunits (they're not projects) have GPU-equivalent executables, so there's no need to specify one over the other. However, it is worth pointing out that SETI@home Enhanced is going to be deprecated very soon, so the only two left will be SETI@home v7 and AstroPulse v6. Again, both can be crunched on the GPU, so you can leave them both enabled.
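As a side note on the dedication part: besides venue preferences, the BOINC client's cc_config.xml can keep a GPU away from a project entirely via the exclude_gpu option. A minimal sketch, assuming the other project is World Community Grid and the card is device 0 (the URL and device number here are illustrative; adjust for your own setup and client version):

```xml
<cc_config>
  <options>
    <!-- Illustrative: don't let the project at this URL use GPU device 0,
         so the card stays dedicated to SETI@home. -->
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

Restart the BOINC client (or re-read the config file) for the change to take effect.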
ID: 1462392
Qax
Volunteer tester
Joined: 5 Dec 07
Posts: 19
Credit: 2,974,646
RAC: 0
United States
Message 1466710 - Posted: 20 Jan 2014, 18:02:13 UTC - in response to Message 1462392.  

Follow up to this. . .

Is it even worth it for me to crunch CPU WUs if I can crunch GPU ones? Are there some WUs that *need* to be processed by a CPU, or are those WUs essentially just for people without GPUs? Are my CPU cycles better spent on a CPU-only project (and would that take anything away from what SETI gets on the GPU)? In short: is there honestly a reason for me to crunch CPU WUs for SETI if I can crunch GPU ones as well?
ID: 1466710
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1466714 - Posted: 20 Jan 2014, 18:15:31 UTC - in response to Message 1466710.  
Last modified: 20 Jan 2014, 18:17:12 UTC

Follow up to this. . .

Is it even worth it for me to crunch CPU WUs if I can crunch GPU ones? Are there some WUs that *need* to be processed by a CPU, or are those WUs essentially just for people without GPUs? Are my CPU cycles better spent on a CPU-only project (and would that take anything away from what SETI gets on the GPU)? In short: is there honestly a reason for me to crunch CPU WUs for SETI if I can crunch GPU ones as well?

Yes, it is.

You have an NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing.

These tasks are the so-called VLARs (marked as such in the task name); it stands for 'Very Low Angle Range'. They come about when the Arecibo telescope stares at a single point in the sky for an extended period. Arguably, if that point source is a star with planets, those extended recordings might be the best bet for success in the project's search for ET. So yes, please run VLARs - and for that, you need to enable crunching on your CPUs.
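To make the dispatch rule concrete, here's a small Python sketch of the idea being described. The 0.12 angle-range cutoff and the function names are assumptions for illustration only, not the project's actual server code:

```python
# Hypothetical sketch of VLAR dispatch. The threshold value is an
# assumption for illustration, not the project's actual cutoff.
VLAR_THRESHOLD = 0.12

def is_vlar(angle_range: float) -> bool:
    """A task is 'Very Low Angle Range' if the telescope barely moved."""
    return angle_range < VLAR_THRESHOLD

def eligible_resources(angle_range: float) -> list[str]:
    """VLAR tasks are kept off NVIDIA GPUs; other tasks can go anywhere."""
    if is_vlar(angle_range):
        return ["CPU"]
    return ["CPU", "NVIDIA_GPU", "ATI_GPU"]
```

Under this sketch, a recording made while Arecibo stared at one point (angle range near zero) would come back CPU-only.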
ID: 1466714
bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1466790 - Posted: 20 Jan 2014, 21:21:11 UTC - in response to Message 1466714.  

Not everybody has a problem running VLARs on their Nvidia cards.
ID: 1466790
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1466797 - Posted: 20 Jan 2014, 21:42:05 UTC - in response to Message 1466790.  

Not everybody has a problem running VLARs on their Nvidia cards.

But enough do, and enough complaints were received (including from people saying they would stop running the project if their computers kept running so sluggishly) that the decision was taken to cut off the supply.

Also, given the value placed on credits round here, the low rate of return (same credits, longer processing time, equals lower RAC) leads some people to feel it's not worth it.
ID: 1466797
bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1466805 - Posted: 20 Jan 2014, 22:05:59 UTC - in response to Message 1466797.  

Not everybody has a problem running VLARs on their Nvidia cards.

But enough do,

Yep.

and enough complaints were received (including from people saying they would stop running the project if their computers kept running so sluggishly) that the decision was taken to cut off the supply.

People threaten to leave unless they get their way all the time. Personally I doubt enough people would have left to notice, but it was enough of a problem that the project admins made the correct call.

It's just not accurate to say, or even imply, that everybody had problems.

Also, given the value placed on credits round here, the low rate of return (same credits, longer processing time, equals lower RAC) leads some people to feel it's not worth it.


Well, not everybody worships at the credit altar.
ID: 1466805
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1466807 - Posted: 20 Jan 2014, 22:15:46 UTC - in response to Message 1466805.  

It's just not accurate to say, or even imply, that everybody had problems.

Er, I'm not sure I did that? I was just giving some background to explain the decision not to send them out as a matter of course.
ID: 1466807
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34253
Credit: 79,922,639
RAC: 80
Germany
Message 1466811 - Posted: 20 Jan 2014, 22:57:54 UTC

Considering that CUDA is much closer to the hardware than OpenCL is, I would imagine it should be possible to configure the CUDA apps so that VLARs can be processed on Nvidia cards as well. It would be like only processing zero-blanked AstroPulse units on GPUs.
Even so, time and credits are not valid arguments in this case, because this project is about science, nothing else.

Just my point of view, though.


With each crime and every kindness we birth our future.
ID: 1466811
Lionel
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1466817 - Posted: 20 Jan 2014, 23:12:40 UTC

Qax

I feel some of the commentary above may have missed the point regarding your first post at the top of this thread.

If you wish to increase your credits for SETI (as you implied above), process SETI on GPU only, and look at processing both MB and AP WUs. MB WUs give around 60%-70% of the credit that AP WUs give, so it's best to favour AP WUs over MB WUs. AP WUs are only available periodically, and because AP credit is higher, they are in greater demand when they are available.

Processing SETI MB on CPU is not worth the effort given the low credit received. You are better off assigning the CPU to a project that does not have a GPU application (as you thought above).

cheers
ID: 1466817
bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1466859 - Posted: 21 Jan 2014, 3:18:35 UTC - in response to Message 1466807.  

"You have an NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" or "all" in referring to Nvidia cards there.
ID: 1466859
Profile William
Volunteer tester
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1466938 - Posted: 21 Jan 2014, 10:00:32 UTC - in response to Message 1466859.  
Last modified: 21 Jan 2014, 10:04:05 UTC

"You have an NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" or "all" in referring to Nvidia cards there.

All cards struggle - the crunching time is disproportionately high. But only some users notice a difference in day-to-day running. I've known rigs that became virtually unusable when the GPU got a VLAR, and I've known rigs where you'd only notice a slightly increased display response time, if at all.

So, just because YOU don't notice anything doesn't mean it's not there. You're in an absolute minority there. And mind you, it's not even card-specific; it depends on the whole system makeup.

Edit: and btw the reason why Richard is advocating putting a bit of CPU on the project is that somebody needs to mop up the VLARs that don't go to NV.
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1466938
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1467028 - Posted: 21 Jan 2014, 21:32:24 UTC

IMHO the optimal solution currently would be a small checkbox in the project preferences: "Process VLAR on GPU". Some projects, like the prime-number-finding ones, have a dozen such checkboxes for their many algorithm flavours. We already distinguish VLAR from "usual" AR, so such a checkbox should be technically possible. It should be "opt in" rather than the usual "opt out", so anyone who wants to try VLAR on their own GPU can do so. Quite simple and user-friendly.
Why not do this?...
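As a sketch of what that opt-in could look like on the scheduler side (the preference key and function names are invented here purely for illustration):

```python
# Hypothetical opt-in preference, per the suggestion above.
# The key name and default are assumptions for illustration.
def wants_vlar_on_gpu(prefs: dict) -> bool:
    """Opt in: defaults to False, so existing behaviour is unchanged."""
    return bool(prefs.get("process_vlar_on_gpu", False))

def vlar_targets(prefs: dict) -> list:
    """Where the scheduler may send a VLAR task for this user."""
    targets = ["CPU"]
    if wants_vlar_on_gpu(prefs):
        targets.append("GPU")
    return targets
```

The opt-in default means users who never touch the setting see no change at all.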
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1467028
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1467035 - Posted: 21 Jan 2014, 21:42:05 UTC - in response to Message 1467028.  

IMHO the optimal solution currently would be a small checkbox in the project preferences: "Process VLAR on GPU". Some projects, like the prime-number-finding ones, have a dozen such checkboxes for their many algorithm flavours. We already distinguish VLAR from "usual" AR, so such a checkbox should be technically possible. It should be "opt in" rather than the usual "opt out", so anyone who wants to try VLAR on their own GPU can do so. Quite simple and user-friendly.
Why not do this?...

+1

Cheers.
ID: 1467035
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1467044 - Posted: 21 Jan 2014, 21:53:01 UTC - in response to Message 1467028.  

IMHO the optimal solution currently would be a small checkbox in the project preferences: "Process VLAR on GPU". Some projects, like the prime-number-finding ones, have a dozen such checkboxes for their many algorithm flavours. We already distinguish VLAR from "usual" AR, so such a checkbox should be technically possible. It should be "opt in" rather than the usual "opt out", so anyone who wants to try VLAR on their own GPU can do so. Quite simple and user-friendly.
Why not do this?...

I have said a few times that I wish our project preferences were more like PrimeGrid's: able to select CPU, NVIDIA, or OpenCL for each type of data separately. They have also added "CPU SSE3 (normal), CPU SSE2 (slower), & CPU AVX (faster)" under CPU for some types now, so advanced users can simply select the correct type for their system. Something along those lines for the CUDA and OpenCL versions of the default apps would be a good idea here.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1467044
bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1467061 - Posted: 21 Jan 2014, 22:41:43 UTC - in response to Message 1466938.  

"You have an NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" or "all" in referring to Nvidia cards there.

All cards struggle - the crunching time is disproportionately high. But only some users notice a difference in day-to-day running. I've known rigs that became virtually unusable when the GPU got a VLAR, and I've known rigs where you'd only notice a slightly increased display response time, if at all.

So, just because YOU don't notice anything doesn't mean it's not there. You're in an absolute minority there. And mind you, it's not even card-specific; it depends on the whole system makeup.


If I notice no lag, then there's no lag to worry about. While my cards may be in a minority, they can still crunch VLARs with no noticeable lag and no errors. That means any statement that says or implies that all Nvidia cards can't process VLARs is incorrect.

Edit: and btw the reason why Richard is advocating putting a bit of CPU on the project is that somebody needs to mop up the VLARs that don't go to NV.


Yes, I got that; it has nothing to do with what I said. Just a thought: doing VLARs on my GPU would free up CPU capacity for other projects that can't run on a GPU. And I'm not the only one who can run VLARs on their GPU with no problems, although we are in the minority.
ID: 1467061
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1467076 - Posted: 21 Jan 2014, 23:19:40 UTC - in response to Message 1467061.  

Yes, I got that; it has nothing to do with what I said. Just a thought: doing VLARs on my GPU would free up CPU capacity for other projects that can't run on a GPU. And I'm not the only one who can run VLARs on their GPU with no problems, although we are in the minority.

I'd agree with that. And I'd also agree with Raistmer's suggestion a few posts back that users should (in an ideal world) be given an opt-in preference allowing them to run VLARs on NVidia (and Intel) GPUs if their particular circumstances make it a viable option.

But until we reach that happy nirvana, there are two flies in the ointment.

1) Any extra options require two things to happen. Some human being has to write web code to add the extra option controls to the preferences page, and some human being has to write scheduler code to read and act upon those preferences. PrimeGrid may have spare human beings they could lend us, but the last I heard they were in short supply here.

2) For the time being, VLAR tasks are being issued by the scheduler to CPUs and ATI GPUs. The servers keep track of that information, and assign credit appropriately when the task is returned. You've stated that you, personally, don't crunch for credit (bravo! nor do I), but I do feel that everyone - including the 90% 'silent majority' who never post here - deserves a fair and accurate credit calculation if they want it. It's been asserted that 'rescheduled' tasks - tasks not computed by the compute resource they were allocated to - get awarded distorted credits: not just for the person doing the rescheduling, but for their wingmates too.

Until our CreditNew hounds (hint, hint) achieve the implementation phase of their quest, it sounds as if (I haven't tested the assertion for myself) people who have their own idea of how the project should be run - and run it that way, unilaterally - may be interfering with the job satisfaction of a second group of users. If you say you are not the only one doing this, then I suggest you invite the others in your group to come here and discuss the pros and cons of their actions with the rest of us. I'm not saying there's a 'right' and a 'wrong' here: just constraints and consequences.
ID: 1467076
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1467078 - Posted: 21 Jan 2014, 23:27:42 UTC - in response to Message 1467076.  

... Until our CreditNew hounds (hint, hint)...
The CreditNew hounds are currently in special training, so as not to become well bruised dinner. One of us is hiding under the sofa periodically, and the other is busy weeing on every shrub it can find. Which is which, I often wonder.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1467078
Claggy
Volunteer tester
Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1467082 - Posted: 21 Jan 2014, 23:33:11 UTC - in response to Message 1467076.  

2) For the time being, VLAR tasks are being issued by the scheduler to CPUs and ATI GPUs.

VLARs haven't been issued to ATI GPUs in a while, only CPUs get them at present.

Claggy
ID: 1467082
Batter Up
Joined: 5 May 99
Posts: 1946
Credit: 24,860,347
RAC: 0
United States
Message 1467116 - Posted: 22 Jan 2014, 1:36:53 UTC - in response to Message 1467078.  

One of us is hiding under the sofa periodically, and the other is busy weeing on every shrub it can find.

Bring me a shrubbery; ni, ni.
ID: 1467116


 
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.