What if... All future WU's were VLAR?

JDWhale
Volunteer tester
Joined: 6 Apr 99
Posts: 921
Credit: 21,935,817
RAC: 3
United States
Message 1014156 - Posted: 10 Jul 2010, 5:28:01 UTC
Last modified: 10 Jul 2010, 5:46:36 UTC

Please excuse if this topic has already been argued, but here goes...

I've got a problem with all the moaning about folks not being able to feed their GPUs at maximum efficiency... complaining that the AR's required for max RAC are not being supplied by the download feeders upon request.

Not too long ago, in the CPU only crunching era, this was considered "cherry picking"... The act of cancelling or otherwise thwarting poorer paying WU's to maximize your RAC... your perceived throughput. In the past some crunchers might have realized that different cache sizes or CPU architectures yielded varying performance on different Angle Ranges and might have cancelled poorly rewarding WUs based on this knowledge (or other 'cherry picking' reasons), while the 'pure' cruncher crunched whatever he was fed... even if it tasted/paid like sh*t.

How is rescheduling poorer performing WU's from GPU to CPU any different, especially when you might end up with months' worth of work queued for the CPU while your GPU sits idle waiting for the cherries to show up?
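(As I understand it, these reschedulers work by stopping the BOINC client and editing client_state.xml so that a task bound to the CUDA app version gets re-bound to the CPU app. A rough sketch of the idea in Python -- only an illustration, not the actual tool; it assumes each <result> entry carries a <plan_class> tag, and it pretends VLAR tasks are recognizable by name, where the real tools check the work unit's angle range:)

[code]
# reschedule_vlar.py -- illustrative sketch only, NOT the real
# rescheduler.  Stop the BOINC client before running, keep a backup.
import re, shutil

STATE = "client_state.xml"   # lives in the BOINC data directory

shutil.copyfile(STATE, STATE + ".bak")
with open(STATE, encoding="utf-8") as f:
    text = f.read()

def move_to_cpu(match):
    block = match.group(0)
    # "vlar" in the task name is a stand-in test; real tools
    # inspect the work unit's angle range instead.
    if "vlar" in block and "<plan_class>cuda</plan_class>" in block:
        # clearing the plan class re-binds the task to the CPU app
        block = block.replace("<plan_class>cuda</plan_class>",
                              "<plan_class></plan_class>")
    return block

text = re.sub(r"<result>.*?</result>", move_to_cpu, text, flags=re.DOTALL)
with open(STATE, "w", encoding="utf-8") as f:
    f.write(text)
[/code]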

If you are as committed to crunching for the project as many of you say you are, and have spent the cash to build super crunching machines that exclude poorly paying (credit-wise) WU's... what would you do if all future WU's were VLAR? That just might happen as interesting targets are identified and the search narrows to specific points in the sky (I don't know if this is true, just hypothesising :).

Just a thought...
Warm regards,
ID: 1014156
Dena Wiltsie
Volunteer tester

Joined: 19 Apr 01
Posts: 1628
Credit: 24,230,968
RAC: 26
United States
Message 1014161 - Posted: 10 Jul 2010, 5:40:22 UTC

I would have a big problem if the only thing available was VLAR's. The stock Apple application will not crunch them; unless the application is improved, I would fade into history. On top of that, Apple also lacks CUDA support for SETI, and my graphics card will not support CUDA at all, so everything must be crunched on the CPU. The last time something like that happened was back in the classic days: a new release of the application was broken for about a year and I couldn't crunch till it was corrected.
ID: 1014161
Rasputin
Volunteer tester

Joined: 13 Jun 02
Posts: 1764
Credit: 6,132,221
RAC: 0
Russia
Message 1014162 - Posted: 10 Jul 2010, 5:43:41 UTC - in response to Message 1014156.  

[...] How is rescheduling poorer performing WU's from GPU to CPU any different? [...] What would you do if all future WU's were VLAR? [...]


Rescheduling VLAR's to the cpu isn't about picking the best WU's for the GPU to increase RAC. VLAR's (in my case) will grind my computer to a halt and make it unusable.

If in the future all work units are VLAR's, I'll have to completely stop crunching on my gpu's. Simple as that...
ID: 1014162
zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65709
Credit: 55,293,173
RAC: 49
United States
Message 1014164 - Posted: 10 Jul 2010, 5:45:20 UTC - in response to Message 1014156.  

[...] What would you do if all future WU's were VLAR? [...]

JD, I only know that when I have a VLAR or two on my GPUs, the whole PC becomes almost unusable, to the point of nearly locking up, unless the VLAR is aborted or rescheduled before it reaches the GPUs; it's impossible to reschedule a VLAR once it is being worked on. But then I have two GTX295 cards, and I don't have the money for four GTX470 cards, which from what I've heard would do a better job of processing VLARs than just about any older series card, like the GT200, 9xxx, or 8xxx nvidia cards.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1014164
Geek@Play
Volunteer tester
Joined: 31 Jul 01
Posts: 2467
Credit: 86,146,931
RAC: 0
United States
Message 1014165 - Posted: 10 Jul 2010, 5:48:04 UTC

Hi JD...........

When VLAR work is loaded into any of my GPU's the computer becomes unusable. Simple as that. Rescheduling them to the CPU is an elegant solution to the problem and avoids the need to abort the work. I certainly do not consider this "cherry picking" the work units.
Boinc....Boinc....Boinc....Boinc....
ID: 1014165
zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65709
Credit: 55,293,173
RAC: 49
United States
Message 1014166 - Posted: 10 Jul 2010, 5:48:06 UTC - in response to Message 1014162.  

Rescheduling VLAR's to the cpu isn't about picking the best WU's for the GPU to increase RAC. VLAR's (in my case) will grind my computer to a halt and make it unusable.

If in the future all work units are VLAR's, I'll have to completely stop crunching on my gpu's. Simple as that...

Same here, I agree with the Squirrel.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1014166
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1014185 - Posted: 10 Jul 2010, 6:49:30 UTC

IF all existing work were VLAR, I am sure that Jason and the rest of the Lunatics crew would be right on it to come up with an acceptable workaround for the GPUs.
As it stands right now, that is not a priority due to the availability of the rescheduler tool and the non-vlarkill app.

I do NOT cherry pick.
I don't use the vlarkill app, so I crunch everything the servers send me.
Rearranging them on my rig to make best use of the hardware resources available is not cherry picking. Allowing the vlarkill app to trash VLAR WUs and send them back to the servers for 'somebody else' to do is.


"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1014185
JDWhale
Volunteer tester
Joined: 6 Apr 99
Posts: 921
Credit: 21,935,817
RAC: 3
United States
Message 1014224 - Posted: 10 Jul 2010, 8:53:55 UTC

From the replies I see here, it seems that the BOINC Manager preference not to "Use GPU while computer is in use" is either being ignored by users, or not implemented very well by the application, if at all.
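(For anyone wanting to check their setup: on clients of this vintage that checkbox should correspond to the run_gpu_if_user_active tag in global_prefs_override.xml in the BOINC data directory -- a minimal sketch, assuming your client version honors the override file:)

[code]
<!-- global_prefs_override.xml, in the BOINC data directory -->
<global_preferences>
   <run_if_user_active>1</run_if_user_active>
   <!-- 0 = don't use the GPU while the computer is in use -->
   <run_gpu_if_user_active>0</run_gpu_if_user_active>
</global_preferences>
[/code]

After saving, have the client re-read its preferences, or just restart BOINC.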

Not being a "GPU cruncher" myself, I can't feel the pain of those of you who are. But my experience programming parallel computing devices dating back 25 years (then called 'array processors') might lead me to believe that some lessons learned long ago are possibly being ignored in the current implementation [possibly at the driver level?].

ID: 1014224
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1014226 - Posted: 10 Jul 2010, 8:58:41 UTC - in response to Message 1014224.  

[...] my experience programming parallel computing devices dating back 25 years (then called 'array processors') might lead me to believe that some lessons learned long ago are possibly being ignored in the current implementation [possibly at the driver level?].

That would be a question for you to pose to Jason.
No 'whaleports' for GPUs, then?
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1014226
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1014228 - Posted: 10 Jul 2010, 9:00:49 UTC - in response to Message 1014224.  
Last modified: 10 Jul 2010, 9:04:17 UTC

... But my experience programming parallel computing devices dating back 25 years (then called 'array processors') might lead me to believe that some lessons learned long ago are possibly being ignored in the current implementation [possibly at the driver level?].


Absolutely! No argument from me ;) (Cray, Hitachi & other models fit 'nicely'. The implication that VLARs cannot be better parallelised/programmed is of course not the complete truth, and I never made that particular claim. There are some tricky design flaws to weed out first; after that it'll likely be VLAR 'squishing' for some time.)
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1014228
hiamps
Volunteer tester
Joined: 23 May 99
Posts: 4292
Credit: 72,971,319
RAC: 0
United States
Message 1014310 - Posted: 10 Jul 2010, 15:38:38 UTC - in response to Message 1014156.  
Last modified: 10 Jul 2010, 15:39:19 UTC

[...] What would you do if all future WU's were VLAR? [...]

Just wondering why you failed to mention the huge hit your computer takes while running a VLAR. I have run quite a few, and if I get 3 at one time I can type with 2 fingers way faster than my screen can keep up. If your question is to be valid, I think you should address this. Mine would run them, as I have other machines I can use; others don't, and wouldn't have a usable machine. Maybe you should try one and you will understand better. It's not cherry picking but having a usable machine, at least for some.
Official Abuser of Boinc Buttons...
And no good credit hound!
ID: 1014310
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1014371 - Posted: 10 Jul 2010, 17:32:18 UTC

Hypothetical situations can sometimes clarify issues, though it is often difficult to restrain oneself from extending the idea. I'll try.

I believe there's a full gray scale from OS/nVidia driver/CUDA card combinations which effectively crash on VLARs all the way to combinations where VLARs run without nasty side effects, though relatively slowly.

Obviously if there were only VLAR work the worst combinations simply wouldn't be used on this project. The best combinations could be, and would provide a significant productivity increase over only using CPUs, but those who had spent the money to buy such systems might be motivated to use them on projects with more suitable work. Combinations in the middle also might be used here with the option not to do CUDA while the user was doing other things on the computer, but motivation to shift to a different project would be even higher.

The situation might also lead to a different focus in further development efforts. For instance, Jason made an internal test hybrid version which effectively used one GPU and one full CPU, dividing the work so that what was difficult on the GPU was done on the CPU and vice versa. The remaining parts were divided with the goal that both the CPU and the GPU would be kept busy throughout the run time. Not all details were resolved, though IIRC Jason used that hybrid, specifically tuned for his system, for an extended time here with good results. The simpler situation if only VLAR work were available would certainly simplify the logic needed to tune such a hybrid to various hardware combinations.
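In outline, such a hybrid routes each processing stage to whichever device handles it best, with a feeder per device so neither goes idle. A concept sketch only, not Jason's code; the stage names and the split are made up for illustration:

[code]
# Concept sketch of a CPU+GPU hybrid pipeline -- illustration only.
from concurrent.futures import ThreadPoolExecutor

GPU_FRIENDLY = {"chirp", "fft"}   # hypothetical GPU-suited stages

def on_gpu(stage):
    return "gpu:" + stage         # placeholder for a CUDA kernel launch

def on_cpu(stage):
    return "cpu:" + stage         # placeholder for the optimized CPU path

def process(stages):
    # one feeder thread per device keeps both busy at once
    with ThreadPoolExecutor(max_workers=1) as gpu, \
         ThreadPoolExecutor(max_workers=1) as cpu:
        futures = []
        for s in stages:
            if s in GPU_FRIENDLY:
                futures.append(gpu.submit(on_gpu, s))
            else:
                futures.append(cpu.submit(on_cpu, s))
        return [f.result() for f in futures]

# at VLAR the pulse finding is what hurts on CUDA, so it would go
# to the CPU while the GPU chews through the rest
print(process(["fft", "pulse_find", "chirp"]))
[/code]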

I made reference in a post in the "Using FLOPs estimates in app_info.xml" thread to a possible modification to VLAR processing which would reduce the stress on CUDA while perhaps increasing the science return. If Arecibo were permanently limited to providing only data producing VLAR work, the needed splitter changes would be trivial and could be implemented quickly.

However the hypothetical case is viewed, having all work so similar obviously should make Reschedule or anything similar unnecessary. OTOH, I wouldn't be surprised if some BOINC server-side changes would combine badly with the client-side work fetch algorithms and sometimes provide either too much CPU work and too little GPU work or vice versa. Then a simple local rebalancing would still make sense to keep resources from going idle during a 3 day outage.
Joe
ID: 1014371
Jakke

Joined: 17 May 99
Posts: 1
Credit: 27,629,867
RAC: 0
Finland
Message 1014427 - Posted: 10 Jul 2010, 19:34:11 UTC - in response to Message 1014371.  

Hmmm... until this CPU/GPU balance is changed somehow, I have moved all my GPUs to other projects, because it's rather useless to keep 7 NVIDIA GPU cards mostly idle for seti.
So no more VLAR autokill and idle GPUs, and maybe now I even lessen seti bandwidth problems on my part ;D
Still keeping all my computers running CPU seti 24/7, as I have done for the last 11 years, but no more GPUs... for now :)
Hope they continue this good project...

ID: 1014427
James Sotherden
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1014443 - Posted: 10 Jul 2010, 20:15:02 UTC

I used to run VLAR autokill. I don't consider it cherry picking; it was a tool that came out before the rescheduler. However, I have reinstalled Lunatics and picked the non-VLAR-kill app. Why, you ask? Because during these 3-day outages I need all the GPU work I can get. No sense killing them off when I can crunch them. My GTS 250 can crunch them in 2 to 3 hours, and it does not seem to affect the performance of this i7 at all.

Old James
ID: 1014443
perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 1014448 - Posted: 10 Jul 2010, 20:27:29 UTC - in response to Message 1014443.  

You're lucky, James; they kill my little 9500GT. I still run the VLARKiller and use the rescheduler to keep them off it.


PROUD MEMBER OF Team Starfire World BOINC
ID: 1014448
