are MB 6.03's still being generated?

Message boards : Number crunching : are MB 6.03's still being generated?
EdwardPF
Volunteer tester

Joined: 26 Jul 99
Posts: 389
Credit: 236,772,605
RAC: 374
United States
Message 1214240 - Posted: 5 Apr 2012, 13:49:30 UTC
Last modified: 5 Apr 2012, 13:55:18 UTC

I'm down to 5 MB 6.03s (4 running and 1 waiting) and 250 6.10s (1 running, 249 waiting), BUT the last request for more work got 45 6.10s and NO 6.03s...

Are 6.03s being generated??

Ed F

edit 5 minutes later:

just requested CPU and GPU work again, and again got 45 MORE GPU WUs!!
ID: 1214240
Mike
Volunteer tester

Joined: 17 Feb 01
Posts: 34257
Credit: 79,922,639
RAC: 80
Germany
Message 1214242 - Posted: 5 Apr 2012, 13:56:19 UTC

Yes, sure.

05.04.2012 15:18:43	SETI@home	Scheduler request completed: got 41 new tasks
05.04.2012 15:18:43	SETI@home	[sched_op] Server version 613
05.04.2012 15:18:43	SETI@home	Project requested delay of 303 seconds
05.04.2012 15:18:43	SETI@home	[sched_op] estimated total CPU task duration: 66172 seconds
05.04.2012 15:18:43	SETI@home	[sched_op] estimated total ATI GPU task duration: 0 seconds



With each crime and every kindness we birth our future.
ID: 1214242
HAL9000
Volunteer tester

Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1214246 - Posted: 5 Apr 2012, 14:02:50 UTC - in response to Message 1214240.  

I'm down to 5 MB 6.03s (4 running and 1 waiting) and 250 6.10s (1 running, 249 waiting), BUT the last request for more work got 45 6.10s and NO 6.03s...

Are 6.03s being generated??

Ed F

edit 5 minutes later:

just requested CPU and GPU work again, and again got 45 MORE GPU WUs!!

There is no such thing as v6.03 or v6.10 work. Just multibeam work. Just as there is no such thing as CPU or GPU tasks. They are all the same. They just get assigned to a device when you request work.

The version number you are seeing is the number in your app_info.xml, which corresponds to the default application version. If you look at the applications page, you can see the current versions of all the various apps.
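For anyone curious, the relationship looks roughly like this in an anonymous-platform app_info.xml (the file name here is illustrative); the 603 in <version_num> is what shows up as "6.03" in the task list:

```xml
<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>setiathome_6.03_windows_intelx86.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <file_ref>
            <file_name>setiathome_6.03_windows_intelx86.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```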
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1214246
EdwardPF
Volunteer tester

Joined: 26 Jul 99
Posts: 389
Credit: 236,772,605
RAC: 374
United States
Message 1214247 - Posted: 5 Apr 2012, 14:11:07 UTC - in response to Message 1214246.  

That is how I think I've heard it explained before :). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs are a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F
ID: 1214247
HAL9000
Volunteer tester

Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1214253 - Posted: 5 Apr 2012, 14:20:42 UTC - in response to Message 1214247.  
Last modified: 5 Apr 2012, 14:22:57 UTC

That is how I think I've heard it explained before :). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs are a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F

BOINC normally tries to fill up the work queue for the GPU first. Even when the client is "requesting work for CPU & GPU", most if not all of the tasks get assigned to the GPU.

I can see why they would write it that way, though from a user's point of view it doesn't make sense to keep filling up one queue at the expense of the other.

I imagine it would be far too much work to change the server/client so that the client just gets work and then runs it on the available devices.

Edit: When I say it will normally fill up the GPU first, it is because the GPU is normally the best device/app version.
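This isn't actual BOINC code, just a toy sketch of the fill-the-GPU-first behavior described above: every task goes to the GPU queue until it hits its cap, and only then does anything land on the CPU queue.

```python
def assign_tasks(tasks, gpu_cap, cpu_cap):
    """Toy model: fill the GPU queue to its cap first, then the CPU queue."""
    gpu_queue, cpu_queue = [], []
    for task in tasks:
        if len(gpu_queue) < gpu_cap:
            gpu_queue.append(task)
        elif len(cpu_queue) < cpu_cap:
            cpu_queue.append(task)
        # anything beyond both caps is simply not assigned this request
    return gpu_queue, cpu_queue
```

With a 400-task GPU cap, a host like Ed's keeps drawing GPU work until that cap is reached, and only then do CPU tasks start to trickle in.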
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1214253
JohnDK
Volunteer tester

Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1214254 - Posted: 5 Apr 2012, 14:22:28 UTC - in response to Message 1214247.  
Last modified: 5 Apr 2012, 14:23:14 UTC

That is how I think I've heard it explained before :). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs are a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F

Don't know, it happens sometimes. You could go into prefs and deselect GPU work, and then you should only get CPU work; once you have enough CPU work, select GPU work again in prefs.
ID: 1214254
EdwardPF
Volunteer tester

Joined: 26 Jul 99
Posts: 389
Credit: 236,772,605
RAC: 374
United States
Message 1214257 - Posted: 5 Apr 2012, 14:28:31 UTC - in response to Message 1214253.  

O.k. ... I guess all is well (???). The GPU queue is full (400 WUs) and now I'm getting CPU WUs (at least I have gotten 3 so far).

Thanks for the info!!

Ed F
ID: 1214257
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1214262 - Posted: 5 Apr 2012, 14:47:40 UTC - in response to Message 1214247.  

That is how I think I've heard it explained before :). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs are a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F

It's a combination of server assignment lopsidedness and asking for more work than can be assigned for one request. When you have both GPU and CPU capable of doing the tasks which are available, the server logic sends to the GPU first since it will process faster. When the GPU request is satisfied it will start assigning tasks to CPU. But if you're asking for the equivalent of ~70 GPU tasks or more, there's almost no chance of getting CPU work unless there's some which cannot be sent to the GPU.

You can use the web preferences to tell the servers not to assign any GPU work, but remember to enable GPU again later. Or you can use either local or web preferences to reduce the amount of work being requested. If 360 GPU tasks would last for a day, set the host for one day of cache and it will ask for little more GPU work but enough to build CPU up to a day, and you'll probably get a decent mix.
                                                                 Joe
ID: 1214262
kittyman
Volunteer tester

Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1214277 - Posted: 5 Apr 2012, 15:23:17 UTC - in response to Message 1214262.  



You can use the web preferences to tell the servers not to assign any GPU work, but remember to enable GPU again later. Or you can use either local or web preferences to reduce the amount of work being requested. If 360 GPU tasks would last for a day, set the host for one day of cache and it will ask for little more GPU work but enough to build CPU up to a day, and you'll probably get a decent mix.
                                                                 Joe

OR Dr. Anderson could amend the code to send a small percentage of the work requests to the slower resource instead of ignoring it entirely until the faster resource is glutted with work.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1214277
Wembley
Volunteer tester

Joined: 16 Sep 09
Posts: 429
Credit: 1,844,293
RAC: 0
United States
Message 1214303 - Posted: 5 Apr 2012, 16:44:40 UTC - in response to Message 1214277.  



You can use the web preferences to tell the servers not to assign any GPU work, but remember to enable GPU again later. Or you can use either local or web preferences to reduce the amount of work being requested. If 360 GPU tasks would last for a day, set the host for one day of cache and it will ask for little more GPU work but enough to build CPU up to a day, and you'll probably get a decent mix.
                                                                 Joe

OR Dr. Anderson could amend the code to send a small percentage of the work requests to the slower resource instead of ignoring it entirely until the faster resource is glutted with work.

Work assigned should be pro-rated based on the percent requested. If a system asks for 70% GPU and 30% CPU that is the ratio of work that should be assigned for that request.
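A toy sketch (not actual server code) of that pro-rating idea: split the n tasks of an assignment in proportion to the seconds of work requested per resource.

```python
def prorate(n_tasks, requested_secs):
    """Split n_tasks across devices in proportion to requested seconds."""
    total = sum(requested_secs.values())
    shares = {dev: n_tasks * secs // total for dev, secs in requested_secs.items()}
    # hand any rounding leftover to the device with the largest request
    leftover = n_tasks - sum(shares.values())
    shares[max(requested_secs, key=requested_secs.get)] += leftover
    return shares
```

So a request that is 70% GPU seconds and 30% CPU seconds would get roughly a 70/30 split of the assigned tasks.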

ID: 1214303
kittyman
Volunteer tester

Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1214310 - Posted: 5 Apr 2012, 16:55:51 UTC - in response to Message 1214303.  



You can use the web preferences to tell the servers not to assign any GPU work, but remember to enable GPU again later. Or you can use either local or web preferences to reduce the amount of work being requested. If 360 GPU tasks would last for a day, set the host for one day of cache and it will ask for little more GPU work but enough to build CPU up to a day, and you'll probably get a decent mix.
                                                                 Joe

OR Dr. Anderson could amend the code to send a small percentage of the work requests to the slower resource instead of ignoring it entirely until the faster resource is glutted with work.

Work assigned should be pro-rated based on the percent requested. If a system asks for 70% GPU and 30% CPU that is the ratio of work that should be assigned for that request.

Perhaps, but I would settle for a heavy bias in favor of the fast resource, just that the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.

"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1214310
Horacio

Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1214324 - Posted: 5 Apr 2012, 17:29:54 UTC - in response to Message 1214310.  

Perhaps, but I would settle for a heavy bias in favor of the fast resource, just that the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.


+1

Anyway, I think that adding complexity to the schedulers to make those choices might not be a good idea... Maybe this choice should be made on the client side: if the host needs both kinds of work, it should only ask for work for the device that will be idle first...
And if it fails to get work after (let's say) 2 consecutive requests, it should ask for the next device in the list of devices sorted by "time to idle".
This way the servers will have less load and the scheduler will be simpler and faster. (They could even add a user parameter to specify how much priority to assign to each device, or make it as complex as they want, since none of this would impact server performance.)
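A toy sketch of that client-side idea: order the devices by how soon their queues run dry, and request work for the soonest-idle device first.

```python
def request_order(queued_secs):
    """Return devices sorted by time until they go idle, soonest first."""
    return sorted(queued_secs, key=queued_secs.get)
```

A host with a day of GPU work queued but only minutes of CPU work would then ask for CPU work first, instead of topping up the GPU again.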

ID: 1214324
kittyman
Volunteer tester

Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1214327 - Posted: 5 Apr 2012, 17:35:01 UTC - in response to Message 1214324.  

Perhaps, but I would settle for a heavy bias in favor of the fast resource, just that the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.


+1

Anyway, I think that adding complexity to the schedulers to make those choices might not be a good idea...

The scheduler is already complex enough to make the choice of which resource to allocate work to. Modifying the logic to allow a little bit to go to the slower resource should not tax the servers much more, if at all.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1214327
shizaru
Volunteer tester

Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1214343 - Posted: 5 Apr 2012, 18:05:29 UTC

There is no such thing as v6.03 or v6.10 work. Just multibeam work. Just as there is no such thing as CPU or GPU tasks. They are all the same. They just get assigned to a device when you request work.


Why can't that be done in BM? I mean, why can't "multibeam" tasks be downloaded and then be crunched by whatever (CPU or GPU)?

(I bet the answer is staring me in the face, or worse yet, I already know it... but my brain is a bit foggy ATM)
ID: 1214343
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1214397 - Posted: 5 Apr 2012, 23:46:12 UTC - in response to Message 1214327.  

Perhaps, but I would settle for a heavy bias in favor of the fast resource, just that the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.

+1

Anyway, I think that adding complexity to the schedulers to make those choices might not be a good idea...

The scheduler is already complex enough to make the choice of which resource to allocate work to. Modifying the logic to allow a little bit to go to the slower resource should not tax the servers much more, if at all.

The Best App Version idea I submitted to Dr. Anderson via boinc_dev a couple of weeks ago requires only a simple calculation based on how many seconds of work have been requested for each resource. The derived value predicts which resource will actually start crunching the task first (or, from the user perspective, which resource would have an idle instance first if it didn't get any work). After a task is assigned, the estimated run time is subtracted from the original request time, so the same calculation can be used for other available tasks. Dr. Anderson asked if there was a scenario where that method made the host more productive, and Nicolás Alvarez provided a clear one. So the idea has been tentatively accepted, though it is not to be implemented right away because it needs to be merged into the existing rather complex code.
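As a rough illustration only (the real proposal lives in boinc_dev; the names and the simple "most unfilled request time" proxy here are made up), the bookkeeping might look like:

```python
def assign_resources(task_est_secs, request_secs):
    """For each task, pick the resource predicted to start it first
    (modeled here as the one with the most unfilled request time),
    then subtract the task's estimated run time from that request."""
    assignments = []
    for est in task_est_secs:
        resource = max(request_secs, key=request_secs.get)
        assignments.append(resource)
        request_secs[resource] -= est
    return assignments
```

Because each assignment shrinks the winning resource's outstanding request, the other resources catch up, and work builds for all of them rather than one queue being filled to the exclusion of the rest.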

If both the core client and the server estimates of how long tasks will take are reasonable, work should build for all resources more or less evenly toward cache settings. Even if the estimates are significantly off or mismatched, some work should be assigned to all resources; going idle on any resource you've elected to use should only happen when there's simply not enough work to keep them all busy.

I didn't mention this earlier simply because the thread was aimed at current conditions, and this is vaporware so far. But speculation on improvements has entered the thread so I thought it best to sketch the bones of an idea which may be moving toward implementation.
                                                                  Joe
ID: 1214397



 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.