are MB 6.03's still being generated?



Message boards : Number crunching : are MB 6.03's still being generated?

EdwardPF
Volunteer tester
Send message
Joined: 26 Jul 99
Posts: 237
Credit: 53,359,154
RAC: 100,086
United States
Message 1214240 - Posted: 5 Apr 2012, 13:49:30 UTC
Last modified: 5 Apr 2012, 13:55:18 UTC

I'm down to 5 MB 6.03s (4 running and 1 waiting) and 250 6.10s (1 running, 249 waiting) BUT the last request for more work got 45 6.10s and NO 6.03s...

Are 6.03s being generated??

Ed F

edit 5 minutes later:

just requested CPU and GPU work again, and again got 45 MORE GPU WUs!!

Profile Mike
Project donor
Volunteer tester
Avatar
Send message
Joined: 17 Feb 01
Posts: 24039
Credit: 32,991,233
RAC: 23,101
Germany
Message 1214242 - Posted: 5 Apr 2012, 13:56:19 UTC

Yes, sure.

05.04.2012 15:18:43 SETI@home Scheduler request completed: got 41 new tasks
05.04.2012 15:18:43 SETI@home [sched_op] Server version 613
05.04.2012 15:18:43 SETI@home Project requested delay of 303 seconds
05.04.2012 15:18:43 SETI@home [sched_op] estimated total CPU task duration: 66172 seconds
05.04.2012 15:18:43 SETI@home [sched_op] estimated total ATI GPU task duration: 0 seconds

____________

Profile HAL9000
Volunteer tester
Avatar
Send message
Joined: 11 Sep 99
Posts: 4167
Credit: 114,037,169
RAC: 138,936
United States
Message 1214246 - Posted: 5 Apr 2012, 14:02:50 UTC - in response to Message 1214240.

I'm down to 5 MB 6.03s (4 running and 1 waiting) and 250 6.10s (1 running, 249 waiting) BUT the last request for more work got 45 6.10s and NO 6.03s...

Are 6.03s being generated??

Ed F

edit 5 minutes later:

just requested CPU and GPU work again, and again got 45 MORE GPU WUs!!

There is no such thing as v6.03 or v6.10 work, just multibeam work. Likewise there is no such thing as CPU or GPU tasks; they are all the same, and only get assigned to a device when you request work.

The version number you are seeing is the number in your app_info.xml, which corresponds to the default application version. If you look at the applications page you can see the current versions of all the various apps.
____________
SETI@home classic workunits: 93,865 CPU time: 863,447 hours

Join the BP6/VP6 User Group today!

EdwardPF
Volunteer tester
Send message
Joined: 26 Jul 99
Posts: 237
Credit: 53,359,154
RAC: 100,086
United States
Message 1214247 - Posted: 5 Apr 2012, 14:11:07 UTC - in response to Message 1214246.

That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F

Profile HAL9000
Volunteer tester
Avatar
Send message
Joined: 11 Sep 99
Posts: 4167
Credit: 114,037,169
RAC: 138,936
United States
Message 1214253 - Posted: 5 Apr 2012, 14:20:42 UTC - in response to Message 1214247.
Last modified: 5 Apr 2012, 14:22:57 UTC

That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F

BOINC normally operates by trying to fill up the work queue for the GPU first. Even when the client is "requesting work for CPU & GPU", most if not all of the tasks often get assigned to the GPU.

I can see why they would write it that way, but it doesn't make sense from a user point of view to keep filling up one queue at the expense of the other.

I imagine it would be far too much work to change the server/client so that the client just gets work and then runs it on the available devices.

Edit: When I say it will normally fill up the GPU first, it is because the GPU is normally the best device/app version.
____________
SETI@home classic workunits: 93,865 CPU time: 863,447 hours

Join the BP6/VP6 User Group today!

JohnDK
Project donor
Volunteer tester
Avatar
Send message
Joined: 28 May 00
Posts: 842
Credit: 44,200,118
RAC: 73,743
Denmark
Message 1214254 - Posted: 5 Apr 2012, 14:22:28 UTC - in response to Message 1214247.
Last modified: 5 Apr 2012, 14:23:14 UTC

That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F

Don't know; it happens sometimes. You could go into prefs and deselect GPU work; then you should only get CPU work. When you have enough CPU work, select GPU work again in prefs.

EdwardPF
Volunteer tester
Send message
Joined: 26 Jul 99
Posts: 237
Credit: 53,359,154
RAC: 100,086
United States
Message 1214257 - Posted: 5 Apr 2012, 14:28:31 UTC - in response to Message 1214253.

OK ... I guess all is well (???). The GPU queue is full (400 WUs) and now I'm getting CPU WUs (at least I have gotten 3 so far).

Thanks for the info!!

Ed F

Josef W. Segur
Project donor
Volunteer developer
Volunteer tester
Send message
Joined: 30 Oct 99
Posts: 4244
Credit: 1,047,369
RAC: 275
United States
Message 1214262 - Posted: 5 Apr 2012, 14:47:40 UTC - in response to Message 1214247.

That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually??

Is this a result of chance ... requesting lopsidedness ... or server assignment lopsidedness ... or something else??

Ed F

It's a combination of server assignment lopsidedness and asking for more work than can be assigned for one request. When you have both GPU and CPU capable of doing the tasks which are available, the server logic sends to the GPU first since it will process faster. When the GPU request is satisfied it will start assigning tasks to CPU. But if you're asking for the equivalent of ~70 GPU tasks or more, there's almost no chance of getting CPU work unless there's some which cannot be sent to the GPU.
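Joe's description of the assignment order can be sketched in pseudocode. Everything below (the function, the task tuples, the variable names) is invented for illustration; it is not actual BOINC server code:

```python
# Hypothetical sketch of "fill the fastest resource first": the server
# satisfies the GPU's requested seconds before giving the CPU anything,
# so a large GPU request starves the CPU of work.

def assign_tasks(gpu_request_secs, cpu_request_secs, tasks):
    """tasks: list of (name, est_gpu_secs, est_cpu_secs, gpu_capable).
    Returns the task names assigned to the GPU and to the CPU."""
    gpu_jobs, cpu_jobs = [], []
    for name, est_gpu, est_cpu, gpu_ok in tasks:
        if gpu_ok and gpu_request_secs > 0:
            gpu_jobs.append(name)           # GPU is faster, so it wins
            gpu_request_secs -= est_gpu
        elif cpu_request_secs > 0:
            cpu_jobs.append(name)           # CPU only gets the leftovers
            cpu_request_secs -= est_cpu
    return gpu_jobs, cpu_jobs
```

With this greedy rule, the CPU sees work only once the GPU request is exhausted, or when a task cannot run on the GPU, which matches the lopsidedness described above.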

You can use the web preferences to tell the servers not to assign any GPU work, but remember to enable GPU again later. Or you can use either local or web preferences to reduce the amount of work being requested. If 360 GPU tasks would last for a day, set the host for a one-day cache: it will ask for little more GPU work, but enough CPU work to build up to a day, and you'll probably get a decent mix.
Joe

Wembley
Volunteer tester
Avatar
Send message
Joined: 16 Sep 09
Posts: 415
Credit: 888,257
RAC: 0
United States
Message 1214303 - Posted: 5 Apr 2012, 16:44:40 UTC - in response to Message 1214277.



You can use the web preferences to tell the servers not to assign any GPU work, but remember to enable GPU again later. Or you can use either local or web preferences to reduce the amount of work being requested. If 360 GPU tasks would last for a day, set the host for one day of cache and it will ask for little more GPU work but enough to build CPU up to a day, and you'll probably get a decent mix.
Joe

OR Dr. Anderson could amend the code to send a small percentage of the work requests to the slower resource instead of ignoring it entirely until the faster resource is glutted with work.

Work assigned should be pro-rated based on the percentages requested. If a system asks for 70% GPU and 30% CPU, that is the ratio of work that should be assigned for that request.
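A minimal sketch of this pro-rata idea (the function and its signature are hypothetical, not a real BOINC routine):

```python
# Split a batch of assigned tasks in the same ratio as the requested
# seconds per resource, instead of filling the GPU queue first.

def prorate(gpu_request_secs, cpu_request_secs, n_tasks):
    """Return (gpu_share, cpu_share) of n_tasks, proportional to
    the seconds of work requested for each resource."""
    total = gpu_request_secs + cpu_request_secs
    if total == 0:
        return 0, 0
    gpu_share = round(n_tasks * gpu_request_secs / total)
    return gpu_share, n_tasks - gpu_share
```

So a 70%/30% request for 10 tasks would yield 7 GPU tasks and 3 CPU tasks, keeping both queues growing together.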

____________


Donate with your searches and online buys:
http://www.goodsearch.com/toolbar/university-of-california-setihome

Horacio
Send message
Joined: 14 Jan 00
Posts: 536
Credit: 73,374,590
RAC: 90,358
Argentina
Message 1214324 - Posted: 5 Apr 2012, 17:29:54 UTC - in response to Message 1214310.

Perhaps, but I would settle for a heavy bias in favor of the fast resource, just that the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.


+1

Anyway, I think that adding complexity to the schedulers to make those choices might not be really good... Maybe this choice should be taken on the client side: if the host needs both kinds of work, then it should only ask for the device that will be idle first...
And if it fails to get work after (let's say) 2 consecutive requests, then it should ask for the next device in the list of devices sorted by "time to idle".
This way the servers will have less load and the scheduler will be simpler and faster. (They could even add a user parameter to specify how much priority to assign to each device, or make it as complex as they want, because all this would not impact server performance.)
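Horacio's client-side scheme might look something like this sketch (all names are invented; this is not BOINC client code):

```python
# Ask for work only for the device that will run dry first, falling back
# to the next device after a couple of consecutive failed requests.

def next_device_to_ask(queues, failed_requests, max_failures=2):
    """queues: {device: seconds of buffered work}.
    failed_requests: {device: consecutive requests that returned no work}.
    Returns the device to request work for, or None if all have failed."""
    # Least buffered work first: that device will be idle soonest.
    for device in sorted(queues, key=queues.get):
        if failed_requests.get(device, 0) < max_failures:
            return device
    return None
```

The point of the design is that the server never has to weigh devices against each other; the client sends one simple single-resource request at a time.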

____________

Profile Alex Storey
Volunteer tester
Avatar
Send message
Joined: 14 Jun 04
Posts: 536
Credit: 1,649,021
RAC: 347
Greece
Message 1214343 - Posted: 5 Apr 2012, 18:05:29 UTC

There is no such thing as v6.03 or v6.10 work. Just multibeam work. Just as there is no such thing as CPU or GPU tasks. They are all the same. They just get assigned to a device when you request work.


Why can't that be done in BM? I mean, why can't "multibeam" tasks be downloaded and then be crunched by whatever (CPU or GPU)?

(I bet the answer is staring me in the face, or worse yet, I already know it... but my brain is a bit foggy ATM)

Josef W. Segur
Project donor
Volunteer developer
Volunteer tester
Send message
Joined: 30 Oct 99
Posts: 4244
Credit: 1,047,369
RAC: 275
United States
Message 1214397 - Posted: 5 Apr 2012, 23:46:12 UTC - in response to Message 1214327.

Perhaps, but I would settle for a heavy bias in favor of the fast resource, just that the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.

+1

Anyway, I think that adding complexity to the schedulers to make those choices might not be really good...

The scheduler is already complex enough to make the choice of which resource to allocate work to. Modifying the logic to allow a little bit to go to the slower resource should not tax the servers much more, if at all.

The Best App Version idea I submitted to Dr. Anderson via boinc_dev a couple of weeks ago requires only a simple calculation based on how many seconds of work have been requested for each resource. The derived value predicts which resource will actually start crunching the task first (or, from the user perspective, which resource would have an idle instance first if it didn't get any work). After a task is assigned, the estimated run time is subtracted from the original request time, so the same calculation can be used for other available tasks. Dr. Anderson asked if there was a scenario where that method made the host more productive, and Nicolás Alvarez provided a clear one. So the idea has been tentatively accepted, though it is not to be implemented right away because it needs to be merged into the existing rather complex code.
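As a rough guess at the bookkeeping Joe describes (the names and structure here are assumptions, not the actual boinc_dev submission):

```python
# Track outstanding requested seconds per resource; the resource with
# the largest unfilled request is predicted to go idle first, so it
# receives the next task. Assigned run time is then subtracted.

def pick_resource(requested_secs):
    """requested_secs: {resource: seconds of work still requested}."""
    return max(requested_secs, key=requested_secs.get)

def assign(requested_secs, resource, est_runtime_secs):
    """After assigning a task, shrink that resource's outstanding request
    so the next pick reflects the work already handed out."""
    requested_secs[resource] -= est_runtime_secs
```

Because the outstanding request shrinks with every assignment, work builds toward each resource's cache setting more or less evenly instead of glutting one queue.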

If both the core client and the server estimates of how long tasks will take are reasonable, work should build for all resources more or less evenly toward cache settings. Even if the estimates are significantly off or mismatched some work should be assigned for all resources, going idle on any resource you've elected to use should only happen when there's simply not enough work to keep them all busy.

I didn't mention this earlier simply because the thread was aimed at current conditions, and this is vaporware so far. But speculation on improvements has entered the thread so I thought it best to sketch the bones of an idea which may be moving toward implementation.
Joe


Copyright © 2014 University of California