Message boards : Number crunching : are MB 6.03's still being generated?
EdwardPF Joined: 26 Jul 99 Posts: 389 Credit: 236,772,605 RAC: 374
I'm down to 5 MB 6.03s (4 running and 1 waiting) and 250 6.10s (1 running, 249 waiting), BUT the last request for more work got 45 6.10s and NO 6.03s... Are 6.03s still being generated?? Ed F
Edit, 5 minutes later: just requested CPU and GPU work again, and again got 45 MORE GPU WUs!!
Mike Joined: 17 Feb 01 Posts: 34258 Credit: 79,922,639 RAC: 80
Yes, sure.
05.04.2012 15:18:43 SETI@home Scheduler request completed: got 41 new tasks
05.04.2012 15:18:43 SETI@home [sched_op] Server version 613
05.04.2012 15:18:43 SETI@home Project requested delay of 303 seconds
05.04.2012 15:18:43 SETI@home [sched_op] estimated total CPU task duration: 66172 seconds
05.04.2012 15:18:43 SETI@home [sched_op] estimated total ATI GPU task duration: 0 seconds
With each crime and every kindness we birth our future.
HAL9000 Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57
I'm down to 5 MB 6.03s (4 running and 1 waiting) and 250 6.10s (1 running, 249 waiting), BUT the last request for more work got 45 6.10s and NO 6.03s...

There is no such thing as v6.03 or v6.10 work, just multibeam work. Just as there is no such thing as CPU or GPU tasks: they are all the same, and they simply get assigned to a device when you request work. The version number you are seeing is the number in your app_info.xml, which corresponds to the default application version. If you look at the applications page you can see the current versions for all of the various apps.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
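[For illustration: the version number HAL9000 mentions comes from an app_version entry in app_info.xml. A minimal sketch of such a fragment might look like the one below; the file names are made-up placeholders, not the actual stock application files.]

    <app_info>
        <app>
            <name>setiathome_enhanced</name>
        </app>
        <file_info>
            <name>setiathome_6.03_example.exe</name>  <!-- hypothetical file name -->
            <executable/>
        </file_info>
        <app_version>
            <app_name>setiathome_enhanced</app_name>
            <version_num>603</version_num>  <!-- shown as "6.03" in the task list -->
            <file_ref>
                <file_name>setiathome_6.03_example.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>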
EdwardPF Joined: 26 Jul 99 Posts: 389 Credit: 236,772,605 RAC: 374
That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually?? Is this a result of chance... requesting lopsidedness... or server assignment lopsidedness... or something else?? Ed F
HAL9000 Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57
That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually??

BOINC normally tries to fill up the work queue for the GPU first. Even when the client is "requesting work for CPU & GPU", most if not all of the tasks often get assigned to the GPU. I can see why they would write it that way, even though from a user point of view it doesn't make sense to keep filling up one queue at the expense of the other. I imagine it would be far too much work to change the server/client so that the client just gets work and then runs it on whatever devices are available.
Edit: When I say it will normally fill up the GPU first, that is because the GPU is normally the best device/app version.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
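[As a rough sketch of the GPU-first behavior HAL9000 describes: the server can be thought of as always feeding the fastest resource that still has an unsatisfied request before any slower one. The C++ below uses illustrative names (Resource, pick_resource), not actual BOINC scheduler code.]

    #include <vector>

    // Illustrative stand-ins, not real BOINC types.
    struct Resource {
        double seconds_requested;   // work the client asked for, in seconds
        double speed;               // estimated task throughput of this device
    };

    // Assign each available task to the fastest resource whose request is
    // not yet satisfied; slower resources only get work once the faster
    // one's request has been filled.
    int pick_resource(std::vector<Resource>& resources, double task_seconds) {
        int best = -1;
        for (size_t i = 0; i < resources.size(); i++) {
            if (resources[i].seconds_requested <= 0) continue;  // already satisfied
            if (best < 0 || resources[i].speed > resources[best].speed) best = (int)i;
        }
        if (best >= 0) resources[best].seconds_requested -= task_seconds;
        return best;  // -1 means no resource still wants work
    }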
JohnDK Joined: 28 May 00 Posts: 1222 Credit: 451,243,443 RAC: 1,127
That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually??

Don't know, it happens sometimes. You could go into prefs and deselect GPU work; then you should only get CPU work. When you have enough CPU work, select GPU work again in prefs.
EdwardPF Joined: 26 Jul 99 Posts: 389 Credit: 236,772,605 RAC: 374
O.k. ... I guess all is well (???). The GPU queue is full (400 WUs) and now I'm getting CPU WUs (at least I have gotten 3 so far). Thanks for the info!! Ed F
Josef W. Segur Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0
That is how I think I've heard it explained before :) (I think). So the 360 WUs assigned to my 1 GPU and the 5 assigned to my 4 CPUs is a fluke that should clear up eventually??

It's a combination of server assignment lopsidedness and asking for more work than can be assigned in one request. When you have both GPU and CPU capable of doing the tasks which are available, the server logic sends to the GPU first since it will process faster. When the GPU request is satisfied it will start assigning tasks to the CPU. But if you're asking for the equivalent of ~70 GPU tasks or more, there's almost no chance of getting CPU work unless there's some which cannot be sent to the GPU.

You can use the web preferences to tell the servers not to assign any GPU work, but remember to enable GPU again later. Or you can use either local or web preferences to reduce the amount of work being requested. If 360 GPU tasks would last for a day, set the host for one day of cache: it will ask for little more GPU work, but enough to build the CPU queue up to a day, and you'll probably get a decent mix.
Joe
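[A back-of-the-envelope illustration of Josef's point about request size. The per-reply ceiling of 70 tasks and the other numbers are assumptions for the example, not server constants.]

    #include <cstdio>

    int main() {
        const int max_tasks_per_request = 70;  // assumed ceiling on one scheduler reply
        int gpu_tasks_wanted = 360;            // a deep GPU cache request
        int cpu_tasks_wanted = 20;

        // The GPU is satisfied first, so it soaks up the whole reply...
        int gpu_granted = gpu_tasks_wanted < max_tasks_per_request
                              ? gpu_tasks_wanted : max_tasks_per_request;
        // ...and the CPU only gets whatever is left over, here nothing.
        int remaining = max_tasks_per_request - gpu_granted;
        int cpu_granted = cpu_tasks_wanted < remaining ? cpu_tasks_wanted : remaining;

        printf("GPU granted: %d, CPU granted: %d\n", gpu_granted, cpu_granted);
        return 0;
    }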
kittyman Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004
OR Dr. Anderson could amend the code to send a small percentage of the work requests to the slower resource instead of ignoring it entirely until the faster resource is glutted with work.
"Freedom is just Chaos, with better lighting." Alan Dean Foster
Wembley Joined: 16 Sep 09 Posts: 429 Credit: 1,844,293 RAC: 0
Work assigned should be pro-rated based on the percentage requested. If a system asks for 70% GPU and 30% CPU, that is the ratio of work that should be assigned for that request.
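[A minimal sketch of Wembley's pro-rating rule, in illustrative C++ (not existing scheduler code): split one reply in the same ratio as the request.]

    #include <cstdio>

    int main() {
        double gpu_share = 0.70, cpu_share = 0.30;  // requested resource shares
        int tasks_in_reply = 45;                    // tasks available this request

        // Round the GPU's pro-rated share, give the CPU the remainder.
        int gpu_granted = (int)(tasks_in_reply * gpu_share + 0.5);
        int cpu_granted = tasks_in_reply - gpu_granted;

        printf("GPU: %d, CPU: %d\n", gpu_granted, cpu_granted);  // 32 and 13
        return 0;
    }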
kittyman Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004
Perhaps, but I would settle for a heavy bias in favor of the fast resource, just so the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.
"Freedom is just Chaos, with better lighting." Alan Dean Foster
Horacio Joined: 14 Jan 00 Posts: 536 Credit: 75,967,266 RAC: 0
Perhaps, but I would settle for a heavy bias in favor of the fast resource, just so the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.

+1
Anyway, I think that adding complexity to the schedulers to make those choices might not be a good idea... Maybe this choice should be made on the client side: if the host needs both kinds of work, it should only ask for the device that will be idle first... And if it fails to get work after (let's say) 2 consecutive requests, it should ask for the next device in the list of devices sorted by "time to idle". This way the servers will have less load, and the scheduler will be simpler and faster. (They could even add a user parameter to specify how much priority you want to assign to each device, or make it as complex as they want, because none of this would impact server performance.)
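[A sketch of Horacio's client-side approach, with assumed names and the 2-request fallback threshold he suggests; this is not actual BOINC client code.]

    #include <algorithm>
    #include <vector>

    // Illustrative types, not real BOINC client structures.
    struct Device {
        const char* name;
        double seconds_until_idle;   // queued work remaining on this device
        int failed_requests;         // consecutive requests that returned no work
    };

    // Ask for the device that will run dry first; after max_failures
    // unanswered requests for it, move on to the next device in the
    // "time to idle" order, as the post suggests.
    Device* next_device_to_ask(std::vector<Device>& devices, int max_failures = 2) {
        std::sort(devices.begin(), devices.end(),
                  [](const Device& a, const Device& b) {
                      return a.seconds_until_idle < b.seconds_until_idle;
                  });
        for (Device& d : devices) {
            if (d.failed_requests < max_failures) return &d;
        }
        return nullptr;  // every device was refused recently; back off for now
    }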
kittyman Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004
Perhaps, but I would settle for a heavy bias in favor of the fast resource, just so the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.

The scheduler is already complex enough to make the choice of which resource to allocate work to. Modifying the logic to allow a little bit to go to the slower resource should not tax the servers much more, if at all.
"Freedom is just Chaos, with better lighting." Alan Dean Foster
shizaru Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0
There is no such thing as v6.03 or v6.10 work, just multibeam work. Just as there is no such thing as CPU or GPU tasks: they are all the same, and they simply get assigned to a device when you request work.

Why can't that be done in BM? I mean, why can't "multibeam" tasks be downloaded and then crunched by whatever is available (CPU or GPU)? (I bet the answer is staring me in the face, or worse yet, I already know it... but my brain is a bit foggy ATM)
Josef W. Segur Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0
Perhaps, but I would settle for a heavy bias in favor of the fast resource, just so the slower one is not ignored entirely. Having the CPUs go idle when the GPU cache cannot be filled makes no sense. Give the slow resource a little to work with at least.

The Best App Version idea I submitted to Dr. Anderson via boinc_dev a couple of weeks ago requires only a simple calculation based on how many seconds of work have been requested for each resource. The derived value predicts which resource will actually start crunching the task first (or, from the user perspective, which resource would have an idle instance first if it didn't get any work). After a task is assigned, the estimated run time is subtracted from the original request time, so the same calculation can be used for the other available tasks.

Dr. Anderson asked if there was a scenario where that method made the host more productive, and Nicolás Alvarez provided a clear one. So the idea has been tentatively accepted, though it is not to be implemented right away because it needs to be merged into the existing, rather complex code.

If both the core client's and the server's estimates of how long tasks will take are reasonable, work should build for all resources more or less evenly toward the cache settings. Even if the estimates are significantly off or mismatched, some work should be assigned to all resources; going idle on any resource you've elected to use should only happen when there's simply not enough work to keep them all busy.

I didn't mention this earlier simply because the thread was aimed at current conditions, and this is vaporware so far. But speculation on improvements has entered the thread, so I thought it best to sketch the bones of an idea which may be moving toward implementation.
Joe
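[A sketch of the calculation as Josef describes it; the names and structure are guesses for illustration, and the actual boinc_dev proposal may differ.]

    #include <vector>

    // Treat the unfilled seconds requested for each resource as a proxy for
    // how soon it will have an idle instance: assign each task to the
    // resource with the largest remaining shortfall, then charge the task's
    // estimated runtime against that request and repeat for the next task.
    struct Resource {
        double seconds_still_requested;  // unfilled part of the work request
        double est_task_runtime;         // estimated runtime of one task here
    };

    int assign_next_task(std::vector<Resource>& res) {
        int best = -1;
        for (size_t i = 0; i < res.size(); i++) {
            if (res[i].seconds_still_requested <= 0) continue;
            if (best < 0 || res[i].seconds_still_requested >
                            res[best].seconds_still_requested) {
                best = (int)i;
            }
        }
        if (best >= 0) {
            // subtract the estimated run time from the original request,
            // so the same calculation works for the next available task
            res[best].seconds_still_requested -= res[best].est_task_runtime;
        }
        return best;  // -1: all requests satisfied
    }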