Message boards : Number crunching : 176 hours to completion??
Grecu Ionut - Florin | Joined: 4 Oct 06 | Posts: 10 | Credit: 14,554 | RAC: 0
Hello! I've noticed lately that tasks take a long time to complete, but 176 hours!? Isn't that too much? I have a 2 GHz Core 2 Duo processor and an NVIDIA 9600M GT, so I don't know... maybe it's normal. What do you think? Has anyone else encountered such a long time to completion?
Gundolf Jahn | Joined: 19 Sep 00 | Posts: 3184 | Credit: 446,358 | RAC: 0
What kind of task, MultiBeam or AstroPulse? Since the online task lists are still mostly messed up, I can't check myself. Regards, Gundolf. Computers aren't everything in life. (Just a little joke.) SETI@home classic workunits: 3,758 | SETI@home classic CPU time: 66,520 hours
EdwardPF | Joined: 26 Jul 99 | Posts: 389 | Credit: 236,772,605 | RAC: 374
I'm seeing the same thing on my 4 GHz P4: the estimated completion time for CPU WUs is now at 200+ hours. The actual time seems to be about 9 hours... my task duration correction factor has gone from 0.11 to 6.510421. S@H Enhanced, no AP. Ed F
Hellsheep | Joined: 12 Sep 08 | Posts: 428 | Credit: 784,780 | RAC: 0
> I'm seeing the same thing on my 4 GHz P4: the estimated completion time for CPU WUs is now at 200+ hours. The actual time seems to be about 9 hours... my CPU time correction factor has, for some reason, gone from 0.11 to 6.something.

What application are you using? If you're using optimized apps, they don't contain the <flops></flops> info for your GPU or CPU; those have to be added manually, which, if done without care, can abort all your work units. Also, it may just be that because the server is down and tasks can't be reported, it thinks the tasks aren't completing. (Most likely not, though.) - Jarryd
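For reference, the <flops> tag Jarryd mentions belongs inside the <app_version> section of the client's app_info.xml. A minimal sketch of where it goes is below; the application name, executable file name, version number, and the 3.0e9 (3 GFLOPS) figure are purely illustrative and would have to match your own installation:

```xml
<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>MB_hypothetical_opt_app.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <!-- Estimated speed of this resource in FLOPS; illustrative value only. -->
        <flops>3.0e9</flops>
        <file_ref>
            <file_name>MB_hypothetical_opt_app.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

Edit the file only with BOINC stopped and keep a backup: as noted above, a malformed app_info.xml can cause the client to abort all cached work units.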
EdwardPF | Joined: 26 Jul 99 | Posts: 389 | Credit: 236,772,605 | RAC: 374
Standard app. Ed F
Hellsheep | Joined: 12 Sep 08 | Posts: 428 | Credit: 784,780 | RAC: 0
Are you trying to process AstroPulse work units or MB? And are they CUDA or CPU work units? - Jarryd
soft^spirit | Joined: 18 May 99 | Posts: 6497 | Credit: 34,134,168 | RAC: 0
The units from another project are taking over a week to process: 200+ hours total, CPU only. And that is on a quad core (2 cores available to BOINC, 2 are MINE!!!!!). The GPU units tend to go much faster, 2-3 hours on average for me. But again, the type can make a big difference; AstroPulse can take a long time to process.
Keith | Joined: 19 May 99 | Posts: 483 | Credit: 938,268 | RAC: 0
I guess this is the AstroPulse task which is 2nd on your task list. The "name" of the work unit/task can be seen by going to the task list, changing the offset in the URL from offset=0 to offset=2, and then clicking on the displayed number. Keith
Grecu Ionut - Florin | Joined: 4 Oct 06 | Posts: 10 | Credit: 14,554 | RAC: 0
It's an AstroPulse task. Elapsed 15 h, remaining 167 h. It's strange it takes so much time. I don't know if it's a GPU or CPU task, but I've enabled GPU from the BOINC activity menu. I'm using the latest BOINC version.
Josef W. Segur | Joined: 30 Oct 99 | Posts: 4504 | Credit: 1,414,761 | RAC: 0
> It's an AstroPulse task. Elapsed 15 h, remaining 167 h. It's strange it takes so much time. I don't know if it's a GPU or CPU task, but I've enabled GPU from the BOINC activity menu. I'm using the latest BOINC version.

There isn't any nVidia GPU application for AstroPulse yet. The estimates for AstroPulse tasks are long; the calibration of the difference between S@H Enhanced and AstroPulse was done at SETI Beta a couple of years ago, before the most recent changes to the AstroPulse code base. With your 2 GHz Core 2 Duo I'd expect the actual time to be well under 72 hours. Joe
Ianab | Joined: 11 Jun 08 | Posts: 732 | Credit: 20,635,586 | RAC: 5
What's the percentage completed after those 15 hours? That's a better estimate of the time it's going to take. My Core2 is at 55% after 9 hours, but that's with the optimised app. It would take a bit longer with the standard app. Ian
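Ian's rule of thumb is just a linear projection from the completed fraction. A quick sketch of that arithmetic, using the numbers from his post:

```python
def projected_total_hours(elapsed_hours, percent_done):
    """Linearly project total runtime from elapsed time and % complete."""
    return elapsed_hours / (percent_done / 100.0)

# 55% done after 9 hours, as in the post above:
total = projected_total_hours(9.0, 55.0)
print(f"projected total: {total:.1f} h")   # about 16.4 h
print(f"remaining: {total - 9.0:.1f} h")   # about 7.4 h
```

Progress on real tasks isn't perfectly linear, so treat the result as a rough estimate rather than a promise.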
Hellsheep | Joined: 12 Sep 08 | Posts: 428 | Credit: 784,780 | RAC: 0
Correct. On my AstroPulse work units I have estimates of 200 hours, although they usually complete after only 11 hours on my Q9550. It's just that the estimate calculation isn't exactly right at the moment; I'm sure this is something that could be suggested, and maybe something can be done about it in the future. - Jarryd
Josef W. Segur | Joined: 30 Oct 99 | Posts: 4504 | Credit: 1,414,761 | RAC: 0
> Correct. On my AstroPulse work units I have estimates of 200 hours, although they usually complete after only 11 hours on my Q9550.

The new credit system just introduced here should take care of it. The servers keep track of performance for each application version separately and adjust the estimate based on the specific host's performance. As noted elsewhere, it will take a while for the servers to have enough data to attempt that, and there may be bugs too. But unless there's some fundamental flaw in the math, estimates ought to become much better. The transition is likely to try our patience. Joe
Hellsheep | Joined: 12 Sep 08 | Posts: 428 | Credit: 784,780 | RAC: 0
True, but as with all changes some people will dislike it; in the end it usually turns out to be beneficial. :) I've only been back 2 weeks now, but it's great to see the amount of effort everyone puts in here. I feel our chances of finding little green men are getting better every day. - Jarryd
EdwardPF | Joined: 26 Jul 99 | Posts: 389 | Credit: 236,772,605 | RAC: 374
By the way, regarding:
> If you're using optimized apps, they don't contain the <flops></flops> info for your GPU or CPU; those have to be added manually, which, if done without care, can abort all your work units.

Is the <flops> number the sum of the two cards, or the average of them? Ed F
Josef W. Segur | Joined: 30 Oct 99 | Posts: 4504 | Credit: 1,414,761 | RAC: 0
> Is the <flops> number the sum of the two cards, or the average of them?

Average. It's used just like the CPU Whetstone benchmark, as an indicator of how fast one resource is. The new credit system should only use that initially; after it has enough performance data for what it considers a good average, it will use that rather than <flops>, the GPU's advertised speed, or the CPU Whetstone benchmark. Joe
BMaytum | Joined: 3 Apr 99 | Posts: 104 | Credit: 4,382,041 | RAC: 2
> If you're using optimized apps, they don't contain the <flops></flops> info for your GPU or CPU; those have to be added manually, which, if done without care, can abort all your work units.

Please advise how I can add <flops></flops> for my Core 2 Duo 3 GHz CPU (100% share) for MB units; I am using Lunatics optimized apps. Typically this CPU crunches MB WUs in 5,000-6,000 CPU secs (see http://setiathome.berkeley.edu/results.php?hostid=5185956), but after I "upgraded" to BOINC 6.10.56 from 6.10.18, the To Completion estimate jumped from a realistic 1.5-2 hours to 18-20 hours per WU; consequently I get only 1 or 2 WUs for the CPU (thankfully the GPU To Completion estimate remained a realistic 20-30 minutes/WU). Sabertooth Z77, i7-3770K@4.2GHz, GTX680, W8.1Pro x64 | P5N32-E SLI, C2D E8400@3GHz, GTX580, Win7SP1Pro x64 & PCLinuxOS2015 x64
jason_gee | Joined: 24 Nov 06 | Posts: 7489 | Credit: 91,093,184 | RAC: 0
> Please advise how I can add <flops></flops> for my Core 2 Duo 3 GHz CPU (100% share) for MB units; I am using Lunatics optimized apps.

As of about 1 hour ago, I'd advise: don't. The server-side code is changing in weird ways, so it'd be better to leave these out (at least for now). "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
BMaytum | Joined: 3 Apr 99 | Posts: 104 | Credit: 4,382,041 | RAC: 2
Weird ways, yes, so it seems! I'll let mine run as-is for now, as you recommended. Sabertooth Z77, i7-3770K@4.2GHz, GTX680, W8.1Pro x64 | P5N32-E SLI, C2D E8400@3GHz, GTX580, Win7SP1Pro x64 & PCLinuxOS2015 x64
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14650 | Credit: 200,643,578 | RAC: 874
> As of about 1 hour ago, I'd advise: don't. The server-side code is changing in weird ways, so it'd be better to leave these out (at least for now).

Yes, I think I'd go along with that. In particular, I'd advise people not to change their current setup while Berkeley is changing theirs. If you have <flops> in there, leave it in; if you don't, leave it out. Otherwise, you'll be chasing Berkeley up and down the DCF ladder till the cows come home.

The new server code, as implemented at Beta (not guaranteed to be the same here), tries to adjust things so that all tasks, whichever application runs them, estimate runtime correctly at a client DCF of 1.000000.

I suspect, but don't yet know, that this may cause problems for those of us who use an app_info and some (any) form of rebranding. Because the server is doing the correcting, it may apply different 'corrections' to different work allocations; this would lead to some weird estimates for rebranded work. It may be that we end up having to apply new <flops> values, possibly five times larger than previously, so that we cooperate with the server in reaching that DCF target of 1.0 instead of fighting against it. It will take time for that to become clear.

What is certain is that if you suddenly declare your computer to be five times faster than it previously was, you'll download a whole heap of new work. And since you'll be lying (the computer won't actually be any faster), it will take time to work it off. The changes will be easier and safer if you make them while you have a small cache, so I recommend starting to reduce your cache size if you expect to fine-tune the machine later. Then, when the transition is complete and estimates have settled at the new equilibrium, it's up to you whether you put the cache back to a larger setting.
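The interplay Richard describes follows from the way the client estimates runtime: roughly the task's estimated floating-point operations divided by the claimed speed, scaled by the duration correction factor. A minimal sketch, with hypothetical numbers, of why a five-times-larger <flops> value cuts every estimate fivefold and so invites roughly five times as much work into the cache:

```python
def estimated_seconds(rsc_fpops_est, flops, dcf=1.0):
    """Simplified BOINC runtime estimate: task size / claimed speed, scaled by DCF."""
    return rsc_fpops_est / flops * dcf

task_fpops = 30e12   # hypothetical task size in floating-point operations
claimed = 3.0e9      # hypothetical <flops> entry: 3 GFLOPS

print(estimated_seconds(task_fpops, claimed))        # 10000.0 s
# Declaring the host five times faster shrinks the estimate fivefold,
# so the scheduler sends about five times as many tasks to fill the cache:
print(estimated_seconds(task_fpops, claimed * 5))    # 2000.0 s
```

This is a simplification of the real scheduler arithmetic, but it shows why inflating <flops> while the actual hardware is unchanged leaves the host with far more work than it can turn around.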
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.