Message boards :
Number crunching :
Got *no* new work...
Author | Message |
---|---|
Mibe, ZX-81 16kb Send message Joined: 30 Jun 99 Posts: 42 Credit: 2,622,033 RAC: 0 |
Strange, even though I have prefs set for a 10-day cache, it runs dry without asking for any work. The only last-resort solution I could think of was to press the "update" button, since the BOINC magic doesn't do the trick, but no go. (BOINC Manager 6.4.5, SETI only.) My cache is empty (CPU is idling at 0%) and I have no other project (not interested). Message in output:
[SETI@home] Sending scheduler request: Requested by user. Requesting 0 seconds of work, reporting 0 completed tasks
[SETI@home] Scheduler request completed: got 0 new tasks
Any clues? (I have been happily churning along at 5.00 APs and 6.08 MBs up until now; all of a sudden everything ended. Why?) |
Luke Send message Joined: 31 Dec 06 Posts: 2546 Credit: 817,560 RAC: 0 |
"Mibe, ZX-81 16kb" wrote: Strange, even though I have prefs set for a 10-day cache, it runs dry without asking for any work. The only last-resort solution I could think of was to press the "update" button, since the BOINC magic doesn't do the trick, but no go. (BOINC Manager 6.4.5, SETI only.) My cache is empty (CPU is idling at 0%) and I have no other project (not interested).
Sometimes it is very easy to set "No New Tasks" and forget about it. Cover all bases, and then we can hold a differential diagnosis. - Luke. |
arkayn Send message Joined: 14 May 99 Posts: 4438 Credit: 55,006,323 RAC: 0 |
|
Luke Send message Joined: 31 Dec 06 Posts: 2546 Credit: 817,560 RAC: 0 |
Also, it looks like your AMD machine is still crunching with BOINC Manager version 5.2.13. Thought about giving it an upgrade to, say, 6.2.10 or 6.4.5? I would say that is getting a bit on the old side. The last two BOINC Manager version 5s I crunched with were 5.10.45 and 5.10.30, which in my terms were the safest and most stable releases to date. - Luke. |
Elphidieus Send message Joined: 1 Nov 02 Posts: 67 Credit: 3,140,607 RAC: 0 |
Having the same issue here. Not getting any new work despite setting the cache size to 10 days. The last thing that would have come to mind on my return to SETI was the servers being plagued with so many issues that hinder productivity. Time to go back to Einstein then... |
Gonad the Destroyer®©™ Send message Joined: 6 Aug 99 Posts: 204 Credit: 12,463,705 RAC: 0 |
1/24/2009 8:20:31 PM|SETI@home|Sending scheduler request: Requested by user. Requesting 0 seconds of work, reporting 4 completed tasks
1/24/2009 8:20:36 PM|SETI@home|Scheduler request completed: got 0 new tasks
I get this also. I haven't been able to get new WUs, slowly sending in the ones in cache... Sucks... heh |
Luke Send message Joined: 31 Dec 06 Posts: 2546 Credit: 817,560 RAC: 0 |
"Elphidieus" wrote: Having the same issue here. Not getting any new work despite setting the cache size to 10 days. The last thing that would have come to mind on my return to SETI was the servers being plagued with so many issues that hinder productivity.
Could you please post the first 20 lines of a restart of the BOINC Manager, as per what Arkayn said? Then we might be able to help you. - Luke. |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
"Luke" wrote: Also, it looks like your AMD machine is still crunching with BOINC Manager version 5.2.13. Thought about giving it an upgrade to, say, 6.2.10 or 6.4.5? I would say that is getting a bit on the old side. The last two BOINC Manager version 5s I crunched with were 5.10.45 and 5.10.30, which in my terms were the safest and most stable releases to date.
I'll second that. 5.10.45 or 6.2.19 are pretty much rock-solid. v5 keeps the executables and data (WUs and such) together in Program Files\BOINC; v6 splits the executable and data locations. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
Luke Send message Joined: 31 Dec 06 Posts: 2546 Credit: 817,560 RAC: 0 |
"Luke" wrote: Also, it looks like your AMD machine is still crunching with BOINC Manager version 5.2.13. Thought about giving it an upgrade to, say, 6.2.10 or 6.4.5? I would say that is getting a bit on the old side. The last two BOINC Manager version 5s I crunched with were 5.10.45 and 5.10.30, which in my terms were the safest and most stable releases to date.
Heh. Yes, I found that confusing when I went to install 6.4.5 and the Optimized Applications... Program Files or Program Data? LOL. - Luke. |
keeleysam Send message Joined: 17 Dec 03 Posts: 133 Credit: 60,478,373 RAC: 0 |
I had this problem on 6.4.5 today: it thought it would take 650 hours for each AP WU, so it wouldn't request any more. I upgraded to 6.6.2 and ran CPU benchmarks, and now have a full queue. |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
"Mibe, ZX-81 16kb" wrote: Strange, even though I have prefs set for a 10-day cache, it runs dry without asking for any work. "Elphidieus" wrote: Having the same issue here. Not getting any new work despite setting the cache size to 10 days. The problem is that both of you are running 10-day caches on a project with 7-day deadlines. BOINC Manager thinks you have too much work (you want 10 days' worth, but the deadlines are for 7) and is probably conservatively preventing you from actually achieving a 10-day cache because of this. You should set the cache to half the deadline of the project you plan on running, and/or be more realistic about the expected downtime of the servers. There's no need to cache 10 days' worth if the servers aren't down more than 3 or 4. |
Elphidieus Send message Joined: 1 Nov 02 Posts: 67 Credit: 3,140,607 RAC: 0 |
"Mibe, ZX-81 16kb" wrote: Strange, even though I have prefs set for a 10-day cache, it runs dry without asking for any work.
The problem for me (yes, for me) is that I will not have access to my machines for the upcoming week or more, so I will definitely need to fill up my WU cache to last my absence. And bear in mind that the recent WUs have a deadline of Feb 17th, so I don't see the point of 7-day deadlines, much less any arguments. Anyway, I've managed to download enough WUs to last me a week, so I rest my case. |
Mibe, ZX-81 16kb Send message Joined: 30 Jun 99 Posts: 42 Credit: 2,622,033 RAC: 0 |
Mystery partly solved, I hope. I found a global_prefs_override.xml in my BOINC directory that was set to a 0.25-day cache. Probably from the time I set the number of CPUs to 125% in the GUI when I was trying out the CUDA app? But shouldn't BOINC take all CPU cores into consideration when requesting work? All the time up until it finally requested new work at 03:19, some of the cores were idling. I have now changed the local prefs to a 3-day cache and right now have 10 WUs in the cache totalling 90 hours of work, which should take less than 23 hours to do on four cores. So if I want a 3-day cache, do I have to set my prefs to 10 days (3 days * 4 cores)? Thanks for your replies and insights!
Details: Here's my stdoutae.txt:
25-Jan-2009 00:42:43 [---] Exit requested by user
25-Jan-2009 00:45:51 [---] Starting BOINC client version 6.4.5 for windows_intelx86
25-Jan-2009 00:45:51 [---] log flags: task, file_xfer, sched_ops
25-Jan-2009 00:45:51 [---] Libraries: libcurl/7.19.0 OpenSSL/0.9.8i zlib/1.2.3
25-Jan-2009 00:45:51 [---] Data directory: C:\Documents and Settings\All Users\Application Data\BOINC
25-Jan-2009 00:45:51 [---] Running under account NNN
25-Jan-2009 00:45:51 [SETI@home] Found app_info.xml; using anonymous platform
25-Jan-2009 00:45:51 [---] Processor: 4 GenuineIntel Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz [x86 Family 6 Model 23 Stepping 7]
25-Jan-2009 00:45:51 [---] Processor features: fpu tsc pae nx sse sse2 mmx
25-Jan-2009 00:45:51 [---] OS: Microsoft Windows XP: Professional x86 Editon, Service Pack 2, (05.01.2600.00)
25-Jan-2009 00:45:51 [---] Memory: 2.00 GB physical, 2.35 GB virtual
25-Jan-2009 00:45:51 [---] Disk: 19.53 GB total, 12.74 GB free
25-Jan-2009 00:45:51 [---] Not using a proxy
25-Jan-2009 00:45:51 [---] CUDA devices found
25-Jan-2009 00:45:51 [---] Coprocessor: GeForce 9500 GT (1)
25-Jan-2009 00:45:52 [SETI@home] URL: http://setiathome.berkeley.edu/; Computer ID: 4758156; location: school; project prefs: school
25-Jan-2009 00:45:52 [---] General prefs: from SETI@home (last modified 17-Jan-2009 13:39:33)
25-Jan-2009 00:45:52 [---] Computer location: school
25-Jan-2009 00:45:52 [---] General prefs: using separate prefs for school
25-Jan-2009 00:45:52 [---] Reading preferences override file
25-Jan-2009 00:45:52 [---] Preferences limit memory usage when active to 1023.52MB
25-Jan-2009 00:45:52 [---] Preferences limit memory usage when idle to 1842.33MB
25-Jan-2009 00:45:52 [---] Preferences limit disk usage to 12.68GB
25-Jan-2009 00:45:52 [SETI@home] Restarting task ap_20dc08ag_B1_P1_00203_20090123_15858.wu_0 using astropulse version 500
25-Jan-2009 00:45:52 [SETI@home] Restarting task ap_20dc08ag_B2_P1_00133_20090123_21113.wu_0 using astropulse version 500
25-Jan-2009 00:56:52 [SETI@home] Sending scheduler request: Requested by user. Requesting 0 seconds of work, reporting 0 completed tasks
25-Jan-2009 00:56:57 [SETI@home] Scheduler request completed: got 0 new tasks
I was using a 2-day setting in prefs and my machine had no problem keeping WUs in the cache for all cores up until now, when this occurred. Since my cache was running out of work I thought that maybe 2 days was too short in the long run, and changed it to 10 days. But that didn't help, so I pressed the update button but got a request for 0 units. At the end of computing the last WU it finally decided to get more work:
25-Jan-2009 03:19:03 [SETI@home] Sending scheduler request: To fetch work. Requesting 66708 seconds of work, reporting 9 completed tasks
25-Jan-2009 03:19:08 [SETI@home] Scheduler request completed: got 7 new tasks
25-Jan-2009 03:19:10 [SETI@home] Started download of 16dc08ac.1676.241255.4.8.16
25-Jan-2009 03:19:10 [SETI@home] Started download of 16dc08ac.1676.241255.4.8.1
25-Jan-2009 03:19:14 [SETI@home] Finished download of 16dc08ac.1676.241255.4.8.16
25-Jan-2009 03:19:14 [SETI@home] Started download of 16dc08ac.1676.241255.4.8.11
25-Jan-2009 03:19:15 [SETI@home] Finished download of 16dc08ac.1676.241255.4.8.1
25-Jan-2009 03:19:15 [SETI@home] Started download of 16dc08aa.6399.21340.16.8.56
And started to work on them as soon as the download finished:
25-Jan-2009 03:19:15 [SETI@home] Starting 16dc08ac.1676.241255.4.8.16_0
25-Jan-2009 03:19:15 [SETI@home] Starting task 16dc08ac.1676.241255.4.8.16_0 using setiathome_enhanced version 608
25-Jan-2009 03:19:16 [SETI@home] Starting 16dc08ac.1676.241255.4.8.1_0
25-Jan-2009 03:19:16 [SETI@home] Starting task 16dc08ac.1676.241255.4.8.1_0 using setiathome_enhanced version 608
25-Jan-2009 03:53:39 [SETI@home] Computation for task ap_20dc08ag_B1_P1_00203_20090123_15858.wu_0 finished
So basically some of my cores were idle up until 03:19. |
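[For reference, the override file Mibe found lives in the BOINC data directory and silently overrides the web preferences at startup ("Reading preferences override file" in the log above). A minimal sketch of the two fields relevant here, with tag names as I recall them from BOINC 6.x (they may differ between versions, so treat this as illustrative rather than authoritative):]

```xml
<!-- global_prefs_override.xml (sketch; tag names from memory of
     BOINC 6.x, may vary by client version) -->
<global_preferences>
   <!-- the stale 0.25-day cache that caused the dry queue -->
   <work_buf_min_days>0.25</work_buf_min_days>
   <!-- left over from experimenting with 125% CPUs for CUDA -->
   <max_ncpus_pct>125.0</max_ncpus_pct>
</global_preferences>
```

[Deleting this file, or clearing it via the Manager's preferences dialog as Claggy suggests below in the thread, makes the client fall back to the web preferences.]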
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
Your messages still say:
25-Jan-2009 00:45:52 [---] Reading preferences override file
Try going into BOINC preferences, click Clear, then OK, and see if anything changes. Other things to check are that the web preferences you changed are for the location School, and not Default, Home or Work. And what is the Task duration correction factor for that PC? You can check that on its computer summary. Claggy |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
"Mibe, ZX-81 16kb" wrote: Strange, even though I have prefs set for a 10-day cache, it runs dry without asking for any work.
It's understandable that you'd want to keep your system busy while you're away, but if you have broadband(?) then it shouldn't be an issue. If not, then I can certainly understand why you'd want to try to keep the cache full, but understand that it's just not going to happen the way you expect it to. Even if some of your workunits do have a longer deadline, I wouldn't anticipate this to always be the case, and newer units downloaded later can have shorter deadlines, causing BOINC to go into High Priority mode if it thinks it won't finish those in time. Are you sure you've downloaded enough workunits to last a week? I've noticed that many of the latest are "shorties", finished within a half hour on my machines. Remember that if your TDCF is used to longer units or AstroPulse, then BOINC will think it has downloaded a ton of work, only to find out it completed it quicker than expected, and you'll return to an idle system anyway. Even if your system downloaded a lot of work, this seems to be a problem with the Berkeley servers giving out too much work, so I wouldn't expect that trend to continue either... I wouldn't rest my case so early and easily if I were you. |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
"Mibe, ZX-81 16kb" wrote: But shouldn't Boinc take all cpu-cores into consideration when requesting work?
As far as I know it does, but it also takes other factors into account when requesting work, such as: the amount of time the machine is on, the amount of that time BOINC is allowed to run, the efficiency of the particular CPU (known as TDCF), and a couple of other figures. Work requesting isn't a flat figure.
"Mibe, ZX-81 16kb" wrote: So if I want a 3 days cache, do I have to set my prefs to 10 days (3 days * 4 cores)?
No. The reason you've been idling isn't too small a cache; it's that comms with the servers have been bogged down, preventing people from getting work and/or uploading. Because you told BOINC you wanted to cache 10 days' worth of work on a project with short deadlines, BOINC took the conservative route and stopped requesting work to top off the cache, keeping queued work to a minimum to prevent deadline issues, so in fact you probably weren't getting a 10-day cache anyway. This is why I suggest setting the number lower, such as 3 or 4 days, which is much easier for BOINC to handle during work requests. |
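[The interplay described above, where a requested cache larger than the project's deadline gets clamped down, can be sketched roughly like this. This is an illustration of the idea only, not BOINC's actual work-fetch code; the function names, the half-deadline rule, and the on-fraction parameter are all assumptions for the example:]

```python
# Toy model: a cache request larger than the deadline allows gets
# clamped so queued work can still finish on time. Illustrative only;
# NOT the real BOINC scheduler logic.

def effective_cache_days(requested_days, deadline_days, safety_factor=0.5):
    """Clamp the user's requested cache to a fraction of the shortest
    deadline (half, per OzzFan's rule of thumb above)."""
    return min(requested_days, deadline_days * safety_factor)

def seconds_to_request(requested_days, deadline_days, queued_seconds,
                       on_fraction=1.0):
    """Seconds of work to ask the scheduler for: the cache target minus
    what is already queued, scaled by how often the machine runs BOINC."""
    target = effective_cache_days(requested_days, deadline_days) * 86400 * on_fraction
    return max(0, target - queued_seconds)

# A 10-day cache against a 7-day deadline behaves like a 3.5-day cache:
print(effective_cache_days(10, 7))   # 3.5
print(seconds_to_request(10, 7, 0))  # 302400
```

[The point of the sketch is only that raising the requested days past the clamp changes nothing, which matches the "you probably weren't getting 10 days anyway" observation.]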
Virtual Boss* Send message Joined: 4 May 08 Posts: 417 Credit: 6,440,287 RAC: 0 |
"OzzFan" wrote: Remember that if your TDCF is used to longer units or AstroPulse, then BOINC will think it has downloaded a ton of work, only to find out it completed it quicker than expected, and you'll return to an idle system anyway.
OzzFan - he is using AP v5 r103. I have found since changing to r103 that my TDCF has stabilised dramatically. His TDCF may be off because it is a new host (15 Jan 2009), but it should settle quickly. |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
"OzzFan" wrote: Remember that if your TDCF is used to longer units or AstroPulse, then BOINC will think it has downloaded a ton of work, only to find out it completed it quicker than expected, and you'll return to an idle system anyway.
I'm sure you're using 'stabilized' as a relative term. TDCF by nature is going to fluctuate if you have such dramatically different-sized workunits as SETI MB and AstroPulse sharing the same efficiency stat, which means BOINC's estimates of your machine's ability to crunch work at a given rate are always going to be off - at least until TDCF is separated by app, as has been hinted at for future BOINC releases. ...and as long as BOINC uses this skewed efficiency stat to calculate work fetch, the work fetch is always going to be off kilter. When the TDCF is high, BOINC will request less work to fill the cache even if the new workunits are small, and when the TDCF is low, BOINC will request more work to fill the cache even if the new workunits are large. |
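[The feedback loop described above, where one shared correction factor is dragged back and forth by two very different task types, can be sketched with a toy model. The update rule and names here are simplified assumptions for illustration, not BOINC's exact TDCF formula:]

```python
# Toy model of a shared Task Duration Correction Factor (TDCF):
# one factor serves both short MB tasks and long AP tasks, so
# whichever kind ran last skews the estimate for whatever runs next.
# Simplified smoothing rule; NOT BOINC's actual update formula.

def update_tdcf(tdcf, estimated_s, actual_s, weight=0.1):
    """Nudge the correction factor toward the observed
    actual/estimate ratio of the task just completed."""
    return tdcf + weight * (actual_s / estimated_s - tdcf)

tdcf = 1.0
# Crunch a long AstroPulse task that runs 3x its base estimate...
tdcf = update_tdcf(tdcf, estimated_s=36000, actual_s=108000)
# ...and the next MB "shortie" is now estimated 1.2x longer than
# it really is, so the client fetches less work than it should:
print(round(tdcf, 2))  # 1.2
```

[A per-app correction factor, as hinted at for future BOINC releases, removes exactly this cross-contamination between task types.]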
Virtual Boss* Send message Joined: 4 May 08 Posts: 417 Credit: 6,440,287 RAC: 0 |
By stabilised I mean that previously, after crunching an AP, the estimated completion times would inflate by ~50%, i.e. 15 mins to 22 mins for a shortie. Now, after crunching an AP, the time only changes by about 5-10 seconds. As the estimated time is calculated from the TDCF, this means there has been basically negligible change in the TDCF due to AP. |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
"Virtual Boss*" wrote: By stabilised I mean that -
Interesting. I just recently upgraded to r103 AP myself; I'll have to observe it further. If what you say is true, then it is more likely that the TDCF is off because it's a new host, as you suggested. Once the TDCF stabilizes, work fetch should operate more effectively in the future. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.