Message boards :
Number crunching :
Enhanced Holding Completed Work Longer?
Karl Roos | Joined: 19 Mar 01 | Posts: 36 | Credit: 206,258,788 | RAC: 0
Since I switched to the enhanced clients a week or so ago, I've noticed that a number of my machines seem to be holding completed work much longer than the previous client did. When I look at these machines now, they often have more than 24 hours of completed work that has not been reported. Is there a setting to adjust this?

Also, I have a couple of machines that are only caching one or two workunits, while the other machines are caching 4-5 days' worth per my general settings. Is there a way to get the low-cache machines to grab more work?

My overall RAC is down over 20% with the enhanced client and still falling. I guess I don't understand why the powers that be felt it was necessary to mess with the scoring system. I have read that the new client has more optimization in it, and I am using crunch3r's optimized clients, but why does the net effect on RAC have to be 25% less than before? To drive us to other BOINC projects?
John McLeod VII | Joined: 15 Jul 99 | Posts: 24806 | Credit: 790,712 | RAC: 0
First, the optimized clients will not handle the enhanced scores correctly. The enhanced applications were built so that the amount of credit requested per result is approximately in line with the credit requested per result on a completely stock system. Recommendation: drop the optimized clients.

You also (hopefully) were using optimized applications. The applications in your app_info file are now completely wrong. Recommendation: delete (or rename) the file until you can add a new optimized application to it. The work requests are no different, but if you are still relying on the old optimized application, you will only get work when a result is returned along with a request for more work. Until the app_info file is removed, you should expect your queues to shrink slowly toward 0.

As for reporting work: work is reported on any project update. These occur at the following times:
1) A result is due within 24 hours.
2) (NEW) A result is due within "connect every X".
3) A result finished more than "connect every X" ago.
4) A request for more work.
5) A manual update.

Work will not be fetched if the host has entered "no work fetch" (NWF) because a deadline may be in danger of being missed. The solution is to reduce the cache size (if possible; modem users may not be able to reduce it much).

BOINC WIKI
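The five update triggers above can be sketched as a small decision function. This is a hypothetical illustration of the rules as described in the post, not actual BOINC client source; the names `Result`, `should_update`, and `connect_every_x` are all placeholders.

```python
# Sketch of the project-update triggers described above (not BOINC source).
from dataclasses import dataclass

DAY = 24 * 3600  # seconds

@dataclass
class Result:
    deadline: float        # seconds from now until the result is due
    completed_ago: float   # seconds since computation finished; -1 if unfinished

def should_update(results, connect_every_x, wants_work=False, manual=False):
    """Return True if the client should contact the project now.

    connect_every_x is the user's "connect every X" preference, in seconds.
    """
    for r in results:
        if r.deadline <= DAY:                  # 1) due within 24 hours
            return True
        if r.deadline <= connect_every_x:      # 2) due within connect-every-X
            return True
        if r.completed_ago > connect_every_x:  # 3) finished more than X ago
            return True
    return wants_work or manual                # 4) work request, 5) manual update
```

With a large "connect every X" (say 5 days), a result that finished 2 days ago triggers none of the conditions, which matches the behavior Karl describes: completed work sits unreported until one of the other triggers fires.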
Karl Roos | Joined: 19 Mar 01 | Posts: 36 | Credit: 206,258,788 | RAC: 0
Thanks John, but I don't think I understand all of your message. I am running the "standard" enhanced BOINC software with the crunch3r optimized S@H clients. I understood that the optimized S@H clients would result in higher RAC since they take advantage of CPU SSE, SSE2, and SSE3 instructions more than the standard S@H client, but only on the order of 20%. I replaced the app_info.xml file with a new one when I installed the optimized clients. With these clarifications, would you still recommend that I revert to the standard clients?

My caches of work do not seem to be diminishing at all; it's just that certain machines only get a couple of workunits, while the others get dozens (I have the general setting at 5 days, I believe). Is there a way to get a machine that is only caching a couple of WUs to grab more (like 5 days' worth)?

I don't understand reporting causes 2) and 3) in your list, but I know that with the previous versions of BOINC/S@H all my computers would report after completing one or two workunits at the most. Now, some of them will not have reported work even when sitting there with ~15 completed WUs. Whatever the reasons that these machines used to report more frequently, they are not doing so now. Can this reporting behavior be changed? Thanks.
n7rfa | Joined: 13 Apr 04 | Posts: 370 | Credit: 9,058,599 | RAC: 0
> Thanks John, but I don't think I understand all of your message. I am running the "standard" enhanced BOINC software with the crunch3r optimized S@H clients. I understood that the optimized S@H clients would result in higher RAC since they take advantage of CPU SSE, SSE2, and SSE3 instructions more than the standard S@H client, but only on the order of 20%. I replaced the app_info.xml file with a new one when I installed the optimized clients. With these clarifications, would you still recommend that I revert to the standard clients?

You could try Crunch3r's 5.5.0 version of BOINC. It reports immediately.
John McLeod VII | Joined: 15 Jul 99 | Posts: 24806 | Credit: 790,712 | RAC: 0
> Thanks John, but I don't think I understand all of your message. I am running the "standard" enhanced BOINC software with the crunch3r optimized S@H clients. I understood that the optimized S@H clients would result in higher RAC since they take advantage of CPU SSE, SSE2, and SSE3 instructions more than the standard S@H client, but only on the order of 20%. I replaced the app_info.xml file with a new one when I installed the optimized clients. With these clarifications, would you still recommend that I revert to the standard clients?

OK, if they are not getting any work at all, then there is possibly a problem with the app_info file. If they are sometimes getting work, they may be running into trouble with deadlines somehow. It is also possible that some of the machines have gotten high-angle-range results and others have gotten low-angle-range results; these take significantly different amounts of CPU time. What are your daily quotas for these machines?

If you have found an optimized enhanced version, then you have to get your app_info correct. One way to check whether the app_info file is the problem is to temporarily rename it.

#2 and #3 are based on the queue size.

BOINC WIKI
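For reference, an anonymous-platform app_info.xml generally follows the shape below. The executable name and version number here are placeholders, so substitute the ones that ship with the optimized package you actually downloaded; a mismatch between `version_num` and what the scheduler expects is a common reason a host stops receiving work.

```xml
<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <!-- Placeholder: use the executable from your optimized package -->
        <name>setiathome_enhanced_opt.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <!-- Placeholder: must match the application version the project sends work for -->
        <version_num>515</version_num>
        <file_ref>
            <file_name>setiathome_enhanced_opt.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```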
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. Astropulse is funded in part by the NSF through grant AST-0307956.