Message boards : Number crunching : Lack of SoG WUs
AMDave | Joined: 9 Mar 01 | Posts: 234 | Credit: 11,671,730 | RAC: 0
Over the past several weeks, I haven't received any SoG WUs. It's strictly been 8.12 *sah or 8.00 cuda50. Runtimes have increased and the cuda50s can be dramatically inconsistent. For instance, there were 2 WUs with the exact same AR (0.017676); one ran for 4m 42s, the other for 44m 54s. Did I miss a memo somewhere? I also have my share of overflows from the JAN & FEB Arecibo WUs of '16, one of which had an AR of 139.571960.
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
Dave, I'm assuming you are running stock apps? Have you looked at using the lunatics v45 beta 4 installer to get SoGs? I'm guessing the lack of SoG for your rig may be due to the server thinking the sah and cuda 50 are faster for your machine.
rob smith | Joined: 7 Mar 03 | Posts: 22190 | Credit: 416,307,556 | RAC: 380
There is no difference between the actual work units; all are "created equal". At the time of distribution they are targeted at a particular application. When using the stock applications the servers attempt to identify which application works best on your system and, as Zalster says, it would appear that they have decided that SaH and CUDA50 perform best on your system. If you decide to use the optimised applications, as suggested, then you can select which applications to run. This may give you better performance, but you are responsible for updates and for choosing the correct applications for both the CPU and GPU, plus keeping them updated - all of which is done for you when using the stock applications. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe?
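The selection being described amounts to comparing the processing rates the server has measured for each candidate app version on the host. A minimal sketch of that idea, with hypothetical version names and rates (not the actual BOINC scheduler code):

```python
# Rough sketch of stock app-version selection as described above.
# Not the real BOINC scheduler code; the version names and numbers
# are illustrative assumptions only.

apr_by_version = {
    "setiathome_v8 8.00 cuda50": 95.0,              # measured GFLOPS-equivalent (hypothetical)
    "setiathome_v8 8.12 (sah)": 20.0,               # hypothetical
    "setiathome_v8 8.22 opencl_nvidia_SoG": None,   # no reliable measurement yet
}

def pick_version(rates):
    """Prefer versions that still lack a measured rate (trial phase),
    otherwise pick the fastest measured one."""
    untested = [v for v, apr in rates.items() if apr is None]
    if untested:
        return untested[0]          # give the untried app a chance first
    return max(rates, key=rates.get)

print(pick_version(apr_by_version))
```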
AMDave | Joined: 9 Mar 01 | Posts: 234 | Credit: 11,671,730 | RAC: 0
"Dave, I'm assuming you are running stock apps?"
Yes.
"Have you looked at using the lunatics v45 beta 4 installer to get SoGs?"
It's been in the back of my mind.
"I'm guessing the lack of SoG for your rig may be due to the server thinking the sah and cuda 50 are faster for your machine." and "When using the stock applications the servers attempt to identify which application works best on your system and, as Zalster says, it would appear that they have decided that SaH and CUDA50 perform best on your system."
Right. I know that from time to time the server will try a different app to determine whether, in fact, the appropriate app is being used on a rig. Previously, however, this has not happened for such an extended period of time. How about resetting the project?
rob smith | Joined: 7 Mar 03 | Posts: 22190 | Credit: 416,307,556 | RAC: 380
Resetting might work, but you would probably end up in exactly the same situation. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe?
Mike | Joined: 17 Feb 01 | Posts: 34255 | Credit: 79,922,639 | RAC: 80
"Resetting might work, but you would probably end up in exactly the same situation."
Unlikely. Both OpenCL apps have a better APR and are much faster on his host. With each crime and every kindness we birth our future.
AMDave | Joined: 9 Mar 01 | Posts: 234 | Credit: 11,671,730 | RAC: 0
Resetting cleared the cache, and it then downloaded 47 v8.12 *sah WUs. I'll give it some time for the cache to fully restock. If I still don't get any SoGs, then maybe I've found an 'undocumented feature'. [EDIT] It's official, I've an undocumented feature. By the way, what causes the warning that appears for Task 5179311603? This was the first WU processed after I reset the project.
Grant (SSSF) | Joined: 19 Aug 99 | Posts: 13731 | Credit: 208,696,464 | RAC: 304
"Resetting might work, but you would probably end up in exactly the same situation."
But depending on the work mix at the time the Manager figures out which is faster, it could easily end up not picking SoG again. Because of the work mix when I was running stock at Beta, my system ended up selecting CUDA42, the slowest of all the applications for my system. To get it to try again I just ran multiple WUs at the same time to bring the APR for the selected application down. Then the manager did its round-robin trial again and this time came up with SoG as the best option. Then I went back to 1 WU at a time. Grant Darwin NT
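A toy illustration of why that trick works, assuming crudely that running N tasks at once multiplies each task's elapsed time by N; the flop estimate and runtime below are invented numbers, and the real average is smoothed over many tasks rather than taken from a single sample:

```python
# Toy numbers only: how concurrent running drags the measured rate down,
# which can push the chosen app below the alternatives and trigger a re-trial.

fpops_est = 19e12        # assumed <rsc_fpops_est> for one task
elapsed_single = 600.0   # seconds when run one at a time (assumed)

for concurrent in (1, 2, 3):
    elapsed = elapsed_single * concurrent        # crude linear-slowdown assumption
    rate_gflops = fpops_est / elapsed / 1e9      # estimated flops / actual elapsed time
    print(f"{concurrent} task(s) at a time -> ~{rate_gflops:.0f} GFLOPS measured")
```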
jason_gee | Joined: 24 Nov 06 | Posts: 7489 | Credit: 91,093,184 | RAC: 0
The APR is calculated using averages. It's the lazy choice. Naturally we would choose the right tool for the right job, which usually would involve looking at the different work. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
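The "average" in question is essentially a running mean of (estimated flops / actual elapsed time) over a host's completed tasks. A minimal sketch, with a plain running mean and made-up task numbers (the real server-side average is computed somewhat differently):

```python
# Minimal sketch of an averaged processing rate, as discussed above.
# The plain running mean and the task numbers are assumptions for
# illustration, not the exact server-side formula.

class AvgProcessingRate:
    def __init__(self):
        self.samples = []

    def update(self, fpops_est, elapsed_s):
        # one sample = estimated flops / actual elapsed time, in GFLOPS
        self.samples.append(fpops_est / elapsed_s / 1e9)

    @property
    def apr(self):
        return sum(self.samples) / len(self.samples)

# Two tasks with the same flop estimate but very different runtimes (as in
# the opening post) get folded into one number; the average keeps no record
# of *why* they differed (e.g. angle range or noisy overflows).
a = AvgProcessingRate()
a.update(19e12, 282)    # 4 m 42 s
a.update(19e12, 2694)   # 44 m 54 s
print(f"APR ~ {a.apr:.1f} GFLOPS")
```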
Grant (SSSF) | Joined: 19 Aug 99 | Posts: 13731 | Credit: 208,696,464 | RAC: 304
"The APR is calculated using averages. It's the lazy choice. Naturally we would choose the right tool for the right job, which usually would involve looking at the different work."
An APR based on Angle Range & WU source? Grant Darwin NT
Mike | Joined: 17 Feb 01 | Posts: 34255 | Credit: 79,922,639 | RAC: 80
Sometimes I'm wondering whether the people writing such code are living on the same planet. Don't get me wrong, I'm used to cheating, but most of the users are not. With each crime and every kindness we birth our future.
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14650 | Credit: 200,643,578 | RAC: 874
"The APR is calculated using averages. It's the lazy choice. Naturally we would choose the right tool for the right job, which usually would involve looking at the different work."
APR, being a BOINC measurement used for all applications and at all projects, can't take parochial constructs like AR into account. Simple example: Astropulse tasks don't have an AR. The only project-specified value it has available to consider is <rsc_fpops_est>.
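Put differently, the only place project knowledge such as AR can enter is the splitter's <rsc_fpops_est>; downstream, the estimate is essentially that number divided by the host's measured rate. A simplified sketch with illustrative values (the real code applies further correction factors):

```python
# Simplified runtime estimate from the one project-supplied number,
# <rsc_fpops_est>, and the host's measured APR. Illustrative values only.

def estimated_runtime_s(rsc_fpops_est, apr_gflops):
    return rsc_fpops_est / (apr_gflops * 1e9)

# If the splitter's AR-dependent term in rsc_fpops_est is off, the runtime
# estimate is off by the same factor -- the APR itself never sees the AR.
print(f"{estimated_runtime_s(19e12, 60.0):.0f} s")   # assumed 19 TFLOP task on a 60 GFLOPS host
```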
petri33 | Joined: 6 Jun 02 | Posts: 1668 | Credit: 623,086,772 | RAC: 156
Hi, the AR doesn't affect the CPU tasks much. I've seen two major categories. Just before I headed off for a short family trip to a holiday resort I had a major vision of how to improve triplet finding for tasks with a low or very low AR. I'll ponder that over and over in my mind, try to still be present for my family, and then finally implement it late on Sunday evening or Monday afternoon. I'm not sure it will reduce the processing time of GBT packets, but this is something I have to test and try. The idea is similar to the -unroll in the special version that cut the time in half. Now some ()___)________))))~~~ and sleep. To overcome Heisenbergs: "You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
jason_gee | Joined: 24 Nov 06 | Posts: 7489 | Credit: 91,093,184 | RAC: 0
"The APR is calculated using averages. It's the lazy choice. Naturally we would choose the right tool for the right job, which usually would involve looking at the different work."
Correct, using old functional (software) design strategies and incomplete maths, generalised for use in as wide a context as possible. However, nowadays there are cleaner, easier-to-maintain ways to separate the domain-specific knowledge (e.g. AR) than rebuilding a whole splitter from source every time something new comes along.
Current method:
- either estimate, derive methodically, or guess <rsc_fpops_est> (well or badly) -- this happens to include an AR-dependent term (hardwired in the splitter)
- scale the estimates, at task issue, using averages
Proposed method:
- either estimate, derive methodically, or guess <rsc_fpops_est> (well or badly)
- formalise the domain (task) specific component (such as AR) as a plugin transfer function (it is currently a transfer function in multibeam, but not very pluginable or useful for other work/project types)
- scale the estimates, at task issue, using actual estimate-localisation techniques
The pros of the current method are that it's there and it works (sort of). The cons are that any change or addition requires building a new splitter (risky), that estimates reaching into scheduler issue time are sensitive to the problems of using averages, and that there are no visible quality metrics to say whether it's right or wrong (other than users pointing and saying it looks borked when their estimates go wacky).
The pros of the equivalent but more refined second approach are the flexibility to make domain-specific changes (e.g. add a telescope) as determining parameters without having to rebuild or maintain core code, the ability to choose starting values from the closest existing set of knowledge, and that functions and their problems stay isolated to their own domain.
The current method is a bespoke solution to a general problem, where the general problem has been misidentified. That is, the problem as coded for in BOINC is making estimates for every kind of task on every project in the same way, when the real underlying problem is that unique tasks, hosts and applications all require different estimates, and some adaptiveness. I guess it might come down to the quest for one-size-fits-all solutions ignoring the information available, for the sake of a prescriptive regime that ends up guaranteeing projects have to modify code when they'd rather be focused on their particular science. There are tools and techniques for making estimates for the purposes of control, and averages are nearly universally considered bad.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
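One way to read the "plugin transfer function" proposal is as an interface where each work type supplies its own mapping from task parameters to a flop estimate, while the core scheduling code stays generic. The sketch below is purely hypothetical; none of these names or classes exist in BOINC or the SETI@home splitter:

```python
# Hypothetical sketch of the "plugin transfer function" idea described above.
# It only illustrates separating domain knowledge (e.g. angle range) from the
# generic estimating code; nothing here is an existing API.

from typing import Protocol


class EstimatePlugin(Protocol):
    def fpops_est(self, task_params: dict) -> float:
        """Map task-specific parameters to an estimated flop count."""
        ...


class MultibeamPlugin:
    def fpops_est(self, task_params: dict) -> float:
        ar = task_params["angle_range"]
        # Made-up AR-dependent curve, standing in for the splitter's hard-wired term.
        return 20e12 * (1.0 + 0.5 / max(ar, 0.05))


class AstropulsePlugin:
    def fpops_est(self, task_params: dict) -> float:
        return 40e12  # AP tasks have no AR; flat estimate (assumed value)


def estimated_runtime_s(plugin: EstimatePlugin, task_params: dict, apr_gflops: float) -> float:
    """Generic core: turn the plugin's flop estimate into a runtime in seconds."""
    return plugin.fpops_est(task_params) / (apr_gflops * 1e9)


print(estimated_runtime_s(MultibeamPlugin(), {"angle_range": 0.017676}, 60.0))
print(estimated_runtime_s(AstropulsePlugin(), {}, 60.0))
```

Adding a new telescope or work type would then mean writing a new plugin rather than touching the splitter or scheduler, which is the flexibility argument made above.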