Message boards : Number crunching : Long WU's - Impending disaster?
Darren · Joined: 2 Jul 99 · Posts: 259 · Credit: 280,503 · RAC: 0

> The scheduler will not send out more work than can be completed before the deadline. Whether or not it includes work already in the queue properly I don't know.

Uh, doesn't this kind of totally contradict what you said earlier in the thread:

"It is even worse than that Neil, when the proper estimates are put back in, the people that didn't have the patience to wait out the problem will get tons of work they cannot finish in time. They will then likely be the same people whining about how long it is taking to grant credit."

If, as you say now, "the scheduler will not send out more work than can be completed before the deadline", what you said earlier isn't possible, as the scheduler will, well, not send them those "tons of work" when the times get straightened back out. Actually, if the scheduler will cut them off at what they can really do, it's a moot point what any of the impatient people have increased their cache to.

Darren
Keck_Komputers · Joined: 4 Jul 99 · Posts: 1575 · Credit: 4,152,111 · RAC: 1

> If, as you say now, "the scheduler will not send out more work than can be completed before the deadline", what you said earlier isn't possible [...] Actually, if the scheduler will cut them off at what they can really do, it's a moot point what any of the impatient people have increased their cache to.

Good point, I hope it really works that way.

John Keck
BOINCing since 2002/12/08
JAF · Joined: 9 Aug 00 · Posts: 289 · Credit: 168,721 · RAC: 0

My problem is, I travel, sometimes up to 10 days per trip. I want to keep my two cheap computers crunching unattended, so I try to set the cache size to accomplish this. I have dial-up, so I have to connect manually to get new WUs and send what I complete. Now it looks like the SETI team is trying to make it hard for me to keep crunching.

Currently, I have about 40 hours' worth of work even though I set my preferences to 5 - 10 days. Why keep my computers running for 10 days with 1.67 days of work?

I don't understand why they can't develop a "crunch mark" based on your processing history per computer ID. If your computer is averaging 3.0 hours per work unit and your cache is set for 3 days' worth, your low-water mark is 24 WUs.

Sending fake WU completion times to adjust for WU demand doesn't seem very "scientific". They have a few years of data they could use to determine the number of WUs needed to support the project. If people are hoarding WUs and can't complete them before the drop-dead date, so what? It's the hoarders' fault.

Just my opinion...
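The arithmetic behind the proposed "crunch mark" is straightforward. A minimal sketch of the idea (the function and its names are hypothetical, not part of BOINC):

```python
def low_water_mark(avg_hours_per_wu: float, cache_days: float) -> int:
    """Number of WUs needed to keep a host busy for cache_days,
    given its historical average crunch time per work unit."""
    hours_needed = cache_days * 24
    return round(hours_needed / avg_hours_per_wu)

# The example from the post above: 3.0 hours per WU, 3-day cache
print(low_water_mark(3.0, 3))  # 24
```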
Pascal, K G · Joined: 3 Apr 99 · Posts: 2343 · Credit: 150,491 · RAC: 0

> Currently, I have about 40 hours' worth of work even though I set my preferences to 5 - 10 days. Why keep my computers running for 10 days with 1.67 days of work?

I know it is only one day, but my WUs are coming 1 at a time for one machine and 2 at a time for the other, and it is keeping them both busy. They should be dropping back to the old completion time soon, if I read another post correctly...

Seti@Home, it is really about the science, isn't it!!
Belial · Joined: 22 Jan 02 · Posts: 47 · Credit: 63,100 · RAC: 0

> My problem is, I travel, sometimes up to 10 days per trip. I want to keep my two cheap computers crunching (unattended), so I try to set the cache size to accomplish this.

Run another project then, along with SETI. You can go on vacation and your computers will stay busy with something while your 1.67 days' worth of SETI gets finished. Maybe you want to run SETI 100% exclusively, but I think WU shortages are a good thing for the science, because it means we are getting the stuff crunched only as much as necessary, rather than crunching the same WU 50 times over just so the stats people can gloat.

Or just turn your computers off for the 10 days. It's not a race, after all, and you can take a break now and then.
Heffed · Joined: 19 Mar 02 · Posts: 1856 · Credit: 40,736 · RAC: 0

> Sending fake WU completion times to adjust for WU demand doesn't seem very "scientific".

They aren't sending fake completion times to adjust for demand. It was a mistake...
JAF · Joined: 9 Aug 00 · Posts: 289 · Credit: 168,721 · RAC: 0

> They aren't sending fake completion times to adjust for demand. It was a mistake...

Do you know if the scheduler uses the amount of pending work time one has in cache to determine if WUs are available for download? I would guess that would be the case. If so, those mistaken "long" WU times will cause some outages.
Heffed · Joined: 19 Mar 02 · Posts: 1856 · Credit: 40,736 · RAC: 0

> Do you know if the scheduler uses the amount of pending work time one has in cache to determine if WUs are available for download? I would guess that would be the case. If so, those mistaken "long" WU times will cause some outages.

Yes on both counts. :)
EclipseHA · Joined: 28 Jul 99 · Posts: 1018 · Credit: 530,719 · RAC: 0

> They aren't sending fake completion times to adjust for demand. It was a mistake...

Do you have real data to back this up? Bet not! What's the bug I can look up in CVS or the "problem list"?

I'll understand when I don't hear back!
EclipseHA · Joined: 28 Jul 99 · Posts: 1018 · Credit: 530,719 · RAC: 0

> Yes on both counts. :)

The scheduler isn't involved. The client requests work, and will request different amounts based on the queue on the local system and the flaky benchmark system. The scheduler is only there to say "you want it, you got it!", or to deny the request...
JAF · Joined: 9 Aug 00 · Posts: 289 · Credit: 168,721 · RAC: 0

> The scheduler isn't involved. The client requests work, and will request different amounts based on the queue on the local system and the flaky benchmark system. The scheduler is only there to say "you want it, you got it!", or to deny the request...

AZ Woody, I'm not saying you're wrong, but wouldn't the client request x amount of seconds of work, and the scheduler respond with x amount of seconds of work based on the benchmarks, preferences, and some WU time? If the WU times are screwed up, it would seem the scheduler would send an incorrect amount of work. Just trying to learn how the system works.
John McLeod VII · Joined: 15 Jul 99 · Posts: 24806 · Credit: 790,712 · RAC: 0

> Do you have real data to back this up? Bet not! What's the bug I can look up in CVS or the "problem list"?

The Alpha team had quite a discussion in their email group. One of the developers had noticed quite a few results with 6 × the estimated processing time, and took it on himself to fix the problem.
EclipseHA · Joined: 28 Jul 99 · Posts: 1018 · Credit: 530,719 · RAC: 0

> One of the developers had noticed quite a few results with 6 × the estimated processing time, and took it on himself to fix the problem.

No, he fixed the "symptom", not the problem, one I reported long ago!

The "benchmarks" have been screwed up for months! I kept being told that the "new benchmark" (the one in the version that first became available in beta a day or two before going live) would fix it...

It didn't. Still doesn't.

The difference between an average developer and a good developer is not the code, but the ability to fix problems! Patching over a symptom only means the bug will be harder to track down later! It's harder to fix if the symptoms have been masked!
Rom Walton (BOINC) · Joined: 28 Apr 00 · Posts: 579 · Credit: 130,733 · RAC: 0

> No, he fixed the "symptom", not the problem, one I reported long ago! The "benchmarks" have been screwed up for months!

Once again, Woody, you are making a mountain out of a molehill.

Several problems were resolved by switching to the Whetstone/Dhrystone combination: Hyper-Threading vs. multi-processor machines, length of execution time, more standards compliance, not destabilizing the product so close to shipping, and we retained the ability for a project leader to set up a BOINC project in a few hours and start distributing work. The current benchmarks scale according to processor type within each OS group.

The outstanding problems with the current benchmarking system are related to the optimizations GCC enables by default vs. the Microsoft C++ compiler. Another issue seems to be related to the way the Linux 2.6 thread scheduler schedules work. Already people have been trying different GCC compiler flags and are coming close to the Windows numbers.

Now, there has been talk of writing hooks into the system for specialized benchmarks and only using Whetstones/Dhrystones as a fallback measure. We first started discussing this in December of last year, but after walking through which areas of the code base would be affected, we decided to defer it to another milestone. Many problems were fixed; but like all things, the value of it is completely a matter of perspective.

A good developer can not only fix bugs, but determine the right time to fix them based on requirements, schedules, and the potential to destabilize the code base. By not being selective in which bugs get fixed and when, you always miss your schedule, as you always find more bugs to fix than you have time for. Even NASA's shuttle software has had to fix something like 6 bugs since it was initially released, and it went through the best software testing system of the day, with a fairly large budget I might add.

-----
Rom
BOINC Development Team, U.C. Berkeley
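The thread's "6 × the estimated processing time" problem follows directly from how a benchmark-based estimate works: predicted runtime is roughly the work attached to a result, in floating-point operations, divided by the host's benchmark speed. A simplified sketch of that relationship (the names are illustrative, not BOINC's actual fields):

```python
def estimated_runtime_seconds(fpops_estimate: float, whetstone_mflops: float) -> float:
    """Rough duration estimate: the workunit's estimated FP operation count
    divided by the host's measured Whetstone speed. If the operation-count
    estimate attached to a workunit is inflated 6x, every predicted runtime,
    and therefore the amount of cached work the client asks for, is off by
    the same factor."""
    flops_per_second = whetstone_mflops * 1e6
    return fpops_estimate / flops_per_second

# A workunit estimated at 3e13 FP ops on a 1000 MFLOPS host:
print(estimated_runtime_seconds(3e13, 1000) / 3600)  # ~8.33 hours
```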
Rom Walton (BOINC) · Joined: 28 Apr 00 · Posts: 579 · Credit: 130,733 · RAC: 0

> The scheduler isn't involved. The client requests work, and will request different amounts based on the queue on the local system and the flaky benchmark system. The scheduler is only there to say "you want it, you got it!", or to deny the request...

Wrong. The client only sends the number of seconds it has to fill to reach the minimum work buffer. It is up to the scheduler to decide how much work to send to the client for that request.

-----
Rom
BOINC Development Team, U.C. Berkeley
Rom Walton (BOINC) · Joined: 28 Apr 00 · Posts: 579 · Credit: 130,733 · RAC: 0

> Do you have real data to back this up? Bet not! What's the bug I can look up in CVS or the "problem list"?

Fine, I'll pipe in here. This was a change that David, Jeff, and myself were not notified of before it was made. Tomorrow I'll be talking to the dev that made the change, and we will possibly revert it back to the original number, or at least drop the numbers a bit. A bug hasn't been filed on it yet, and depending on tomorrow's conversation, one may not be necessary.

-----
Rom
BOINC Development Team, U.C. Berkeley
Rom Walton (BOINC) · Joined: 28 Apr 00 · Posts: 579 · Credit: 130,733 · RAC: 0

> Don't get me wrong. It would be nice if BOINC would keep up with real demands, or they'd limit the users/hosts crunching, but (if intended) this seems like a logical solution (why have thousands of WUs that won't be crunched for days assigned out, when others can't get a single WU!)

We are currently able to keep up with demand. More performance improvements are in the works to increase our scalability.

-----
Rom
BOINC Development Team, U.C. Berkeley
xi3piscium · Joined: 17 Aug 99 · Posts: 287 · Credit: 26,674 · RAC: 0

Long work units... I have 10 of them, 43 hours each... to dump them or not? Can anyone offer advice? I am not in the habit of dumping WUs... but this scares me and my 2 machines... reporting from Chongqing, China... xi3
STE\/E · Joined: 29 Mar 03 · Posts: 1137 · Credit: 5,334,063 · RAC: 0

> Long work units... I have 10 of them, 43 hours each... to dump them or not?

No, don't dump them. From what I've read, they will only take the normal amount of time to run and not the 43 hrs it's saying they will take...
Isis · Joined: 20 Jul 03 · Posts: 4 · Credit: 43,217,876 · RAC: 68

The odd thing is that some of the computers show normal WU times and have the appropriate number of WUs in the cache, and others do not. It seems that the ones with normal WU times are all running XP, and the ones with abnormal WU times are running Windows 2000. Not sure if this means anything, but I thought it might be useful for the dev folks to know.

Olli

Andreas Olligschlaeger, Ph.D.
TruNorth Data Systems, Inc.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.