Message boards :
Technical News :
Maxed (Dec 16 2010)
Matt Lebofsky Send message Joined: 1 Mar 99 Posts: 1444 Credit: 957,058 RAC: 0 |
We're back to shoveling out workunits as fast as we can. I mentioned in another thread that the gigabit link project is still alive. In fact, the whole lab is interested in getting gigabit connectivity to the rest of campus, which makes the whole battle a lot easier (we'll still have to buy our own bits and get the hardware to keep them separate). Still, it's slow going due to campus staff cutbacks and higher priorities.

With the heavy load on oscar (splitting and assimilating at full bore) I got some good I/O stats to determine how much we should reduce the stripe size on its database RAID partition. This will be enacted next week during the return of the 3-day weekly outage. It's unclear how regular these extended weekly outages will be - we'll figure that all out in the new year.

But back to oscar... we were pushing it pretty hard today - almost too much. It looked like we were about to run out of workunits for a minute there, but I caught it just in time. We're still trying to figure some things out. By the way, I think there was some general maintenance around the lab, which may have caused a temporary network "brown out."

- Matt -- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
soft^spirit Send message Joined: 18 May 99 Posts: 6497 Credit: 34,134,168 RAC: 0 |
Matt, please consider the throttle-backs I mentioned through Eric and in NC:Panic 42, and back us off as needed. This should lighten the load on both Oscar and the link. Janice
Dirk Sadowski Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
Matt, thanks for the news! BTW, the current 'WUs in progress' limit is too low to bridge the next 3-day outage.. ;-)
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874 |
Matt, thanks for the news! But it's too high to avoid maxing out the pipe ;-) |
Dirk Sadowski Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
Matt, thanks for the news! Yes.. But only one day (Monday - Tuesday morning) to fill up a 3-day WU cache is too little. At least if you need 2,700+ WUs for 3 days and only have light DSL (384/64 kbit/s). In the past it went like this: the limit was increased on Monday, downloads immediately backlogged in BOINC, and not enough WUs were downloaded to bridge the 3 days. It would be wonderful to have a gigabit connection to the SETI@home server. ;-)
-BeNt- Send message Joined: 17 Oct 99 Posts: 1234 Credit: 10,116,112 RAC: 0 |
Matt, thanks for the news! I think there will still be a bottleneck even with a gigabit link. It will just take a larger number of concurrent downloaders, or make the duration of the log jam shorter. Too bad we don't have a scheduler for timed downloads, where one user would receive WUs in chunks of 5 or 10 and then let the next person go. I believe this is the idea behind the back-off period, but the number of users fighting over slots makes it more or less a dogfight for the first position that opens up. Traveling through space at ~67,000mph!
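The chunked, take-turns scheduler described above can be sketched as a round-robin queue, in contrast to independent random back-off where a freed slot goes to whoever happens to retry first. Everything below (function names, chunk size, slot counts) is hypothetical, a toy simulation rather than anything BOINC actually implements:

```python
import random
from collections import Counter, deque

def random_backoff(n_clients, slots_per_round, rounds, seed=1):
    """Independent retries: each round, a freed slot goes to whichever
    clients happen to ask first (the 'dog fight'). Modeled here as a
    random draw of winners each round."""
    rng = random.Random(seed)
    wins = Counter()
    for _ in range(rounds):
        for winner in rng.sample(range(n_clients), slots_per_round):
            wins[winner] += 1
    return wins

def round_robin(n_clients, slots_per_round, rounds, chunk=5):
    """Hypothetical timed scheduler: serve a chunk of WUs to the client
    at the head of the queue, then send it to the back of the line."""
    queue = deque(range(n_clients))
    wus = Counter()
    for _ in range(rounds):
        for _ in range(slots_per_round):
            client = queue.popleft()
            wus[client] += chunk          # this client gets its chunk
            queue.append(client)          # back of the line
    return wus
```

Both schemes move the same total number of WUs per round; the difference is fairness. With, say, 100 clients and 10 slots over 100 rounds, `round_robin` gives every client exactly the same share, while `random_backoff` leaves some clients well ahead of others.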
bill Send message Joined: 16 Jun 99 Posts: 861 Credit: 29,352,955 RAC: 0 |
So who do you want to alienate? A lot of small crunchers, by letting the large crunchers hog the bandwidth/time slots? Or a few big crunchers, by letting everybody have an equal chunk of bandwidth/time slots? Edited (note to self, don't forget to proof read before posting)
-BeNt- Send message Joined: 17 Oct 99 Posts: 1234 Credit: 10,116,112 RAC: 0 |
So who do you want to alienate? A lot of small crunchers and let the large crunchers hog the bandwidth/time slots, or a few big crunchers by letting everybody have an equal chunk of bandwidth/time slots. I suppose you are talking to me? I didn't know I had implied alienating anyone. I think the time and bandwidth would be best spent giving everyone equal shares rather than letting one 'class' dominate the connections. Traveling through space at ~67,000mph!
RottenMutt Send message Joined: 15 Mar 01 Posts: 1011 Credit: 230,314,058 RAC: 0 |
equal pay for everyone!!! |
Bill Walker Send message Joined: 4 Sep 99 Posts: 3868 Credit: 2,697,267 RAC: 0 |
equal pay for everyone!!! From each according to their ability, to each according to their need. Isn't that the American way? |
Helli_retiered Send message Joined: 15 Dec 99 Posts: 707 Credit: 108,785,585 RAC: 0 |
... This will be enacted next week during the return of the 3-day weekly outage. It's unclear how regular these extended weekly outages will be - we'll figure that all out in the new year. Seriously? That means nothing has changed for us crunchers, and we are at the same point as before the big outage? An overloaded internet connection, a 3-day outage, and babysitting our rigs... I had hoped for more (because of our generous donations)… :-( Helli A loooong time ago: First Credits after SETI@home Restart
Aurora Borealis Send message Joined: 14 Jan 01 Posts: 3075 Credit: 5,631,463 RAC: 0 |
... This will be enacted next week during the return of the 3-day weekly outage. It's unclear how regular these extended weekly outages will be - we'll figure that all out in the new year. "I find my life is a lot easier the lower I keep my expectations."–Calvin My only hope is that Nitpicker will only take a year instead of the estimated 2.5 years to search through the current database. |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
Part of the bandwidth/demand issue must be the limit of the storage that Berkeley can provide to hold results-in-progress. This issue is a technical one, for sure, meaning that it can only be understood from experience and testing. However, I would think that if we ran with larger caches (especially the big crunchers), lengthened the delay times between retries, and increased the download limit at least a small amount, the issue would be resolved in large measure, save the occasional SETI crash or the (abhorrent) 3-day outages. We could hope that these actions would reduce the ghost issue as well. So, does anybody know how many results-in-progress SETI can manage, and are we close to that number yet? In my case, on my main hosts, I have unwillingly relented and now run Einstein with a resource setting of zero. It reduces my contribution to SETI, but keeps the electrons oscillating when SETI goes burp.
kittyman Send message Joined: 9 Jul 00 Posts: 51477 Credit: 1,018,363,574 RAC: 1,004 |
Part of the bandwidth/demand issue must be the limit of storage that berkeley can provide to hold results-in-progress. This issue is a technical one, for sure, meaning that it can only be understood from experience and testing. However, I would think that if we ran with larger caches, especially the crunchers, the delay times between retries were lengthened, and the download limit increased at least a small amount, the issue would be resolved in large measure, save the occasional seti crash or the (abhorrent) 3-day outages perhaps. We could hope that these actions would reduce the ghost issue as well. I would think that with the new servers online, our 'in process' limit has been increased immensely. Kitties go burp from digesting the new WUs sent recently.... "Time is simply the mechanism that keeps everything from happening all at once." |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
equal pay for everyone!!! I can't resist a reply: this quote is simplistic liberal pap, because it leaves hanging who decides ability and need. Too many Americans fail to read between the lines.
David S Send message Joined: 4 Oct 99 Posts: 18352 Credit: 27,761,924 RAC: 12 |
... This will be enacted next week during the return of the 3-day weekly outage. It's unclear how regular these extended weekly outages will be - we'll figure that all out in the new year. Seriously? You only read the first sentence of what you quoted? I was hoping they could cut the outages down to two days, but it sounds like they're leaning more toward not doing them every week. I, for one, am fine with that. I just let BOINC run on its own, the way it was designed to, for weeks at a time without even opening the BOINC manager, and months at a time without getting into my preferences on the project web sites. I'm taking more active control lately because running Einstein exclusively during the Seti outage was causing my computer to crash, but I'm hoping to get it to reach a new equilibrium so I can leave it alone again. If that means I occasionally go a day or two without any work to do, then so be it. David David Sitting on my butt while others boldly go, Waiting for a message from a small furry creature from Alpha Centauri.
Helli_retiered Send message Joined: 15 Dec 99 Posts: 707 Credit: 108,785,585 RAC: 0 |
LOL If I also had a RAC of only 118, I wouldn't be worried either. :D Helli A loooong time ago: First Credits after SETI@home Restart
Profi Send message Joined: 8 Dec 00 Posts: 19 Credit: 20,552,123 RAC: 0 |
[As of 17 Dec 2010 16:00:08 UTC]
Results ready to send: 198,960 <- BTW - shouldn't it be "Workunits" instead of "Results"? (Maybe I'm getting this wrong, but isn't it the amount of work-to-do created?)
Current result creation rate: 16.4682/sec <- same as above - WUs instead of Results? (A work-to-do creation rate, or the rate at which results from users are being inserted into the database?)

Am I misunderstanding the definitions of "Workunit" and "Result"? Is a "Workunit" a portion of raw telescope data "chopped" into units of approx. 380 KB (ok - skipping VLAR and other "derivatives") and sent to users? Is a "Result" the outcome of a cruncher's machine calculation on a chunk of raw data, which is returned to the S@H server for cross-checking with the results returned by other users (machines) working on the same chunk of data (and other server-side operations like DB assimilation etc.)? Correct me here if I'm wrong - thanks in advance...

Results received in last hour: 50,451. With 1 hr = 3600 s, a simple "steady-state" calculation says that at the current rate 16.4682 * 3600 = 59,285.52 WUs can be created each hour. So right now the project is more or less balanced - the creation rate is slightly higher than WU demand, and the current WU pool will satisfy about 4 hours of crunchers' demand. If you want to have a 3-day outage, then I'm afraid the project won't be out of the woods: demand is higher after an outage, and as time goes by crunching machines are not getting any slower... But keep on pushing folks!! :)

Profi

p.s. I was suggesting a 1 Gbit connection as a temporary solution a long time ago - hopefully it will be realized soon. But I think that sooner or later the "server side" of the project will have to be split somehow across a couple of sites spread around the world, with Berkeley remaining "capo di tutti capi" of the project - otherwise S@H may become a victim of its own success.
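The steady-state arithmetic above can be checked in a few lines. The figures are the ones quoted from the 17 Dec 2010 status page; this is just a sketch of the same back-of-the-envelope estimate:

```python
# Figures quoted from the server status page, 17 Dec 2010 16:00:08 UTC.
ready_to_send = 198_960       # results ready to send
creation_rate = 16.4682       # results created per second
received_per_hour = 50_451    # results returned in the last hour

# Hourly creation capacity vs. hourly demand.
created_per_hour = creation_rate * 3600
print(f"created/hour:  {created_per_hour:,.2f}")   # ~59,285.52
print(f"received/hour: {received_per_hour:,}")

# How long the ready-to-send pool would last if creation stopped.
hours_of_buffer = ready_to_send / received_per_hour
print(f"buffer: {hours_of_buffer:.1f} hours")      # ~3.9 hours
```

Creation slightly outpaces demand (about 59k vs. 50k per hour), but the buffer itself covers only about four hours of downloads, which is why a multi-day outage is hard to bridge from the client side.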
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30971 Credit: 53,134,872 RAC: 32 |
You have it correct. What you don't quite get is that the "result" is called that even before it is sent to a cruncher's computer. At that point the state of the result is "ready to send". When it comes back the state gets changed: completed, error, time out, etc. When the wingman's result comes back, the validator is notified and changes the state(s) again, or it creates another result to be sent if the comparison is inconclusive.
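Gary's description amounts to a small state machine. The sketch below is a toy model with made-up state names, not the real BOINC database schema:

```python
from enum import Enum, auto

class State(Enum):
    """Illustrative lifecycle of a 'result' row (names are made up)."""
    READY_TO_SEND = auto()   # created alongside the workunit, before any download
    IN_PROGRESS = auto()     # sent to a cruncher's computer
    COMPLETED = auto()       # returned successfully
    ERROR = auto()           # returned with an error
    TIMED_OUT = auto()       # never came back before the deadline

def compare(canonical, wingman):
    """Toy validator step: when the wingman's copy comes back, either
    the two results agree, or another copy is issued as a tie-break."""
    if canonical == wingman:
        return "valid"
    return "inconclusive: issue another READY_TO_SEND copy"
```

The point of the model is simply that a "result" exists, under that name, through every state from READY_TO_SEND onward, which is why the status page counts unsent work as "results".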
Bill Walker Send message Joined: 4 Sep 99 Posts: 3868 Credit: 2,697,267 RAC: 0 |
In most English speaking countries a "result" is indeed what comes after something. At Berkeley a result is a duplicate of a work unit, and it is called a result before it is sent, while it is out in the field, and after it comes home. Edit: you just beat me, Gary.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.