Ebb and Flow (Sep 04 2008)

Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 1 Mar 99
Posts: 1441
Credit: 213,689
RAC: 0
United States
Message 804949 - Posted: 4 Sep 2008, 19:48:30 UTC

The good news is that recent woes due to lack of workunit disk space have seemingly passed for now. We're still on the very edge of our capacity, but now that we're prioritizing the smaller regular workunits (as opposed to the big Astropulse workunits) we were able to build up a ready-to-send queue and network traffic stabilized overnight.
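To make the prioritization concrete, here's a toy sketch in Python (this is not our actual feeder code; the Workunit class, the sizes, and the smallest-first policy are all just illustrative). The point is that many small multibeam workunits fit in the disk space a single big Astropulse workunit would occupy, so favoring them keeps the ready-to-send queue populated:

    # Hypothetical sketch of size-based prioritization; not the real
    # BOINC/SETI@home feeder. Sizes and policy are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Workunit:
        name: str
        disk_mb: float  # space the workunit's files occupy on the server

    def fill_ready_to_send(candidates, free_disk_mb):
        # Queue smallest-first so many regular (multibeam) workunits go
        # out before each large Astropulse workunit.
        queue = []
        for wu in sorted(candidates, key=lambda w: w.disk_mb):
            if wu.disk_mb > free_disk_mb:
                break  # out of workunit disk space; stop queueing
            free_disk_mb -= wu.disk_mb
            queue.append(wu)
        return queue

    candidates = [Workunit("ap_chunk_01", 8.0)] + \
                 [Workunit("mb_chunk_%02d" % i, 0.4) for i in range(20)]
    print([wu.name for wu in fill_ready_to_send(candidates, 10.0)])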

The less-good news is that we still need to build some indexes on the science database. We're building one now, and it usually takes 12-24 hours. This adds a lot of CPU and disk I/O load to the science database server, meaning the splitters can't add rows as fast as usual, nor can the assimilators. So the ready-to-send queue drops, and the assimilator queue rises. As an added bonus, when the assimilator queue rises, that means the deleters slow down, which means the available workunit disk space shrinks, and we're back to square one again. No big deal as long as people are patient. All the backend services are doing the best they can until the index build finishes, and then we should catch up again.
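If you want a feel for that feedback loop, here's a toy simulation (every rate and size below is invented for illustration; none of them are measurements from our servers). While the index builds, the splitters and assimilators run at reduced throughput, the ready-to-send queue and free disk fall, and the assimilator backlog climbs; once the build finishes, everything drifts back:

    # Toy model of the backend queues during an index build. All numbers
    # are made up for illustration; this is not project telemetry or code.
    PER_WU_MB = 0.4   # hypothetical disk held per not-yet-deleted workunit
    CAPACITY  = 120   # normal splitter/assimilator throughput per tick
    DEMAND    = 100   # workunits sent out (and results returned) per tick

    rts, backlog, free_mb = 5000, 0, 4000
    for tick in range(48):
        slow = 0.4 if tick < 24 else 1.0    # index build steals db I/O
        split = min(CAPACITY * slow, DEMAND + 50)       # splitters slowed
        assim = min(CAPACITY * slow, backlog + DEMAND)  # assimilators too
        rts     += split - DEMAND       # ready-to-send queue level
        backlog += DEMAND - assim       # unassimilated results pile up
        # Deleters can only purge assimilated workunits, so free disk
        # tracks how far assimilation lags the returning results.
        free_mb += (assim - DEMAND) * PER_WU_MB
        if tick % 8 == 0:
            print("tick %2d: rts=%6.0f backlog=%5.0f free=%7.1f MB"
                  % (tick, rts, backlog, free_mb))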

- Matt

-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
computerguy09
Volunteer tester
Joined: 3 Aug 99
Posts: 80
Credit: 5,067,105
RAC: 2,807
United States
Message 804963 - Posted: 4 Sep 2008, 20:26:35 UTC - in response to Message 804949.  

The good news is that recent woes due to lack of workunit disk space have seemingly passed for now. We're still on the very edge of our capacity, but now that we're prioritizing the smaller regular workunits (as opposed to the big Astropulse workunits) we were able to build up a ready-to-send queue and network traffic stabilized overnight.

The less-good news is that we still need to build some indexes on the science database. We're building one now, and it usually takes 12-24 hours. This adds a lot of CPU and disk I/O load to the science database server, meaning the splitters can't add rows as fast as usual, nor can the assimilators. So the ready-to-send queue drops, and the assimilator queue rises. As an added bonus, when the assimilator queue rises, that means the deleters slow down, which means the available workunit disk space shrinks, and we're back to square one again. No big deal as long as people are patient. All the backend services are doing the best they can until the index build finishes, and then we should catch up again.

- Matt


Thanks for all your hard work to keep things going...

And this explains why the RTS queue is now empty, and the cricket graphs have gone down...
Mark

Blurf
Project Donor
Volunteer tester

Joined: 2 Sep 06
Posts: 8817
Credit: 9,645,152
RAC: 3,044
United States
Message 805037 - Posted: 4 Sep 2008, 22:38:05 UTC

Matt--thanks for the updates! I'll give you a call next week about hardware donations


Gary Charpentier
Crowdfunding Project Donor
Volunteer tester
Joined: 25 Dec 00
Posts: 18644
Credit: 21,481,799
RAC: 19,449
United States
Message 805093 - Posted: 5 Sep 2008, 0:54:55 UTC - in response to Message 804949.  

The good news is that recent woes due to lack of workunit disk space have seemingly passed for now. We're still on the very edge of our capacity, but now that we're prioritizing the smaller regular workunits (as opposed to the big Astropulse workunits) we were able to build up a ready-to-send queue and network traffic stabilized overnight.

The less-good news is that we still need to build some indexes on the science database. We're building one now, and it usually takes 12-24 hours. This adds a lot of CPU and disk I/O load to the science database server, meaning the splitters can't add rows as fast as usual, nor can the assimilators. So the ready-to-send queue drops, and the assimilator queue rises. As an added bonus, when the assimilator queue rises, that means the deleters slow down, which means the available workunit disk space shrinks, and we're back to square one again. No big deal as long as people are patient. All the backend services are doing the best they can until the index build finishes, and then we should catch up again.

- Matt


Thanks for the advance notice.

Knew you were running on the edge, just didn't realize it was that close.

Gary
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16015
Credit: 749,424
RAC: 227
United States
Message 805095 - Posted: 5 Sep 2008, 0:58:51 UTC


. . . Thank You for the Update, Matt - note that patience is a virtue


BOINC Wiki . . .

Science Status Page . . .
MarkJ
Project Donor
Volunteer tester
Joined: 17 Feb 08
Posts: 978
Credit: 33,086,079
RAC: 6,044
Australia
Message 805252 - Posted: 5 Sep 2008, 12:46:14 UTC - in response to Message 805037.  

Matt--thanks for the updates! I'll give you a call next week about hardware donations


@ Matt,
Is there anything we can do to increase storage (like get some more/bigger drives), or is the server simply unable to take any more drives (or cope with larger ones)?

@ Blurf, perhaps a limited-time donation drive for this specific purpose (assuming more/bigger drives would be useful)?

Cheers,
MarkJ
BOINC blog
Richard Haselgrove
Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 11142
Credit: 83,815,452
RAC: 45,927
United Kingdom
Message 805260 - Posted: 5 Sep 2008, 13:43:32 UTC - in response to Message 805252.  

Matt--thanks for the updates! I'll give you a call next week about hardware donations


@ Matt,
Is there anything we can do to increase storage (like get some more/bigger drives), or is the server simply unable to take any more drives (or cope with larger ones)?

Have a look at the server closet photo album (February 2008) to see what they're up against.
Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 805296 - Posted: 5 Sep 2008, 17:50:36 UTC

Hi, looks like you need storage in PETABYTES rather than TERABYTES ;)
Keep up the good, though hard, work.

KM1P

Joined: 14 May 99
Posts: 3
Credit: 8,116,500
RAC: 1,893
United States
Message 805328 - Posted: 5 Sep 2008, 21:15:53 UTC

Unfortunately, budgets are in KILODOLLARS, not MEGADOLLARS!



 