Message boards : Technical News : Parking (Oct 07 2008)
Matt Lebofsky (Joined: 1 Mar 99, Posts: 1444, Credit: 957,058, RAC: 0)

Had our weekly outage for mysql database backup/compression. Reminder: by "compression" I mean that the rather large tables in the database (notably the "workunit" and "result" tables) stay roughly constant in size if you go by number of rows; workunits and results are created and deleted at about the same rate. However, when you delete a result you can't reclaim that space in the database until either (a) a whole page of results is deleted (given the random nature of the project, this rarely happens) or (b) we actively do this "compression."

Why is this a problem? Well, imagine a city where, once you leave a parking space, nobody can ever park in that spot again unless all the spaces in that neighborhood are vacated. That would make hunting for parking quite a chore. As time goes on, we see a similar effect on the database I/O. It seems silly that the database has this issue, but consider how few endeavors around the world, commercial or otherwise, run a database as large as ours in which a million rows get deleted and added every day. It's not a common problem, to say the least. At least not at our scale.

People seem to be experiencing slowness uploading/downloading work. I know why: I've been pumping raw data over our network to our offsite archive (HPSS) over the same network link as the uploads/downloads. Usually we don't; after the current batch is done (later tonight) I'll go back to archiving over the campus network, which is what we usually do.

- Matt

-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
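To make the "compression" step concrete: the post doesn't name the exact commands, but on a MySQL setup of this era a weekly backup/compression pass could plausibly look like the sketch below. Only the "workunit" and "result" table names come from Matt's post; the database name, file name, and everything else here are assumptions, not the project's actual maintenance script.

    # A minimal sketch of a weekly backup/compression pass, assuming
    # MySQL with the "workunit" and "result" tables Matt mentions.
    # The database name "sah" and the backup file name are hypothetical.

    # 1. Dump the big tables to a backup file while the project is down.
    mysqldump sah workunit result > weekly_backup.sql

    # 2. Rebuild the tables so the space left by deleted rows becomes
    #    reusable again (freeing up the "parking spaces").
    mysql sah -e "OPTIMIZE TABLE workunit, result;"

OPTIMIZE TABLE rebuilds the table and defragments its data file, which is why it has to run during an outage: the rebuild locks out the normal create/delete traffic while it works.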
Blurf (Joined: 2 Sep 06, Posts: 8962, Credit: 12,678,685, RAC: 0)

Thanks Matt!
Pepo (Joined: 5 Aug 99, Posts: 308, Credit: 418,019, RAC: 0)

> People seem to be experiencing slowness uploading/downloading work. I know why: I've been pumping raw data over our network to our offsite archive (HPSS) over the same network link as the uploads/downloads.

Aaah, so this was the real reason for yesterday's (and today's) sustained 90 Mbit/s network load. A day ago I was thinking of Astropulse WUs, but there was no typical load pattern like on previous days, just six hours of flat load. After the packet-limiting problem from a few months ago, this sort of confirms the link's ability to do 100 Mbit ;-)

Peter
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14650, Credit: 200,643,578, RAC: 874)

> People seem to be experiencing slowness uploading/downloading work. I know why: I've been pumping raw data over our network to our offsite archive (HPSS) over the same network link as the uploads/downloads.

The other reason for the sustained high throughput is a large contiguous block of VHAR 'shorty' telescope observations being split: because they crunch so quickly, they generate six - well, OK Joe, four, or even three-and-a-bit - times the number of download requests as the lower angle ranges. It would be really neat if Matt could interleave the archiving and the VHAR splits, to get the most out of the cheap Hurricane link without swamping it with two maxed-out tasks at once.
W-K 666 (Joined: 18 May 99, Posts: 19062, Credit: 40,757,560, RAC: 67)

Hi. As of 10:30 UTC, Bruno, the upload server, appears to be disabled! Leading on from this, since no results can have been received in the last hour, shouldn't the line on the server status page that reads "Results received in last hour" actually be "Results reported in last hour"?
Pepo (Joined: 5 Aug 99, Posts: 308, Credit: 418,019, RAC: 0)

> Aaah, so this was the real reason for yesterday's (and today's) sustained 90 Mbit/s network load.

This is also possible. But if you take a look at the network graphs during the regular weekly outages, all (or most) of the outgoing transfer rates went way down, nearly to zero. During yesterday's outage the transfer rate stayed at 35 Mbit/s.

> they generate six - well, OK Joe, four, or even three-and-a-bit - times the number of download requests as the lower angle ranges.

Good idea!

Peter
piper69 (Joined: 25 Sep 08, Posts: 49, Credit: 3,042,244, RAC: 0)

Matt, the upload server is down. Can you give us a forecast of when it will be online again?
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14650, Credit: 200,643,578, RAC: 874)

> Matt, the upload server is down. Can you give us a forecast of when it will be online again?

It was fully up and running smoothly by 15:07 UTC - eight minutes before your post.
Gary Charpentier (Joined: 25 Dec 00, Posts: 30650, Credit: 53,134,872, RAC: 32)

Matt, thanks for the info. Jeff has posted about NTPCKR over here: http://setiathome.berkeley.edu/forum_thread.php?id=44077&nowrap=true#815955
piper69 (Joined: 25 Sep 08, Posts: 49, Credit: 3,042,244, RAC: 0)

Maybe in your country. I'm from Europe; from two different countries and two different ISPs I can't upload my crunched work. That's why I think it is down.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14650, Credit: 200,643,578, RAC: 874)

> Maybe in your country. I'm from Europe; from two different countries and two different ISPs I can't upload my crunched work. That's why I think it is down.

Well, the server itself is up and running, as you can check on the server status page. And the restart timing I gave came from my own message logs here in the UK - not a million miles from Europe. Unfortunately, Matt can't be held responsible for the vagaries of every European ISP!

Sorry I was a bit short with you earlier - hadn't noticed you were a new user. Welcome to the boards.