The Server Issues / Outages Thread - Panic Mode On! (118)
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13751 Credit: 208,696,464 RAC: 304 |
And we're back to sticking downloads again. Grant Darwin NT |
TBar Send message Joined: 22 May 99 Posts: 5204 Credit: 840,779,836 RAC: 2,768 |
Hey, this is nice. Seems the same setting that controls the Upload Retries also controls the Download Retries. Download retries are in seconds instead of minutes, and download 'Project Backoffs' are in minutes instead of hours ... this will work. Except, as usual, we are now Out Of Work, and my machines are still out of work. |
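The retry behaviour TBar describes can be pictured with a toy backoff sketch. This is not BOINC's actual code; the function name, base delay, and cap are all assumptions chosen only to illustrate how exponential backoff with jitter spreads out transfer retries.

```python
import random

# Illustrative exponential backoff with jitter, loosely in the spirit of
# client transfer retries. All constants are made up for this sketch.
def retry_delay(n_failures, base=60, cap=3600):
    """Delay before the next retry, in seconds, after n_failures failures."""
    delay = min(cap, base * (2 ** n_failures))   # double each time, up to a cap
    return delay * random.uniform(0.5, 1.0)      # jitter avoids synchronized retries

for n in range(5):
    print(f"after failure {n}: retry in ~{retry_delay(n):.0f}s")
```

The jitter factor matters on a loaded server: without it, every client that failed at the same moment would retry at the same moment too.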
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Setting nnt until all work is reported has been very effective for me. . . Reducing work report to 99 and setting NNT did not help here ... :( Stephen :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
I just noticed we are back. And it wasn't a multi-day shutdown. Just a basic long Tuesday . . Hmmmm, 12 hours is a little more than a basic outage :( Stephen :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
It's been this way for at least 8 years that I'm aware of. It doesn't make any difference whether it runs as Stock or Anonymous. Both of those machines ran as Stock for weeks after the Christmas SNAFU; one is still Stock. No difference 8 years ago or now. Is your Windows machine full yet? I'm finally getting a few downloads now, hopefully I'll get enough to keep the machines running soon. . . I didn't start to get more than an odd task or 2 until 8:30am UTC. :( Stephen :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
I'm wondering if this issue with handing out work to some systems & not others is related to the Anonymous Platform issue with the new Scheduler version?
I have often had one of my hosts getting work on every request while the other host stays dry. And they are both Anonymous Platform Linux boxes. My theory is that because the clients are doing scheduler requests on a regular five-minute cadence, if there is a big bunch of clients hitting the server at the same time my host hits it, this same bunch will be competing with my host on its next request too. And if my other host hits the server at a quiet point in time, it'll keep hitting this same 'hole' on subsequent requests.
. . My slowest Linux host seems to find that sweet spot regularly and will get regular downloads when the other 3 Linux machines are getting nothing. All on the same line ... Stephen ? ? |
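The "phase locking" theory in the post above can be demonstrated with a toy simulation. Everything here is an assumption for illustration (host count, per-second capacity, the exact cadence): the point is only that when each host polls at a fixed offset within the cycle, the same hosts collide with the same cohort every cycle.

```python
import random
from collections import Counter

random.seed(1)
CYCLE = 300      # seconds between scheduler requests (assumed cadence)
CAPACITY = 3     # hosts the server can feed per one-second slot (assumed)

# Each host polls at a fixed offset within the 300-second cycle.
hosts = {h: random.randrange(CYCLE) for h in range(1000)}
load = Counter(hosts.values())   # how many hosts share each offset

# Because the offsets never change, the competition at each offset is
# identical every cycle: the same hosts get fed, the same hosts starve.
fed = [h for h, off in hosts.items() if load[off] <= CAPACITY]
starved = [h for h, off in hosts.items() if load[off] > CAPACITY]
print(f"{len(fed)} hosts get work every cycle, {len(starved)} starve every cycle")
```

A real scheduler adds randomness (variable work durations, backoffs), which is exactly what would break a host out of, or into, one of these "holes".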
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
Game on, just got two healthy downloads back to back. |
Oddbjornik Send message Joined: 15 May 99 Posts: 220 Credit: 349,610,548 RAC: 1,728 |
Validation and assimilation backlogs are approaching old heights. It's just a question of time before we're stuck again. |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
Validation and assimilation backlogs are approaching old heights. It's just a question of time before we're stuck again.
The total result count is over 19.5 million and rising. Somewhere beyond 20 million they don't fit in RAM any more and the database performance goes through the floor. What's weird is that on the ssp only about 1% of all the returned results are in the 'waiting for db purging' state, but of all my returned results on the website, 75% are in the 'valid' state. I guess the ssp counts the results associated with workunits waiting for assimilation as 'waiting for validation' but the web site counts them as 'valid'. If I estimate the number of those results from the number of workunits waiting for assimilation and move this number from 'waiting for validation' to 'waiting for db purging', then 66% of all the returned results are there, and this is a much better match to the 75% fraction within my results. Workunits waiting for assimilation have grown by 600,000 since the downtime. Fixing the problem that is causing that should be a very high priority. Can't blame the blc35 overflow storm any more. |
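Ville's back-of-envelope reclassification can be written out as arithmetic. The post only gives the ~1%, 66% and 75% figures, so every number below is a made-up placeholder chosen to reproduce the shape of the estimate, not an actual ssp snapshot.

```python
# Hypothetical snapshot (placeholders, not real ssp numbers):
total_returned    = 10_000_000   # all returned results
waiting_purge     = 100_000      # ~1% shown as 'waiting for db purging'
wu_awaiting_assim = 3_250_000    # workunits stuck waiting for assimilation
results_per_wu    = 2            # typical quorum: two results per workunit

# Move the results attached to assimilation-pending workunits out of
# 'waiting for validation' - they are really done, just not purged yet.
effectively_done = waiting_purge + wu_awaiting_assim * results_per_wu
print(f"{effectively_done / total_returned:.0%} of returned results effectively done")
```

With these placeholders the "effectively done" share lands at 66%, which is the kind of figure that compares much better with the 75% 'valid' fraction seen per-host on the website.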
popandbob Send message Joined: 19 Mar 05 Posts: 551 Credit: 4,673,015 RAC: 0 |
I haven't noticed anyone comment on this yet but the reason for the growing assimilation number is quite a simple one.... They have less spindles on the storage drive. Less spindles means lower read and write rates. Do you Good Search for Seti@Home? http://www.goodsearch.com/?charityid=888957 Or Good Shop? http://www.goodshop.com/?charityid=888957 |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
I haven't noticed anyone comment on this yet but the reason for the growing assimilation number is quite a simple one.... They have less spindles on the storage drive. Less spindles means lower read and write rates.
Not really, because the new spindles read or write many times more bytes per rotation. But it does affect the performance of multiple simultaneous reads or writes, as with fewer spindles there's a lower chance of the simultaneous operations hitting different spindles. |
Ian&Steve C. Send message Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640 |
we don't even have confirmation that the new database system is even bought/built/implemented yet. Seti@Home classic workunits: 29,492 CPU time: 134,419 hours |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13751 Credit: 208,696,464 RAC: 304 |
we don't even have confirmation that the new database system is even bought/built/implemented yet.
I would expect the system to be down for a day or more when it comes time for getting the new NAS going. First the normal weekly outage to compact & tidy up the database, then the time it takes to transfer it all across, then get the new hardware and transferred database recognised by the rest of the system. I seem to recall a full database transfer taking much longer than was expected once upon a time in the distant past. Grant Darwin NT |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
I would expect the system to be down for a day or more when it comes time for getting the new NAS going. First the normal weekly outage to compact & tidy up the database, then the time it takes to transfer it all across, then get the new hardware and transferred database recognised by the rest of the system.
They have the replica db they can copy to the new NAS without impacting the running system. Then they can make the new NAS db the replica and let the replication process bring it up to date. Then the only thing they need to do during the downtime is to swap the roles of the databases, so it won't necessarily have any impact on the length of the downtime. We had a period of time a week ago or so where the replica db was offline and the web site was using the master db directly. Perhaps they were doing just this. |
rob smith Send message Joined: 7 Mar 03 Posts: 22228 Credit: 416,307,556 RAC: 380 |
And then of course there is getting things like the purchasing done (even for fully pre-funded equipment) within a university - that can be a very fraught and time consuming activity :-( Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
Looks like the splitter throttling is much more effective now that the overflow storm is over. The result table has now grown to 20 million and the splitters are being throttled, but when they stop, the table drops under 20 mil almost immediately, so the splitters spend only short periods stopped, making this almost unnoticeable. During the overflow storm the validators kept adding a lot of resends to the result table, so the table kept growing fast despite the splitters not splitting anything. |
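The throttling behaviour described above is a simple threshold loop. This sketch is a toy model only: the 20 million threshold matches the thread, but the split and purge rates are assumptions, and the real server logic is certainly more involved.

```python
THRESHOLD = 20_000_000   # pause splitters above this many result rows
table = 19_990_000       # starting result-table size (assumed)
split_rate, purge_rate = 67, 60   # rows added / removed per second (assumed)

for t in range(5):
    splitting = table < THRESHOLD                      # throttle check
    table += (split_rate if splitting else 0) - purge_rate
    print(f"t={t}s table={table:,} splitters={'on' if splitting else 'off'}")
```

Because splitting only slightly outpaces purging near the threshold, the table hovers just under 20 mil with only brief pauses, which is exactly the "almost unnoticeable" throttling the post describes.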
Speedy Send message Joined: 26 Jun 04 Posts: 1643 Credit: 12,921,799 RAC: 89 |
I just did a quick add-up of the big numbers on the service status page. It seems the database can handle over 20 million comfortably; when I added up the numbers, this is what I got: 22,986,785. Splitter rate is over 67 a second |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
I just did a quick add-up of the big numbers on the service status page. It seems the database can handle over 20 million comfortably; when I added up the numbers, this is what I got: 22,986,785
The highest number the ssp has had within the last day or so was 20,012,235, and it spends most of its time below 20 mil with only brief spikes above it. I guess you are mixing some non-result fields into your count, getting a weird hybrid number that doesn't match the size of any table. That 20 mil is the size of the result table. You get that by summing up all the result fields: 'Results ready to send', 'Results out in the field', 'Results returned and awaiting validation' and 'Results waiting for db purging'. If you add the workunit and file fields, then you will count some results up to four times. And you can't really count the size of the workunit table because the ssp only shows a subset of them. |
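The counting rule Ville describes can be made concrete. The four result-field names are the ones quoted in the post; the numeric values below are illustrative placeholders, not an actual status-page snapshot.

```python
# Hypothetical status-page numbers (placeholders for illustration):
ssp = {
    "Results ready to send":                    500_000,
    "Results out in the field":                 5_500_000,
    "Results returned and awaiting validation": 13_000_000,
    "Results waiting for db purging":           1_000_000,
    "Workunits waiting for assimilation":       3_000_000,  # workunit row, NOT a result row
}

# Only the four result-state fields belong in the result-table total;
# adding workunit/file fields double-counts the same results.
result_fields = [
    "Results ready to send",
    "Results out in the field",
    "Results returned and awaiting validation",
    "Results waiting for db purging",
]
result_table_size = sum(ssp[f] for f in result_fields)
print(f"result table ~ {result_table_size:,} rows")
```

With these placeholders the result fields alone sum to 20 million, while naively summing every big number on the page would produce an inflated hybrid figure like the 22,986,785 in the earlier post.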
BetelgeuseFive Send message Joined: 6 Jul 99 Posts: 158 Credit: 17,117,787 RAC: 19 |
Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ... https://setiathome.berkeley.edu/workunit.php?wuid=3871356807 Both computers that have this task marked as valid returned an overflow (and both these hosts return lots of invalids). Both computers that have this task marked as invalid did NOT return an overflow (and both these hosts have no other invalids). Shouldn't there be some kind of mechanism to prevent this (when at least one host did not return an overflow, try more hosts)? Tom |
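The safeguard Tom suggests can be sketched as a validation rule. To be clear, this is not the actual SETI@home validator logic, just a minimal model of the proposed behaviour: when a quorum disagrees about overflow, issue another replica instead of letting the overflow side win by weight of numbers.

```python
# Toy model of the proposed rule. Each result is a dict like
# {'host': id, 'overflow': bool}; names are assumptions for this sketch.
def decide(results):
    overflow = [r for r in results if r["overflow"]]
    clean = [r for r in results if not r["overflow"]]
    if overflow and clean:
        # Mixed quorum: don't trust an overflow majority - get more data.
        return "inconclusive: issue another replica"
    winners = overflow or clean
    return f"valid: {len(winners)} agreeing result(s)"

print(decide([{"host": 1, "overflow": True},
              {"host": 2, "overflow": True},
              {"host": 3, "overflow": False}]))
```

Under this rule the workunit linked above would have stayed inconclusive, since two overflow results and two clean results disagree on the fundamental character of the signal.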
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ... . . The two hosts with lots of invalids have NAVI 5700 GPUs, so there are still some out there not upgrading their drivers to fix this problem. Stephen :( |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.