Message boards : Number crunching : The Server Issues / Outages Thread - Panic Mode On! (118)
Oddbjornik · Joined: 15 May 99 · Posts: 220 · Credit: 349,610,548 · RAC: 1,728
Validation and assimilation backlogs are approaching old heights. It's just a question of time before we're stuck again.
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530
> Validation and assimilation backlogs are approaching old heights. It's just a question of time before we're stuck again.

The total result count is over 19.5 million and rising. Somewhere beyond 20 million it no longer fits in RAM and database performance goes through the floor.

What's weird is that on the SSP only about 1% of all returned results are in the 'waiting for db purging' state, but of my own returned results on the website 75% are in the 'valid' state. I guess the SSP counts the results associated with workunits waiting for assimilation as 'waiting for validation', while the website counts them as 'valid'. If I estimate the number of those results from the number of workunits waiting for assimilation and move it from 'waiting for validation' to 'waiting for db purging', then 66% of all returned results are there, which is a much better match to the 75% fraction within my results.

Workunits waiting for assimilation have grown by 600 000 since the downtime. Fixing whatever is causing that should be a very high priority. We can't blame the blc35 overflow storm any more.
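For illustration, a minimal sketch of the arithmetic described above, with made-up placeholder numbers standing in for the real SSP fields (the actual values come off the status page):

```python
# Sketch of the estimate described above. All numbers are hypothetical
# placeholders, not the real SSP figures.
results_total      = 19_500_000  # total rows in the result table (assumed)
purging_on_ssp     = 200_000     # 'Results waiting for db purging' (~1%, assumed)
wus_awaiting_assim = 6_400_000   # 'Workunits waiting for assimilation' (assumed)
results_per_wu     = 2           # quorum size; real workunits may carry more results

# The SSP counts results whose workunit is still waiting for assimilation as
# 'waiting for validation', while the website already shows them as 'valid'.
hidden_valid = wus_awaiting_assim * results_per_wu

effectively_done = purging_on_ssp + hidden_valid
print(f"effectively done: {effectively_done / results_total:.0%} of all results")
```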
Joined: 19 Mar 05 · Posts: 551 · Credit: 4,673,015 · RAC: 0
I haven't noticed anyone comment on this yet, but the reason for the growing assimilation number is quite a simple one: they have fewer spindles on the storage drive. Fewer spindles means lower read and write rates.

Do you Good Search for Seti@Home? http://www.goodsearch.com/?charityid=888957 Or Good Shop? http://www.goodshop.com/?charityid=888957
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530
> I haven't noticed anyone comment on this yet, but the reason for the growing assimilation number is quite a simple one: they have fewer spindles on the storage drive. Fewer spindles means lower read and write rates.

Not really, because the new spindles read or write many times more bytes per rotation. But it does affect the performance of multiple simultaneous reads or writes: with fewer spindles there's a lower chance of the simultaneous operations landing on different spindles.
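A back-of-the-envelope way to see both effects, using invented drive figures (none of these numbers describe the project's actual hardware):

```python
# Hypothetical arrays: fewer, denser spindles can keep up on sequential
# throughput while offering fewer independent heads for concurrent random I/O.
arrays = {
    "old array": {"spindles": 24, "seq_mb_s": 120, "iops": 150},  # assumed figures
    "new array": {"spindles": 8,  "seq_mb_s": 360, "iops": 170},  # assumed figures
}

for name, a in arrays.items():
    seq_total = a["spindles"] * a["seq_mb_s"]   # aggregate sequential MB/s
    iops_total = a["spindles"] * a["iops"]      # aggregate random IOPS
    print(f"{name}: ~{seq_total} MB/s sequential, ~{iops_total} random IOPS")
```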
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
We don't even have confirmation that the new database system has been bought/built/implemented yet.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13913 · Credit: 208,696,464 · RAC: 304
> We don't even have confirmation that the new database system has been bought/built/implemented yet.

I would expect the system to be down for a day or more when it comes time to get the new NAS going: first the normal weekly outage to compact & tidy up the database, then the time it takes to transfer it all across, then getting the new hardware and the transferred database recognised by the rest of the system. I seem to recall a full database transfer taking much longer than expected once upon a time in the distant past.

Grant
Darwin NT
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530
> I would expect the system to be down for a day or more when it comes time to get the new NAS going: first the normal weekly outage to compact & tidy up the database, then the time it takes to transfer it all across, then getting the new hardware and the transferred database recognised by the rest of the system.

They have the replica db, which they can copy to the new NAS without impacting the running system. Then they can make the db on the new NAS the replica db and let the replication process bring it up to date. After that, the only thing they need to do during the downtime is swap the roles of the databases, so it won't necessarily have any impact on the length of the downtime. We had a period a week ago or so where the replica db was offline and the website was using the master db directly. Perhaps they were doing just this.
rob smith · Joined: 7 Mar 03 · Posts: 22737 · Credit: 416,307,556 · RAC: 380
And then of course there is getting things like the purchasing done (even for fully pre-funded equipment) within a university - that can be a very fraught and time-consuming activity :-(

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530
Looks like the splitter throttling is much more effective now that the overflow storm is over. The result table has now grown to 20 million and the splitters are being throttled, but when they stop, the table drops under 20 mil almost immediately, so the splitters spend only short periods stopped, making this almost unnoticeable. During the overflow storm the validators kept adding a lot of resends to the result table, so the table kept growing fast despite the splitters not splitting anything.
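A minimal sketch of that throttling behaviour; this is not the project's actual splitter code, just an illustration of a simple threshold pause with an assumed 20 million row limit:

```python
# Illustrative threshold throttle: pause the splitters whenever the result
# table is at or above the assumed limit. Not real server code.
RESULT_TABLE_LIMIT = 20_000_000  # assumed threshold

def splitters_may_run(result_table_rows: int) -> bool:
    """Splitters run only while the result table is below the limit."""
    return result_table_rows < RESULT_TABLE_LIMIT

# Example: a quick check against a few hypothetical table sizes.
for rows in (19_950_000, 20_012_235, 19_990_000):
    print(rows, "->", "splitting" if splitters_may_run(rows) else "paused")
```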
Speedy · Joined: 26 Jun 04 · Posts: 1646 · Credit: 12,921,799 · RAC: 89
I just did a quick add-up of the big numbers on the service status page. It seems the database can handle over 20 million comfortably; when I added up the numbers this is what I got: 22,986,785. Splitter rate is over 67 a second.
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530
> I just did a quick add-up of the big numbers on the service status page. It seems the database can handle over 20 million comfortably; when I added up the numbers this is what I got: 22,986,785.

The highest number the SSP has had within the last day or so was 20,012,235, and it spends most of its time below 20 mil, only making brief excursions above it. I guess you are mixing some non-result fields into your count, getting a weird hybrid number that doesn't match the size of any table. That 20 mil is the size of the result table. You get it by summing up all the result fields: 'Results ready to send', 'Results out in the field', 'Results returned and awaiting validation' and 'Results waiting for db purging'. If you add the workunit and file fields, then you will count some results up to four times. And you can't really count the size of the workunit table, because the SSP only shows a subset of them.
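In other words, the tally looks roughly like this (placeholder values in place of the live SSP numbers):

```python
# Sum only the four result-state fields; adding workunit or file fields would
# count the same rows again. The values below are placeholders, not live data.
ssp_result_fields = {
    "Results ready to send":                      600_000,
    "Results out in the field":                 5_400_000,
    "Results returned and awaiting validation": 13_800_000,
    "Results waiting for db purging":             200_000,
}
print("result table size:", sum(ssp_result_fields.values()))  # ~20 million
```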
BetelgeuseFive · Joined: 6 Jul 99 · Posts: 158 · Credit: 17,117,787 · RAC: 19
Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ...

https://setiathome.berkeley.edu/workunit.php?wuid=3871356807

Both computers that have this task marked as valid returned an overflow (and both these hosts return lots of invalids). Both computers that have this task marked as invalid did NOT return an overflow (and both of these hosts have no other invalids). Shouldn't there be some kind of mechanism to prevent this (when at least one host did not return an overflow, try more hosts)?

Tom
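The safeguard Tom suggests could look roughly like this; purely a hypothetical sketch, not current server logic:

```python
# Hypothetical check: if the returned results disagree on whether the task
# overflowed, ask for another replication instead of validating as-is.
def needs_extra_replication(results: list[dict]) -> bool:
    """results: one dict per returned result, each with an 'overflow' flag."""
    overflow_flags = {bool(r["overflow"]) for r in results}
    return len(overflow_flags) > 1  # mixed overflow/non-overflow -> resend

# Example matching the workunit above: two overflow and two non-overflow results.
print(needs_extra_replication([{"overflow": True}, {"overflow": True},
                               {"overflow": False}, {"overflow": False}]))  # True
```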
Stephen "Heretic" ![]() ![]() ![]() ![]() Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 ![]() ![]() |
> Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ...

. . The two hosts with lots of invalids have NAVI 5700 GPUs, so there are still some out there not upgrading their drivers to fix this problem.

Stephen :(
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530
> Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ...

It did just that. Twice! But the initial hosts were both bad hosts and returned bad results that matched each other better than the two good results matched each other, convincing the validator that the bad results were more reliable.
rob smith · Joined: 7 Mar 03 · Posts: 22737 · Credit: 416,307,556 · RAC: 380
I've said this before, but I'll say it again. It is about time "invalid" tasks were treated in much the same way as "error" tasks: ignore the odd one, but if a computer is returning loads then its allowance gets progressively cut until the cycle is broken.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530
> I've said this before, but I'll say it again.

There's even more reason to do that with invalids than with errors! Errors can never result in bad data going into the science database; results that should be invalid could end up as false positives and pollute the science data.

I also think that validators should trust results from hosts that produce a high percentage of invalids less than results from hosts that produce almost no invalids. A result should be considered valid only when at least one of the pair of matching results is from a 'good' host; if such a match is not found, the task should keep being resent until one can be found. Even better would be if the scheduler could filter what it sends to each host and make sure no more than one 'bad' host is ever included in the replication of one workunit.

Also, when a host has produced so many invalids that it gets classified as a 'bad' one, a message should appear in the 'messages' tab of boincmgr stating this fact and asking the user to fix their host. This good/bad status should be considered separately for each application. If the host is not an anonymous platform with just one app for the particular processing unit, then the server could also reduce the amount of work it sends to that particular app on that host and use other apps instead. But the amount should not be reduced to zero, because then the host could never clear the bad status.
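A rough sketch of the rule being proposed here (to be clear, this is the suggestion above, not how the BOINC validator currently works; the threshold and counts are arbitrary assumptions):

```python
# Proposed rule, sketched: accept a matching pair only if at least one of the
# two results comes from a 'good' host. Threshold and counts are assumptions.
INVALID_RATE_LIMIT = 0.05  # assumed cutoff for calling a host 'bad'

def host_is_good(valid_count: int, invalid_count: int) -> bool:
    total = valid_count + invalid_count
    return total == 0 or invalid_count / total < INVALID_RATE_LIMIT

def accept_matching_pair(host_a: tuple[int, int], host_b: tuple[int, int]) -> bool:
    """Each host is (valid_count, invalid_count); require at least one 'good' host."""
    return host_is_good(*host_a) or host_is_good(*host_b)

# Two high-invalid hosts agreeing would no longer be enough:
print(accept_matching_pair((50, 40), (30, 25)))   # False -> keep resending
print(accept_matching_pair((500, 2), (30, 25)))   # True  -> pair accepted
```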
Joined: 19 May 99 · Posts: 766 · Credit: 354,398,348 · RAC: 11,693
+1
Stephen "Heretic" ![]() ![]() ![]() ![]() Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 ![]() ![]() |
> +1

+1

. . Zero invalids should be the target ...

Stephen . .
W-K 666 · Joined: 18 May 99 · Posts: 19589 · Credit: 40,757,560 · RAC: 67
> Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ...

I warned of that in https://setiathome.berkeley.edu/forum_thread.php?id=84983&postid=2027128#2027128, after I got an invalid against two bad ATI hosts, which I had observed in https://setiathome.berkeley.edu/forum_thread.php?id=84508&postid=2026843#2026843
Speedy · Joined: 26 Jun 04 · Posts: 1646 · Credit: 12,921,799 · RAC: 89
> > I just did a quick add-up of the big numbers on the service status page. It seems the database can handle over 20 million comfortably; when I added up the numbers this is what I got: 22,986,785.
>
> The highest number the SSP has had within the last day or so was 20,012,235, and it spends most of its time below 20 mil, only making brief excursions above it. I guess you are mixing some non-result fields into your count, getting a weird hybrid number that doesn't match the size of any table.

Thanks for the information