The Server Issues / Outages Thread - Panic Mode On! (118)

Message boards : Number crunching : The Server Issues / Outages Thread - Panic Mode On! (118)

Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2031212 - Posted: 7 Feb 2020, 14:58:49 UTC - in response to Message 2031205.  

Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ...
Shouldn't there be some kind of mechanism to prevent this (when at least one host did not return an overflow, try more hosts)?
It did just that. Twice!
But the initial hosts were both bad hosts and returned bad results that matched each other better than the two good results matched each other, convincing the validator that the bad results were the more reliable ones.
ID: 2031212
rob smith Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22221
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2031250 - Posted: 7 Feb 2020, 19:16:44 UTC

I've said this before, but I'll say it again.
It is about time "invalid" tasks were treated in much the same way as "error" tasks.
Ignore the odd one, but if a computer is returning loads of them then it gets its allowance progressively cut until the cycle is broken.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 2031250
Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2031256 - Posted: 7 Feb 2020, 20:05:33 UTC - in response to Message 2031250.  
Last modified: 7 Feb 2020, 20:12:24 UTC

I've said this before, but I'll say it again.
It is about time "invalid" tasks were treated in much the same way as "error" tasks.
Ignore the odd one, but if a computer is returning loads of them then it gets its allowance progressively cut until the cycle is broken.
There's even more reason to do that with invalids than with errors! Errors can never result in bad data going into the science database. Results that should have been marked invalid could end up as false positives and pollute the science data.

I also think that validators should trust results from hosts that produce a high percentage of invalids less than results from hosts that produce almost no invalids. A result should be considered valid only when at least one of the pair of matching results is from a 'good' host; if such a match is not found, the task should be resent until one can be found. Even better would be if the scheduler could filter what it sends to each host and make sure that no more than one 'bad' host is ever included in the replication of one workunit.
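
A rough sketch of that pairing rule (the threshold, class and function names are all made up; this is not actual BOINC validator code):

    from dataclasses import dataclass

    BAD_HOST_THRESHOLD = 0.05  # made-up cut-off: treat a host above 5% recent invalids as 'bad'

    @dataclass
    class Result:
        host_id: int
        host_invalid_rate: float  # recent invalid ratio the server would have to track per host

    def is_bad_host(r: Result) -> bool:
        return r.host_invalid_rate > BAD_HOST_THRESHOLD

    def quorum_acceptable(a: Result, b: Result) -> bool:
        # Accept a matching pair only if at least one result comes from a 'good' host.
        # Two 'bad' hosts agreeing with each other would trigger another replication instead.
        return not (is_bad_host(a) and is_bad_host(b))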

Also, when a host has produced so many invalids that it gets classified as a 'bad' one, a message should appear in the 'messages' tab of BOINC Manager stating this fact and asking the user to fix their host.

This good/bad status should be tracked separately for each application. If the host is not an anonymous platform with just one app for the particular processing unit, the server could also reduce the amount of work it sends to that particular app on that host and use other apps instead. But the amount should not be reduced to zero, because then the host could never clear its bad status.
ID: 2031256
Profile Freewill Project Donor
Joined: 19 May 99
Posts: 766
Credit: 354,398,348
RAC: 11,693
United States
Message 2031259 - Posted: 7 Feb 2020, 20:17:09 UTC - in response to Message 2031256.  

+1
May not be easy to implement, but makes sense! I agree with Ville Saari. Errors can happen for many reasons, including me making a bad edit in an xml file :) but Invalids need to be driven to zero.
ID: 2031259
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2031270 - Posted: 7 Feb 2020, 20:59:54 UTC - in response to Message 2031259.  

+1
May not be easy to implement, but makes sense! I agree with Ville Saari. Errors can happen for many reasons, including me making a bad edit in an xml file :) but Invalids need to be driven to zero.


+1

. . Zero invalids should be the target ...

Stephen

. .
ID: 2031270
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19078
Credit: 40,757,560
RAC: 67
United Kingdom
Message 2031272 - Posted: 7 Feb 2020, 21:01:56 UTC - in response to Message 2031205.  

Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ...

https://setiathome.berkeley.edu/workunit.php?wuid=3871356807

Both computers that have this task marked as valid returned an overflow (and both these hosts return lots of invalids).
Both computers that have this task marked as invalid did NOT return an overflow (and both these hosts have no other invalids).

Shouldn't there be some kind of mechanism to prevent this (when at least one host did not return an overflow, try more hosts)?

Tom

I warned of that in https://setiathome.berkeley.edu/forum_thread.php?id=84983&postid=2027128#2027128, after I got an invalid against two bad ATI hosts, which I had observed in https://setiathome.berkeley.edu/forum_thread.php?id=84508&postid=2026843#2026843
ID: 2031272
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 2031275 - Posted: 7 Feb 2020, 21:08:41 UTC - in response to Message 2031192.  
Last modified: 7 Feb 2020, 21:15:59 UTC

I just did a quick add-up of the big numbers on the service status page. It seems the database can handle over 20 million comfortably; when I added up the numbers, this is what I got: 22,986,785
The highest number the SSP has had within the last day or so was 20,012,235, and it spends most of its time below 20 mil, making only brief excursions above it. I guess you are mixing some non-result fields into your count, getting a weird hybrid number that doesn't match the size of any table.

That 20 mil is the size of the result table. You get that by summing up all the result fields: 'Results ready to send', 'Results out in the field', 'Results returned and awaiting validation' and 'Results waiting for db purging'. If you add the workunit and file fields, then you will count some results up to four times. And you can't really count the size of the workunit table because the SSP only shows a subset of them.

Thanks for the information
ID: 2031275
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2031276 - Posted: 7 Feb 2020, 21:12:15 UTC - in response to Message 2031275.  

Your answer is in your quoted message.
You get that by summing up all the result fields: 'Results ready to send', 'Results out in the field', 'Results returned and awaiting validation' and 'Results waiting for db purging'.

Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2031276
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 2031278 - Posted: 7 Feb 2020, 21:15:22 UTC - in response to Message 2031276.  

Your answer is in your quoted message.
You get that by summing up all the result fields: 'Results ready to send', 'Results out in the field', 'Results returned and awaiting validation' and 'Results waiting for db purging'.

So it is. Thanks Keith, I will change my original post.
ID: 2031278
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2031287 - Posted: 7 Feb 2020, 21:41:35 UTC - in response to Message 2031272.  

Hmmm, looks like good tasks are being marked as invalid and bad ones as valid ...

https://setiathome.berkeley.edu/workunit.php?wuid=3871356807

Both computers that have this task marked as valid returned an overflow (and both these hosts return lots of invalids).
Both computers that have this task marked as invalid did NOT return an overflow (and both these hosts have no other invalids).

Shouldn't there be some kind of mechanism to prevent this (when at least one host did not return an overflow, try more hosts)?

Tom

I warned of that in https://setiathome.berkeley.edu/forum_thread.php?id=84983&postid=2027128#2027128, after I got an invalid against two bad ATI hosts, which I had observed in https://setiathome.berkeley.edu/forum_thread.php?id=84508&postid=2026843#2026843


. . The problem with the NAVI AMD cards has been around for a couple of months now and has its own thread.

Stephen

<shrug>
ID: 2031287
rob smith Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22221
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2031289 - Posted: 7 Feb 2020, 21:49:21 UTC - in response to Message 2031270.  

Actually the pure random invalid can be caused by an "event" on a computer that has a very good record, so while 0% is the goal there will always be the odd event that trips one up.
Systematic invalids (which are the ones we are talking about here), where a computer, for whatever reason, is just chucking out garbage by the truckload, are certainly a big no-no.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 2031289
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13746
Credit: 208,696,464
RAC: 304
Australia
Message 2031292 - Posted: 7 Feb 2020, 22:02:42 UTC - in response to Message 2031250.  

I've said this before, but I'll say it again.
It is about time "invalid" tasks were treated in much the same way as "error" tasks.
Ignore the odd one, but if a computer is returning loads of them then it gets its allowance progressively cut until the cycle is broken.
Invalids as a percentage of Pendings?
0.5% or higher gets you sin binned.
Grant
Darwin NT
ID: 2031292
Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2031296 - Posted: 7 Feb 2020, 22:18:53 UTC - in response to Message 2031289.  

Actually the pure random invalid can be caused by an "event" on a computer that has a very good record, so while 0% is the goal there will always be the odd event that trips one up.
That 'event' is something that really should happen less than once in the lifetime of a computer. A randomly flipping bit can cause a computer to crash. If a computer crashes spontaneously without a software bug, that could be tolerated once, but if it happens again, there is clearly something wrong with the hardware.

Probably the most common cause of those 'events' is that the CPU or GPU has been overclocked too far. For CPUs this rarely happens without the user being at fault, but graphics card manufacturers sometimes go too far when competing with other manufacturers producing cards with the same GPU chip, so your graphics card can be unstable out of the box.
ID: 2031296
Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2031303 - Posted: 7 Feb 2020, 23:01:09 UTC - in response to Message 2031292.  

Invalids as a percentage of Pendings?
That's a bad metric because invalids spend 24 hours in the database but pendings spend a quite variable time there, so the ratio can vary a lot without the actual percentage of invalids returned varying.

A good metric would be the recent average ratio of invalids to all validated tasks. Choose a constant x that is a small positive number (a lot smaller than 1), then for each validated task add x to a per-host variable if the task was invalid (add nothing if it was valid), and 'decay' the variable between tasks by multiplying it by 1-x. Over time the variable approaches the host's ratio of invalids to all tasks: if the host produces 1% invalids, the value will stabilize around 0.01. The smaller x is, the slower the value changes and the longer the history of tasks that influences it; the weight of each task decreases exponentially with the 'age' of the task.
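
A minimal sketch of that update rule (the constant, the names and the example rate are made up; this is not actual BOINC server code):

    X = 0.01  # decay constant; smaller means a longer effective history

    def update_invalid_rate(current: float, task_was_invalid: bool, x: float = X) -> float:
        # 'Decay' the stored value, then add x if this validated task was invalid.
        new_value = current * (1.0 - x)
        if task_was_invalid:
            new_value += x
        return new_value

    # A host returning one invalid in every ten tasks converges towards roughly 0.10.
    rate = 0.0
    for i in range(10_000):
        rate = update_invalid_rate(rate, task_was_invalid=(i % 10 == 0))
    print(round(rate, 2))  # ~0.1

With x = 0.01 the most recent hundred or so tasks dominate the value, so a repaired host would shed a 'bad' flag after a few hundred good results.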
ID: 2031303
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13746
Credit: 208,696,464
RAC: 304
Australia
Message 2031305 - Posted: 7 Feb 2020, 23:09:56 UTC - in response to Message 2031303.  
Last modified: 7 Feb 2020, 23:10:51 UTC

Invalids as a percentage of Pendings?
That's a bad metric because invalids spend 24 hours in the database but pendings spend a quite variable time there, so the ratio can vary a lot without the actual percentage of invalids returned varying.

A good metric would be the recent average ratio of invalids to all validated tasks. Choose a constant x that is a small positive number (a lot smaller than 1), then for each validated task add x to a per-host variable if the task was invalid (add nothing if it was valid), and 'decay' the variable between tasks by multiplying it by 1-x. Over time the variable approaches the host's ratio of invalids to all tasks: if the host produces 1% invalids, the value will stabilize around 0.01. The smaller x is, the slower the value changes and the longer the history of tasks that influences it; the weight of each task decreases exponentially with the 'age' of the task.
Actually the Pendings number is generally less variable than the Valids number, and it's a good indicator of the amount of work the system is actually processing.
It's also what is used when developing applications with the goal of Inconclusives being 5% or less of the Pending value.
Having some sort of weighting/time factor may be of use, but would add to the complexity. I'd see how a basic percentage goes at first, and tweak it from there if necessary.
Grant
Darwin NT
ID: 2031305
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22221
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2031306 - Posted: 7 Feb 2020, 23:10:06 UTC

RAC is not a good metric to use for any purpose; it is far too variable.

Far better to keep to the very simple technique that is used for error tasks - let the first couple go (in a defined period - 24 hours, I think), then reduce the number of tasks permitted for every error task returned until the computer is down to a very low number of allowed tasks (1 per day, from memory). Recover slowly, at something like half the decay rate. This is very simple to add to the server code, as there are already a couple of error types counted, so just add invalid to the list. Not having the server code to hand just now I can't recall what the decrementer is, but I think it is something like two or three per error over the allowance. I can check in the morning.
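
Roughly the shape of it, with made-up numbers - just a sketch of the idea, not the actual server code:

    MAX_QUOTA = 100       # normal daily task allowance per host (made-up figure)
    MIN_QUOTA = 1         # the floor the allowance never drops below
    FREE_ERRORS = 2       # errors (or invalids) tolerated per day before throttling starts
    CUT_PER_BAD = 2       # allowance cut for each bad task over the free allowance
    RECOVER_PER_GOOD = 1  # recover at roughly half the cut rate

    def update_daily_quota(quota: int, bad_today: int, task_ok: bool) -> int:
        # Cut the allowance for every bad task beyond the free allowance,
        # and let it creep back up slowly for every good task returned.
        if task_ok:
            return min(MAX_QUOTA, quota + RECOVER_PER_GOOD)
        if bad_today > FREE_ERRORS:
            return max(MIN_QUOTA, quota - CUT_PER_BAD)
        return quota
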
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 2031306
Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2031309 - Posted: 7 Feb 2020, 23:41:12 UTC - in response to Message 2031306.  
Last modified: 8 Feb 2020, 0:02:59 UTC

RAC is not a good metric to use for any purpose; it is far too variable.
RAC is variable because of CreditScrew. This recent average invalid ratio would vary only if the actual ratio of invalids varies, because each invalid would have exactly the same score.

An exponentially decaying average is a good way to calculate stuff like this because you need only one stored number and one multiply-add per operation. A regular moving average of the most recent n tasks would need an array of size n to keep track of the tasks falling out of the window, and this would fatten the database a lot.

Far better to keep to the very simple technique that is used for error tasks - let the first couple go (in a defined period - 24 hours, I think), then reduce the number of tasks permitted for every error task returned until the computer is down to a very low number of allowed tasks (1 per day, from memory).
This wouldn't work for invalids. Error throttling is intended to limit the server load caused by a broken host that errors out every task. A few - or even a few hundred - errors are not an issue, but a host that immediately errors out every task would ask for a full cache of tasks on every scheduler contact and return them all in the next contact, causing a very high server load. But even the few invalids per day that this system would allow without any consequences could be a significant percentage of all tasks for a slow host. The CPU of my slower host crunches about three AstroPulses per day; one invalid per day would be a 33% invalid ratio!

And we don't want to throttle a host returning a lot of invalids - we want to flag it as an unreliable host that should not be validated against another flagged host.

An exponentially decaying average would be a more server-friendly way to do the error throttling too, because then you wouldn't need the daily database sweep to reset the error counts.
ID: 2031309
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13746
Credit: 208,696,464
RAC: 304
Australia
Message 2031310 - Posted: 7 Feb 2020, 23:47:52 UTC - in response to Message 2031309.  

RAC is variable because of CreditScrew.
Even without Credit Screw it is variable due to the different WUs - MB & AP & GBT & Arecibo - along with the different angle ranges resulting in differing processing times. Even with the excellent Credit system prior to Credit New (actual FLOP counting), RAC still varied due to this, even with the aid of some tweaking that accounted for the differing processing times of some similar AR WUs.
But of course Credit New does take the variability to a whole new, somewhat extreme, level.
Grant
Darwin NT
ID: 2031310
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2031314 - Posted: 8 Feb 2020, 0:17:18 UTC - in response to Message 2031305.  

Invalids as a percentage of Pendings?
That's a bad metric because invalids spend 24 hours in the database but pendings spend a quite variable time there, so the ratio can vary a lot without the actual percentage of invalids returned varying.

A good metric would be the recent average ratio of invalids to all validated tasks. Choose a constant x that is a small positive number (a lot smaller than 1), then for each validated task add x to a per-host variable if the task was invalid (add nothing if it was valid), and 'decay' the variable between tasks by multiplying it by 1-x. Over time the variable approaches the host's ratio of invalids to all tasks: if the host produces 1% invalids, the value will stabilize around 0.01. The smaller x is, the slower the value changes and the longer the history of tasks that influences it; the weight of each task decreases exponentially with the 'age' of the task.
Actually the Pendings number is generally less variable than the Valids number, and it's a good indicator of the amount of work the system is actually processing.
It's also what is used when developing applications with the goal of Inconclusives being 5% or less of the Pending value.
Having some sort of weighting/time factor may be of use, but would add to the complexity. I'd see how a basic percentage goes at first, and tweak it from there if necessary.


. . Sorry Grant but you are looking at the wrong set of numbers. Pendings have NOT been through the validation process and are irrelevant. The set of validation-processed numbers comprises 'valids', 'invalids' and 'inconclusives', and the only significant ratio is of one of those subsets to the overall set: 100*Xx/(valids + invalids + inconclusives), where Xx is one of the subsets. Also, time is a factor: when things are running right the valids are only shown on the system for approx 24 hours, so when dealing with the inconclusives only those that occurred in the same 24-hour period as the valids can be treated as significant. So for invalids it is 100*invalids(last 24 hours)/(valids + inconclusives(last 24 hours) + invalids(last 24 hours)).
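
. . For example, with made-up counts just to show the arithmetic:

    valids_24h = 950
    inconclusives_24h = 40
    invalids_24h = 10

    invalid_pct = 100 * invalids_24h / (valids_24h + inconclusives_24h + invalids_24h)
    print(f"{invalid_pct:.1f}%")  # 1.0%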

Stephen

:(
ID: 2031314
Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2031315 - Posted: 8 Feb 2020, 0:23:19 UTC - in response to Message 2031310.  

Even without Credit Screw it is variable due to the different WUs - MB & AP & GBT & Arecibo - along with the different angle ranges resulting in differing processing times. Even with the excellent Credit system prior to Credit New (actual FLOP counting), RAC still varied due to this,
Ideal FLOP counting would give you very similar credit per crunching time for different tasks, because most of the difference in the time needed to crunch them is caused by the different amount of FLOPs needed. But FLOP counting is a very imprecise art when you can't rely on every CPU and GPU used having hardware support for counting them; FLOP guessing would be a more appropriate term than FLOP counting. Also, better optimized clients could use fewer FLOPs for the same task, so actual FLOP counting would penalize them unfairly.

But invalid task counting can be done exactly.
ID: 2031315