Message boards :
Number crunching :
The Server Issues / Outages Thread - Panic Mode On! (118)
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13874 · Credit: 208,696,464 · RAC: 304

> Sorry Grant but you are looking at the wrong set of numbers. Pendings have NOT been through the validation process and are irrelevant.

Actually that is what makes them relevant. What matters is what percentage of the WUs being processed are Invalid/Inconclusive/Errors. It's not about absolute numbers, but the percentage of crud out of all the work done.

> The only significant ratio is of one of those subsets to the overall set: 100*Xx/(valids+invalids+inconclusives), where Xx is one of the subsets.

Why make something more complicated than it needs to be? As I pointed out above, what matters is how many of the WUs a system processes are Errors, how many are Inconclusive, and how many are Invalid out of its total output. Credit New is an example of a system that is way more complicated than it needs to be. There is no need to make this an even bigger mess when a simple system will get a good result.

> So for invalids it is 100*"invalids in the last 24 hours"/(valids + inconclusives "24 hours" + invalids "24 hours").

Making the simple unnecessarily complicated. Validation Inconclusive/Validation Pending*100 = Inconclusives as a percentage. Both simple and accurate. You can sample those numbers every 10 min and work out the result over a 1 day, 5 day, 7 day or one month period, but the single value will be pretty close to all of those other results if things are relatively steady. All the other samples will just let you see whether things are getting worse or better, which you can do anyway by comparing the result at any time you choose to do the calculation. No need to add complication to something when the result is no better than the simpler method.

Grant
Darwin NT
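A minimal sketch of the two "simple" calculations described above, assuming you take a snapshot of the four per-host counts shown on the results pages; the count values and function names are illustrative, not anything from BOINC itself:

```python
# Hypothetical snapshot of the per-host counts shown on the results pages.
counts = {
    "valid": 1450,
    "invalid": 12,
    "inconclusive": 284,
    "pending": 1730,
}

def subset_percentage(counts, subset):
    """One subset (invalid, inconclusive, ...) as a percentage of all work
    done: valids + invalids + inconclusives."""
    total = counts["valid"] + counts["invalid"] + counts["inconclusive"]
    return 100.0 * counts[subset] / total if total else 0.0

def inconclusive_of_pending(counts):
    """The other quick check mentioned above: Validation Inconclusive as a
    percentage of Validation Pending."""
    return 100.0 * counts["inconclusive"] / counts["pending"] if counts["pending"] else 0.0

print(f"invalid %:              {subset_percentage(counts, 'invalid'):.2f}")
print(f"inconclusive %:         {subset_percentage(counts, 'inconclusive'):.2f}")
print(f"inconclusive/pending %: {inconclusive_of_pending(counts):.2f}")
```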
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530

> The only significant ratio is of one of those subsets to the overall set: 100*Xx/(valids+invalids+inconclusives), where Xx is one of the subsets.

You shouldn't count inconclusives, because then you would count them twice as they will eventually become valids or invalids. Inconclusives are equivalent to pendings and should stay out of this. But I was suggesting counting the actual validated tasks: each time a task transitions to the valid or invalid state, the exponentially decaying average would be updated.
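A minimal sketch of the kind of per-host update being described here; the decay constant and field names are assumptions for illustration, not anything that exists in the BOINC server code:

```python
DECAY = 0.01  # weight of each new validation; roughly the last 1/DECAY tasks dominate

class HostStats:
    def __init__(self):
        self.invalid_rate = 0.0  # exponentially decaying fraction of invalid results

    def record_validation(self, is_invalid: bool):
        """Update the running average each time one of this host's results
        transitions to valid or invalid; pendings/inconclusives never touch it."""
        sample = 1.0 if is_invalid else 0.0
        self.invalid_rate += DECAY * (sample - self.invalid_rate)

host = HostStats()
for outcome in [False] * 95 + [True] * 5:  # 95 valids, 5 invalids
    host.record_validation(outcome)
print(f"decayed invalid rate: {host.invalid_rate:.3f}")
```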
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13874 · Credit: 208,696,464 · RAC: 304

> But FLOP counting is a very imprecise art when you can't rely on every CPU and GPU used having hardware support for counting them.

Which is one of the arguments for why Credit New was introduced. The FLOP counting gave consistent results.

> FLOP guessing would be a more appropriate term than FLOP counting.

That was the system before the FLOP counting, and it was even messier than Credit New. Then when Credit New came it went back to FLOP guessing, with all sorts of additional massaging of the numbers, hence the mess we have now.

> Also better optimized clients could use fewer FLOPs for the same task, so actual FLOP counting would penalize them unfairly.

That was the problem with the original FLOPS guessing system (and one of the problems with the present Credit New system). But it didn't occur with the FLOPS counting system, because the FLOP counter was independent of the application that processed the WU - so a highly optimised application that didn't have to do as many operations as a poorly optimised application still claimed similar Credit, even though their processing times differed hugely.

Grant
Darwin NT
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13874 · Credit: 208,696,464 · RAC: 304

And everyone seems to have gotten way, way, way off topic yet again.

Grant
Darwin NT
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530

You are demonstrating that you don't understand what those different states mean.

> > Sorry Grant but you are looking at the wrong set of numbers. Pendings have NOT been through the validation process and are irrelevant.
> Actually that is what makes them relevant. What matters is what percentage of the WUs being processed are Invalid/Inconclusive/Errors. It's not about absolute numbers, but the percentage of crud out of all the work done.

Pendings and inconclusives spend different times in the database than valids and invalids. And they both will eventually become valids or invalids. When you compare the numbers of currently existing results in two sets that spend different times in the database, you get meaningless garbage. And when you count tasks on both sides of the validation process, you'll include the same task twice in your counts.

The only meaningful ratio you can derive from the data displayed on the web site is the ratio between invalids and valids. They are the only two states that are mutually exclusive (the same task can't be in both) and spend the same time span in the database, and are thus comparable. But the ratio of the counts of the existing results is way too coarse to determine the invalid ratios of the hosts with sufficient accuracy to flag them. If my slower host's CPU produced 1% invalids, you would see an invalid on the web page during about one day each month, as it is doing AstroPulses exclusively and poops out about three results per day.
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

I want to comment on the fact that the percentage of Inconclusives has shot WAY up since all the issues with the validators and bad hosts. Now even if two hosts get the EXACT same results for an early overflow, it needs to go out to a third wingman. That alone has inflated the percentage of Inconclusives. I normally would be sitting on around 2.9% Inconclusives on every host of mine. Now I'm looking at 19.6% Inconclusives.

Case in point: https://setiathome.berkeley.edu/workunit.php?wuid=3873231309

Host 8030022
SETI@Home Informational message -9 result_overflow
NOTE: The number of results detected equals the storage space allocated.
Best spike: peak=26.3049, time=57.04, d_freq=1419564949.49, chirp=10.1, fft_len=64k
Best autocorr: peak=18.55554, time=6.711, delay=4.3744, d_freq=1419560425.62, chirp=-18.068, fft_len=128k
Best gaussian: peak=6.091602, mean=0.6524873, ChiSq=1.386699, time=46.14, d_freq=1419558524.42, score=-1.352335, null_hyp=2.124649, chirp=-35.051, fft_len=16k
Best pulse: peak=6.526803, time=9.463, period=1.346, d_freq=1419563648.09, score=0.9578, chirp=59.644, fft_len=512
Best triplet: peak=0, time=-2.124e+11, period=0, d_freq=0, chirp=0, fft_len=0
Spike count: 28, Autocorr count: 2, Pulse count: 0, Triplet count: 0, Gaussian count: 0

Host 8826748
SETI@Home Informational message -9 result_overflow
NOTE: The number of results detected equals the storage space allocated.
Best spike: peak=26.3049, time=57.04, d_freq=1419564949.49, chirp=10.1, fft_len=64k
Best autocorr: peak=18.55554, time=6.711, delay=4.3744, d_freq=1419560425.62, chirp=-18.068, fft_len=128k
Best gaussian: peak=6.091602, mean=0.6524873, ChiSq=1.386699, time=46.14, d_freq=1419558524.42, score=-1.352335, null_hyp=2.124649, chirp=-35.051, fft_len=16k
Best pulse: peak=6.526803, time=9.463, period=1.346, d_freq=1419563648.09, score=0.9578, chirp=59.644, fft_len=512
Best triplet: peak=0, time=-2.124e+11, period=0, d_freq=0, chirp=0, fft_len=0
Spike count: 28, Autocorr count: 2, Pulse count: 0, Triplet count: 0, Gaussian count: 0

Identical results down to the umpteenth decimal place for all spikes, autocorrs, gaussians, pulses and triplets. This task should have been validated on the first two results and not gone out to a third wingman. I know it's a necessary evil now with the bad Windows and bad AMD hosts, but even with the current mechanism in place, bad results are still going into the database and invalidating good results from good hosts.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530

> And everyone seems to have gotten way, way, way off topic yet again.

I think this is quite an appropriate thread for speculating about how to fix server issues.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13874 · Credit: 208,696,464 · RAC: 304

> > And everyone seems to have gotten way, way, way off topic yet again.
> I think this is quite an appropriate thread for speculating about how to fix server issues.

So let's leave RAC out of any such discussions then.

Grant
Darwin NT
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530

> I know it's a necessary evil now with the bad Windows and bad AMD hosts, but even with the current mechanism in place, bad results are still going into the database and invalidating good results from good hosts.

And this is why a bad host flagging mechanism is needed to prevent them from ganging up against a good host. But the Setiathome staff can't do this by configuring their servers. This needs new code in Boinc.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13874 · Credit: 208,696,464 · RAC: 304

> That alone has inflated the percentage of Inconclusives. I normally would be sitting on around 2.9% Inconclusives on every host of mine. Now I'm looking at 19.6% Inconclusives.

That's a huge improvement over what it was. I'm down to around 16%; it was up to 50% for a while there, and I saw some systems in the low 60% region.

Grant
Darwin NT
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530

> I'm down to around 16%; it was up to 50% for a while there, and I saw some systems in the low 60% region.

These percentages are again meaningless numbers. There is no way to see from the web page data what the total number of tasks has been in the same timespan those inconclusives cover, so it is impossible to calculate the real percentage.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13874 · Credit: 208,696,464 · RAC: 304

> > I'm down to around 16%; it was up to 50% for a while there, and I saw some systems in the low 60% region.
> These percentages are again meaningless numbers. There is no way to see from the web page data what the total number of tasks has been in the same timespan those inconclusives cover, so it is impossible to calculate the real percentage.

They are very meaningful and give a valid indication of how things stand at that time.

Grant
Darwin NT
bluestar · Joined: 5 Sep 12 · Posts: 7315 · Credit: 2,084,789 · RAC: 3

It could still be only a narrowband signal perhaps meant to be here, and such a thing could be among the results, for only scores. Here noticing the precisely identical results for only such a thing, except not any developer thinking about intended purpose, for also means of detection. For just a -9 result_overflow, the spike count only got to 28 here, not 30, while the other counts were much lower, including Autocorr as well.
W-K 666 · Joined: 18 May 99 · Posts: 19459 · Credit: 40,757,560 · RAC: 67

> > I know it's a necessary evil now with the bad Windows and bad AMD hosts, but even with the current mechanism in place, bad results are still going into the database and invalidating good results from good hosts.
> And this is why a bad host flagging mechanism is needed to prevent them from ganging up against a good host.

When there are problems with hardware or drivers, is it possible for the project to look at two lines on the "Computer Information" page,

CPU type: GenuineIntel Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz [Family 6 Model 158 Stepping 10]
Coprocessors: NVIDIA GeForce RTX 2060 (4095MB) driver: 442.19 OpenCL: 1.2

and, if the information in either matches a list of known problems, stop sending those hosts any tasks - or, in the present case of the driver problem, tasks of a specific Angle Range?
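A minimal sketch of the kind of server-side check being suggested, assuming the scheduler can see those two strings for each host; the list of bad driver versions, the angle-range cutoff and the helper names are illustrative assumptions, not existing BOINC code:

```python
# Hypothetical list of known-problem driver versions (e.g. NVIDIA drivers
# reported to produce bad overflow results on certain tasks).
KNOWN_BAD_DRIVERS = {"440.97", "441.20", "442.19"}

def driver_version(coproc_string: str) -> str:
    """Pull the 'driver: x.y' field out of a Coprocessors line like
    'NVIDIA GeForce RTX 2060 (4095MB) driver: 442.19 OpenCL: 1.2'."""
    tokens = coproc_string.split()
    for i, token in enumerate(tokens):
        if token == "driver:" and i + 1 < len(tokens):
            return tokens[i + 1]
    return ""

def should_withhold_work(coproc_string: str, task_angle_range: float) -> bool:
    """Withhold (some) work from hosts whose driver is on the known-bad list.
    The angle-range cutoff is a made-up placeholder value."""
    if driver_version(coproc_string) in KNOWN_BAD_DRIVERS:
        return task_angle_range < 0.05  # only block the task types known to fail
    return False

print(should_withhold_work(
    "NVIDIA GeForce RTX 2060 (4095MB) driver: 442.19 OpenCL: 1.2", 0.008))  # True
```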
rob smith · Joined: 7 Mar 03 · Posts: 22609 · Credit: 416,307,556 · RAC: 380

You just do not understand that that would work just as well as your suggestion of doing a calculation to see if the invalid rate has exceeded the permitted level. Indeed it would work better, in that it would catch a new computer with a very low RAC almost as soon as it started.

How would your scheme work for a computer with 0 RAC and returning invalids from the start? Answer: IT WOULD FAIL. A computer that never generated a RAC but only returned invalids would, using your scheme, NEVER be trapped, whereas using the real count it would very soon - after the first couple of returned tasks - be against the wall. While BOINC on the computer requests work, the server responds with "Not sending any, too many errors". We do want to stop sending work to that class of computers and get it out of the system.

Your closing comments show that you have a very low understanding of the vagaries of RAC generation.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530

Overflows can have any angle range, and stopping AMD hosts from getting any work would mean we lose all the good results from them too. What they really need is the ability to prevent a bad host from getting a specific task if there already is another bad host among the other hosts that same task has been sent to. Then if an AMD host returned a bad result, its wingman wouldn't be an AMD host and the results would mismatch. Also the tie-breaker host the task is then resent to can't be another AMD host, so they can't gang up and make the good result a minority.

What they did was make any overflow result be automatically sent to a third host. This produces a lot of extra server load in a situation where some file is producing a lot of overflows, and because the scheduler doesn't look at what host it is sent to, nothing prevents two of the three results coming from bad hosts.

This actually made the situation worse. If we assume 10% of the hosts are bad ones, then before they did anything an affected workunit had a 1% chance of both of the initial results being bad, producing a false positive, and a 1.8% chance of one of the initial hosts and the first resend host being bad, producing both a false positive and a false negative. So a 2.8% chance of bad data entering the database and a 1.8% chance of a good host receiving an unfair invalid.

The change to triple validation of overflows changed nothing for the latter case, because those would be resent anyway, but made the first case worse. The bad result will still go into the science database because the two initial bad results have more votes than the third result; if the third result is good, it'll automatically become a false negative. Still 2.8% of the affected workunits enter bad data into the science database, but now 2.7% instead of 1.8% produce an unfair invalid. The change didn't help the original problem at all but made the collateral damage worse - in addition to making all the real overflow results cause 50% more server load.
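A small sketch of the arithmetic behind those percentages, using the same simplifying assumptions as the post above (10% of hosts are bad, and any two bad results always match each other); the numbers are illustrative, not measured:

```python
p_bad = 0.10  # assumed fraction of bad hosts

# Before the change: a workunit goes to two hosts, a mismatch goes to a third.
both_initial_bad = p_bad * p_bad                              # false positive
one_initial_and_resend_bad = 2 * p_bad * (1 - p_bad) * p_bad  # false positive + false negative
print(f"bad data in science db (before):        {both_initial_bad + one_initial_and_resend_bad:.1%}")
print(f"unfair invalid for a good host (before): {one_initial_and_resend_bad:.1%}")

# After the change: every overflow goes to three hosts regardless.
# When both initial results are bad and the third is good, the good host is outvoted.
two_initial_bad_third_good = p_bad * p_bad * (1 - p_bad)
print(f"unfair invalid for a good host (after):  "
      f"{one_initial_and_resend_bad + two_initial_bad_third_good:.1%}")
```

Running this prints 2.8%, 1.8% and 2.7%, matching the figures quoted in the post.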
Ville Saari · Joined: 30 Nov 00 · Posts: 1158 · Credit: 49,177,052 · RAC: 82,530

> Indeed it would work better, in that it would catch a new computer with a very low RAC almost as soon as it started.

Another alternative would be to count the valid percentage instead of the invalid percentage. Just add to the variable for valids instead of invalids. Then a new host/app with a zero value would be assumed bad from the start and would earn its good status over time.

> How would your scheme work for a computer with 0 RAC and returning invalids from the start? Answer: IT WOULD FAIL. A computer that never generated a RAC but only returned invalids would, using your scheme, NEVER be trapped.

RAC has nothing to do with this. If the host produced only invalids, then its invalid score would rise very fast!
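A minimal sketch of this "earn your good status" variant, built on the same assumed decaying-average update as the earlier sketch; the decay constant and trust threshold are made-up values for illustration:

```python
DECAY = 0.01          # same assumed decay constant as the earlier sketch
GOOD_THRESHOLD = 0.9  # made-up cutoff: below this the host is not yet trusted

class HostTrust:
    """Variant where only valid results raise the score, so a brand new host
    (or app version) starts untrusted and earns its good status over time."""
    def __init__(self):
        self.valid_rate = 0.0

    def record_validation(self, is_valid: bool):
        self.valid_rate += DECAY * ((1.0 if is_valid else 0.0) - self.valid_rate)

    def is_trusted(self) -> bool:
        return self.valid_rate >= GOOD_THRESHOLD

host = HostTrust()
print(host.is_trusted())              # False: a new host starts out untrusted
for _ in range(300):
    host.record_validation(True)      # a run of valid results builds trust
print(host.is_trusted())              # True once enough valids have accumulated
```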
rob smith · Joined: 7 Mar 03 · Posts: 22609 · Credit: 416,307,556 · RAC: 380

Now you are heading toward my suggestion, but working on a percentage at very low numbers is not as easy - the key is to keep things simple and on the server. This is for four reasons.

First, if you do it on the host, then every client, on every operating system and every piece of hardware, must have the current version of the BOINC client. Not everyone likes to, or wants to, have the latest version, and in some cases there is nobody left in the BOINC community to develop a new client.

Second, as you know, being open source, people are at liberty to alter the client code, and one would have to have a mechanism to ensure that every single repository had all the appropriate code in it, and that it was not possible to remove or block that part of the code.

Third, it is actually much faster to do an addition to a value stored in a field than to do a multiplication or division - I know many these days don't get into the "joys" of clock-tick counting, but in an application like the one on the SETI server the number of clock ticks is fairly important.

Finally, for now, the "problem" may only apply to a limited number of projects, of which SETI is one, but the BOINC client has to support them all. The server has a common core of functions, and then there are a number of customisable routines around it - error management is one such routine; dangerous as it sounds, there are projects out there who have not implemented any error management!!!!

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
rob smith · Joined: 7 Mar 03 · Posts: 22609 · Credit: 416,307,556 · RAC: 380

> RAC has nothing to do with this. If the host produced only invalids, then its invalid score would rise very fast!

Why then did YOU mention RAC? It is YOU who proposed a system based on the RAC of a host.

> If the host produced only invalids, then its invalid score would rise very fast!

Yes, its invalid COUNT would rise - use that as the control, not some derived variable. Far simpler, and far more able to catch the event early. Think about a computer with a 100k RAC suddenly starting to throw invalids: set the trigger at 1% and it needs to throw 1000 before it is trapped. Now consider a computer with a RAC of 1M (and they do exist) - that figure becomes 10k. Both of which are far more than "just a few". Using a simple count-based scheme, both of these would be caught very early and have their daily allowance reduced to very few tasks per day until the problem was resolved.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
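A minimal sketch of the simple count-based control described above, modelled loosely on the idea of cutting a flagged host's daily task allowance; the trigger and quota numbers are illustrative assumptions, not BOINC's actual values:

```python
INVALID_TRIGGER = 5      # made-up cutoff: this many recent invalids trips the brake
MIN_DAILY_QUOTA = 1      # throttle a flagged host down to a trickle of tasks
NORMAL_DAILY_QUOTA = 100

class HostQuota:
    def __init__(self):
        self.recent_invalids = 0
        self.daily_quota = NORMAL_DAILY_QUOTA

    def on_result_validated(self, is_invalid: bool):
        """Simple additive bookkeeping: invalids count up, valids slowly pay them back."""
        if is_invalid:
            self.recent_invalids += 1
            if self.recent_invalids >= INVALID_TRIGGER:
                self.daily_quota = MIN_DAILY_QUOTA
        else:
            self.recent_invalids = max(0, self.recent_invalids - 1)
            if self.recent_invalids == 0:
                self.daily_quota = NORMAL_DAILY_QUOTA  # recover once the invalids stop

host = HostQuota()
for _ in range(5):
    host.on_result_validated(True)   # a burst of invalids...
print(host.daily_quota)              # ...drops the quota to 1 after just five bad results
```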
W-K 666 · Joined: 18 May 99 · Posts: 19459 · Credit: 40,757,560 · RAC: 67

We need to cut the number of tasks sent to devices that make "noise bombs" to zero asap. One of the main reasons is that they are only taking ~10 sec/task, so even compared to a similar-spec GPU the device is probably requesting 20 times more tasks than normal. But it's not just GPUs that need to be considered; compared to CPUs, which can take hours to complete a task, it is not inconceivable that a faulty GPU could be requesting 100 times more tasks than normal.

The damage done to the reliability of the Science database is such that maybe, without a lot of work, the whole period from before this started until it is absolutely certain the problems have been cleared will have to be deleted. Or we will be getting into an East Anglia situation.