Linux CUDA 'Special' App finally available, featuring Low CPU use

Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6242
Credit: 106,370,077
RAC: 275
Russia
Message 1902048 - Posted: 20 Nov 2017, 12:02:08 UTC - in response to Message 1902045.  

(2) is potentially the most dangerous issue, if the best signal is always selected wrongly and in the same way. If cross-validation between the same apps on different hosts fails, the danger of this issue is much lower and it moves into the inefficiency class of issues.
It obviously doesn't happen all the time or the Inconclusive numbers would be much, Much, higher. Once you weed out all the Bad Wingpeople and the Aborted tasks, the actual Inconclusive number is Low. Yes, we are all waiting for an update. It's why I haven't posted the zi3v cuda 9 version, which seems to be working better than the cuda 8 version.

Perhaps you misinterpreted my statement.
I wrote about a cross-validation failure caused by a wrong best Pulse. The inconclusive rate would be much higher if an issue such as a wrong best Pulse happened often.
On the other hand, a low inconclusive rate here is a bad sign (rather than a good one), because IF a bad best Pulse cross-validates, there will be no inconclusive; there will be wrong data in the master database instead. And that's the worst case.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1902048
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 12990
Credit: 208,696,464
RAC: 690
Australia
Message 1902046 - Posted: 20 Nov 2017, 11:37:37 UTC - in response to Message 1902042.  

It's completely arbitrary, and has nothing to do with how well the Host is operating.

You said it there- completely arbitrary. So it's that way for all hosts. So those that show good numbers are subject to the same arbitrariness as those that show poor numbers.

Why you insist it's a fair practice is interesting. The Host has absolutely no control over the number it is sent, which seems to be high for the Linux Hosts. Someone with just a few Inconclusives gets sent about a dozen Bad Wingpeople; it happens to me all the time.

The fact that your host processes more work per hour increases the likelihood of it picking up bad wingmen. The fact that your host processes more work per hour also increases the likelihood of it picking up good wingmen.

We're both judged the same- you consider it unfair because it goes against you. I consider it fair because it goes for me.
Because it is, mostly, fair.
Sometimes my Inconclusives rise, and other times they fall, depending on the wingmen. However, at least with the current application versions, my inconclusive rate doesn't vary too much these days.


Why do I have issues getting MB work unless I have the AP application installed & AP work is flowing? Why does my other machine rarely exhibit this behavior? Why do some others also have this issue, yet not others? Why does one of my machines pick up mostly Arecibo WUs when at the same time the other machine gets mostly GBT?
For all the randomness of work allocation and the like, it isn't truly random. However over a certain period of time (be it days, weeks or months), it will appear more random than not. And it would most likely be the same with good/poor wingmen and inconclusives.

It would be good if all the bad wingmen could be taken out of the mix- then if your inconclusives improved significantly, and mine remained unchanged then it would confirm that the wingmen were the issue. And if both yours and mine improved, then it would show they weren't.


Time for me to call it a night.
Today was the first day back at work after a month off and i'm buggered.
Grant
Darwin NT
ID: 1902046
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 6,279
United States
Message 1902045 - Posted: 20 Nov 2017, 11:20:46 UTC - in response to Message 1902040.  

(2) is potentially the most dangerous issue, if the best signal is always selected wrongly and in the same way. If cross-validation between the same apps on different hosts fails, the danger of this issue is much lower and it moves into the inefficiency class of issues.
It obviously doesn't happen all the time or the Inconclusive numbers would be much, Much, higher. Once you weed out all the Bad Wingpeople and the Aborted tasks, the actual Inconclusive number is Low. Yes, we are all waiting for an update. It's why I haven't posted the zi3v cuda 9 version, which seems to be working better than the cuda 8 version.
ID: 1902045
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 6,279
United States
Message 1902042 - Posted: 20 Nov 2017, 11:02:06 UTC - in response to Message 1902037.  

The fact is that you, or anyone else, can't blame a Host for the number of Bad Wingpeople the Server sends. It's completely arbitrary, and has nothing to do with how well the Host is operating.
Why you insist it's a fair practice is interesting. The Host has absolutely no control over the number it is sent, which seems to be high for the Linux Hosts. Someone with just a few Inconclusives gets sent about a dozen Bad Wingpeople; it happens to me all the time.
ID: 1902042
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6242
Credit: 106,370,077
RAC: 275
Russia
Message 1902040 - Posted: 20 Nov 2017, 10:59:03 UTC

There are 3 issues:
1. NaN values after restart, so the restart logic has a flaw.
2. A wrong best Pulse, which can be the result of a more severe issue with Pulse or some other issue (the originally reported Pulse issue itself is already fixed).
3. Different signal ordering in overflows.

Of these, (2) is potentially the most dangerous, if the best signal is always selected wrongly and in the same way. If cross-validation between the same apps on different hosts fails, the danger of this issue is much lower and it moves into the inefficiency class of issues.
(1) and (3) result in computational inefficiency (because they require additional replication).

That seems to be all so far. And until Petri provides a new version, or testers find some new issue, the rest is moot.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1902040
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 12990
Credit: 208,696,464
RAC: 690
Australia
Message 1902037 - Posted: 20 Nov 2017, 10:47:15 UTC - in response to Message 1902035.  

The point I'm actually making is you can't just grab the gross numbers and expect them to be accurate. You seem to like that word

I do.
Another word I like is representative. While there is a degree of variability on the accuracy of using the gross Pendings & Inconclusives to determine the percentage of inconclusives, it is representative of the actual value.

, do you realize the Benchmark App rates the CUDA App at 99.7% accuracy?
99.7% accurate against the CPU, do you know how well the SoG App rates? 99.7% is pretty high...for accuracy
Here you go, https://setiathome.berkeley.edu/forum_thread.php?id=78569&postid=1900637
Strongly similar, Q= 99.70%

Which is nice, but shows that there is still room for improvement.
Sure, as long as it's close enough, it's good enough. Because for all of that accuracy, >5% inconclusives isn't close enough, so it's not (yet) good enough.


We could discuss self validation, each application's percentage of the total amount of work returned per hour and the effect that has on inconclusives- both for the number of WUs they return, and the number of WUs returned by problem applications/platforms etc and all sorts of other wonderful things.
But the fact is that the gross numbers of a given host are what is used to determine the percentage of Inconclusives, and to compare between hosts. And presently the Special application on Linux doesn't meet the mark. It's close, damned close, but it's not there yet.
Grant
Darwin NT
ID: 1902037
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 6,279
United States
Message 1902035 - Posted: 20 Nov 2017, 10:21:23 UTC - in response to Message 1902032.  
Last modified: 20 Nov 2017, 10:30:06 UTC

The point I'm actually making is you can't just grab the gross numbers and expect them to be accurate. You seem to like that word; do you realize the Benchmark App rates the CUDA App at 99.7% accuracy?
99.7% accurate against the CPU. Do you know how well the SoG App rates? 99.7% is pretty high...for accuracy.
Here you go, https://setiathome.berkeley.edu/forum_thread.php?id=78569&postid=1900637
Strongly similar, Q= 99.70%

BTW, Please Explain how you can Blame the Host for the Server sending it a countless stream of Bad Wingpeople. It obviously does work, no matter how you attempt to justify it. I'd just like to know why some Hosts escape the countless numbers I receive.
ID: 1902035
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 12990
Credit: 208,696,464
RAC: 690
Australia
Message 1902032 - Posted: 20 Nov 2017, 10:05:48 UTC - in response to Message 1902030.  
Last modified: 20 Nov 2017, 10:06:18 UTC

Here Grant, tell me how many Real Inconclusives there are on this page, https://setiathome.berkeley.edu/results.php?hostid=6796479&state=3

57, which divided by 1360 times 100 gives 4.2%
Which is within the target range.

The fact is, if you pull out the problem children from other systems' numbers, then their numbers will likewise look even better. You could fix all those problem systems, either by fixing their applications/hardware/drivers/all of the above, or by stopping them from contributing all those inconclusives, or you could improve your application so that it falls within the 0-5% inconclusives range as it is presently determined.
Congratulations! Your application on that particular system falls within the 0-5% range, even with all the other problem children there, so it meets the criteria.
Stephen's Linux systems don't.

So i'm not really sure just what point it is you're trying to make?
Grant
Darwin NT
ID: 1902032
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 6,279
United States
Message 1902030 - Posted: 20 Nov 2017, 9:38:09 UTC - in response to Message 1902011.  
Last modified: 20 Nov 2017, 9:41:05 UTC

ID: 1902030
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 12990
Credit: 208,696,464
RAC: 690
Australia
Message 1902011 - Posted: 20 Nov 2017, 6:59:42 UTC - in response to Message 1901851.  

. . Wrong! The inconclusive numbers you are looking at are a false impression created by the delay in the 3rd wingman clearing tasks, and represent inconclusives not from one day but from many. If you look at only the inconclusives from any one day's returns there are only a few...

Wrong.
Yes it goes up when there are problem wingmen, yes it goes down when there are better than usual wingmen, and yes it goes down if you are getting applications validating against themselves.
The number of Inconclusives divided by the number pending, multiplied by 100, is the % of Inconclusives.
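As a minimal sketch of that formula (the function name and the rounding are just for illustration):

```python
def inconclusive_pct(inconclusive, pending):
    """Percent of Inconclusives as defined above: inconclusive / pending * 100."""
    return inconclusive / pending * 100

# Using the figures quoted elsewhere in the thread: 57 inconclusives, 1360 pending
print(round(inconclusive_pct(57, 1360), 1))  # → 4.2
```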

Everybody's percentage of inconclusives will vary to some degree, but the fact is your Linux systems are above the target threshold. So there is still work to be done on the application with regard to its accuracy.
Grant
Darwin NT
ID: 1902011
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 6,279
United States
Message 1901953 - Posted: 19 Nov 2017, 23:52:17 UTC - in response to Message 1901945.  

What a Cluster. I'm not even going to bother except to add a few Facts.
1) Aborted Tasks (Overflows) should not even enter the equation as they are Not used.
Incessant repetition does not turn this alternative fact into a real fact. You have yet to provide a shred of evidence to support this claim.
The people that matter have already spoken on the issue of Aborted Tasks. Your continued denial is related to your motive. You can keep posting in this thread if you wish, you are not making very many friends here. If you want to see just how bad an App can be and continue to survive on SETI just consider the Intel iGPU App. The fact that the Intel app is still here should say everything you need to know.
ID: 1901953
Jeff Buck
Volunteer tester
Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901945 - Posted: 19 Nov 2017, 23:26:43 UTC - in response to Message 1901908.  

What a Cluster. I'm not even going to bother except to add a few Facts.
1) Aborted Tasks (Overflows) should not even enter the equation as they are Not used.
Incessant repetition does not turn this alternative fact into a real fact. You have yet to provide a shred of evidence to support this claim.

2) Inconclusive tasks from Obviously Bad Wingmen Can Not be used against a Host.
Ummm...okay, that's certainly true, but how does this fit into the discussion?

3) The Net Inconclusives are Much different than Gross Inconclusives, you should check it out.
I agree that if some folks are going to throw around Inconclusive percentages, which I don't believe I have, there should be a consistent frame of reference. However, my glossary doesn't seem to have any entries or formulas for those terms. Why don't you spell them out?

and finally, If the people here were Really concerned about Large amounts of Inconclusive Results they would be posting about the Intel iGPU Hosts which frequently have upwards of 90% Inconclusive results.
Since they aren't posting about the Intel GPUs, I can only assume there is an alternate motive.
Why does there have to be some sinister "alternative motive"? This thread is about the Linux Special App, so that tends to be the focus. It's also an app which is currently in development, whereas the Intel GPU apps somehow made it into the mainstream. Last year, there were a couple of threads where the Intel GPU app issues were raised on multiple occasions. As I recall, it boiled down to having an active developer who could look into the problem(s), since they occurred most frequently on Macs.

For the record, the most recent Inconclusive list that I generated for my machines last evening shows that, out of 212 Inconclusive WUs, 41 involve some flavor of the "intel-gpu" app (28 of them running on Macs). That's certainly excessive.

But "upwards of 90% Inconclusive results" seems like another alternative fact with no support. I looked at the first 7 of the hosts that showed up in my list and didn't see one that exceeded 50%, though 6 of the 7 certainly exceeded any definition of a 5% threshold (using a ratio of Inconclusive tasks to Valid tasks). So, yes, it's a significant problem, but plucking numbers out of the air doesn't do much to support that conclusion.

...most of the SoG Bad Best Gaussians Validate against each other.
Another unsupported over-the-top claim which I guess is supposed to justify problems in the Special App by redirecting attention to SoG. It's not an "either/or" situation. Bad Best Gaussians in SoG may indeed be a problem that also needs fixing. I see 7 WUs (out of those 212 Inconclusives in my list) where Best Gaussian seems to be the only point of disagreement between SoG and another app. Two of them have since validated. In neither case did the tiebreaker match the SoG Best Gaussian or hand the canonical result to the SoG task. In fact, in WU 2749967652, the tiebreaker was another SoG task, and it actually agreed with the zi3t2b result:

x41p_zi3t2b, Cuda 8.00 special: Best gaussian: peak=7.343542, mean=0.6160311, ChiSq=1.177911, time=84.72, d_freq=1420891151.79, score=0.8839912, null_hyp=2.121852, chirp=-22.353, fft_len=16k
v8.22 (opencl_nvidia_SoG): Best gaussian: peak=6.274947, mean=0.6099422, ChiSq=1.409922, time=94.79, d_freq=1420886270.36, score=0.6061153, null_hyp=2.235723, chirp=33.225, fft_len=16k
v8.22 (opencl_nvidia_SoG): Best gaussian: peak=7.343541, mean=0.6160314, ChiSq=1.177918, time=84.72, d_freq=1420891151.79, score=0.8840132, null_hyp=2.121858, chirp=-22.353, fft_len=16k

...so, whatever the SoG Best Gaussian problem is, its appearance apparently depends on other factors, and it certainly doesn't appear to be cross-validating "most" of the time.
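The agreement between the three Best Gaussians above can be sketched as a tolerance check over the reported fields (the field names come from the Stderr lines; the tolerances and the function itself are assumptions for illustration, not the project's actual validator logic):

```python
import math

# Hypothetical check: do two reported Best Gaussians describe the same signal?
# Tolerances are assumptions, not the SETI@home validator's real thresholds.
def same_gaussian(a, b, rel_tol=1e-4, abs_tol=1e-3):
    fields = ("peak", "mean", "ChiSq", "time", "d_freq", "chirp")
    return all(math.isclose(a[f], b[f], rel_tol=rel_tol, abs_tol=abs_tol)
               for f in fields)

# Values copied from the three Stderr lines quoted above
zi3t2b  = dict(peak=7.343542, mean=0.6160311, ChiSq=1.177911, time=84.72,
               d_freq=1420891151.79, chirp=-22.353)
sog_bad = dict(peak=6.274947, mean=0.6099422, ChiSq=1.409922, time=94.79,
               d_freq=1420886270.36, chirp=33.225)
sog_tie = dict(peak=7.343541, mean=0.6160314, ChiSq=1.177918, time=84.72,
               d_freq=1420891151.79, chirp=-22.353)

print(same_gaussian(zi3t2b, sog_tie))  # → True: the tiebreaker agreed with zi3t2b
print(same_gaussian(zi3t2b, sog_bad))  # → False: the bad SoG Best Gaussian
```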
ID: 1901945
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 1901935 - Posted: 19 Nov 2017, 21:37:34 UTC - in response to Message 1901899.  

My current #1 box ran with Windows for about 4 years, completing more than 340K tasks with a grand total of 33 Invalid results. All but a half dozen or so of those occurred due to a truncated Stderr issue in 2014 that was eventually addressed by a BOINC modification back in 2015. I had only one Invalid in the last year and a half before switching over to Linux to try out the Special App. In less than 7 months with Linux, I've gotten 102 Invalids on that box, every one of them due to some sort of unresolved issue with the Special App.

We all see it with our wingmen on a daily basis, where ongoing Invalid results signal a problem with the host returning those results. We get frustrated that our wingmen don't pay attention and fix whatever is ailing with their machines (which periodically surfaces in the Invalid Host Messaging thread). It should never be an acceptable situation to have a science application be responsible for Invalid results on a continuing basis. Invalids and errors should be extremely rare. As far as I'm concerned, the only reason my own reported results should ever end up in an Invalid state is if my hardware has started to fail, or I've made some blunder with a command line setting. In such cases I consider it my responsibility to figure out the cause, then either blow out the dust, adjust the command line, replace a GPU, or whatever. With the current situation, I might not even notice if an Invalid was my machine's fault, what with the numbing effect of seeing Invalids show up in my task list on a daily basis, rather than once in a blue moon (or less).

Focusing on the amount of credit "lost" is a specious argument. That should be totally irrelevant. It's the reliability and accuracy of the science app that's at issue here.


. . It is good that you take such an interest in your part in the project; very few contributors do (myself included), but mainly because very, very few can track their results with the detail you seem to be able to. My comment was not that there is nothing to address, but that we are dealing with a very low incidence situation, one that is being pretty well filtered out by the validation process.

. . Your (and Grant's) bugbear is that work that is less than 100% will slip through because of cross-validation by equally "flawed" apps (other special sauce users), but most of my results (unlike you, I can only speak roughly, judging by the ones I have looked at manually) are being validated by other apps. And the major part of an already small number are due to the sort order issue, which only affects noise bombs and is not an accuracy or reliability issue but a reporting issue. The only accuracy issue from your reporting is the false best spike, which is much rarer again.

. . Sure, this all needs to be addressed, but the impact is far smaller than that of a single delinquent host pumping out dozens, hundreds, or even thousands of totally bogus results daily. And we all know from issues with wingmen that there are larger numbers of those than people running special sauce. And it was Grant who raised concerns over lost credits (and time), hence my reply to that.

. . The most pressing issue is the false best spike, and I am not in a position to do anything about that; since the development is a volunteer contribution I can only wait for Petri to get his round tuit. I am sure you know he is aware of it, and hopefully there will be a solution forthcoming.

Stephen

..
ID: 1901935
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 1901934 - Posted: 19 Nov 2017, 21:15:03 UTC - in response to Message 1901901.  

I don't even have a Linux host, but all this reading makes me ask just one question:

Could we lose the ET call because of these issues?

If yes, something must be done to stop these invalids. What to do? It's well beyond my pay grade.


. . Hi Juan,

. . The answer in this case is: most probably not. And the solution is out of my hands as well ....

Stephen
ID: 1901934
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 6,279
United States
Message 1901908 - Posted: 19 Nov 2017, 19:08:48 UTC
Last modified: 19 Nov 2017, 19:25:58 UTC

What a Cluster. I'm not even going to bother except to add a few Facts.
1) Aborted Tasks (Overflows) should not even enter the equation as they are Not used.
2) Inconclusive tasks from Obviously Bad Wingmen Can Not be used against a Host.
3) The Net Inconclusives are Much different than Gross Inconclusives, you should check it out.
and finally, If the people here were Really concerned about Large amounts of Inconclusive Results they would be posting about the Intel iGPU Hosts which frequently have upwards of 90% Inconclusive results.
Since they aren't posting about the Intel GPUs, I can only assume there is an alternate motive.
For a Task to be labeled Invalid, 50% of the signals would have to be different. That means a Host with 16 signals would validate against a Host with 30 signals. Since the most the Special App is usually off by is ONE signal, the chances of getting an Invalid against the Special App are extremely slim; even slimmer than getting a Windows SoG App with a Bad Best Gaussian. Which does happen much more often than realized, because most of the SoG Bad Best Gaussians Validate against each other.
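That 50% characterization can be sketched as a toy check (this is an assumption based on the description above, not the actual SETI@home validator code):

```python
# Toy version of the 50% rule as characterized above: two results validate
# if fewer than half of the larger signal count differ. Hypothetical logic,
# not the real validator.
def would_validate(signals_a, signals_b):
    diff = abs(signals_a - signals_b)
    return diff < 0.5 * max(signals_a, signals_b)

print(would_validate(16, 30))  # → True: the 16-vs-30 example given above
print(would_validate(29, 30))  # → True: off by only one signal
print(would_validate(10, 30))  # → False: 20 of 30 signals differ
```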

If you want to see which Apps are the most offending, go to the CPU Only Hosts on the Top Hosts List and see which Apps are failing most against the CPUs, note, it is Not the CUDA App.
ID: 1901908
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 1901901 - Posted: 19 Nov 2017, 18:30:24 UTC

I don't even have a Linux host, but all this reading makes me ask just one question:

Could we lose the ET call because of these issues?

If yes, something must be done to stop these invalids. What to do? It's well beyond my pay grade.
ID: 1901901
Jeff Buck
Volunteer tester
Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901899 - Posted: 19 Nov 2017, 18:20:22 UTC

My current #1 box ran with Windows for about 4 years, completing more than 340K tasks with a grand total of 33 Invalid results. All but a half dozen or so of those occurred due to a truncated Stderr issue in 2014 that was eventually addressed by a BOINC modification back in 2015. I had only one Invalid in the last year and a half before switching over to Linux to try out the Special App. In less than 7 months with Linux, I've gotten 102 Invalids on that box, every one of them due to some sort of unresolved issue with the Special App.

We all see it with our wingmen on a daily basis, where ongoing Invalid results signal a problem with the host returning those results. We get frustrated that our wingmen don't pay attention and fix whatever is ailing with their machines (which periodically surfaces in the Invalid Host Messaging thread). It should never be an acceptable situation to have a science application be responsible for Invalid results on a continuing basis. Invalids and errors should be extremely rare. As far as I'm concerned, the only reason my own reported results should ever end up in an Invalid state is if my hardware has started to fail, or I've made some blunder with a command line setting. In such cases I consider it my responsibility to figure out the cause, then either blow out the dust, adjust the command line, replace a GPU, or whatever. With the current situation, I might not even notice if an Invalid was my machine's fault, what with the numbing effect of seeing Invalids show up in my task list on a daily basis, rather than once in a blue moon (or less).

Focusing on the amount of credit "lost" is a specious argument. That should be totally irrelevant. It's the reliability and accuracy of the science app that's at issue here.
ID: 1901899
Brent Norman
Volunteer tester
Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 1,893
Canada
Message 1901894 - Posted: 19 Nov 2017, 18:01:41 UTC - in response to Message 1901890.  

That still doesn't work, because you would need to include every valid task that was an inconclusive but validated in the last day (or that day).
ID: 1901894
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 11744
Credit: 1,160,866,277
RAC: 4,249
United States
Message 1901890 - Posted: 19 Nov 2017, 17:48:05 UTC - in response to Message 1901851.  


. . Wrong! The inconclusive numbers you are looking at are a false impression created by the delay in the 3rd wingman clearing tasks, and represent inconclusives not from one day but from many. If you look at only the inconclusives from any one day's returns there are only a few.
Stephen

..

That's my assessment too. You can't just use the total tally of Inconclusives on your stats page, because some of those go back 3 months, even to before you were using the special app. To get a true percentage, add up all tasks completed, validated, errored, or put into Inconclusive for a single 24-hour period, and then do your math. And for good measure, do the same for a couple of other full days, just to make sure you didn't have an outlier day.
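The single-day tally described above can be sketched as follows (the task records and state names are hypothetical; real data would come from the stats pages):

```python
from datetime import datetime, timedelta

# Hypothetical task records: (returned_at, state) tuples, where state is
# one of "valid", "inconclusive", "error".
def daily_inconclusive_pct(tasks, day_start):
    """Inconclusive percentage over one 24-hour window, per the method above."""
    day_end = day_start + timedelta(hours=24)
    window = [state for returned_at, state in tasks
              if day_start <= returned_at < day_end]
    if not window:
        return 0.0
    inconclusive = sum(1 for state in window if state == "inconclusive")
    return inconclusive / len(window) * 100

# Made-up sample day: one task per hour, two of them inconclusive
day = datetime(2017, 11, 19)
tasks = [(day + timedelta(hours=h), "inconclusive" if h % 12 == 0 else "valid")
         for h in range(24)]
print(round(daily_inconclusive_pct(tasks, day), 1))  # → 8.3 (2 of 24)
```

Repeating the calculation for a few different `day_start` values is the "couple of other full days" check suggested above.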
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1901890
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 1901851 - Posted: 19 Nov 2017, 13:41:05 UTC - in response to Message 1901830.  

. . Agreed, resends are a bore, but my rigs running zi3v Cuda80 special are doing thousands of WUs, averaging about 1800 to 1900 a day between them, and the rates of both inconclusives and invalids are very, very low.

Resends aren't a bore, they are an unnecessary load on the (already struggling) servers.

Your Windows system runs at about 2.2% Inconclusives.
Your Linux systems are 5.6% and 5.9%

My understanding is that the project goal was 0%, with up to 5% being acceptable. Over 5% isn't acceptable.

In the past people (such as myself) have felt hard done by when they have missed out on Credit when 2 machines with problem applications/drivers/hardware/whatever had validated against each other, so even invalids aren't really acceptable. And the faster a system is, the lower the number of such problems needs to be. People losing out on Credit because a couple of Special applications have validated against each other will annoy them just as much.

Getting the most from our hardware would be great, but not at the expense of accuracy.
The results are most important, not how quickly they are produced; that is secondary.


. . Wrong! The inconclusive numbers you are looking at are a false impression created by the delay in the 3rd wingman clearing tasks, and represent inconclusives not from one day but from many. If you look at only the inconclusives from any one day's returns there are only a few. As I said, from 6 to maybe 18 depending on the day and the wingmen. That is up to maybe 18 out of 1800. If you think that equals 5%, I think you need a new formula for working it out. And with Apple Darwin and iGPU hosts you can get those numbers on any machine.

. . If the problems being examined are sorted out, the numbers will only drop slightly, not disappear altogether. The issues causing the concerns that seem to plague some people are all but a storm in a teacup. And as I pointed out, 99% are validating against non-special app hosts, so no one is missing out on much. And the minuscule number of invalids on my machine are all noise bombs; the only one missing out on points on those tasks is me, and I can live without the 1 or 2 credits they might lose. Especially when CreditScrew can cost me 10 or 20 times that on each of a great many of the valid full-length tasks that have validated against other non-special app hosts.

. . Also, from all that I have read there is, for the most part, no accuracy issue except for the very low incidence of false best spikes. Something that is best sorted out, yes, but something that is not occurring in highly significant numbers.

Stephen

..
ID: 1901851

©2020 University of California

SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.