Linux CUDA 'Special' App finally available, featuring Low CPU use

Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use

Profile Jeff Buck (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1900816 - Posted: 13 Nov 2017, 3:49:42 UTC - in response to Message 1900645.  

Just a quick follow-up to my previous post. The Intel GPU app agreed with the Cuda 50 app, while the Cuda 8 zi3v results on my host were close enough, with just that one Pulse difference, to also get credit. However, the Cuda 9 zi3x got marked as Invalid, with all those totally different Spike signals reported.
ID: 1900816
Profile Jeff Buck (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901107 - Posted: 15 Nov 2017, 3:40:31 UTC

The only difference between these three results is the Best Pulse (on a WU that has no reportable Pulses).

Workunit 2743697628 (03mr07af.31613.162369.7.34.129)
Task 6161054028 (S=9, A=0, P=0, T=4, G=0, BS=27.09694, BG=0) v8.22 (opencl_ati5_SoG_nocal) windows_intelx86
Task 6161054029 (S=9, A=0, P=0, T=4, G=0, BS=27.09692, BG=0) x41p_zi3t2b, Cuda 8.00 special
Task 6162885280 (S=9, A=0, P=0, T=4, G=0, BS=27.09692, BG=0) x41p_zi3x, Cuda 9.00 special

v8.22 (opencl_ati5_SoG_nocal) -- Best pulse: peak=6.538054, time=19.73, period=0.3473, d_freq=1418759357.4, score=0.9353, chirp=72.126, fft_len=16
zi3t2b, Cuda 8.00 -- Best pulse: peak=2.51637, time=71.16, period=0.1098, d_freq=1418757324.22, score=0.8347, chirp=0, fft_len=8
zi3x, Cuda 9.00 -- Best pulse: peak=5.104422, time=21.14, period=0.2652, d_freq=1418757324.22, score=0.8557, chirp=0, fft_len=8
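Those stderr fields can be compared mechanically. A minimal sketch (my own hypothetical parser, not anything from the actual SETI@home code or validator) that reads the "Best pulse:" key=value pairs and checks them within a relative tolerance:

```python
# Hypothetical sketch: compare two reported "Best pulse" stderr lines
# within a relative tolerance. Field names mirror the output quoted
# above; the parsing and comparison rules are assumptions, not the
# project's real validation logic.
import math

def parse_best_pulse(line: str) -> dict:
    """Parse 'key=value' pairs from a 'Best pulse:' stderr line."""
    _, _, fields = line.partition("Best pulse:")
    out = {}
    for pair in fields.split(","):
        key, _, value = pair.strip().partition("=")
        out[key] = float(value)
    return out

def pulses_agree(a: dict, b: dict, rel_tol: float = 1e-4) -> bool:
    """True if every shared numeric field matches within rel_tol."""
    return all(
        math.isclose(a[k], b[k], rel_tol=rel_tol)
        for k in a.keys() & b.keys()
    )

sog = parse_best_pulse(
    "Best pulse: peak=6.538054, time=19.73, period=0.3473, "
    "d_freq=1418759357.4, score=0.9353, chirp=72.126, fft_len=16")
cuda = parse_best_pulse(
    "Best pulse: peak=5.104422, time=21.14, period=0.2652, "
    "d_freq=1418757324.22, score=0.8557, chirp=0, fft_len=8")

print(pulses_agree(sog, sog))   # a result always agrees with itself
print(pulses_agree(sog, cuda))  # the two apps' best pulses clearly differ
```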

Sure would be nice to have this issue resolved in the Special App so that WUs like this don't have to be processed 3 or 4 (or more) times.
ID: 1901107
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1901130 - Posted: 15 Nov 2017, 6:35:20 UTC - in response to Message 1901107.  

The 4th host is ATi SoG, so the 1st and 4th would most probably agree. Better to check offline with the stock CPU app for completeness.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1901130
Profile Jeff Buck (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901203 - Posted: 15 Nov 2017, 16:53:12 UTC - in response to Message 1901130.  

The 4th host is ATi SoG, so the 1st and 4th would most probably agree. Better to check offline with the stock CPU app for completeness.
Yep, you're right. The two SoG hosts agreed and everybody got a passing grade. I just started an offline Windows stock CPU run a few minutes ago. I would expect the results to match SoG, but we'll see.
ID: 1901203
Profile Jeff Buck (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901208 - Posted: 15 Nov 2017, 18:03:56 UTC - in response to Message 1901203.  

Stock Windows CPU app does, indeed, agree with SoG.

<best_pulse>
  <peak_power>6.5380554199219</peak_power>
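Comparing such result-file values programmatically looks something like the sketch below. The one-element XML strings are made-up fragments shaped like the snippet above (real result files contain far more), and the tolerance is an illustrative choice: because the apps store different precisions, the comparison should use a tolerance rather than exact equality.

```python
# Illustrative sketch only: compare <peak_power> values from two
# result fragments. The XML here is a made-up, minimal stand-in for
# the real SETI@home result format.
import math
import xml.etree.ElementTree as ET

CPU_RESULT = "<best_pulse><peak_power>6.5380554199219</peak_power></best_pulse>"
SOG_RESULT = "<best_pulse><peak_power>6.538054</peak_power></best_pulse>"

def peak_power(xml_text: str) -> float:
    """Extract the best-pulse peak power from a result fragment."""
    return float(ET.fromstring(xml_text).findtext("peak_power"))

# Stored precision differs between apps, so compare with a relative
# tolerance rather than for exact equality.
agree = math.isclose(peak_power(CPU_RESULT), peak_power(SOG_RESULT),
                     rel_tol=1e-5)
print(agree)  # the CPU and SoG peaks match to ~6 significant figures
```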
ID: 1901208
Profile Jeff Buck (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901781 - Posted: 19 Nov 2017, 3:40:31 UTC

I'm going to post this Inconclusive just because it highlights the problem where Best Spike is sometimes reported as "NaN" following a restart. I think this is a really good example because it shows a Cuda 6.0 zi3v that was restarted, versus a Cuda 8.0 zi3v that wasn't.

Workunit 2747390619 (25fe07aa.18846.12342.3.30.103)
Task 6168764638 (S=0, A=2, P=2, T=0, G=0, BS=nan, BG=4.000394) x41p_zi3v, Cuda 6.00 special
Task 6168764639 (S=0, A=2, P=2, T=0, G=0, BS=22.95163, BG=4.000392) x41p_zi3v, Cuda 8.00 special

I expect both will get validated in the end, but the third host could really be doing something more productive with its time.
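The "BS=nan" value is more than a cosmetic glitch: IEEE-754 NaN compares unequal to everything, including itself, so a NaN best spike can never match a wingman's number no matter how forgiving the comparison. A short illustration:

```python
# Why a "BS=nan" best spike can never match a wingman's value:
# IEEE-754 NaN compares unequal to everything, including itself.
import math

restarted = float("nan")   # best spike lost across the restart
wingman = 22.95163         # value the un-restarted host reported

print(restarted == wingman)              # False
print(restarted == restarted)            # False: NaN != NaN
print(math.isclose(restarted, wingman))  # False: tolerance can't rescue NaN
print(math.isnan(restarted))             # True: must be detected explicitly
```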
ID: 1901781
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1901828 - Posted: 19 Nov 2017, 9:29:05 UTC - in response to Message 1901781.  

I'm going to post this Inconclusive just because it highlights the problem where Best Spike is sometimes reported as "NaN" following a restart. I think this is a really good example because it shows a Cuda 6.0 zi3v that was restarted, versus a Cuda 8.0 zi3v that wasn't.

Workunit 2747390619 (25fe07aa.18846.12342.3.30.103)
Task 6168764638 (S=0, A=2, P=2, T=0, G=0, BS=nan, BG=4.000394) x41p_zi3v, Cuda 6.00 special
Task 6168764639 (S=0, A=2, P=2, T=0, G=0, BS=22.95163, BG=4.000392) x41p_zi3v, Cuda 8.00 special

I expect both will get validated in the end, but the third host could really be doing something more productive with its time.


. . Agreed, resends are a bore, but my rigs running zi3v Cuda80 special are doing thousands of WUs, averaging about 1800 to 1900 a day between them, and the rates of both inconclusives and invalids are very, very low. I am getting maybe one or two invalids every few days, on noise-bomb tasks that waste only seconds of the third wingman's time. So that rate is maybe 1 per 5,000 to 6,000 tasks. The rate of inconclusives is a handful or so per rig per day, totalling one to two dozen per day all up, making that rate not much greater than one percent at worst. Not a glaring problem that I can see, since 99% of them validate, and very few are against other Cuda special apps. Would it be good if zi3x Cuda90 does better? Sure. Or maybe a later revision completely removes the sort-order issue? Sure again. But for the moment it is not a massive problem.

. . I am not losing sleep over using this app. And since Petri (and TBar) have already advised people to move to zi3v from zi3t2b, I don't think it is threatening any impending disasters.

Stephen

:)
ID: 1901828
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13731
Credit: 208,696,464
RAC: 304
Australia
Message 1901830 - Posted: 19 Nov 2017, 10:02:09 UTC - in response to Message 1901828.  
Last modified: 19 Nov 2017, 10:05:05 UTC

. . Agreed, resends are a bore, but my rigs running zi3v Cuda80 special are doing thousands of WUs, averaging about 1800 to 1900 a day between them, and the rates of both inconclusives and invalids are very, very low.

Resends aren't a bore; they are an unnecessary load on the (already struggling) servers.

Your Windows system runs at about 2.2% Inconclusives.
Your Linux systems are at 5.6% and 5.9%.
My understanding is that the project goal was up to 5%: 0% being the target, up to 5% acceptable, and anything over 5% not acceptable.

In the past people (such as myself) have felt hard done by when they have missed out on Credit when 2 machines with problem applications/drivers/hardware/whatever had validated against each other, so even invalids aren't really acceptable. And the faster a system is, the lower the number of such problems needs to be. People losing out on Credit because a couple of Special applications have validated against each other will annoy them just as much.

Getting the most from our hardware would be great, but not at the expense of accuracy.
The results are most important, not how quickly they are produced; that is secondary.
Grant
Darwin NT
ID: 1901830
Profile Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1901833 - Posted: 19 Nov 2017, 10:37:21 UTC - in response to Message 1901828.  

The rate of inconclusives is also a false number when you consider that many get validated by the same application. Since the Linux boxes have strongly taken over as the heavy hitters, they are also validating many tasks that they shouldn't be. So YES, this is still a problem.

Which is one reason I don't jump on the bandwagon for testing apps that have only been out for a very short time. If there is more than one machine running a new app, you lose all control of testing, because tasks that shouldn't validate are being validated against the same app.
And there is NO way of cross-checking valid tasks if a resend was never needed to validate.
ID: 1901833
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1901851 - Posted: 19 Nov 2017, 13:41:05 UTC - in response to Message 1901830.  

. . Agreed, resends are a bore, but my rigs running zi3v Cuda80 special are doing thousands of WUs, averaging about 1800 to 1900 a day between them, and the rates of both inconclusives and invalids are very, very low.

Resends aren't a bore; they are an unnecessary load on the (already struggling) servers.

Your Windows system runs at about 2.2% Inconclusives.
Your Linux systems are at 5.6% and 5.9%.

My understanding is that the project goal was up to 5%: 0% being the target, up to 5% acceptable, and anything over 5% not acceptable.

In the past people (such as myself) have felt hard done by when they have missed out on Credit when 2 machines with problem applications/drivers/hardware/whatever had validated against each other, so even invalids aren't really acceptable. And the faster a system is, the lower the number of such problems needs to be. People losing out on Credit because a couple of Special applications have validated against each other will annoy them just as much.

Getting the most from our hardware would be great, but not at the expense of accuracy.
The results are most important, not how quickly they are produced; that is secondary.


. . Wrong! The inconclusive numbers you are looking at are a false impression created by the delay in the 3rd wingmen clearing tasks and represent inconclusives not from one day but from many. If you look at only the inconclusives from any one day's returns there are only a few. As I said, from 6 to maybe 18, depending on the day and the wingmen. That is up to maybe 18 out of 1800. If you think that equals 5%, I think you need a new formula for working it out. And with Apple Darwin and iGPU hosts you can get those numbers on any machine. If the problems being examined are sorted out, the numbers will only drop slightly and not disappear altogether.

. . The issues causing the concerns that seem to plague some people are all but a storm in a teacup. And as I pointed out, 99% are validating against non-special-app hosts, so no one is missing out on much. And the minuscule number of invalids on my machine are all noise bombs; the only one missing out on points on those tasks is me, and I can live without the 1 or 2 credits they might lose. Especially when CreditScrew can cost me 10 or 20 times that on each of a great many of the valid full-length tasks that have validated against other non-special-app hosts.

. . Also, from all that I have read there is, for the most part, no matter of accuracy except in the very low incidence of false best spikes. Something that is best sorted out, yes, but something that is not occurring in highly significant numbers.

Stephen

..
ID: 1901851
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1901890 - Posted: 19 Nov 2017, 17:48:05 UTC - in response to Message 1901851.  


. . Wrong! The inconclusive numbers you are looking at are a false impression created by the delay in the 3rd wingmen clearing tasks and represent inconclusives not from one day but from many. If you look at only the inconclusives from any one day's returns there are only a few.
Stephen

..

That's my assessment too. You can't just use the total tally of Inconclusives on your stats page, because some of those go back 3 months, even to before you were using the special app. To get a true percentage, add up all tasks completed, validated, errored, or put into Inconclusive for a single 24-hour period, and then do your math. And for good measure, do the same for a couple of other full days, just to make sure you didn't have an outlier day.
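The single-day tally described above can be sketched like this. The task log, statuses, and dates are made-up illustrative data, not the real stats-page format:

```python
# Sketch of a 24-hour inconclusive rate: count only tasks returned on
# one day, then take inconclusives as a share of that day's total.
# The records below are hypothetical.
from datetime import date

tasks = [  # (returned, status) for one host -- made-up data
    (date(2017, 11, 19), "valid"),
    (date(2017, 11, 19), "valid"),
    (date(2017, 11, 19), "inconclusive"),
    (date(2017, 11, 19), "valid"),
    (date(2017, 11, 18), "inconclusive"),  # older day: excluded
]

def daily_inconclusive_pct(tasks, day):
    """Percentage of tasks returned on `day` that are inconclusive."""
    todays = [status for returned, status in tasks if returned == day]
    inconclusive = sum(status == "inconclusive" for status in todays)
    return 100.0 * inconclusive / len(todays)

print(daily_inconclusive_pct(tasks, date(2017, 11, 19)))  # 25.0
```

As suggested above, running the same calculation over several different full days guards against one outlier day skewing the picture.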
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1901890
Profile Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1901894 - Posted: 19 Nov 2017, 18:01:41 UTC - in response to Message 1901890.  

That still doesn't work, because you would need to include every valid task that was an inconclusive but validated in the last day (or that day).
ID: 1901894
Profile Jeff Buck (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901899 - Posted: 19 Nov 2017, 18:20:22 UTC

My current #1 box ran with Windows for about 4 years, completing more than 340K tasks with a grand total of 33 Invalid results. All but a half dozen or so of those occurred due to a truncated Stderr issue in 2014 that was eventually addressed by a BOINC modification back in 2015. I had only one Invalid in the last year and a half before switching over to Linux to try out the Special App. In less than 7 months with Linux, I've gotten 102 Invalids on that box, every one of them due to some sort of unresolved issue with the Special App.

We all see it with our wingmen on a daily basis, where ongoing Invalid results signal a problem with the host returning those results. We get frustrated that our wingmen don't pay attention and fix whatever is ailing their machines (which periodically surfaces in the Invalid Host Messaging thread). It should never be an acceptable situation to have a science application be responsible for Invalid results on a continuing basis. Invalids and errors should be extremely rare. As far as I'm concerned, the only reason my own reported results should ever end up in an Invalid state is if my hardware has started to fail, or I've made some blunder with a command line setting. In such cases I consider it my responsibility to figure out the cause, then either blow out the dust, adjust the command line, replace a GPU, or whatever. With the current situation, I might not even notice if an Invalid was my machine's fault, what with the numbing effect of seeing Invalids show up in my task list on a daily basis, rather than once in a blue moon (or less).

Focusing on the amount of credit "lost" is a specious argument. That should be totally irrelevant. It's the reliability and accuracy of the science app that's at issue here.
ID: 1901899
juan BFP (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1901901 - Posted: 19 Nov 2017, 18:30:24 UTC

I don't even have a Linux host, but all this reading makes me ask just one question:

Could we lose the ET call because of these issues?

If yes, something must be done to stop these invalids. What to do? It's well beyond my pay grade.
ID: 1901901
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1901908 - Posted: 19 Nov 2017, 19:08:48 UTC
Last modified: 19 Nov 2017, 19:25:58 UTC

What a Cluster. I'm not even going to bother except to add a few Facts.
1) Aborted Tasks (Overflows) should not even enter the equation as they are Not used.
2) Inconclusive tasks from Obviously Bad Wingmen Can Not be used against a Host.
3) The Net Inconclusives are Much different than Gross Inconclusives, you should check it out.
and finally, If the people here were Really concerned about Large amounts of Inconclusive Results they would be posting about the Intel iGPU Hosts which frequently have upwards of 90% Inconclusive results.
Since they aren't posting about the Intel GPUs, I can only assume there is an alternate motive.
For a Task to be labeled Invalid, 50% of the signals would have to be different. That means a Host with 16 signals would validate against a Host with 30 signals. Since the most the Special App is off by is usually ONE signal, the chances of getting an Invalid against the Special App is extremely slim, even more slim than getting a Windows SoG App with a Bad Best Gaussian. Which does happen much more than realized because most of the SoG Bad Best Gaussians Validate against each other.

If you want to see which Apps are the most offending, go to the CPU Only Hosts on the Top Hosts List and see which Apps are failing most against the CPUs, note, it is Not the CUDA App.
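The 50% rule described above can be put into a toy model. This encodes TBar's characterization of the validation threshold, not the actual SETI@home validator code, and the `rejected` helper and its signal counts are hypothetical:

```python
# Toy model of the threshold described above (a forum characterization,
# not the real SETI@home validator): a result is only rejected when at
# least half of its reported signals fail to match the wingman's.
def rejected(matching_signals: int, total_signals: int) -> bool:
    """True if the share of differing signals reaches 50%."""
    differing = total_signals - matching_signals
    return differing >= total_signals / 2

# One differing signal out of many, as with the special app's usual
# off-by-one, falls far below the threshold:
print(rejected(matching_signals=29, total_signals=30))  # False
# A result where most signals differ would be rejected:
print(rejected(matching_signals=1, total_signals=4))    # True
```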
ID: 1901908
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1901934 - Posted: 19 Nov 2017, 21:15:03 UTC - in response to Message 1901901.  

I don't even have a Linux host, but all this reading makes me ask just one question:

Could we lose the ET call because of these issues?

If yes, something must be done to stop these invalids. What to do? It's well beyond my pay grade.


. . Hi Juan,

. . Answer in this case to number 1, most probably not. And the solution is out of my hands also ....

Stephen
ID: 1901934
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1901935 - Posted: 19 Nov 2017, 21:37:34 UTC - in response to Message 1901899.  

My current #1 box ran with Windows for about 4 years, completing more than 340K tasks with a grand total of 33 Invalid results. All but a half dozen or so of those occurred due to a truncated Stderr issue in 2014 that was eventually addressed by a BOINC modification back in 2015. I had only one Invalid in the last year and a half before switching over to Linux to try out the Special App. In less than 7 months with Linux, I've gotten 102 Invalids on that box, every one of them due to some sort of unresolved issue with the Special App.

We all see it with our wingmen on a daily basis, where ongoing Invalid results signal a problem with the host returning those results. We get frustrated that our wingmen don't pay attention and fix whatever is ailing their machines (which periodically surfaces in the Invalid Host Messaging thread). It should never be an acceptable situation to have a science application be responsible for Invalid results on a continuing basis. Invalids and errors should be extremely rare. As far as I'm concerned, the only reason my own reported results should ever end up in an Invalid state is if my hardware has started to fail, or I've made some blunder with a command line setting. In such cases I consider it my responsibility to figure out the cause, then either blow out the dust, adjust the command line, replace a GPU, or whatever. With the current situation, I might not even notice if an Invalid was my machine's fault, what with the numbing effect of seeing Invalids show up in my task list on a daily basis, rather than once in a blue moon (or less).

Focusing on the amount of credit "lost" is a specious argument. That should be totally irrelevant. It's the reliability and accuracy of the science app that's at issue here.


. . It is good that you take such an interest in your part in the project; very few contributors do (myself included), mainly because very, very few can track their results with the detail you seem to be able to. My comment was not that there is nothing to address, but that we are dealing with a very low-incidence situation, one that is being pretty well filtered out by the validation process.

. . Your (and Grant's) bugbear is that work that is less than 100% will slip through because of cross-validation by equally "flawed" apps (other special sauce users), but most of my results (unlike you, I can only speak roughly, judging by the ones I have looked at manually) are being validated by other apps. And the major part of an already small number are due to the sort-order issue, which only affects noise bombs and is not an accuracy or reliability issue but a reporting issue. The only accuracy issue from your reporting is the false best spike, which is much rarer again.

. . Sure, this all needs to be addressed, but the impact is far smaller than that of a single delinquent host pumping out dozens, hundreds, or even thousands of totally bogus results daily. And we all know from issues with wingmen that there are more of those than there are people running special sauce. And it was Grant who raised concerns over lost credits (and time), hence my reply to that.

. . The most pressing issue is the false best spike, and I am not in a position to do anything about that; since the development is a volunteer contribution, I can only wait for Petri to get his round tuit. I am sure you know he is aware of it, and hopefully there will be a solution forthcoming.

Stephen

..
ID: 1901935
Profile Jeff Buck (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1901945 - Posted: 19 Nov 2017, 23:26:43 UTC - in response to Message 1901908.  

What a Cluster. I'm not even going to bother except to add a few Facts.
1) Aborted Tasks (Overflows) should not even enter the equation as they are Not used.
Incessant repetition does not turn this alternative fact into a real fact. You have yet to provide a shred of evidence to support this claim.

2) Inconclusive tasks from Obviously Bad Wingmen Can Not be used against a Host.
Ummm...okay, that's certainly true, but how does this fit into the discussion?

3) The Net Inconclusives are Much different than Gross Inconclusives, you should check it out.
I agree that if some folks are going to throw around Inconclusive percentages, which I don't believe I have, there should be a consistent frame of reference. However, my glossary doesn't seem to have any entries or formulas for those terms. Why don't you spell them out?

and finally, If the people here were Really concerned about Large amounts of Inconclusive Results they would be posting about the Intel iGPU Hosts which frequently have upwards of 90% Inconclusive results.
Since they aren't posting about the Intel GPUs, I can only assume there is an alternate motive.
Why does there have to be some sinister "alternative motive"? This thread is about the Linux Special App, so that tends to be the focus. It's also an app which is currently in development, whereas the Intel GPU apps somehow made it into the mainstream. Last year, there were a couple of threads where the Intel GPU app issues were raised on multiple occasions. As I recall, it boiled down to having an active developer who could look into the problem(s), since they occurred most frequently on Macs. For the record, the most recent Inconclusive list that I generated for my machines last evening shows that, out of 212 Inconclusive WUs, 41 involve some flavor of the "intel-gpu" app (28 of them running on Macs). That's certainly excessive. But "upwards of 90% Inconclusive results" seems like another alternative fact with no support. I looked at the first 7 of the hosts that showed up in my list and didn't see one that exceeded 50%, though 6 of the 7 certainly exceeded any definition of a 5% threshold (using a ratio of Inconclusive tasks to Valid tasks). So, yes it's a significant problem, but plucking numbers out of the air doesn't do much to support that conclusion.

...most of the SoG Bad Best Gaussians Validate against each other.
Another unsupported over-the-top claim which I guess is supposed to justify problems in the Special App by redirecting attention to SoG. It's not an "either/or" situation. Bad Best Gaussians in SoG may indeed be a problem that also needs fixing. I see 7 WUs (out of those 212 Inconclusives in my list) where Best Gaussian seems to be the only point of disagreement between SoG and another app. Two of them have since validated. In neither case did the tiebreaker match the SoG Best Gaussian or hand the canonical result to the SoG task. In fact, in WU 2749967652, the tiebreaker was another SoG task, and it actually agreed with the zi3t2b result:

x41p_zi3t2b, Cuda 8.00 special: Best gaussian: peak=7.343542, mean=0.6160311, ChiSq=1.177911, time=84.72, d_freq=1420891151.79, score=0.8839912, null_hyp=2.121852, chirp=-22.353, fft_len=16k
v8.22 (opencl_nvidia_SoG): Best gaussian: peak=6.274947, mean=0.6099422, ChiSq=1.409922, time=94.79, d_freq=1420886270.36, score=0.6061153, null_hyp=2.235723, chirp=33.225, fft_len=16k
v8.22 (opencl_nvidia_SoG): Best gaussian: peak=7.343541, mean=0.6160314, ChiSq=1.177918, time=84.72, d_freq=1420891151.79, score=0.8840132, null_hyp=2.121858, chirp=-22.353, fft_len=16k

...so, whatever the SoG Best Gaussian problem is, its appearance apparently depends on other factors, and it certainly doesn't appear to be cross-validating "most" of the time.
ID: 1901945
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1901953 - Posted: 19 Nov 2017, 23:52:17 UTC - in response to Message 1901945.  

What a Cluster. I'm not even going to bother except to add a few Facts.
1) Aborted Tasks (Overflows) should not even enter the equation as they are Not used.
Incessant repetition does not turn this alternative fact into a real fact. You have yet to provide a shred of evidence to support this claim.
The people that matter have already spoken on the issue of Aborted Tasks. Your continued denial is related to your motive. You can keep posting in this thread if you wish, you are not making very many friends here. If you want to see just how bad an App can be and continue to survive on SETI just consider the Intel iGPU App. The fact that the Intel app is still here should say everything you need to know.
ID: 1901953
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13731
Credit: 208,696,464
RAC: 304
Australia
Message 1902011 - Posted: 20 Nov 2017, 6:59:42 UTC - in response to Message 1901851.  

. . Wrong! The inconclusive numbers you are looking at are a false impression created by the delay in the 3rd wingmen clearing tasks and represent inconclusives not from one day but from many. If you look at only the inconclusives from any one day's returns there are only a few...

Wrong.
Yes it goes up when there are problem wingmen, yes it goes down when there are better than usual wingmen, and yes it goes down if you are getting applications validating against themselves.
The number of Inconclusives divided by the number pending, multiplied by 100, is the % of Inconclusives.
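That formula, written out (the counts are made-up illustrative numbers, chosen only to land near the percentages discussed in this thread):

```python
# Grant's rule of thumb as code: inconclusives as a percentage of
# pending results. The example counts are hypothetical.
def inconclusive_pct(inconclusive: int, pending: int) -> float:
    """Inconclusive results as a percentage of pending results."""
    return 100.0 * inconclusive / pending

print(inconclusive_pct(inconclusive=56, pending=1000))  # 5.6
```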

Everybody's percentage of inconclusives will vary to some degree, but the fact is your Linux systems are above the target threshold. So there is still work to be done on the application with regard to its accuracy.
Grant
Darwin NT
ID: 1902011
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.