Linux CUDA 'Special' App finally available, featuring Low CPU use


TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902030 - Posted: 20 Nov 2017, 9:38:09 UTC - in response to Message 1902011.  
Last modified: 20 Nov 2017, 9:41:05 UTC

Here Grant, tell me how many Real Inconclusives there are on this page, https://setiathome.berkeley.edu/results.php?hostid=6796479&state=3 These are some of the Obvious Bad Wingpeople on that One page;
https://setiathome.berkeley.edu/workunit.php?wuid=2751985115
https://setiathome.berkeley.edu/workunit.php?wuid=2751831123
https://setiathome.berkeley.edu/workunit.php?wuid=2751746801
https://setiathome.berkeley.edu/workunit.php?wuid=2751746807
https://setiathome.berkeley.edu/workunit.php?wuid=2751747096
https://setiathome.berkeley.edu/workunit.php?wuid=2751746966
https://setiathome.berkeley.edu/workunit.php?wuid=2751746978
https://setiathome.berkeley.edu/workunit.php?wuid=2751642848
https://setiathome.berkeley.edu/workunit.php?wuid=2751605257
That's about half the page; besides those, I count 9 Aborted tasks on that One page.
The above is typical.
Anyone with just a handful of Inconclusive results simply isn't getting their fair share of Misery.
ID: 1902030
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1902032 - Posted: 20 Nov 2017, 10:05:48 UTC - in response to Message 1902030.  
Last modified: 20 Nov 2017, 10:06:18 UTC

Here Grant, tell me how many Real Inconclusives there are on this page, https://setiathome.berkeley.edu/results.php?hostid=6796479&state=3

57, which, divided by 1,360 and multiplied by 100, gives 4.2%.
Which is within the target range.
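
As a minimal sketch of that arithmetic (counts hard-coded from the figures above; any host's Pending and Inconclusive totals could be substituted):

    # Gross inconclusive rate, per the formula used in this thread:
    # Inconclusives / Pendings * 100.
    inconclusives = 57    # tasks listed under "Validation inconclusive"
    pendings = 1360       # tasks listed under "Validation pending"

    rate = 100 * inconclusives / pendings
    print(f"{rate:.1f}%")  # 4.2%, inside the 0-5% target range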

The fact is, if you pull the problem children out of other systems' numbers, their numbers will likewise look even better. You could fix all those problem systems, either by fixing their applications/hardware/drivers/all of the above, or by stopping them from contributing all those inconclusives; or you could improve your application so that it falls within the 0-5% inconclusive range as it is presently determined.
Congratulations! Your application on that particular system falls within the 0-5% range, even with all the other problem children there, so it meets the criteria.
Stephen's Linux systems don't.

So I'm not really sure just what point it is you're trying to make.
Grant
Darwin NT
ID: 1902032
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902035 - Posted: 20 Nov 2017, 10:21:23 UTC - in response to Message 1902032.  
Last modified: 20 Nov 2017, 10:30:06 UTC

The point I'm actually making is that you can't just grab the gross numbers and expect them to be accurate. You seem to like that word; do you realize the Benchmark App rates the CUDA App at 99.7% accuracy?
That's 99.7% accurate against the CPU. Do you know how well the SoG App rates? 99.7% is pretty high...for accuracy.
Here you go, https://setiathome.berkeley.edu/forum_thread.php?id=78569&postid=1900637
Strongly similar, Q= 99.70%

BTW, please explain how you can blame the Host for the Server sending it a countless stream of Bad Wingpeople. That is obviously how it does work, no matter how you attempt to justify it. I'd just like to know why some Hosts escape the countless numbers I receive.
ID: 1902035
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1902037 - Posted: 20 Nov 2017, 10:47:15 UTC - in response to Message 1902035.  

The point I'm actually making is that you can't just grab the gross numbers and expect them to be accurate. You seem to like that word

I do.
Another word I like is representative. While there is a degree of variability in the accuracy of using the gross Pendings & Inconclusives to determine the percentage of inconclusives, it is representative of the actual value.

do you realize the Benchmark App rates the CUDA App at 99.7% accuracy?
That's 99.7% accurate against the CPU. Do you know how well the SoG App rates? 99.7% is pretty high...for accuracy.
Here you go, https://setiathome.berkeley.edu/forum_thread.php?id=78569&postid=1900637
Strongly similar, Q= 99.70%

Which is nice, but it shows that there is still room for improvement.
Sure, as long as it's close enough, it's good enough. But for all of that accuracy, >5% inconclusives isn't close enough, so it's not (yet) good enough.


We could discuss self-validation, each application's percentage of the total amount of work returned per hour and the effect that has on inconclusives (both for the number of WUs they return, and the number of WUs returned by problem applications/platforms etc.), and all sorts of other wonderful things.
But the fact is that the gross numbers of a given host are what is used to determine the percentage of Inconclusives, and to compare between hosts. And presently the Special application on Linux doesn't meet the mark. It's close, damned close, but it's not there yet.
Grant
Darwin NT
ID: 1902037
Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1902040 - Posted: 20 Nov 2017, 10:59:03 UTC

There are 3 issues:
1. NaN values after a restart, so the restart logic has a flaw.
2. A wrong Best Pulse, which can be the result of a more severe issue with Pulse or of some other issue (the original Pulse issue itself is already fixed).
3. Different signal ordering in overflows.

Of these, (2) could be the most dangerous, if the best signal is always selected wrongly and in the same way. If cross-validation between the same app on different hosts fails, the danger of this issue is much lower, and it moves into the inefficiency class of issues.
(1) and (3) result in inefficiency of computation (because they require additional replication); a defensive restore check for (1) is sketched below.
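
For issue (1), the usual defensive pattern is to validate restored state before resuming. A minimal sketch of the idea in Python, with hypothetical checkpoint fields rather than the app's actual checkpoint format:

    import math

    def restore_checkpoint(fields):
        """Refuse to resume from a checkpoint containing NaN/inf values;
        better to redo the work unit than to continue with poisoned state."""
        for name, value in fields.items():
            if isinstance(value, float) and not math.isfinite(value):
                raise ValueError(f"checkpoint field {name!r} is {value}; "
                                 f"discarding checkpoint and restarting")
        return fields

    # Hypothetical state: a best-pulse power that came back NaN after restart.
    try:
        restore_checkpoint({"progress": 0.42, "best_pulse_power": float("nan")})
    except ValueError as err:
        print(err)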

That seems to be all so far. And until Petri provides a new version, or testers find some new issue, it's all moot.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1902040
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902042 - Posted: 20 Nov 2017, 11:02:06 UTC - in response to Message 1902037.  

The fact is that you, or anyone else, can't fairly blame a Host for the number of Bad Wingpeople the Server sends. It's completely arbitrary, and has nothing to do with how well the Host is operating.
Why you insist it's a fair practice is interesting. The Host has absolutely no control over the number it is sent, which seems to be high for the Linux Hosts. Someone with just a few Inconclusives apparently needs to be sent about a dozen Bad Wingpeople; it happens to me all the time.
ID: 1902042
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902045 - Posted: 20 Nov 2017, 11:20:46 UTC - in response to Message 1902040.  

(2) could be the most dangerous, if the best signal is always selected wrongly and in the same way. If cross-validation between the same app on different hosts fails, the danger of this issue is much lower, and it moves into the inefficiency class of issues.
It obviously doesn't happen all the time, or the Inconclusive numbers would be much, Much, higher. Once you weed out all the Bad Wingpeople and the Aborted tasks, the actual Inconclusive number is Low. Yes, we are all waiting for an update. It's why I haven't posted the zi3v CUDA 9 version, which seems to be working better than the CUDA 8 version.
ID: 1902045
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1902046 - Posted: 20 Nov 2017, 11:37:37 UTC - in response to Message 1902042.  

It's completely arbitrary, and has nothing to do with how well the Host is operating.

You said it there: completely arbitrary. So it's that way for all hosts, and those that show good numbers are subject to the same arbitrariness as those that show poor numbers.

Why you insist it's a fair practice is interesting. The Host has absolutely no control over the number it is sent, which seems to be high for the Linux Hosts. Someone with just a few Inconclusives apparently needs to be sent about a dozen Bad Wingpeople; it happens to me all the time.

The fact that your host processes more work per hour increases the likelihood of it picking up bad wingmen. The fact that your host processes more work per hour also increases the likelihood of it picking up good wingmen.
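
A quick way to see why the two effects wash out: if problem wingmen are handed out at random with the same probability to everyone, a faster host collects more of them in absolute terms but the same proportion of them. A rough simulation (the 4% probability is invented purely for illustration):

    import random

    random.seed(1)
    P_BAD = 0.04  # chance any given wingman is a problem child (invented)

    def bad_wingman_fraction(tasks_per_day, days=30):
        """Fraction of a host's tasks that draw a bad wingman."""
        total = tasks_per_day * days
        bad = sum(random.random() < P_BAD for _ in range(total))
        return bad / total

    print(f"slow host: {bad_wingman_fraction(50):.3f}")   # ~0.040
    print(f"fast host: {bad_wingman_fraction(500):.3f}")  # ~0.040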

We're both judged the same. You consider it unfair because it goes against you; I consider it fair because it goes for me.
Because it is, mostly, fair.
Sometimes my Inconclusives rise, and other times they fall, depending on the wingmen. However, at least with the current application versions, my inconclusive rate doesn't vary too much these days.


Why do I have issues getting MB work unless I have the AP application installed & AP work is flowing? Why does my other machine rarely exhibit this behavior? Why do some others also have this issue, yet not others? Why does one of my machines pick up mostly Arecibo WUs when at the same time the other machine gets mostly GBT?
For all the randomness of work allocation and the like, it isn't truly random. However, over a certain period of time (be it days, weeks or months), it will appear more random than not. And it would most likely be the same with good/poor wingmen and inconclusives.

It would be good if all the bad wingmen could be taken out of the mix. Then, if your inconclusives improved significantly and mine remained unchanged, it would confirm that the wingmen were the issue; and if both yours and mine improved, it would show they weren't.


Time for me to call it a night.
Today was the first day back at work after a month off and I'm buggered.
Grant
Darwin NT
ID: 1902046
Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1902048 - Posted: 20 Nov 2017, 12:02:08 UTC - in response to Message 1902045.  

(2) could be the most dangerous, if the best signal is always selected wrongly and in the same way. If cross-validation between the same app on different hosts fails, the danger of this issue is much lower, and it moves into the inefficiency class of issues.
It obviously doesn't happen all the time, or the Inconclusive numbers would be much, Much, higher. Once you weed out all the Bad Wingpeople and the Aborted tasks, the actual Inconclusive number is Low. Yes, we are all waiting for an update. It's why I haven't posted the zi3v CUDA 9 version, which seems to be working better than the CUDA 8 version.

Perhaps you misinterpret my statement.
I wrote about a wrong Best Pulse failing cross-validation. The inconclusive rate would be much higher if an issue like the wrong Best Pulse happened often.
On the other hand, a low inconclusive rate here is a bad (rather than a good) sign, because IF a bad Best Pulse cross-validates, there will be no inconclusive; there will be wrong data in the master database instead. And that's the worst case.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1902048
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902049 - Posted: 20 Nov 2017, 12:22:49 UTC - in response to Message 1902048.  
Last modified: 20 Nov 2017, 13:00:45 UTC

Your case would only be valid if the Apps were matched only against each other. They aren't. Most of the tasks are matched against other Apps, not the Special App. In fact, I receive a number of inconclusives against older versions of the Special App. I ran a few tasks at Beta where no one else was running the Special App, and there were only a couple of inconclusives against the ever-present Intel App, https://setiweb.ssl.berkeley.edu/beta/results.php?hostid=63959
The nice thing about Beta is that you don't receive very many Obviously Bad Wingpeople; the Intel App, though, seems to be unavoidable. If they ever get something besides BLC4s at Beta I might run a few more.

BTW, there is yet another Intel iGPU attempt at C.A. This one doesn't have JSPF, which makes it different from the one currently at Beta: https://setiweb.ssl.berkeley.edu/beta/result.php?resultid=29132652 Whether it will be any better is anyone's guess.
ID: 1902049
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902050 - Posted: 20 Nov 2017, 12:33:48 UTC - in response to Message 1902046.  
Last modified: 20 Nov 2017, 12:39:28 UTC

The fact that your host processes more work per hour increases the likelihood of it picking up bad wingmen...
It sounds as though you just admitted your method of calculating Inconclusive rates is fundamentally flawed. I would go further and suggest the Server actually plays favorites. The numbers some receive just can't be attributed to random chance; I've seen some Hosts that seem never to be sent Bad Wingpeople, while others are sent constant streams. The only way to equalize the calculation is to remove them, along with the Aborted tasks. That leaves you with Net Inconclusives, which is a much more accurate measure.
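
A sketch of that "Net Inconclusives" bookkeeping, assuming each inconclusive has been tagged with the reason it is stuck (the tags, and the 12/36/9 split, are illustrative):

    # Reason tags and their split are illustrative; assigning them means
    # inspecting each workunit, as was done for the page of links above.
    inconclusives = (["real_mismatch"] * 12      # genuine result disagreements
                     + ["bad_wingman"] * 36      # wingman errored out/overflowed
                     + ["aborted_wingman"] * 9)  # wingman aborted the task
    pendings = 1360

    EXCLUDED = {"bad_wingman", "aborted_wingman"}
    net = [reason for reason in inconclusives if reason not in EXCLUDED]

    print(f"gross: {100 * len(inconclusives) / pendings:.1f}%")  # 4.2%
    print(f"net:   {100 * len(net) / pendings:.1f}%")            # 0.9%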

BTW, I think you will find a 99.7% Accuracy rate is well within the guidelines, and very high when compared against other GPU Apps.
ID: 1902050
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1902058 - Posted: 20 Nov 2017, 14:37:04 UTC - in response to Message 1902030.  

Here Grant, tell me how many Real Inconclusives there are on this page, https://setiathome.berkeley.edu/results.php?hostid=6796479&state=3 These are some of the Obvious Bad Wingpeople on that One page;
https://setiathome.berkeley.edu/workunit.php?wuid=2751985115
https://setiathome.berkeley.edu/workunit.php?wuid=2751831123
https://setiathome.berkeley.edu/workunit.php?wuid=2751746801
https://setiathome.berkeley.edu/workunit.php?wuid=2751746807
https://setiathome.berkeley.edu/workunit.php?wuid=2751747096
https://setiathome.berkeley.edu/workunit.php?wuid=2751746966
https://setiathome.berkeley.edu/workunit.php?wuid=2751746978
https://setiathome.berkeley.edu/workunit.php?wuid=2751642848
https://setiathome.berkeley.edu/workunit.php?wuid=2751605257
That's about half the page; besides those, I count 9 Aborted tasks on that One page.
The above is typical.
Anyone with just a handful of Inconclusive results simply isn't getting their fair share of Misery.


. . Have you sent X-File a friendly message?

Stephen

??
ID: 1902058
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1902059 - Posted: 20 Nov 2017, 15:07:42 UTC - in response to Message 1902011.  

. . Wrong! The inconclusive numbers you are looking at are a false impression created by the delay in 3rd wingmen clearing tasks; they represent inconclusives not from one day but from many. If you look at only the inconclusives from any one day's returns, there are only a few...

Wrong.
Yes it goes up when there are problem wingmen, yes it goes down when there are better than usual wingmen, and yes it goes down if you are getting applications validating against themselves.
The number of Inconclusives divided by the number pending, multiplied by 100, is the % of Inconclusives.

Everybody's percentage of inconclusives will vary to some degree, but the fact is your Linux systems are above the target threshold. So there is still work to be done on the application in regard to its accuracy.


. . OK, I'll stop you there. The rate of inconclusives is a subset of validated tasks, not of pending tasks, which have yet to be checked at all.

. . The right formula is 'tasks judged inconclusive within a given time period' multiplied by 100 and divided by 'the total number of tasks checked in that same time period'. In other words: the percentage of tasks, within a set period of time, which do not successfully validate, out of the total number of tasks checked for validation.
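
A sketch of that windowed formula, assuming per-task records of when each validation check happened and what the outcome was (the record layout is illustrative):

    from datetime import date

    def inconclusive_rate(records, start, end):
        """Percent of tasks judged inconclusive among all tasks checked
        for validation within the window [start, end]."""
        window = [outcome for when, outcome in records if start <= when <= end]
        if not window:
            return 0.0
        return 100 * window.count("inconclusive") / len(window)

    # Illustrative records: (date the validator checked the task, outcome).
    checked = [
        (date(2017, 11, 19), "valid"),
        (date(2017, 11, 20), "valid"),
        (date(2017, 11, 20), "inconclusive"),
        (date(2017, 11, 20), "valid"),
    ]

    day = date(2017, 11, 20)
    print(f"{inconclusive_rate(checked, day, day):.1f}%")  # 33.3% on that day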

Stephen

..
ID: 1902059
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902076 - Posted: 20 Nov 2017, 17:30:06 UTC
Last modified: 20 Nov 2017, 17:38:27 UTC

So, I grabbed one of the remaining 2 tasks on that page of inconclusives, mainly because it was a difference in Best Gaussians against a SoG task and I pretty much knew how it would turn out. Been there, got the T-Shirt.
This one, https://setiathome.berkeley.edu/workunit.php?wuid=2751780910
CUDA Special 9.0;
Best spike: peak=27.62942, time=73.82, d_freq=1419628836.76, chirp=-11.973, fft_len=128k
Best autocorr: peak=18.17568, time=100.7, delay=1.3053, d_freq=1419627102.04, chirp=-17.923, fft_len=128k
Best gaussian: peak=3.550708, mean=0.5256593, ChiSq=1.295515, time=34.39, d_freq=1419629788.24,
score=1.28786, null_hyp=2.256684, chirp=-15.498, fft_len=16k
Best pulse: peak=2.482953, time=48.47, period=0.7122, d_freq=1419631092.84, score=1.009, chirp=47.473, fft_len=512
Best triplet: peak=0, time=-2.12e+11, period=0, d_freq=0, chirp=0, fft_len=0

Windows SoG 3584;
Best spike: peak=27.62945, time=73.82, d_freq=1419628836.76, chirp=-11.973, fft_len=128k
Best autocorr: peak=18.17569, time=100.7, delay=1.3053, d_freq=1419627102.04, chirp=-17.923, fft_len=128k
Best gaussian: peak=3.397433, mean=0.5405703, ChiSq=1.389597, time=37.75, d_freq=1419629736.23,
score=0.7114021, null_hyp=2.280532, chirp=-15.498, fft_len=16k
Best pulse: peak=2.482952, time=48.47, period=0.7122, d_freq=1419631092.84, score=1.009, chirp=47.473, fft_len=512
Best triplet: peak=0, time=-2.12e+011, period=0, d_freq=0, chirp=0, fft_len=0

CPU 3711;
Best spike: peak=27.62945, time=73.82, d_freq=1419628836.76, chirp=-11.973, fft_len=128k
Best autocorr: peak=18.17567, time=100.7, delay=1.3053, d_freq=1419627102.04, chirp=-17.923, fft_len=128k
Best gaussian: peak=3.550708, mean=0.5256588, ChiSq=1.29552, time=34.39, d_freq=1419629788.24,
score=1.287918, null_hyp=2.25669, chirp=-15.498, fft_len=16k
Best pulse: peak=2.482954, time=48.47, period=0.7122, d_freq=1419631092.84, score=1.009, chirp=47.473, fft_len=512
Best triplet: peak=0, time=-2.12e+11, period=0, d_freq=0, chirp=0, fft_len=0

How did I know? Because I've tested quite a few of them and know the SoG Apps are at times producing Bad Best Gaussians. So you see, it's not just the CUDA App that sometimes produces a Bad Best Result; the other Apps do as well. But if you keep harping on the One App, people will think it's the only App that has that problem and ask Stupid Sh..tuff like, "Gee, do you think these problems will cause us to miss ET?" No more than the problems with the other Apps will...is the correct answer.

Anyone who thinks the SoG Apps aren't cross-validating with Bad Best Gaussians is in severe denial. There are many, Many more SoG tasks validating each other than there are CUDA tasks. This affects All SoG Apps: nVidia, ATI, Windows, and Linux. There are currently No Mac SoG Apps.
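
For what it's worth, triples like the one above are easy to check mechanically: pull the key=value pairs out of each "Best ..." stderr line and flag whichever GPU result strays from the CPU baseline. A rough sketch in Python (values hard-coded from the Best gaussian lines above; the 1% tolerance is arbitrary, and the project validator's real comparison is more involved than this):

    import re

    def parse_best(line):
        """Extract numeric key=value pairs from a 'Best ...' stderr line."""
        return {k: float(v) for k, v in
                re.findall(r"(\w+)=(-?[\d.]+(?:e[+-]?\d+)?)\b", line)}

    # Values hard-coded from the three Best gaussian reports above.
    cuda = parse_best("peak=3.550708, mean=0.5256593, ChiSq=1.295515, "
                      "time=34.39, score=1.28786")
    sog  = parse_best("peak=3.397433, mean=0.5405703, ChiSq=1.389597, "
                      "time=37.75, score=0.7114021")
    cpu  = parse_best("peak=3.550708, mean=0.5256588, ChiSq=1.29552, "
                      "time=34.39, score=1.287918")

    TOL = 0.01  # 1% relative tolerance, purely illustrative
    for name, result in (("CUDA", cuda), ("SoG", sog)):
        for key, ref in cpu.items():
            if abs(result[key] - ref) > TOL * max(abs(ref), 1e-9):
                print(f"{name} vs CPU differs on {key}: {result[key]} vs {ref}")
    # Only the SoG gaussian trips the check; the CUDA one matches the CPU.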
ID: 1902076
-= Vyper =-
Volunteer tester

Joined: 5 Sep 99
Posts: 1652
Credit: 1,065,191,981
RAC: 2,537
Sweden
Message 1902086 - Posted: 20 Nov 2017, 19:39:07 UTC

You can also check my host here to add it to the mix.

I don't really see the point in anybody here being upset over a high number of inconclusives. As long as there are a lot of other applications that a particular host gets matched against, then the faster the host is, the higher its inconclusive rate will be until the others catch up and the workunit is either invalid, errored out, or validated.
Of course we need to monitor the values, but as stated earlier the main story of the inconclusive rate is still the out-of-order sorting of the Linux CUDA app (if I haven't accidentally missed somewhere that it has been fixed), plus how fast the particular host is.

For instance: TBar's host has a lot of 1050/1060s, and those are faster than, in my case, 750 Tis. https://setiathome.berkeley.edu/results.php?hostid=8053171 My inconclusive rate is 3.5%, TBar's is 4.7%, and Petri's host has for the moment 4.6% inconclusives.
TBar is running x41p_zi3v, I am running x41p_zi3x-32 and Petri is running x41p_zi3xs3. So we can't really compare apples and oranges there, but there seems to be an indication that Petri's app "seems" to be even more on the spot and conclusive for the moment than TBar's.

And as I said, the reason I believe my inconclusive rate is lower is that my GPUs are slower, so they don't turn work back in fast enough to build up a large pending queue by comparison. TBar's pending/inconclusive ratio is 26.9, mine is 19.7 and Petri's is 22.1.
I think there is a correlation with the pending/inconclusive ratio, if compared with the same application, for the inconclusive rate in percentage. I suspect that if the host is faster the P/I ratio should rise, and thus the inconclusive percentage likewise!
For example, if my host had even slower GPUs, the P/I ratio should drop somewhat further, to perhaps 17, and I believe the inconclusive percentage would drop to perhaps 3.35%. So in my mind we really can't fully compare the inconclusive percentage to judge which app is better, at least not to the extent this discussion is leaning towards.
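
To put rough numbers on that hunch, here is a toy snapshot model (every rate in it is invented). In this simple version the snapshot percentage comes out about the same for fast and slow hosts, because the pending pile and the inconclusive pile both grow with host speed; what inflates the percentage is how much longer inconclusives linger waiting for a tie-breaking third result:

    import random

    random.seed(7)
    P_MISMATCH = 0.02   # true chance a task first fails validation (invented)
    T_PENDING = 5       # days a normal task waits on its wingman (invented)
    T_INCONCL = 14      # days an inconclusive waits on a tie-breaker (invented)

    def snapshot(tasks_per_day, days=120):
        """Pending and inconclusive counts visible on one day's results pages."""
        pending = inconclusive = 0
        for age in range(days):              # days since the task was sent out
            for _ in range(tasks_per_day):
                if random.random() < P_MISMATCH:
                    if age < T_INCONCL:      # still waiting on a third result
                        inconclusive += 1
                elif age < T_PENDING:        # still waiting on the wingman
                    pending += 1
        return pending, inconclusive

    for label, rate in (("slow host", 50), ("fast host", 500)):
        p, i = snapshot(rate)
        print(f"{label}: {100 * i / p:.1f}% at snapshot "
              f"(true mismatch rate {100 * P_MISMATCH:.0f}%)")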

What we all should be concerned about is making sure the app matches the original stock CPU application as closely as possible, and keeping the invalid/error rates down; those are what really show when a particular application run is not on par with the rest. Period!
Inconclusives are just a hint/guideline, but I believe that host speed must be taken into consideration as well, rather than looking only at the inconclusive percentage value.

Insert a lot of unnecessary wait states to slow the application down, and the median inconclusive value most certainly would decrease as well.

Most importantly now: is my thinking and assumption wrong here regarding adding host speed to the mix? Give me your thoughts, and please keep this thread less hostile than it has turned out to be, lads.

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group
ID: 1902086
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1902130 - Posted: 21 Nov 2017, 2:22:36 UTC
Last modified: 21 Nov 2017, 2:26:34 UTC

I got another one, https://setiathome.berkeley.edu/workunit.php?wuid=2752595674 It looked the same, so I decided to run it.

CUDA Special 9.0;
Best spike: peak=25.70546, time=85.56, d_freq=1420565431.23, chirp=90.947, fft_len=32k
Best autocorr: peak=18.00695, time=100.7, delay=4.8311, d_freq=1420567715.68, chirp=13.008, fft_len=128k
Best gaussian: peak=3.818873, mean=0.5320869, ChiSq=1.266398, time=26, d_freq=1420563643.87,
score=0.9975023, null_hyp=2.222546, chirp=-36.341, fft_len=16k
Best pulse: peak=5.220406, time=46.27, period=1.73, d_freq=1420565622.82, score=1.009, chirp=-68.462, fft_len=512
Best triplet: peak=0, time=-2.12e+11, period=0, d_freq=0, chirp=0, fft_len=0

Windows SoG 3584;
Best spike: peak=25.70546, time=85.56, d_freq=1420565431.23, chirp=90.947, fft_len=32k
Best autocorr: peak=18.00693, time=100.7, delay=4.8311, d_freq=1420567715.68, chirp=13.008, fft_len=128k
Best gaussian: peak=3.664716, mean=0.5392017, ChiSq=1.408692, time=27.68, d_freq=1420563582.9,
score=0.614264, null_hyp=2.284244, chirp=-36.341, fft_len=16k
Best pulse: peak=5.22041, time=46.27, period=1.73, d_freq=1420565622.82, score=1.009, chirp=-68.462, fft_len=512
Best triplet: peak=0, time=-2.12e+011, period=0, d_freq=0, chirp=0, fft_len=0

CPU 3711;
Best spike: peak=25.70545, time=85.56, d_freq=1420565431.23, chirp=90.947, fft_len=32k
Best autocorr: peak=18.00695, time=100.7, delay=4.8311, d_freq=1420567715.68, chirp=13.008, fft_len=128k
Best gaussian: peak=3.818873, mean=0.5320868, ChiSq=1.2664, time=26, d_freq=1420563643.87,
score=0.9975224, null_hyp=2.222548, chirp=-36.341, fft_len=16k
Best pulse: peak=5.220407, time=46.27, period=1.73, d_freq=1420565622.82, score=1.009, chirp=-68.462, fft_len=512
Best triplet: peak=0, time=-2.12e+11, period=0, d_freq=0, chirp=0, fft_len=0

This looks to be a pretty common problem. This Host had ZERO Inconclusive results before, https://setiathome.berkeley.edu/results.php?hostid=8282042
I'll bet he has more Bad Gaussians; they just aren't showing up, because the other SoG Hosts are validating with the same Bad Gaussians. He obviously isn't being introduced to mister Bad Wingman either...I wonder Why?
I just found Two examples of the Bad Best Results....and I wasn't even trying.
ID: 1902130
Jeff Buck
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1902149 - Posted: 21 Nov 2017, 3:55:54 UTC

Seeing as how this thread has now apparently been expanded to encompass topics beyond the Linux Special App, I might as well just make my entire Inconclusive list available to all. Crowdsourcing it, I guess. That way, those who wish can pick out and report on any interesting Inconclusive that catches their eye, rather than the narrow focus I was trying to maintain.

Special App of course (though no CUDA 9 this evening), stock CUDA, SoG, Intel GPU, Mac, runaway hosts, overflows, non-overflows, etc. There should be something interesting for almost everybody among this evening's 214 WUs (except Astropulse, which I did weed out).

Download today's list here: https://www.dropbox.com/s/l1rp2kr5lv9u9vc/Inconclusives_20171120.7z?dl=0

The archive includes the list itself, in HTML format, as well as a .bbc file which contains the same info but with BBCode tags instead of HTML tags, to allow easy copy-and-paste into a forum post.
ID: 1902149
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1902179 - Posted: 21 Nov 2017, 5:55:49 UTC - in response to Message 1902050.  
Last modified: 21 Nov 2017, 6:04:10 UTC

The fact that your host processes more work per hour increases the likelihood of it picking up bad wingmen...
It sounds as though you just admitted your method of calculating Inconclusive rates is fundamentally flawed.

Please don't cherry-pick; read everything, and read it in context.
If you had actually read everything that I posted, you would have seen that the statement you quoted continued: "The fact that your host processes more work per hour also increases the likelihood of it picking up good wingmen."
Each negates the other.


BTW, I think you will find a 99.7% Accuracy rate is well within the guidelines, and very high when compared against other GPU Apps.

Never said it wasn't.
But for all of its accuracy in the benchmarks, that hasn't translated here on main, where, if it were as accurate as it's meant to be, it wouldn't have such a (relatively) high percentage of inconclusives.
Grant
Darwin NT
ID: 1902179
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1902180 - Posted: 21 Nov 2017, 5:56:47 UTC - in response to Message 1902076.  
Last modified: 21 Nov 2017, 6:07:31 UTC

So you see, it's not just the CUDA App that sometimes produces a Bad Best Result; the other Apps do as well. But if you keep harping on the One App,

I haven't been harping on it. My initial response was to a post that claimed it wasn't an issue; the fact is it is an issue, and it needs to be addressed.
Since then I have just been responding to (mostly) your posts.
Grant
Darwin NT
ID: 1902180
Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1902188 - Posted: 21 Nov 2017, 6:24:20 UTC - in response to Message 1902130.  

I just found Two examples of the Bad Best Results....and I wasn't even trying.

I just hope you kept the tasks, because after the previous netbook's death I'm not sure I still have the previously collected test cases. The last benches it did were in the summer.
Your reaction is understandable if you perceive any bug found in Petri's app as a personal offence. But why so?

And regarding the Best Gaussian SoG app issue (it was stated earlier in this very thread, but if we're going in circles here...):
- yes, the issue exists
- yes, I already confirmed it on an ATi host (and there it was specific to the SoG path; non-SoG OpenCL doesn't have it)
- no, it is still not fixed
- yes, it will be fixed when I have a working dev environment at my disposal again. Currently that's not the case. I could add the possibility that someone else might fix the issue (it's open source, freely available: https://setisvn.ssl.berkeley.edu/svn/branches/sah_v7_opt), but the chances are about the same as someone other than Petri fixing his app (approaching zero).
- yes, it's sad that this issue was found only after deployment on main. It just shows another flaw of beta-site testing. From this point of view, one should be happy that the similar issue in Petri's CUDA app was found at a much earlier stage.
But, as they say, two wrongs don't make a right, yeah?
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1902188