Posts by Jeff Buck

1) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1914902)
Posted 24 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
I don't get upset about Petri trying out the latest zi3xs4 on Main. It is just one computer. Now if he made that app available to the public and dozens of computers were trying to run it, I would have an issue.
Well, although zi3xs4 may not be "publicly" available, I just noticed that Laurent (W3Perl) is running it on some of his machines. I don't know how many of them, because I stopped counting after 6. It's probably a safe bet that there are more. "Dozens" might very well be apt.

So.......somewhere in there, I would also say it's a safe bet that there have been cross-validations between Petri's box and Laurent's. If any of those involve that 30-Pulse overflow bug that Petri was testing a fix for last week, then that's definitely bad, because they would have either overruled a 3rd host reporting legitimate results or, if they were the first two hosts on the WU, prevented a non-impaired host from even getting that WU. That situation should be completely unacceptable.
Inasmuch as it would be impossible for me to manually go through all of Petri's or Laurent's tasks to see how often those 30-Pulse overflows might have cross-validated, I've instead used my own hosts' history with Petri as wingman to come up with an estimate.

In the last 30 days, Petri's host has been wingman on 319 WUs across my 3 Linux boxes. Excluding the 20 of his "ghosts" that have yet to time out, that leaves 299 tasks that have been completed. Of those, Petri's task has been marked as Invalid 6 times, most likely due to that 30-Pulse overflow bug. Had Petri's wingman for those tasks been one of Laurent's boxes running zi3xs4, they would likely have successfully validated, putting garbage into the DB.

Now, I don't know exactly how many of Laurent's boxes are running zi3xs4, though I did identify that 24 out of the first 34 that I looked at were running it, about 70%. Since Laurent's current RAC is about 10 times mine, I think it's probably safe to assume that he's processing about 10 times the number of tasks that I am, also. And if 70% of them are run with zi3xs4, I would estimate that Petri has been his wingman on about 2,100 of those zi3xs4 tasks. Therefore, if the ratio of Invalids that I saw from Petri's tasks against my hosts holds, a reasonable estimate is that somewhere around 42 of those bogus 30-Pulse overflows will have been cross-validated and found their way into the DB. That's just in the last 30 days.
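The arithmetic behind that estimate can be sketched in a few lines (a rough back-of-the-envelope calculation using the approximate figures from the paragraph above, not exact database counts):

```python
# Back-of-the-envelope estimate of cross-validated 30-Pulse overflows.
# All inputs are the approximations quoted above, not exact counts.
invalid_tasks = 6          # Petri's tasks marked Invalid against my hosts
completed_tasks = 299      # completed WUs with Petri as wingman, last 30 days
invalid_rate = invalid_tasks / completed_tasks    # roughly 2%

zi3xs4_pairings = 2100     # estimated Petri/Laurent zi3xs4 pairings
bad_validations = zi3xs4_pairings * invalid_rate

print(round(bad_validations))    # prints 42
```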

In my opinion, that needs to stop immediately. If Petri's come up with a fix for that bug, and it appeared as if he did, based on the test results he posted last week, that fix needs to be implemented NOW. Either that, or get zi3xs4 out of the production environment entirely.
2) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1914682)
Posted 23 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
I don't get upset about Petri trying out the latest zi3xs4 on Main. It is just one computer. Now if he made that app available to the public and dozens of computers were trying to run it, I would have an issue.
Well, although zi3xs4 may not be "publicly" available, I just noticed that Laurent (W3Perl) is running it on some of his machines. I don't know how many of them, because I stopped counting after 6. It's probably a safe bet that there are more. "Dozens" might very well be apt.

So.......somewhere in there, I would also say it's a safe bet that there have been cross-validations between Petri's box and Laurent's. If any of those involve that 30-Pulse overflow bug that Petri was testing a fix for last week, then that's definitely bad, because they would have either overruled a 3rd host reporting legitimate results or, if they were the first two hosts on the WU, prevented a non-impaired host from even getting that WU. That situation should be completely unacceptable.
3) Message boards : Number crunching : Your Donations to SETI@home (Message 1914381)
Posted 21 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
Yes, there were a bunch of password, authentication and login commits this month that have already been propagated to the server software last week.
Add support for "visible password" checkbox

You can review all the commits related to login on that page.
It looks like it's probably the one where the "forgot email address?" field and link were removed. The GitHub code, though, is basically the back end, where the form input gets processed. I don't know if there's any record of the HTML changes to the actual login page anywhere.

I don't think removing that one field, by itself, would normally have had any effect, but it would be my guess that the old HTML must have included "tabindex" attributes. Likely the old page had the Email address input field as "tabindex=1", with the Password field as "tabindex=2" and the Login button as "tabindex=3". I'm not sure you could even have tabbed to the "forgot email" or "forgot password" links, but with the "tabindex" attributes gone, now every field is included in the default tab order. Just one of those pesky little details that someone probably never even gave a thought to.
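A hypothetical before/after sketch of what that might look like (the field names and link targets here are illustrative guesses, not the actual page source):

```html
<!-- Old markup (hypothetical): explicit tabindex skips the recovery link -->
<input type="email"    name="email_addr" tabindex="1">
<a href="recover.php">Forgot password?</a>
<input type="password" name="passwd" tabindex="2">
<input type="submit" value="Log in" tabindex="3">

<!-- New markup (hypothetical): with tabindex gone, the link gets its own
     Tab stop, so Email -> Password now takes two presses of Tab -->
<input type="email"    name="email_addr">
<a href="recover.php">Forgot password?</a>
<input type="password" name="passwd">
<input type="submit" value="Log in">
```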
4) Message boards : Number crunching : Your Donations to SETI@home (Message 1914360)
Posted 21 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
The problem would have to be put in as a new issue on the BOINC development list where it will be classified as unimportant and forgotten. At least that is what happened to my Issue#2106
Yeah, it's not a major issue, just a minor annoyance to a few people....and I'm not actually one of them. I just identified the simple solution in case someone else wanted to pursue it.

Speaking of minor annoyances, has anyone else noticed the change to the Login screen two or three days ago? In addition to what appear to be minor font changes, the tab sequence has changed. It used to require just a single tab to go from the Userid field to the Password field. Now, it takes two tabs. My keyboard habits for a repetitive task like logging in are so deeply ingrained that I don't even think about what my fingers are doing, or watch the screen. Two days in a row I ended up on the Reset Password screen because I only hit the Tab key once, as I've been doing for years. Induced a rather startled WTF response, I can tell you!
5) Message boards : Number crunching : Panic Mode On (110) Server Problems? (Message 1914355)
Posted 21 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
I've never seen any recommendation for the special app to use anything other than the -nobs no blocking sync flag. The whole reason for the special app is to utilize the maximum potential of the graphics card. Not sure where the -bs blocking sync flag would be useful. Maybe the low end dual core and 1050 class systems?
As Grant mentioned, the original publicly available Special App version that launched the Linux thread had Blocking Sync built in. I ran into a problem with that on the first machine I tried it on, seemingly slowing down one of the GPUs that was on a riser cable off an x8(x4) slot. That caused me to switch to Petri's version without Blocking Sync, although the other two GPUs in that machine worked fine either way.

My other two Linux machines have been tested both ways and, reluctant as I am to lose a full core to support each GPU, I ultimately ended up using -nobs across the board. I think my last post on the subject was back in July (Message 1878599), in response to a question you had, as a matter of fact. :^)
6) Message boards : Number crunching : Your Donations to SETI@home (Message 1914343)
Posted 21 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
(I couldn't find a way to put them in one horizontal line, though).
That's one of the "features" of the so-called web site upgrade that got dropped on us over a year ago. Kittyman has expressed his disappointment several times that he can't get his signature images to appear side-by-side, as they used to. The problem is in the new CSS, where the "display: inline" default attribute is getting overridden by "display: block". I can't see any useful reason for that change. It simply came with the new CSS and was never changed back to the old attribute. I commented on it a couple times, in response to kittyman's posts (see Messages 1834656 and 1870131), but nothing ever came of it.
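For anyone curious, the difference boils down to roughly this (the selector here is illustrative; I haven't pulled the exact rule out of the site's stylesheet):

```css
/* Old behavior: signature images flow side by side on one line */
.signature img { display: inline; }

/* New stylesheet: each image is forced onto its own line */
.signature img { display: block; }
```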
7) Message boards : Number crunching : Panic Mode On (109) Server Problems? (Message 1913874)
Posted 19 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
MW is the most set and forget project I have run. I never have to micromanage it at all.
I only keep a backup available on one of my crunch-only machines, just to make sure it maintains a little heat in the bedroom on chilly nights when SaH runs out of work. My first choice is Asteroids, but they're often out of work, too, so I added MilkyWay as a backup to the backup. The last time it ran on Windows was about 3 years ago. It worked fine. But that machine is now Linux, and when MilkyWay kicked in one night a couple months ago, it turned out to be a colossal waste of time. I don't remember how many tasks it ran, but when I checked the results the next day, I found that all but one of them had been marked Invalid. I think they all ran to completion without throwing any errors, but it was all just wasted electricity (except for the little bit of extra heat). I never did try to figure out what might have happened, just turned off MilkyWay and added Einstein for the next time that both SaH and Asteroids ran out.
8) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1913829)
Posted 18 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
That's very encouraging news. That particular bug was a new one that obviously didn't exist in zi3v, so it's good to hear that it's been squashed. I don't know if you looked at the individual reported signals for that WU but, in case it might be useful, here's what the zi3v Cuda 9.00 reported for that first one.

Pulse: peak=1.996181, time=45.9, period=3.165, d_freq=7367429608.67, score=1.039, chirp=39.863, fft_len=2k
Pulse: peak=2.723336, time=45.86, period=4.847, d_freq=7367428898.79, score=1.037, chirp=44.771, fft_len=1024 
Pulse: peak=5.816846, time=45.84, period=11.48, d_freq=7367435017.85, score=1.024, chirp=44.921, fft_len=512 
Spike: peak=24.17922, time=20.4, d_freq=7367434859.72, chirp=-47.133, fft_len=8k
Spike: peak=24.39612, time=20.4, d_freq=7367434859.75, chirp=-47.338, fft_len=8k
Spike: peak=24.55352, time=20.4, d_freq=7367434859.77, chirp=-47.542, fft_len=8k
Pulse: peak=9.582356, time=46.17, period=25.77, d_freq=7367433634.62, score=1.044, chirp=-47.542, fft_len=8k
Spike: peak=24.10306, time=20.4, d_freq=7367434859.9, chirp=-47.673, fft_len=8k
Spike: peak=24.64978, time=20.4, d_freq=7367434859.79, chirp=-47.746, fft_len=8k
Spike: peak=24.06996, time=20.4, d_freq=7367434859.66, chirp=-47.821, fft_len=8k
Spike: peak=24.6841, time=20.4, d_freq=7367434859.81, chirp=-47.951, fft_len=8k
Spike: peak=24.30376, time=20.4, d_freq=7367434859.68, chirp=-48.026, fft_len=8k
Spike: peak=24.65621, time=20.4, d_freq=7367434859.83, chirp=-48.155, fft_len=8k
Spike: peak=24.47851, time=20.4, d_freq=7367434859.7, chirp=-48.23, fft_len=8k
Spike: peak=24.68352, time=20.4, d_freq=7367434859.83, chirp=-48.361, fft_len=8k
Spike: peak=24.59247, time=20.4, d_freq=7367434859.73, chirp=-48.434, fft_len=8k
Spike: peak=24.57421, time=20.4, d_freq=7367434859.85, chirp=-48.565, fft_len=8k
Spike: peak=24.6457, time=20.4, d_freq=7367434859.75, chirp=-48.639, fft_len=8k
Spike: peak=24.40411, time=20.4, d_freq=7367434859.87, chirp=-48.77, fft_len=8k
Spike: peak=24.63735, time=20.4, d_freq=7367434859.77, chirp=-48.843, fft_len=8k
Spike: peak=24.17529, time=20.4, d_freq=7367434859.9, chirp=-48.974, fft_len=8k
Spike: peak=24.53525, time=20.4, d_freq=7367434859.77, chirp=-49.049, fft_len=8k
Spike: peak=24.44674, time=20.4, d_freq=7367434859.79, chirp=-49.253, fft_len=8k
Spike: peak=24.29813, time=20.4, d_freq=7367434859.81, chirp=-49.457, fft_len=8k
Spike: peak=24.09077, time=20.4, d_freq=7367434859.83, chirp=-49.662, fft_len=8k
Pulse: peak=1.930647, time=45.9, period=3.232, d_freq=7367431788.23, score=1.003, chirp=-70.058, fft_len=2k

Best spike: peak=24.6841, time=20.4, d_freq=7367434859.81, chirp=-47.951, fft_len=8k
Best autocorr: peak=17.15316, time=40.09, delay=3.4191, d_freq=7367432119.01, chirp=11.946, fft_len=128k
Best gaussian: peak=0, mean=0, ChiSq=0, time=-2.124e+11, d_freq=0,
	score=-12, null_hyp=0, chirp=0, fft_len=0 
Best pulse: peak=9.582356, time=46.17, period=25.77, d_freq=7367433634.62, score=1.044, chirp=-47.542, fft_len=8k
Best triplet: peak=0, time=-2.124e+11, period=0, d_freq=0, chirp=0, fft_len=0 
Spike count:    21
Autocorr count: 0
Pulse count:    5
Triplet count:  0
Gaussian count: 0

That's the task that was awarded the canonical result when it validated against the v8.22 (opencl_ati5_nocal) windows_intelx86 tiebreaker.

If that detail is helpful, I should be able to supply it for the other two WUs, as well, though your "Strongly similar, Q= 99.76%" result probably indicates that you don't really need it.

EDIT: Okay, well here's the second one, anyway, this one from zi3v Cuda 8.00 which validated against v8.06 arm-unknown-linux-gnueabihf and also was awarded the canonical result.
Pulse: peak=0.9703487, time=45.84, period=1.106, d_freq=7477586881.35, score=1.071, chirp=-8.7555, fft_len=512 
Triplet: peak=11.34332, time=57.8, period=24.85, d_freq=7477592726.71, chirp=-21.437, fft_len=512 
Pulse: peak=9.662902, time=45.99, period=27.38, d_freq=7477596337.82, score=1.023, chirp=-31.288, fft_len=4k
Pulse: peak=5.316748, time=45.99, period=12.17, d_freq=7477587075.59, score=1.004, chirp=-34.269, fft_len=4k
Pulse: peak=3.947454, time=45.84, period=8.663, d_freq=7477592545.33, score=1.031, chirp=54.348, fft_len=512 
Pulse: peak=7.959779, time=45.9, period=20.52, d_freq=7477590483.52, score=1.048, chirp=-66.124, fft_len=2k
Pulse: peak=5.490968, time=45.99, period=13.33, d_freq=7477597761.59, score=1.034, chirp=82.956, fft_len=4k
Pulse: peak=3.772126, time=45.86, period=8.198, d_freq=7477596805.94, score=1.009, chirp=-92.09, fft_len=1024 
Pulse: peak=4.049043, time=45.84, period=7.958, d_freq=7477596138.5, score=1.06, chirp=-94.505, fft_len=512 

Best spike: peak=23.56962, time=62.99, d_freq=7477590158.97, chirp=-6.9772, fft_len=128k
Best autocorr: peak=17.02896, time=51.54, delay=4.9579, d_freq=7477592869.41, chirp=7.7832, fft_len=128k
Best gaussian: peak=0, mean=0, ChiSq=0, time=-2.124e+11, d_freq=0,
	score=-12, null_hyp=0, chirp=0, fft_len=0 
Best pulse: peak=0.9703487, time=45.84, period=1.106, d_freq=7477586881.35, score=1.071, chirp=-8.7555, fft_len=512 
Best triplet: peak=11.34332, time=57.8, period=24.85, d_freq=7477592726.71, chirp=-21.437, fft_len=512 

Spike count:    0
Autocorr count: 0
Pulse count:    8
Triplet count:  1
Gaussian count: 0
9) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1913637)
Posted 17 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
I hope someone has those wu's saved somewhere. They might reveal a bug in off line testing.
I tried the links and the wu is not available any more, and the one in parentheses gives an nginx error.
Here you go:

https://www.dropbox.com/s/bqy3c14hvpmgymv/InconclusivesWUs_20180112.7z?dl=0

Always happy to help facilitate offline bug-hunting. :^)
10) Message boards : Number crunching : Panic Mode On (109) Server Problems? (Message 1913566)
Posted 17 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
Well, this is bizarre. My WinVista machine just got this:

1/17/2018 11:32:03 AM | SETI@home | Sending scheduler request: To fetch work.
1/17/2018 11:32:03 AM | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
1/17/2018 11:32:05 AM | SETI@home | Scheduler request completed: got 0 new tasks
1/17/2018 11:32:05 AM | SETI@home | Project has no tasks available
1/17/2018 11:32:07 AM | SETI@home | Started download of arecibo_181.png
1/17/2018 11:32:07 AM | SETI@home | Started download of sah_40.png
1/17/2018 11:32:07 AM | SETI@home | Started download of sah_banner_290.png
1/17/2018 11:32:07 AM | SETI@home | Started download of sah_ss_290.png
1/17/2018 11:32:09 AM | SETI@home | Finished download of arecibo_181.png
1/17/2018 11:32:09 AM | SETI@home | Finished download of sah_40.png
1/17/2018 11:32:09 AM | SETI@home | Finished download of sah_banner_290.png
1/17/2018 11:32:09 AM | SETI@home | Finished download of sah_ss_290.png
11) Message boards : Number crunching : A very steep decline in Average Credits!!! (Message 1913558)
Posted 17 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
But I don't think you can make the assumption that for AR= X. Task(A) AR = X and Task(B) AR =X therefore task A = task B and thus credit Task(A ) = credit Task(B). It just doesn't work out that way. BOINC still generates the random number for credit for each task.
That's why I looked at multiple tasks in each category and just posted the raw results. It was how the range of credits in each rescheduled group compared with the similar control groups that I felt was meaningful, not the individual credit amount for any specific task. Obviously, the more tasks in each group, the better the comparison that can be made. Only one group in my test (non-VLARs rescheduled from CPU to GPU on one machine) contained fewer than 4 tasks. The other 11 groups all had 4.
12) Message boards : Number crunching : A very steep decline in Average Credits!!! (Message 1913547)
Posted 17 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
How would you test for that. Easy enough to run the same task in the benchmark apparatus with a cpu and then gpu. But how do you submit the same task to the project for validation and credit award?
For this type of test, there's really nothing an offline bench can tell you. You actually have to move one group of tasks from CPU to GPU and another group from GPU to CPU. Then just match the task types and ARs as closely as possible and record the amount of credit awarded.

In the tables that I linked to, you can see that I included 46 different tasks, all grouped by task type (Guppi VLAR and Arecibo non-VLAR), with ARs as closely matched as I could get them over the period that I was monitoring. Then each group is further broken down based on the device originally assigned vs. the device where the tasks actually ran, with unrescheduled tasks on both CPU and GPU as controls.

Since we currently have very few Arecibo non-VLARs showing up, any current test could only compare similar Guppi VLARs, but that could still be informative.

Of course, all that is dependent on actually receiving new tasks of any kind sometime later this year. ;^)
13) Message boards : Number crunching : A very steep decline in Average Credits!!! (Message 1913415)
Posted 17 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
If somebody has some hard data showing just what the impact of rescheduling is on granted credits, or can run some new tests to generate a comparison, I think it would be very useful. When I first experimented with rescheduling in June of 2016, there were some people who said it did affect credit and others who said that was a myth that had already been put to rest long before.

So, just to make sure that my own rescheduling wasn't messing up other people's credit, I did some fairly extensive comparisons. My results were posted in Message 1799300. The conclusion I reached, based on those results, was that rescheduling had "no more impact to the credits than is caused by the random number generator that assigns them in the first place."

Rescheduling at that time simply meant moving Guppi VLARs that were originally assigned to the GPUs over to the CPUs, and moving non-VLAR Arecibo tasks that were originally assigned to the CPUs over to the GPUs. So, yes, tasks were being run on a different device than what they were originally assigned to, which is the issue that is being raised again here.

Now, perhaps things have changed in some way in the last year and a half, such that my previous conclusion is no longer valid. If so, I think new testing and documented results would be needed to demonstrate it.
14) Message boards : Number crunching : Panic Mode On (109) Server Problems? (Message 1913262)
Posted 15 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
My backup project is to improve the code and run local tests.
Recovering some of those 5,300+ ghosts you've created wouldn't be a bad use of your time, either. Having those locked away when other people are starving for work is not particularly helpful.
15) Message boards : Number crunching : Panic Mode On (109) Server Problems? (Message 1913044)
Posted 14 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
If I/O contention is the issue, I suppose it would depend on what I/O is causing the bottleneck. The GBT splitters are all on Centurion, and I don't see any other processes that would contend with them there. However, they must be hitting the BOINC DB, probably on Oscar, and feeding the split files to the scheduler over on Synergy. The file deleters are over on Georgem and Bruno, so it wouldn't seem as if they would contend with the splitters anywhere but at the DB. But.........it's certainly complicated.

I wonder if it would help, as Keith suggested earlier, if new GBT splitters could be added over on Lando or Vader, where the PFB splitters used to run, avoiding any further contention on Centurion.
16) Message boards : Number crunching : Panic Mode On (109) Server Problems? (Message 1912845)
Posted 13 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
Back about a week ago, Eric wrote:
If we don't start building a queue I'll add more GBT splitters.
I wonder if that's still an option.
17) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1912834)
Posted 13 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
. . The moral there is for the majority of normal crunchers to NOT change over to the newer/edgier revisions UNTIL the developers announce they believe them to be trusty and stable. The good news is that Zi3v in both Cuda80 and Cuda90 versions seems to be that. At least the Cuda80 version is judging by my results.

Stephen

The zi3v version may be stable, in the sense that it isn't changing underfoot, but it still does have several nagging issues that have yet to be addressed. I hope those are the sorts of things that Petri's working on with zi3xs4, not just trying to squeeze a few more seconds out of the run times.

However, it certainly appears that zi3xs4 is a work in progress, which absolutely should not be happening in a production environment. At least it appears that Petri is the only one running that version but still, the sort of primary testing that yields results like the 3 WUs in my previous post (where all 3 of the zi3xs4 tasks got marked Invalid) is what should be taking place offline, using a collection of specific WUs which can produce repeatable results until the app gets it right. That collection should already be pretty extensive, but those 3 new WUs could certainly be added. And I'm sure Petri's seen many more such WUs crop up in his own task list.

In my opinion, zi3xs4 should be in use offline, or in Beta, only.
18) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1912717)
Posted 13 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
Well, I don't know what Petri's doing with that zi3xs4 version, but it sure doesn't look stable.

Workunit 2813410758 (blc05_2bit_guppi_57976_77315_HIP46417_0038.12216.818.21.44.188.vlar)
Task 6306343836 (S=0, A=0, P=30, T=0, G=0, BS=13.40984, BG=0, BP=23.54865) x41p_zi3xs4, Cuda 9.10 special
Task 6306343837 (S=2, A=2, P=5, T=2, G=0, BS=24.3649, BG=0, BP=12.14392) x41p_zi3v, Cuda 9.00 special

Workunit 2813410770 (blc05_2bit_guppi_57976_75329_HIP46343_0032.11400.818.21.44.192.vlar)
Task 6306343840 (S=0, A=0, P=30, T=0, G=0, BS=12.58375, BG=0, BP=2.555692) x41p_zi3xs4, Cuda 9.10 special
Task 6306343841 (S=21, A=0, P=5, T=0, G=0, BS=24.6841, BG=0, BP=9.582356) x41p_zi3v, Cuda 9.00 special

Workunit 2813438980 (blc05_2bit_guppi_57976_76984_HIP46432_0037.16675.409.21.44.90.vlar)
Task 6306402535 (S=0, A=0, P=30, T=0, G=0, BS=12.35322, BG=0, BP=3.181179) x41p_zi3xs4, Cuda 9.10 special
Task 6306402536 (S=0, A=0, P=8, T=1, G=0, BS=23.56962, BG=0, BP=0.9703487) x41p_zi3v, Cuda 8.00 special

He's coughing up 30-Pulse hairballs where my zi3v hosts are reporting normal-looking results. And, of the Pulses that are reported by my hosts, I don't see any correlation with his reported Pulses.
19) Message boards : Number crunching : Panic Mode On (109) Server Problems? (Message 1912708)
Posted 13 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
Here's a theory for y'all. Do you suppose Meltdown and Spectre patches have been applied to that server, possibly degrading performance?
20) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1912601)
Posted 12 Jan 2018 by Profile Jeff Buck Special Project $250 donor
Post:
Yeah, it's ultimately a problem with the whole Anonymous Platform concept. As far as I know, it has no way to "certify" non-stock applications as safe to run/test in the production environment. If it did, then apps could also be de-certified once they were deemed obsolete or significant problems were identified.

I realize that SoG had many of the same growing pains that the Special App has had. However, the two major differences I see there are that the various SoG versions at least tended to be pushed through Beta first, and that the developer tended to be more responsive to addressing problems that surfaced than has been the case with the Special App (though, at times, there could be significant resistance ;^)). Now, the current versions of SoG seem to be pretty well accepted as mainstream apps, but with a similar legacy of numerous outdated versions still floating around in the SETI-sphere.


©2018 University of California

SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.