Posts by Jeff Buck


1) Message boards : Number crunching : BOINC assigns device X - Problem (Message 1813812)
Posted 22 hours ago by Profile Jeff Buck
I let my host 8064262 run all night with SoG and with the "<api_version>" line for the SoG app removed from app_info.xml. Everything looks fine, with device numbers being consistently reported correctly in the Stderr.
2) Message boards : Number crunching : BOINC assigns device X - Problem (Message 1813719)
Posted 1 day ago by Profile Jeff Buck
It seems there was a long discussion over in Beta about a month ago, regarding device assignment logic, that might have some bearing on this problem. Based on my admittedly simplistic understanding of what they were talking about over there, I just tried an experiment that Richard Haselgrove had suggested in that thread, namely to delete the

<api_version>7.5.0</api_version>

line from the SoG <app_version> section in app_info.xml (after shutting down BOINC, of course).
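
For reference, that line sits inside the <app_version> block for the SoG app in app_info.xml, roughly like this. The executable name and plan_class below are only placeholders; use whatever your own installation actually contains:

<app_version>
    <app_name>setiathome_v8</app_name>
    <version_num>812</version_num>
    <plan_class>opencl_nvidia_SoG</plan_class>
    <api_version>7.5.0</api_version>   <!-- the line to delete -->
    <coproc>
        <type>NVIDIA</type>
        <count>1</count>
    </coproc>
    <file_ref>
        <file_name>MB8_win_x86_SSE3_OpenCL_NV_SoG_r3500.exe</file_name>
        <main_program/>
    </file_ref>
</app_version>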

It appears that that might be where this problem originates as well. Looking at the Stderr for Task 5124498887, which was running on Device 3 both before and after the app_info change, it shows "BOINC assigns device 0" before, and both "Running on device number: 3" and "BOINC assigns device 3" after.

I'll also document two tasks that started from scratch after the change: Task 5124498589 also ran on Device 3 and Task 5124498260 ran on Device 2. The Stderr for each shows the correct device.

So, this problem also appears to tie in with that api_version / init_data.xml issue discussed in Beta. What the actual solution to that was, I couldn't figure out. ;^)
3) Message boards : Number crunching : BOINC assigns device X - Problem (Message 1813461)
Posted 1 day ago by Profile Jeff Buck

Today's testing was run plain vanilla, with no command line parameters for the SoG tasks. The host is on BOINC 7.6.22.

Put some option into the command line and place a space after it, or just put some spaces into the cmd line - will it help?

It doesn't appear that a trailing space makes a difference. I've just run two tasks with SoG, the first with a cmdline containing just two blank spaces. It's task 5123812346. It ran on Device 3 but the Stderr shows the usual "BOINC assigns device 0".

The second task, 5123914697, which ran on Device 1, was initially started with a simple "-use_sleep" cmdline. After confirming that "BOINC assigns device 0" was showing in the slot's Stderr and that -use_sleep was recognized, I suspended the task, added a space to the end of the cmdline ("-use_sleep ") and then resumed the task. The Stderr now shows a second occurrence of "BOINC assigns device 0". I also confirmed that the cmdline file with "-use_sleep" had a size of 10 bytes, while "-use_sleep " was 11 bytes, so I'm certain the trailing space wasn't being stripped before the file was saved.
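
For anyone who wants to double-check the same thing on their own setup, here's the sort of quick byte-count check I mean. The path below is just a stand-in for whichever cmdline file your SoG installation actually reads:

import os

# Stand-in path; point this at the cmdline file your SoG app actually reads.
path = r"C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\mb_cmdline_SoG.txt"

with open(path, "rb") as f:
    data = f.read()

# "-use_sleep" should show 10 bytes; "-use_sleep " (trailing space) should show 11.
print(os.path.getsize(path), "bytes:", repr(data))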
4) Message boards : Number crunching : BOINC assigns device X - Problem (Message 1813359)
Posted 2 days ago by Profile Jeff Buck
I just noticed this happening today with SoG r3500. I've normally been running Cuda50 on my new host 8064262 which has 4 GTX 960s. However, I took some time today to switch a handful of tasks to SoG, intermixed with the Cuda tasks so I could get some comparisons between the two apps for tasks that were split from the same files and had matching ARs. I realized later, when I was pulling the results to put into my spreadsheet, that while the matching Cuda tasks showed as being distributed across all 4 GPUs, every one of the SoG tasks (all 11 of them) showed "BOINC assigns device 0". Pretty much a statistical impossibility, I think, although I didn't actually watch the tasks run, so I didn't see the true device numbers that each task really ran on.

Just to be sure though, once I saw this thread resurface this evening, I went and swapped another group of four tasks over to SoG and this time recorded which GPU got which task, as follows:

Task 5123149033 - Device 0
Task 5123193201 - Device 1
Task 5123193448 - Device 1
Task 5123193447 - Device 3

Here's a screenshot of the properties for that last one:

[screenshot of the task properties]

The Stderr for every one shows "BOINC assigns device 0", making it impossible to know which GPU the task actually ran on. That makes it difficult to accurately match test results across app, GPU, and AR. Perhaps not such a big deal on this box, although even with 4 GTX 960s, there are 3 different clock speeds involved. However, if I wanted to do something similar on my host with a GTX 670, GTX 780, and GTX 960, I'd have to pay attention to the tasks at some point while they're actually running, and not just pick up the results later. This would also be a long-term problem if it ever became necessary to identify a specific GPU when one starts to cough up hairballs.
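
About the only workaround I can think of would be to snapshot the slot directories while the tasks are actually running. A rough sketch of what I mean, assuming (I haven't verified this) that each slot's init_data.xml carries the result name and a <gpu_device_num> field; the path is for a default Windows install, so adjust as needed:

import glob
import re

# Record which GPU each currently running task has been handed, by reading the slots.
for init_file in glob.glob(r"C:\ProgramData\BOINC\slots\*\init_data.xml"):
    text = open(init_file, encoding="utf-8", errors="ignore").read()
    name = re.search(r"<result_name>(.*?)</result_name>", text)
    dev = re.search(r"<gpu_device_num>(-?\d+)</gpu_device_num>", text)
    if name and dev:
        print(name.group(1), "-> device", dev.group(1))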

Today's testing was run plain vanilla, with no command line parameters for the SoG tasks. The host is on BOINC 7.6.22.

Has anybody done anything to look into this problem?
5) Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore (Message 1813085)
Posted 3 days ago by Profile Jeff Buck
I think it would be useful to know what kind of hit the DB took when the change was made from 100 GPU tasks per host to 100 tasks per GPU. Whatever that increase was, and how well the DB handled it, might be informative in the current discussion. However, I don't think that was ever looked at or, if it was, I don't remember ever seeing it mentioned here.
6) Message boards : Number crunching : A Multi-GPU Cruncher for the Less Affluent Among Us (Message 1813073)
Posted 3 days ago by Profile Jeff Buck
A PM pointed out to me that I omitted the ongoing electricity usage for this rig. Kill-A-Watt is showing me a usage fluctuating from about 510 to 550 watts. That's with all 4 GPUs and 8 CPUs averaging about a 95% load. I don't know how that wattage compares to some of the big rigs with the latest and greatest hardware, but it's the upfront cost that I was focusing on, inasmuch as some people here seem to have very low electric rates, while others trend toward the high end.
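
For anyone wanting to translate that draw into a monthly figure, the arithmetic is simple enough. The 12 cents/kWh below is only an example rate; plug in your own:

# Rough monthly running cost at a steady draw near the middle of the Kill-A-Watt range.
watts = 530                 # midpoint of the 510-550 W readings
rate_per_kwh = 0.12         # example rate in $/kWh; actual rates vary widely
kwh_per_month = watts / 1000.0 * 24 * 30
print(f"{kwh_per_month:.0f} kWh/month, roughly ${kwh_per_month * rate_per_kwh:.2f}/month")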
7) Message boards : Number crunching : Xeon Phi (Message 1812881)
Posted 4 days ago by Profile Jeff Buck
Ivan posted a lot about it in:
http://setiathome.berkeley.edu/forum_thread.php?id=72020#1381374

He's also posted from time to time in other threads. Do an Advanced search in his posts.
8) Message boards : Number crunching : The Saga Begins (LotsaCores 2.0) (Message 1812792)
Posted 4 days ago by Profile Jeff Buck
Jeff, here is a screenshot of my resource monitor on that machine

http://i.imgur.com/qFHDBp1.jpg

Do you see anything out of the ordinary? I did try disabling things like extra LAN ports, both COM ports, the audio that is installed with each card, that kind of thing, but it didn't seem to help.

No, it doesn't appear that your Hardware Reserve is anywhere near being an issue, at least compared to what I've run into. The problem I had/have on my 6980751 host, with 2 GTX660s and 2 GTX750Tis, is that the Hardware Reserve is 1793 MB for some reason, and when I tried replacing one of the 660s with a GTX960, the Hardware Reserve jumped to 2049 MB. Since it's running 32-bit Win7, that reserve reduced the available memory to 2047 MB, not enough to keep S@H running full blast. All my other boxes, whether 32-bit or 64-bit, only show a Hardware Reserve in the 1 MB to 3 MB range, so the reserved memory on that one machine is very puzzling to me.

You might also try that exercise that Keith mentioned in his post previous to mine:
You would have to look at one 750's system properties and the Resources tab to see how much memory footprint one card takes.

I tried that also, last week, just for my own edification and to see if the total memory mapped for the cards came close to matching the Hardware Reserve, but found it only accounted for about half of it. However, I did notice that the memory that the cards were mapped to appears to be included in the memory allocated to the PCI Buses. It would be just a guess on my part, but if that PCI Bus memory allocation is fixed, perhaps your six cards are using it all up. On my box, the total memory used for the 4 cards was 948 MB out of 1445 MB allocated to the PCI Buses. Perhaps one of the more knowledgeable hardware or OS guys could chime in on how this device memory mapping all works.
9) Message boards : Number crunching : The Saga Begins (LotsaCores 2.0) (Message 1812690)
Posted 5 days ago by Profile Jeff Buck
The issue with Windows is where it maps the video card apertures into available system memory. You might just run out of address space. You would have to look at one 750's system properties and the Resources tab to see how much memory footprint one card takes.

I just ran into a problem like this on one of my machines when I tried to upgrade a GTX 660 to a GTX 960. Being a 32-bit Win7 box, it technically only has about 3.5GB memory available to begin with and it appears the GPUs are requiring a lot of "Hardware Reserved" memory for some reason. (None of my other boxes have that issue.) The GTX960 increased that reserved amount by 256MB, just enough to shrink my available memory to a point where it started causing some of the S@H apps to compete with each other for that limited resource. I went back to the 660, at least until I can do some more research.

Now, your system is vastly different from mine, but it might be useful for you to run Resource Monitor and take a look at the Hardware Reserved amount shown on the Memory tab, even if it just eliminates that as a possible problem.
10) Message boards : Number crunching : A Multi-GPU Cruncher for the Less Affluent Among Us (Message 1812685)
Posted 5 days ago by Profile Jeff Buck
I thought it worthwhile to remind those who are itching to increase their contribution to the project that there can be more economical ways to do it, too.

Lots of GTX 750Tis available at very reasonable prices, and as Shaggie76's charts show they're no slouch when it comes to work done for power consumed. Still keeping pace with the latest & greatest & leading all the older cards.

My other xw9400 (with an 800w PSU) has been running for a couple years now with 2 GTX 750Tis and 2 GTX 660s. The GTX 660s are more productive overall but do use more power.

I went ahead and stuck with the 960s on the new xw9400 because I figured the 1050w PSU gave me more headroom and I went for higher total output instead of output per watt. As it is, the new build actually seems to be drawing slightly less total power than that first xw9400, and my preliminary estimate is for at least 15% more crunching.

Although the newer Maxwell cards aren't as productive as they could be with the current apps, I've gotten the impression that once Jason and Petri dish out that Special app to the general public, that whole scenario will improve.
11) Message boards : Number crunching : A Multi-GPU Cruncher for the Less Affluent Among Us (Message 1812570)
Posted 5 days ago by Profile Jeff Buck
Yeah, I have nothing at all against those with the money to spend on the shiniest stuff. I get a bit envious sometimes, too. But since it seems like there's been so much emphasis on the high-end toys lately, especially since the 10xx series was launched, I thought it worthwhile to remind those who are itching to increase their contribution to the project that there can be more economical ways to do it, too.
12) Message boards : Number crunching : A Multi-GPU Cruncher for the Less Affluent Among Us (Message 1812550)
Posted 5 days ago by Profile Jeff Buck
Being rather frugal and with apparently somewhat shallower pockets than some crunchers, I read with some bemusement the threads detailing the building of crunch-monsters with the latest and greatest top-of-the-line processors, motherboards, GTX 10xx GPUs, and cooling solutions that rival the refrigerator in my kitchen. All with apparently little regard to the dollars/euros/pounds/yen/rupees that go out the door.

My approach tends to be a bit different, recycling/reusing/repurposing pre-owned components, primarily from sources on eBay and elsewhere online. Just about everything I currently crunch with, except for my daily driver, has come to life via that route.

A recent plan to start upgrading my GTX 660s to GTX 960s, now that the folks on the bleeding edge were starting to dump their barely used 9xx cards in favor of the new 10xx series, instead evolved into the assembly of an entirely new multi-GPU cruncher, at a cost that, at least to me, was relatively affordable.

The new machine is host 8064262. It's an HP xw9400 Workstation that arrived at my door with the following specifications:

Dual Quad Core AMD Opteron 2380 2.5GHz Processors
HP OEM Liquid Cooler
16GB ECC RAM
Two 300GB SAS Hard Drives
1050-Watt PSU
NVIDIA Quadro FX3700 Graphics Card

After pulling half the RAM, one of the HDDs and the Quadro FX3700, I gradually added one GPU at a time to get to the cruncher that's now been happily humming along for a couple of weeks.

At what cost, you may ask. Well, even if you didn't, that's the whole point of this post. Here's the tale of the [cash register] tape (all prices are in US$ and include any S/H charges):

HP xw9400 Workstation =============== $170.00
Windows 7 Pro 64-bit ================= 34.94
HP (OEM) GTX 960 Reference GPU =========== 119.00
Gigabyte GTX 960 GPU ================= 122.50
EVGA GTX 960 SSC GPU ================= 122.50
Gigabyte GTX 960 GPU =================== 120.00
PCIe x16 12-inch Riser Cable =============== 10.32
Two PCIe 12-inch 6-pin Extension Cables ======= 13.98
PCIe x16 7-inch Riser Cable (from my parts bin) === N/C
KVM Cable (from my parts bin) =============== N/C
-------------------------------------------------------------
TOTAL Cost ============================= $713.24

That cost could also be slightly offset by the extra items (8GB ECC Memory, 1 300GB SAS HDD, Quadro FX3700) pulled from the machine. In fact the FX3700 has already been sold for a net return of $10.52.

So, with only a couple weeks of occasionally intermittent crunching, that box has already reached a RAC of 20K and I'm hoping to see it reach somewhere close to 35K by the time it stabilizes (not running 24/7, but rather about 143 hours per week). Does that compare with any of the "latest and greatest" boxes built by the deep-pockets guys? Probably not, but I still have money to spend on other things, like food.

So, for those of you with a few cobwebs in your wallet, you can see that there are actually some fairly heavy-duty crunching possibilities out there that don't require a second mortgage! ;^)
13) Message boards : Number crunching : Monitoring inconclusive GBT validations and harvesting data for testing (Message 1812373)
Posted 6 days ago by Profile Jeff Buck
Okay, I've tweaked the format of my listing a bit, replacing the "Anonymous platform" designations with, hopefully, more specific app descriptions. I've also identified which Work Units are guppis. The latest file is available from this link. (Unfortunately, Amazon's cloud drive apparently gets all screwed up when one tries to replace an existing file, so the link in my earlier post no longer seems to be valid.)

Examples of the slightly altered format:

Workunit 2243574106
Task 5111259127 (S=7, A=0, P=5, T=0, G=1) v8.12 (opencl_intel_gpu_sah) windows_intelx86
Task 5111259128 (S=8, A=0, P=5, T=0, G=0) x41zi (baseline v8), Cuda 5.00

Workunit 2244749285 (guppi)
Task 5113705190 (S=2, A=0, P=8, T=2, G=0) AVXxj Win64 Build 3330
Task 5113705191 (S=2, A=0, P=11, T=2, G=0) v8.12 (opencl_intel_gpu_sah) windows_intelx86

Many can be ruled out as flaky hosts/GPUs pretty easily (with stock Cuda).

That's one of the reasons I figured that a listing showing the signal counts for each task would help to more easily weed out the "off the rails" hosts, without having to download the WU files or otherwise dig any deeper.
14) Message boards : Number crunching : Monitoring inconclusive GBT validations and harvesting data for testing (Message 1812335)
Posted 6 days ago by Profile Jeff Buck
I guess that's a start, but we would prefer the data file so we could get some sample results to match and see how similar they are.

Sure, but there are over 100 Workunits in that list and it's likely that only a very few of them might be useful for this testing. That's why I included the signal counts in the summary and embedded links for each work unit and task. That way, those who know better than I what it is specifically that they're looking for can more easily winnow down the potential testing candidates and only retrieve those particular WU files from the server.

The one thing I will attempt to do this evening is to replace those "Anonymous platform" IDs with more specific app identifiers wherever I can. Unfortunately, there's not a consistent format for that info in the Stderrs.
15) Message boards : Number crunching : Monitoring inconclusive GBT validations and harvesting data for testing (Message 1812254)
Posted 6 days ago by Profile Jeff Buck
Started out as a gray and dismal morning here, so I decided to see if I could use the current Inconclusives in my own local DB to programmatically generate some sort of formatted list that could possibly be useful to those of you doing the research and testing in this area. Here's a sampling of WUs from my initial stab at it:

Workunit 2213859925
Task 5047460794 (S=0, A=0, P=14, T=2, G=0) Anonymous platform (NVIDIA GPU)
Task 5047460795 (S=0, A=0, P=14, T=2, G=0) v8.12 (opencl_intel_gpu_sah) windows_intelx86

Workunit 2237343062
Task 5097864317 (S=0, A=1, P=4, T=0, G=0) Anonymous platform (NVIDIA GPU)
Task 5097864318 (S=0, A=1, P=3, T=0, G=0) v8.00 (opencl_intel_gpu_sah) x86_64-apple-darwin
Task 5102639717 (S=3, A=1, P=4, T=0, G=0) v8.12 (opencl_ati5_SoG_cat132) windows_intelx86

Workunit 2236974648
Task 5097074623 (S=4, A=0, P=8, T=1, G=0) Anonymous platform (NVIDIA GPU)
Task 5111493046 (S=4, A=0, P=10, T=1, G=0) v8.12 (opencl_intel_gpu_sah) windows_intelx86
Task 5113043247 (S=4, A=0, P=5, T=1, G=0) v8.00 (opencl_intel_gpu_sah) x86_64-apple-darwin
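
In case anyone's curious, there's nothing fancy behind those lines; each one is just the task's signal counts and app description glued together, along these lines (the dictionaries below merely stand in for rows pulled from my local DB):

# Each dict stands in for a row pulled from my local DB of Inconclusive tasks.
tasks = [
    {"id": 5047460794, "S": 0, "A": 0, "P": 14, "T": 2, "G": 0,
     "app": "Anonymous platform (NVIDIA GPU)"},
    {"id": 5047460795, "S": 0, "A": 0, "P": 14, "T": 2, "G": 0,
     "app": "v8.12 (opencl_intel_gpu_sah) windows_intelx86"},
]

for t in tasks:
    print("Task {id} (S={S}, A={A}, P={P}, T={T}, G={G}) {app}".format(**t))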

The full list (in html format, with WU and Task links) is available in a cloud file, if anyone wants to take a further look.

These are all just WUs that one of my own hosts is involved in, either already with an Inconclusive result, or in my queue waiting to run as a tiebreaker. I notice that there are already a few that have been resolved, just in the few hours since my DB was updated.

Let me know if this is at all useful. Meantime, the sun is now out (albeit through a high smoky haze) and it's lunchtime.
16) Message boards : Number crunching : The Saga Begins (LotsaCores 2.0) (Message 1811886)
Posted 7 days ago by Profile Jeff Buck
I recently had some frequent "memory related failure" issues (on Cuda50 tasks) resulting in task postponement on my two 4-GPU machines with driver 361.91. At that time, one machine (a new build) included a single 750Ti in the mix and the other had (and still has) two. (I had just upgraded the driver on that one because I was attempting to replace a GTX 660 with a 960, a move which I've had to back off from for other reasons.)

No actual memory, CPU or other issues were detected in many hours of diagnostic testing, but since both machines had just had driver 361.91 installed, I decided to try backing up to earlier drivers. Going back to 361.75 on the new build didn't eliminate the problem, but backing up to 359.00 seemed to make it go away. Since the other machine had been crunching just fine for a long time on 353.30, and since the GTX 960 had been removed, I simply reverted to that version and the problem also went away. Both machines have been error free for about 10 days now.

Not definitive evidence of a problem with the newer drivers, perhaps, but good enough for me to avoid them for now.
17) Message boards : Number crunching : Panic Mode On (103) Server Problems? (Message 1811525)
Posted 8 days ago by Profile Jeff Buck
If any of you hungering for guppis happen to have any recent "lost" or "ghost" tasks in your task lists, you might have some guppis there and there's always the "report the same task twice trick" available to retrieve them. See Message 1798083. I had 9 lost tasks from a snafu last week, 7 of which were guppis, and just retrieved mine a few minutes ago:
8/22/2016 1:56:49 PM | SETI@home | Reporting 1 completed tasks
8/22/2016 1:56:49 PM | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
8/22/2016 1:56:53 PM | SETI@home | Scheduler request completed: got 9 new tasks
8/22/2016 1:56:53 PM | SETI@home | Resent lost task blc5_2bit_guppi_57451_71111_HIP117779_OFF_0028.24420.831.17.26.118.vlar_1
8/22/2016 1:56:53 PM | SETI@home | Resent lost task blc5_2bit_guppi_57451_71786_HIP117779_OFF_0030.24392.416.18.27.205.vlar_1
8/22/2016 1:56:53 PM | SETI@home | Resent lost task blc5_2bit_guppi_57451_71786_HIP117779_OFF_0030.24392.416.18.27.208.vlar_1
8/22/2016 1:56:53 PM | SETI@home | Resent lost task blc5_2bit_guppi_57451_69735_HIP117559_OFF_0024.24364.831.17.26.169.vlar_1
8/22/2016 1:56:53 PM | SETI@home | Resent lost task blc5_2bit_guppi_57451_69735_HIP117559_OFF_0024.2764.416.18.27.82.vlar_1
8/22/2016 1:56:53 PM | SETI@home | Resent lost task blc5_2bit_guppi_57451_69735_HIP117559_OFF_0024.2764.416.18.27.144.vlar_1
8/22/2016 1:56:53 PM | SETI@home | Resent lost task blc5_2bit_guppi_57451_69735_HIP117559_OFF_0024.2764.416.18.27.150.vlar_0
8/22/2016 1:56:53 PM | SETI@home | Resent lost task 24mr10an.30485.11523.5.32.206_2
8/22/2016 1:56:53 PM | SETI@home | Resent lost task 04no09ae.4389.18886.8.35.192_2
18) Message boards : Number crunching : Monitoring inconclusive GBT validations and harvesting data for testing (Message 1811462)
Posted 8 days ago by Profile Jeff Buck
If you are able to email me ( jason underscore groothuis at hotmail dot com ) the result files, the task files, and the stderrs from runs, I can certainly stick them under the microscope. The result files say a lot more than stderr ever can.

On the way....I hope.

EDIT: Just in case there's an email hitch, I've also uploaded a zip file to Amazon Cloud.
19) Message boards : Number crunching : Monitoring inconclusive GBT validations and harvesting data for testing (Message 1811454)
Posted 8 days ago by Profile Jeff Buck
So, does any of that point to something that would help identify the root of the inconsistencies?


It does help. Though the Cuda app isn't represented here, it looks familiar.

I could've run my task w/ Cuda50 but wouldn't have gotten the signal detail that SoG provided (hint, hint).

Depending on if those OpenCLs are nv GPUs,

Ah, a detail I left out of my post. My task 5104834544 ran on a GTX 750Ti and the Mac task 5000863926 ran on a GTX 680MX.

I have the complete task detail pages, the original WU file, and my task's result file tucked away if anyone wants them for further testing or analysis.
20) Message boards : Number crunching : Monitoring inconclusive GBT validations and harvesting data for testing (Message 1811431)
Posted 8 days ago by Profile Jeff Buck
I think it would only be fair to link back to Jeff Buck's message 1810642, which sparked the whole idea off.

What particularly caught my eye in that specific WU was that each of the original 3 tasks flagged as Inconclusive showed counts that were different from each other, not just a two-against-one situation. The Triplet count, in particular, was different in each of the three. The signal summaries for the Pulses and Triplets appear to show where the main disagreements lie.

Task 5000863925 - SETI@home v8 v8.00 windows_intelx86
and my tiebreaker Task 5104834544 - SETI@home v8 Anonymous platform (NVIDIA GPU) [SSE3xj Win32 Build 3500], which supplied the detail shown:
Spike count: 5
Autocorr count: 0
Pulse count: 5
Triplet count: 3
Gaussian count: 0

Pulse: peak=3.463834, time=45.84, period=6.89, d_freq=1793048198.64, score=1.04, chirp=-15.375, fft_len=512
Triplet: peak=10.94141, time=79.19, period=4.576, d_freq=1793047058.11, chirp=-18.222, fft_len=32
Triplet: peak=11.145, time=79.19, period=4.576, d_freq=1793047051.92, chirp=-27.333, fft_len=32
Pulse: peak=7.657701, time=45.9, period=20.34, d_freq=1793047378.49, score=1.009, chirp=-51.723, fft_len=2k
Pulse: peak=0.4216424, time=45.81, period=0.2394, d_freq=1793043214.91, score=1.003, chirp=-60.739, fft_len=32

Triplet: peak=11.70704, time=59.39, period=21.47, d_freq=1793054369.21, chirp=-63.776, fft_len=64
Pulse: peak=2.575877, time=45.9, period=4.757, d_freq=1793047910.32, score=1.004, chirp=-74.832, fft_len=2k
Pulse: peak=5.652046, time=45.9, period=14.41, d_freq=1793053296.17, score=1.036, chirp=96.186, fft_len=2k
Best pulse: peak=3.463834, time=45.84, period=6.89, d_freq=1793048198.64, score=1.04, chirp=-15.375, fft_len=512

Best triplet: peak=11.70704, time=59.39, period=21.47, d_freq=1793054369.21, chirp=-63.776, fft_len=64

Task 5000863926 - SETI@home v8 v8.00 (opencl_nvidia_mac) x86_64-apple-darwin [SSE3x OS X 64bit Build 3321]
Spike count: 5
Autocorr count: 0
Pulse count: 6
Triplet count: 2
Gaussian count: 0

Pulse: peak=3.448323, time=45.84, period=6.89, d_freq=1793048198.64, score=1.036, chirp=-15.375, fft_len=512
Triplet: peak=10.5995, time=86.75, period=4.384, d_freq=1793050046.62, chirp=-48.401, fft_len=1024
Pulse: peak=7.653532, time=45.9, period=20.34, d_freq=1793047378.49, score=1.008, chirp=-51.723, fft_len=2k
Pulse: peak=0.4235946, time=45.81, period=0.2394, d_freq=1793043214.91, score=1.008, chirp=-60.739, fft_len=32

Triplet: peak=11.62673, time=59.39, period=21.47, d_freq=1793054369.21, chirp=-63.776, fft_len=64
Pulse: peak=2.582622, time=45.9, period=4.757, d_freq=1793047910.32, score=1.007, chirp=-74.832, fft_len=2k
Pulse: peak=1.258106, time=45.84, period=1.71, d_freq=1793049293.14, score=1.001, chirp=-95.853, fft_len=512
Pulse: peak=5.619142, time=45.9, period=14.41, d_freq=1793053296.17, score=1.029, chirp=96.186, fft_len=2k
Best pulse: peak=3.448323, time=45.84, period=6.89, d_freq=1793048198.64, score=1.036, chirp=-15.375, fft_len=512

Best triplet: peak=11.62673, time=59.39, period=21.47, d_freq=1793054369.21, chirp=-63.776, fft_len=64

Task 5103387390 - SETI@home v8 v8.12 (opencl_intel_gpu_sah) windows_intelx86 [SSSE3xj Win32 Build 3430]
Spike count: 5
Autocorr count: 0
Pulse count: 6
Triplet count: 4
Gaussian count: 0

Pulse: peak=3.479678, time=45.84, period=6.89, d_freq=1793048198.64, score=1.045, chirp=-15.375, fft_len=512
Triplet: peak=10.85742, time=79.19, period=4.576, d_freq=1793047058.11, chirp=-18.222, fft_len=32
Triplet: peak=11.05782, time=79.19, period=4.576, d_freq=1793047051.92, chirp=-27.333, fft_len=32

Pulse: peak=7.675513, time=45.9, period=20.34, d_freq=1793047378.49, score=1.011, chirp=-51.723, fft_len=2k
Triplet: peak=10.98131, time=60.22, period=19.37, d_freq=1793046782.23, chirp=51.817, fft_len=1024
Pulse: peak=4.355305, time=45.9, period=11.04, d_freq=1793048395.46, score=1.002, chirp=60.881, fft_len=2k
Triplet: peak=12.00244, time=59.39, period=21.47, d_freq=1793054369.21, chirp=-63.776, fft_len=64
Pulse: peak=2.598966, time=45.9, period=4.757, d_freq=1793047910.32, score=1.013, chirp=-74.832, fft_len=2k
Pulse: peak=5.616441, time=45.9, period=14.41, d_freq=1793053296.17, score=1.029, chirp=96.186, fft_len=2k
Pulse: peak=1.084218, time=45.82, period=1.296, d_freq=1793043223.34, score=1, chirp=99.459, fft_len=128
Best pulse: peak=3.479678, time=45.84, period=6.89, d_freq=1793048198.64, score=1.045, chirp=-15.375, fft_len=512

Best triplet: peak=12.00244, time=59.39, period=21.47, d_freq=1793054369.21, chirp=-63.776, fft_len=64

So, does any of that point to something that would help identify the root of the inconsistencies?
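
For anyone who'd rather line those signals up programmatically than by eye, here's a naive sketch. It pairs signals on type, chirp, and fft_len, which is how the matching entries above appear to correspond; the sample values are taken from the first two tasks, and the 0.01 tolerance is purely illustrative:

# Pair signals from two tasks by (type, chirp, fft_len) and flag differing peaks.
def index(signals):
    return {(kind, chirp, fft): peak for (kind, peak, chirp, fft) in signals}

task_a = [("Pulse", 3.463834, -15.375, 512),
          ("Triplet", 11.70704, -63.776, 64)]
task_b = [("Pulse", 3.448323, -15.375, 512),
          ("Triplet", 11.62673, -63.776, 64)]

a, b = index(task_a), index(task_b)
for key in sorted(set(a) | set(b)):
    pa, pb = a.get(key), b.get(key)
    if pa is None or pb is None:
        print(key, "reported by only one task")
    elif abs(pa - pb) > 0.01:
        print(key, "peaks differ:", pa, "vs", pb)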


