Posts by jason_gee

1) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1878678)
Posted 17 Jul 2017 by Profile jason_gee
Post:
Just a quick drop by to give a heads up, mostly directed at Petri, while sorting other stuff. Upcoming Cuda changes for [redacted] will likely require checking/modifications to all Warp Synchronous code. Both Baseline and Special.
2) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875834)
Posted 30 Jun 2017 by Profile jason_gee
Post:

Thanks. Still need to read it... Repo version is in sync enough?


Not quite yet, though I have a copy. I'm using this weekend to do that, now that the switchover to faster internet is done and the teething problems are ironed out.

[Edit:] Well that goes much quicker with 20Mbps upload, instead of 1Mbps :)
Cuda multibeam Alpha: updated client/alpha to Petri's zi3v. Addresses the pulse race condition; some questions about best Gaussian are being investigated.
3) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875602)
Posted 29 Jun 2017 by Profile jason_gee
Post:
While I'm certainly eagerly awaiting a Windows version of the Special App (I view my Linux excursion as merely a temporary visit to a carnival funhouse), I surely hope you don't have to end up in the unemployment line to make it happen. ;^)


Oh that's on the cards anyway, partially intentionally, partially because the local economy isn't doing so well. Quite sick of working for 'the man', I've elected to study updated web development and do freelance tech help around the neighbourhood like I used to as a kid, adding the web dev work to the portfolio as I go. There'll be some short term pain, but likely a lot more time to dedicate to what I want to do over time.
4) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875594)
Posted 29 Jun 2017 by Profile jason_gee
Post:
As for 1Mb cards, couldn't that be coded? If Vram<1200mb -> set unroll=1


Conceptually things like this can be done as the code is made more general, though it's complicated by a number of factors. Chief among them: since VRAM became virtualised after the original XP driver models, on all platforms, the amount of memory the driver reports no longer bears much relation to what's physically free. It was much easier when GPUs could only really do one thing at a time.
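
Purely as an illustration (not code from the app), the check suggested above could look something like this; the thresholds and unroll values are made up, and the 'free' figure reported under virtualised memory models is only a rough guide:

// Hypothetical sketch: pick an unroll factor from the VRAM the driver reports free.
#include <cuda_runtime.h>
#include <cstdio>

int chooseUnroll()
{
    size_t freeBytes = 0, totalBytes = 0;
    if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess)
        return 1;                      // can't tell, so play it safe

    size_t freeMiB = freeBytes / (1024 * 1024);
    int unroll = 1;                    // conservative default for ~1 GB cards
    if (freeMiB >= 1200) unroll = 2;   // illustrative thresholds only
    if (freeMiB >= 2400) unroll = 4;

    printf("free %zu MiB of %zu MiB -> unroll %d\n",
           freeMiB, totalBytes / (1024 * 1024), unroll);
    return unroll;
}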
5) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875588)
Posted 29 Jun 2017 by Profile jason_gee
Post:
Problems with restarts have existed forever with Petri's App. Since his tasks finish in under a few minutes I don't think he uses checkpoints.
But he's not just coding for himself. Or is he?


Petri has repeatedly made clear to me that he's tweaking it more or less for his special situation, and that generalising it for wider use will need to come as a separate effort. Fortunately that area is my kind of speciality, and it becomes more feasible now the pulse thing is addressed. Time to work on this has been a problem for me, though I'm expecting things to get better soon, especially since work is drying up and I have better internet. I'll probably end up dedicating a portion of streaming airtime to open source development, as interest was high when I watched other developers stream, and it looked like fun to gasbag online.
6) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875585)
Posted 29 Jun 2017 by Profile jason_gee
Post:
Okey doke. Plenty to work with on both issues when I can. Fingers crossed I get some home & work things out of the way this week.

[Edit:] Will walk through the stock CPU code against Eric's responses, and query him if I can't see why best shouldn't be a duplicate of a reportable when one is present. For the restart issue, I'll have a peek at the checkpointing and restart process before updating the alpha in svn, and either commit as is and work on it incrementally, or fix it first if it's simple.
7) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875579)
Posted 29 Jun 2017 by Profile jason_gee
Post:
...
To make it even more bizarre, there were 5 Spikes reported before the shutdown and those appear to match the wingmen's reports. But 5 + 27 should equal 32, whereas the reported Spike count for the -9 Overflow is still just 30.


As the spike processing is on the GPU, it's parallelised, so in theory it can pick up many more spikes, but it raises a -9 overflow exception during recording to the result file once 30 signals are stored, and bails. So the log can show more, but only 30 are stored.

What might be going wrong with restart, though, is probably something at startup, such as the initial baseline smooth being omitted by accident. Fortunately, since the problem shows with spikes, there isn't much complexity to check before that point: just the task load, smooth, FFT planning, chirps, and FFT. So it *should* be relatively simple to remedy.
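
For anyone curious what that -9 behaviour amounts to, here's a rough host-side sketch; the names and structures are invented for illustration rather than lifted from the result-logging code:

// Sketch only: the GPU search may flag many candidate spikes in one pass, but
// only the first 30 are written to the result file; hitting that cap raises the
// -9 overflow and the task bails, so the log can list more spikes than are stored.
#include <vector>
#include <cstdio>

struct Spike { double power, freq, time; };

const int    RESULT_OVERFLOW = -9;   // assumed code for the overflow described above
const size_t MAX_SIGNALS     = 30;

int recordSpikes(const std::vector<Spike>& found, std::vector<Spike>& resultFile)
{
    for (const Spike& s : found) {
        if (resultFile.size() >= MAX_SIGNALS) {
            printf("result overflow: %zu signals stored, bailing\n", resultFile.size());
            return RESULT_OVERFLOW;
        }
        resultFile.push_back(s);      // kept in the result file
    }
    return 0;
}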
8) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875572)
Posted 29 Jun 2017 by Profile jason_gee
Post:
In the meantime, multiple response tweets from Eric (lmao). Analysing & compiling the info.

my original query:
@SETIEric If a 'best Gaussian' looks more 'Gaussianey' than the reportables, why may it not necessarily be reportable?

A Gaussian has to pass 3 thresholds. 1. A power threshold for the fit to even occur, 2. A chisqr "gaussianness" threshold, and

3. A null chisqr "integrated power" threshold. The "Best Gaussian" is chosen by a score computed from chisqr and nullchisqr.

But for best Gaussian, the thresholds are lower than for reported Gaussians, especially early in the run (we always want a best Gaussian).

So it's possible for a high scoring Gaussian not to meet the chisqr threshold and not be reported.

Cool, seems to gel somewhat with what I thought. Will have to figure out which builds differ in what ways. My response:
Thanks for the detailed responses :D. Context is we currently have different apps doing different things. Can now compare against intent.
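
For my own reference, a minimal sketch of how I read those tweets; the names, sign conventions and thresholds are assumptions, not the stock code:

// Assumed convention: a fit is reportable only if it clears a power cut, a chisqr
// ("gaussian-ness") cut, and a null-chisqr ("integrated power") cut, while "best"
// is kept by score under looser cuts, so a high-scoring fit can miss reporting.
struct GaussianFit {
    double power, chisq, null_chisq, score;
};

bool isReportable(const GaussianFit& g,
                  double powerThresh, double chisqThresh, double nullChisqThresh)
{
    return g.power      >= powerThresh
        && g.chisq      <= chisqThresh        // fit direction assumed here
        && g.null_chisq >= nullChisqThresh;
}

void considerBest(const GaussianFit& g, GaussianFit& best, double looseChisqThresh)
{
    // Looser cut for "best" (we always want something to display),
    // then keep whichever candidate scores higher.
    if (g.chisq <= looseChisqThresh && g.score > best.score)
        best = g;
}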
9) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875485)
Posted 28 Jun 2017 by Profile jason_gee
Post:
Can't do that @petri, let it stew.
10) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875454)
Posted 28 Jun 2017 by Profile jason_gee
Post:
Ah, but I guess I have one more post to make before cutting some ZZZs. And back on topic, too. After 14+ hours my Windows CPU (setiathome_8.00_windows_intelx86) bench of 23se08ac.6875.22968.6.33.135 just finished. TBar had indicated that he thought this WU was a bit of a problem case. He may have been right. My original post:

Workunit 2573263722 (23se08ac.6875.22968.6.33.135)
Task 5805117074 (S=3, A=0, P=1, T=3, G=0) x41p_zi3t2b, Cuda 8.00 special
Task 5805117075 (S=3, A=0, P=1, T=3, G=0) v8.22 (opencl_nvidia_SoG) windows_intelx86

Cuda 8.00 special - Best gaussian: peak=3.252388, mean=0.5397108, ChiSq=1.344394, time=14.26, d_freq=1418816790.11,
score=-1.169299, null_hyp=2.144445, chirp=-39.071, fft_len=16k
v8.22 SoG - Best gaussian: peak=3.76217, mean=0.5480909, ChiSq=1.226871, time=39.43, d_freq=1418822660.68,
score=-1.169124, null_hyp=2.078196, chirp=43.425, fft_len=16k
Well, it seems that in this case the "gold standard" agrees with SoG:
<best_gaussian>
<peak_power>3.7621715068817</peak_power>

G'night.


Exactly. Note the higher ChiSq: the Cuda 8 special one looks more 'Gaussianey' than the 8.22 SoG one. Hence my tweet/query to Eric.
11) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875452)
Posted 28 Jun 2017 by Profile jason_gee
Post:
Oh you'd be surprised [as I was]. The immediacy bypasses all sorts of tradition and other impediments. Eliminates the old 'Chinese Whispers' (aka Fake news)
TraDITION! Oh great, first Flying Toasters and now Fiddler on the Roof flashbacks. I think it's past my bedtime.


*opens beer* ... Guess my work here is done :)
12) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875449)
Posted 28 Jun 2017 by Profile jason_gee
Post:
Tweeted Eric:
@SETIEric If a 'best Gaussian' looks more 'Gaussianey' than the reportables, why may it not necessarily be reportable?
Heh, I suppose it's just my old age, but I tend to have a hard time keeping a straight face when I read that somebody "Tweeted" something. It always seems about as frivolous as, oh I dunno, flying toasters perhaps! ;^D


Oh you'd be surprised [as I was]. The immediacy bypasses all sorts of tradition and other impediments. Eliminates the old 'Chinese Whispers' (aka Fake news)
13) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875443)
Posted 28 Jun 2017 by Profile jason_gee
Post:
Tweeted Eric:
@SETIEric If a 'best Gaussian' looks more 'Gaussianey' than the reportables, why may it not necessarily be reportable?
14) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875441)
Posted 28 Jun 2017 by Profile jason_gee
Post:
Still waiting on my Seti Toaster :(
15) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875435)
Posted 28 Jun 2017 by Profile jason_gee
Post:
How there can be a "best" signal that isn't worth reporting (when there are apparently 3 inferior signals that are) is beyond me, but that's apparently the standard. :^)


For best there is a check, added ~2011, of the ChiSq fit (i.e. 'Gaussian-ness'), in addition to the score used for reporting. My cursory reading suggests the best may or may not be reportable, though I've yet to do a full line-by-line analysis. The suspected variation is in the multiple different implementations of that logic in the different branches, though that doesn't rule out other bugs or cumulative error.
I mean, I can understand situations where the "best" signal, of any type, still wouldn't be good enough to "report" as worthy of further investigation. However, it seems to me that if one or more signals do achieve that reportable threshold, that the "best" signal should be one of those. If it's not, it just seems really screwy to me. Out of sync, I guess. Perhaps the dictionary the scientists use has a different definition of "best" than the one most of us common folk use. ;^)


Certainly something worth bringing up with Eric IMO. He may well examine the stock CPU code and say 'that's not what was intended', or say 'that's correct'. In terms of purpose, the 'best' is used for screensaver display, so it would entirely make sense to me if the intent is to choose the most 'Gaussian-ey' looking signal to display, whether reportable or not (i.e. marketing). Naturally I can also see the point of view that if the score wasn't good enough to report, then why store it at all? Unfortunately the ChiSq and null hypotheses aggravate a part of my brain that burned out on statistics long ago (I was too good at it and fried that part of my brain), so I don't have definitive answers on what's meant to happen in this particular case.
16) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875421)
Posted 28 Jun 2017 by Profile jason_gee
Post:
How there can be a "best" signal that isn't worth reporting (when there are apparently 3 inferior signals that are) is beyond me, but that's apparently the standard. :^)


For best there is a check, added ~2011, of the ChiSq fit (i.e. 'Gaussian-ness'), in addition to the score used for reporting. My cursory reading suggests the best may or may not be reportable, though I've yet to do a full line-by-line analysis. The suspected variation is in the multiple different implementations of that logic in the different branches, though that doesn't rule out other bugs or cumulative error.
17) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875374)
Posted 27 Jun 2017 by Profile jason_gee
Post:
...All those people testing these Apps at Beta and no one picked this up? Nevermind.


This far into the noise floor, there is no shame, only things to learn. Nobody's been here before.
18) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875334)
Posted 27 Jun 2017 by Profile jason_gee
Post:
Some definite headscratchers in the last few posts :D Will think about those while planning the attack.
19) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875316)
Posted 27 Jun 2017 by Profile jason_gee
Post:
Still don't know how to do a Cuda bench run, but I did run what I think is the Windows stock CPU app today, setiathome_8.00_windows_intelx86. The numbers in the results file (<peak_power>0.46856832504272</peak_power>, etc.) appear to match the opencl_ati_cat132 and r3330 results (allowing for the fact that I don't know how to convert that "time" value, and the "score" has a value of 0).

FWIW, today's run reminded me of why I haven't run stock CPU in a long time. It took about 6 hours and 45 minutes, versus 3 hours and 13 minutes for the r3330 I ran yesterday!


Cheers! The observations will help narrow things down. Yep, win32 8.00 CPU isn't quick ;D. Cuda bench on Windows is just a matter of throwing the exe, two suitable CUDA DLLs, and optionally an mbcuda.cfg into the science_apps folder before running the bench. If you do get it working, the CPU reference result should luckily be cached from the prior run and skipped, so it just runs any other app's comparison against that.

Myself, I'll probably attempt GPU passthrough of the 780 and/or from the OSX+Linux host to a Win10 VM, scheduled for the weekend. If I do get that operational, I may script some rough automation to distribute and accumulate results from the 3 platforms, letting each OS have a batch of normal test and suspect tasks with various apps. If that works out as hoped, I'll enable some kind of facility to dump in suspects remotely for cross-platform matching, but that of course is further down the line.
20) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1875313)
Posted 27 Jun 2017 by Profile jason_gee
Post:
..
Another re-re-processing could be done. But I really would like to know why it happens in the first place...


Agreed. The pulse race before was challenging to visualise and describe, but serialised reprocessing was one correct way to handle it. This other odd thing I don't have a similarly clear idea about yet.
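
Roughly what I mean by serialised reprocessing, sketched with invented names rather than Petri's actual code: find candidates in parallel, then replay them in one fixed order so the chosen best can't depend on thread timing.

// Sketch: candidates are found in parallel on the GPU, which can race on shared
// 'best so far' state; replaying them in a fixed serial order afterwards makes
// the chosen best deterministic regardless of which thread finished first.
#include <algorithm>
#include <vector>

struct PulseCandidate { double power, period; int bin; };

PulseCandidate pickBestSerially(std::vector<PulseCandidate> candidates)
{
    // Fixed ordering first, so ties and float comparisons resolve the same way
    // on every run (and match a serial CPU pass).
    std::sort(candidates.begin(), candidates.end(),
              [](const PulseCandidate& a, const PulseCandidate& b) {
                  return a.bin < b.bin;
              });

    PulseCandidate best{0.0, 0.0, -1};
    for (const PulseCandidate& c : candidates)
        if (c.power > best.power)     // serial pass: no race on 'best'
            best = c;
    return best;
}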

