Question about the Graphical FFT



Message boards : SETI@home Science : Question about the Graphical FFT

Author Message
Finnen78
Volunteer tester
Send message
Joined: 15 Oct 09
Posts: 4
Credit: 307,684
RAC: 0
Sweden
Message 1007935 - Posted: 24 Jun 2010, 23:23:41 UTC

Hello all Boinc crunchers!

I read all that nice info on the page about how the screensaver works and what all that nonsense ;) in those windows means... although...

I've got a question about that graphical thing...

How come sometimes those power bars are wide along the frequency axis and move slowly, and sometimes they're very thin and move fast along the time axis? Does that say anything about the signal at that given time?

Profile jason_gee
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 24 Nov 06
Posts: 5081
Credit: 74,109,998
RAC: 4,683
Australia
Message 1008102 - Posted: 25 Jun 2010, 9:31:23 UTC - in response to Message 1007935.
Last modified: 25 Jun 2010, 9:35:52 UTC

...How come sometimes those power bars are wide along the frequency axis and move slowly, and sometimes they're very thin and move fast along the time axis? Does that say anything about the signal at that given time?


Hi there. The number of frequency 'bins' (fftlength) varies from 8 through 128k bins, taken from a dataset of 1*1024*1024 complex single-precision floating-point samples.

Since we have a fixed data set size, the number of FFT frequency bins forms one side of a rectangular 'new 2D data set' of frequency (power at each bin) x time.

When the FFT length is short, CPUs can calculate these component FFTs (time offset is the other dimension of the 'rectangle') much faster, due to the way CPU cache works and how FFT algorithms exchange data internally. Longer FFTs need to swap/combine data much further apart in memory, so the cache & memory subsystem come under more pressure, and longer FFTs also require more processing, making longer FFTs actually slower... (opposite to your observation :-O )...
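The fixed-size dataset and the resulting frequency x time rectangle can be sketched like this (illustrative Python, not the SETI@home source; the 8-to-128k bin range is from the post above):

```python
# For a fixed dataset of 1024*1024 complex samples, each FFT length splits
# the data into (fftlen frequency bins) x (TOTAL_SAMPLES // fftlen time slices).
# The rectangle's area stays constant; only its shape changes.
TOTAL_SAMPLES = 1024 * 1024  # 1M complex single-precision samples

for fftlen in (8, 2048, 128 * 1024):  # from 8 bins up to 128k bins
    time_slices = TOTAL_SAMPLES // fftlen
    print(f"fftlen={fftlen:>6}: {fftlen} freq bins x {time_slices} time slices")
```

A short FFT length gives a tall, thin rectangle (few frequency bins, many time slices), which is why those bars look thin and scroll quickly along the time axis.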

However!

The other dimension of the graph (time) is primarily concerned with looking for pulses & triplets, and by far dominates the time taken as the time axis 'extends'. That is because there are many more possible offsets/periods to check for indications of a signal, and that becomes memory & CPU intensive... overtaking the shorter-FFT saving by a significant proportion (in processing time).
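To see why the long time axis dominates: a naive pulse search folds the whole series once per candidate period, so the work grows roughly quadratically with the number of time steps. This is a toy cost model, not the actual pulsefinding code:

```python
def fold_work(num_time_steps: int) -> int:
    """Toy cost model: one full pass over the series per candidate period."""
    work = 0
    for period in range(2, num_time_steps // 2 + 1):  # candidate pulse periods
        work += num_time_steps  # one fold pass over the whole time series
    return work

# Doubling the time axis roughly quadruples the work in this model.
for t in (1024, 2048, 4096):
    print(t, fold_work(t))
```

So a short FFT length, which leaves many time slices, hands the pulsefinder a much longer time series to fold, and that cost swamps the cheap short FFTs.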

Fast Fourier transforms have been improved over decades, and heaps of papers have been written about them, yet pulsefinding, apart from a few key papers, is quite difficult to find detailed work on.

So for the time being, searches that are short in the FFT axis but long in the time axis will continue to dominate the time taken... much to the chagrin of us GPU crunchers also, but I'd like to hope that eventually we can get the time x frequency 'area' fairly constant.

Hope that points you toward some of the answers you're seeking.

Jason
____________
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
Charles Darwin

Finnen78
Volunteer tester
Send message
Joined: 15 Oct 09
Posts: 4
Credit: 307,684
RAC: 0
Sweden
Message 1008575 - Posted: 26 Jun 2010, 11:38:53 UTC - in response to Message 1008102.
Last modified: 26 Jun 2010, 11:40:28 UTC

Hahahaha!!!

Thank you Jason!

well :) ... I kind of understood about 70% of the computer stuff, but you totally lost me with the other stuff.

so... the longer the "bar" on the FFT graph, the more time the computer needed to work with that specific frequency at that given time in the package?

could you dumb that other stuff down a little? ;D

I mean... I'm crunching 24/7 with at least one of my computers, and sometimes I have to access it for stuff I only save on that computer. Sometimes I have to wait 2 minutes for IE/WordPad/the HDD or whatever to respond & sometimes the response is instantaneous, and yes... I have allocated 100% of my resources to crunching.

Profile jason_gee
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 24 Nov 06
Posts: 5081
Credit: 74,109,998
RAC: 4,683
Australia
Message 1008579 - Posted: 26 Jun 2010, 12:00:32 UTC - in response to Message 1008575.
Last modified: 26 Jun 2010, 12:00:52 UTC

Hahahaha!!!

Thank you Jason!

well :) ... I kind of understood about 70% of the computer stuff, but you totally lost me with the other stuff.

so... the longer the "bar" on the FFT graph, the more time the computer needed to work with that specific frequency at that given time in the package?

could you dumb that other stuff down a little? ;D

I mean... I'm crunching 24/7 with at least one of my computers, and sometimes I have to access it for stuff I only save on that computer. Sometimes I have to wait 2 minutes for IE/WordPad/the HDD or whatever to respond & sometimes the response is instantaneous, and yes... I have allocated 100% of my resources to crunching.


Sure! Yep, you have it pretty much right anyway :). A short time axis with fewer points means 'easy' pulsefinds, which run fast and smooth. A long time axis means longer pulsefinds, which can 'choke'.

Somewhere on the screensaver you should see a task parameter called 'AR' or 'Angle Range'; the low ones, less than about 0.13, tend to have many long pulsefinds. Those would show the worst lagging behaviour you describe.

Since you are running CPU multibeam and experiencing lag problems, once you get sick of the screensaver you might want to look into using optimised applications for about a 2x speedup or so. I wouldn't recommend that if you prefer 'set & forget' operation, don't want to check for updates regularly, or are particularly attached to the screensaver.

Jason
____________
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
Charles Darwin

Profile ML1
Volunteer tester
Send message
Joined: 25 Nov 01
Posts: 8572
Credit: 4,233,365
RAC: 844
United Kingdom
Message 1009775 - Posted: 29 Jun 2010, 15:24:16 UTC - in response to Message 1008102.
Last modified: 29 Jun 2010, 15:24:55 UTC

... Since we have a fixed data set size...

When the FFT Length is short...

However!

The other dimension...


Thanks for a good description including the hardware problems. (Nicely on the intelligible side of technobabble! :-) )


Are there no parallel segmentation algorithms for the pulse finding that would be blindingly fast on GPUs?

Keep searchin',
Martin
____________
See new freedom: Mageia4
Linux Voice See & try out your OS Freedom!
The Future is what We make IT (GPLv3)

Profile jason_gee
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 24 Nov 06
Posts: 5081
Credit: 74,109,998
RAC: 4,683
Australia
Message 1010022 - Posted: 30 Jun 2010, 15:39:51 UTC - in response to Message 1009775.
Last modified: 30 Jun 2010, 15:53:52 UTC

Are there no parallel segmentation algorithms for the pulse finding that would be blindingly fast on GPUs?


Sure, there are ways to do things faster (& more reliably) than the current GPU apps, and in the scheme of things we're all learning how to use these new languages & tools, which are based on supercomputer architectures from prior decades, with little prior work to draw on other than the odd SDK sample and stock code.

Aside from that, one of the major constraints on technique is having to achieve results that will validate, within tolerances, against the stock (mostly) serial CPU code, most importantly warts and all.

On the high-level side of things, there's *relatively* little published work on pulsefinding (serial OR parallel) in comparison to the FFT, which has huge volumes of science & mathematics papers spanning decades (about 6 of them in the current serial split-radix form, IIRC) wringing out every flop for different applications. That could perhaps be surprising, seeing as the existing serial algorithms are each of similar algorithmic nature & complexity in many ways.

As well as still-maturing tools, drivers & techniques, there isn't the same decades of experience to draw on as for [the 'black art' of] CPU micro-architectural optimisation either. Parallel algorithms with heavy publication tend to have been focussed on the supercomputing platforms of those decades, and the (relatively specialised & obscure) tasks they were expected to perform... which didn't have to do things like display videos/screensavers or even run operating systems ;).

So all in all, IMO, while there is some experience among us with the predecessor architectures & principles, and there are quite a few notable similarities, the current & maturing platforms are turning out to be a different beast from, e.g., the CSIRO GIS Lab 'Hitachi super torus' I worked on back at university.

We're definitely 'getting there' though.

Jason
____________
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
Charles Darwin

Profile ML1
Volunteer tester
Send message
Joined: 25 Nov 01
Posts: 8572
Credit: 4,233,365
RAC: 844
United Kingdom
Message 1010141 - Posted: 30 Jun 2010, 23:11:20 UTC - in response to Message 1010022.

We're definitely 'getting there' though.

Oooer... Quite a jump from the old Seymour Cray vector pipelines... ;-o


Segmentation is important in image processing, so I'm surprised that hasn't been pushed for 'fast' optimised processing.


Thanks and good luck for pushing the computations.

Keep searchin',
Martin

____________
See new freedom: Mageia4
Linux Voice See & try out your OS Freedom!
The Future is what We make IT (GPLv3)


Copyright © 2014 University of California