Message boards :
Number crunching :
New Greenbank Files
Author | Message |
---|---|
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
What about AstroPulse tasks from GBT? Will they be available in the future, or are APs only from Arecibo? I was hopeful that there would be tons of APs from the GBT data as well, but I'm going to assume/surmise at this point that the type of signals being looked for, and the nature of the recorded data itself, would not be suitable for APs, since AP looks for wide/broadband signals and MB for narrowband ones. I think wide/broadband signals would be what you would use for interstellar/cross-galaxy communication, while narrowband is more along the lines of what we use here on our humble planet for communications. Both types of signalling are reasonable to assume would be in use, but in two entirely different scenarios. Seeing as the Breakthrough Listen data is looking at specific exoplanets/systems in a focused manner, we're really just looking to see whether any non-natural radio frequencies emanate from those planets. We're looking at the planets themselves, not so much long-range communication (which AP would be best suited for). Data best suited for APs would be when the telescope isn't pointing at a specific target but is instead pointing into the void between targets, slewing from one target to another (VHAR), simply because you're covering a large arc of the sky between targets, which is where a broadband, long-range carrier signal would have the highest probability of being found. That's my understanding of things, and I am not educated in the matter beyond what I've read here on the forum over the years. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
One problem with issuing work to specific resources/devices is extra project scheduler complexity (a project resource). Another is on the development side and more subtle: it slows work on the problems through lack of diversity, for example being unable to compare results against something else to find flaws. A good example is the limit of not sending VLAR work to CUDA GPUs by default. It's enabled for some GPUs on Beta for development purposes; however, while massive project changes are underway (such as the v7 phaseout and v8 + GBT phase-in), development using Beta isn't always practical (for example, mixing high-throughput applications/devices in while the project is trying to test/refine a CPU app update clouds the data the project needs). If it were practical (project-resource- and BOINC-mechanism-wise), I'd be in favour of specifying GPU VLAR as a different application, so that it automatically received an enable/disable option in existing preferences, and by extension fairer credit due to VLAR's reduced efficiency on GPU (compared to CPU). I suspect that without being credit-knobbled by the AVX CPU app's effective underclaim, on top of the GPU app's reduced efficiency at these angle ranges, the credits would be awesome, and so incentive raised all around to solve the usability and efficiency problems in question. Questions to me would then be: "is credit really such a motivator?", "should it be?", and "if so in both cases, would separating the app make things faster, better, simpler (i.e. cheaper)?". My answers (not knowing the full complexity of the last part) would be "yes", "probably better useful for something than mere bragging rights, if indirectly", and "highly likely". "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Because AFAIK most of the GBT data is VLAR, and given the paucity of APs, I can foresee many of the big Nvidia GPUs running out of data. Potential GBT AP is an interesting one, because I saw back-of-the-envelope estimates that a CPU app might take a full 6 months of CPU time on such a task, so it would require the full implementation of trickles and other BOINC features. My feeling is that GPU acceleration of these large tasks would be highly efficient: multithreaded, GPU-accelerated, heterogeneous, and potentially clustered within single tasks. That would push technology along quite nicely, where current tasks don't really have the 'meat' to scale efficiency far over the 10% mark, while larger datasets could flex those muscles very effectively. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
rob smith Send message Joined: 7 Mar 03 Posts: 22204 Credit: 416,307,556 RAC: 380 |
...and so Nvidia GPUs would get their own "feeding pond" ;-) Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
...and so Nvidia GPUs would get their own "feeding pond" ;-) Well, in my mind (doable or appropriate or not), any applications/devices feeding in that pond (not necessarily just NV CUDA) could contribute to improving the technology and cross-pollinate better. At the moment we have a pretty vast collection of emerging solutions, and prior working but shelved ones (shelved because of unfair credits compared to other work). It's difficult to revive and improve those ideas with limits on everything and credit for work done so low. In any case, we'd have to figure out whether the splitting and scheduling process would let the VLARs be assigned a different app altogether (giving us MB-CPU with both, MB-GPU without VLAR, and MB-GPU-VLAR). I don't know if that'd be more or less complex for the project than the current approach of marking with filenames and restricting VLARs by device. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? Meow? "Freedom is just Chaos, with better lighting." Alan Dean Foster |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? Unless I missed a logic change somewhere (quite possible): since they are just v8 tasks, mostly VLAR, they should work out the same as the usual Arecibo VLAR tasks by elapsed time. So about a third of what they should be. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
Jimbocous Send message Joined: 1 Apr 13 Posts: 1853 Credit: 268,616,081 RAC: 1,349 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? +1 |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? (+/- ~37% of course, lol). "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
Here are several from earlier today (run time, CPU time, and credit listed per task):

Task | Workunit | Sent | Reported | Status | Run time (s) | CPU time (s) | Credit | Application
---|---|---|---|---|---|---|---|---
4854603046 | 2123362344 | 13 Apr 2016, 2:12:28 UTC | 13 Apr 2016, 2:22:46 UTC | Completed and validated | 420.03 | 411.61 | 8.86 | SETI@home v8 Anonymous platform (CPU)
4854604345 | 2123362943 | 13 Apr 2016, 2:12:28 UTC | 13 Apr 2016, 2:27:54 UTC | Completed and validated | 662.40 | 648.89 | 13.18 | SETI@home v8 Anonymous platform (CPU)
4854604378 | 2123363039 | 13 Apr 2016, 2:12:28 UTC | 13 Apr 2016, 2:17:36 UTC | Completed and validated | 131.38 | 126.41 | 2.49 | SETI@home v8 Anonymous platform (CPU)
4854604170 | 2123363024 | 13 Apr 2016, 2:12:28 UTC | 13 Apr 2016, 4:24:59 UTC | Completed and validated | 3,221.55 | 3,198.65 | 76.65 | SETI@home v8 Anonymous platform (CPU)
4854604470 | 2123363113 | 13 Apr 2016, 2:12:28 UTC | 13 Apr 2016, 13:44:15 UTC | Completed and validated | 3,087.74 | 3,062.99 | 73.86 | SETI@home v8 Anonymous platform (CPU)
4854604229 | 2123362974 | 13 Apr 2016, 2:12:28 UTC | 13 Apr 2016, 2:17:36 UTC | Completed and validated | 129.29 | 124.13 | 2.23 | SETI@home v8 Anonymous platform (CPU)
4854604237 | 2123363072 | 13 Apr 2016, 2:12:28 UTC | 13 Apr 2016, 2:17:36 UTC | Completed and validated | 22.29 | 20.84 | 0.36 | SETI@home v8 Anonymous platform (CPU)
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? I'm seeing the same 'task size' (<rsc_fpops_est>) for guppi VLARs as for Arecibo VLARs - but a processing time of ~75%. I think the credit is proportional to processing time, but there are too few examples yet to make analysis worthwhile. But doing the same work in less time is going to skew APR eventually.... |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
Here are several from earlier today But which of those were GBT (guppi)? |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? So, about the same then. And the random credit generator will eventually pound it down to the same level it has been. About what I expected. As I have said before, not really important, as all users are treated equally. Or randomly, as the case may be....LOL. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
Here are several from earlier today all of them |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? Yeah, but then I suspect the efficiency differences in that same work are what create the skew; the shorter elapsed times should lower the PFCs accordingly. If the volume is enough, it could raise the overall MBv8 credit by a small portion of that 25%, but seem a bit cheap on the guppi tasks themselves. We'll see. It's a bit early to start profiling guppi-specific tasks for low-hanging optimisation-hotspot fruit, but common sense says: improve the pulsefinding and force credit down some more, lol. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Not that it is real important, but out of curiosity, any read on how the credits are working out for the new GBT tasks? Well, it's been a long known fact that Seti is not a credit whore's first choice of projects....LOL. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
So, about the same then. And the random credit generator will eventually pound it down to the same level it has been. Well, to be fair on the mechanism's intent, much digging revealed that it isn't really random, but notably chaotic (which is different). Since you're a car man as well as a kitty man, the closest analogy might be a damaged throttle cable that sometimes sticks and then jerks at the worst moments (chaos panders to Murphy; randomness doesn't, and would be fair). Chances are most hosts would get stuck either mostly on the low-credit side, or on the bizarre odd boost. The best odds of cheating Murphy in this mechanism are to run as many tasks at a time as possible, as slowly as possible; then you get a 50% chance of boom credit on each task. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
Never cared for regulators, lol... |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Never cared for regulators, lol... Lol, now where's my diagram of the steam engine governor with rotating balls.... "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
When I was a young lad, I drove my cars as if they had a two position toggle switch. Either on, or off. Rather hard on the rubber... Many years later, I drive a '90 Olds Ciera with a 2.5l 4 banger. Much easier on the tyres. And the computers just run flat out all the time. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.