Message boards :
Number crunching :
the latest on release of AP_v7?
Sutaru Tsureku · Joined: 6 Apr 07 · Posts: 7105 · Credit: 147,663,825 · RAC: 5
It looks like some members didn't understand this thread correctly ... Currently the APv7 apps are for SETI Beta only. It's not OK to use these apps here at SETI Main (with APv6 WUs). Look at your host overview -> invalid results. Maybe an official statement from Raistmer is needed? ;-)
qbit · Joined: 19 Sep 04 · Posts: 630 · Credit: 6,868,528 · RAC: 0
@TBar @Dirk: Yeah, I know, but I had to try, because I wanted to know if my computer still has the occasional crashes with V7. Well, it crashed again 2 hours ago, so I guess I have my answer. Going back to V6 now, too many inconclusive/invalid results here already. Looks like the validators are not tuned for V7 yet.
Claggy · Joined: 5 Jul 99 · Posts: 4654 · Credit: 47,537,079 · RAC: 4
> AFAIK from Raistmer the 'rule of thumb', the -ffa_block & -ffa_block_fetch values for APv6 is to take /2 for APv7.

No, the rule of thumb was to halve them for TWIN FFA apps, and APv6 has TWIN FFA apps too.

For APv6 (in the Lunatics 0.41 installer), AP6_win_x86_SSE2_OpenCL_ATI_r1843.exe and AP6_win_x86_SSE2_OpenCL_NV_r1843.exe are Single FFA:
Build features: Non-graphics OpenCL USE_OPENCL_NV OCL_ZERO_COPY COMBINED_DECHIRP_KERNEL FFTW USE_INCREASED_PRECISION USE_SSE2 x86

For APv6 (in the Lunatics 0.42 installer), AP6_win_x86_SSE2_OpenCL_ATI_r2399.exe and AP6_win_x86_SSE2_OpenCL_NV_r2399.exe are TWIN_FFA:
Build features: Non-graphics OpenCL USE_OPENCL_NV TWIN_FFA OCL_ZERO_COPY COMBINED_DECHIRP_KERNEL FFTW USE_INCREASED_PRECISION USE_SSE2 x86

For APv7, both the stock and optimised builds are TWIN_FFA:
Build features: Non-graphics BLANKIT OpenCL USE_OPENCL_NV TWIN_FFA OCL_ZERO_COPY COMBINED_DECHIRP_KERNEL FFTW USE_INCREASED_PRECISION USE_SSE2 x86

Claggy
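To make the halving concrete, here is what the rule of thumb looks like in an ap_cmdline.txt. The block sizes below are made-up placeholder numbers, not recommended settings; only the 2:1 relationship between the Single-FFA line and the TWIN_FFA line matters:

```
Single FFA (e.g. r1843):  -ffa_block 12288 -ffa_block_fetch 6144
TWIN FFA (r2399, APv7):   -ffa_block 6144 -ffa_block_fetch 3072
```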
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14649 · Credit: 200,643,578 · RAC: 874
> Looks like the validators are not tuned for V7 yet.

No validator tuning is required, or will be applied. All the validator does is compare two (sometimes more) results. If it is asked to compare a v6 result with a v7 result, the likelihood is that it will declare them different - unless there are no signals to be found in the data.

You will be getting inconclusive/invalid results because your results are different from those returned by other users still - and properly - running the v6 app. Wait until the full release takes place for everybody before you use a v7 app here. If you want to test whether the v7 app crashes on your computer, use the Beta site - that's what it's there for.
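Richard's description can be sketched in miniature. This is a hypothetical illustration in Python, not the project's actual validator code: the validator only compares the signal lists that two hosts returned for the same workunit, so a v6/v7 pair usually mismatches - except when neither app found any signals at all.

```python
def compare_results(signals_a, signals_b, tolerance=0.01):
    """Toy stand-in for a BOINC validator's compare step.

    Each result is a list of (frequency, power) signal tuples.
    Two results 'match' only if they report the same signals within
    a tolerance. Two empty lists trivially match, which is why a
    v6/v7 pair can still validate when the data holds no signals.
    """
    if len(signals_a) != len(signals_b):
        return False
    for (fa, pa), (fb, pb) in zip(sorted(signals_a), sorted(signals_b)):
        if abs(fa - fb) > tolerance or abs(pa - pb) > tolerance:
            return False
    return True

# A v6 and a v7 result that report different signals -> inconclusive:
v6 = [(1420.1, 24.0)]
v7 = [(1420.1, 24.0), (1421.9, 30.5)]
print(compare_results(v6, v7))   # prints False
print(compare_results([], []))   # prints True (no signals in the data)
```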
merle van osdol · Joined: 23 Oct 02 · Posts: 809 · Credit: 1,980,117 · RAC: 0
Zalster, sorry about your stir-fry. Always something for life to struggle with.
qbit · Joined: 19 Sep 04 · Posts: 630 · Credit: 6,868,528 · RAC: 0
> Looks like the validators are not tuned for V7 yet.

Thx once again for the explanation, Richard! Somebody else told me that the validators have to be tuned for v7 because signals found within blanking are interpreted differently in v7 than they were in v6. I guess I should really learn a bit more about MB/AP, because until now I don't really know how all this works and what exactly the apps are doing when crunching a task. The problem is that all those scientific things can be a bit hard to understand when English isn't your main language. Anyway, as I said, I'm back on v6 now and will wait for the official release. And BTW, I wanted to test on Beta first, but something's different there: when I started the AP task it was using more than 60% CPU!?! I never saw anything like that here on Main, so I decided to test the beta apps here.
Claggy · Joined: 5 Jul 99 · Posts: 4654 · Credit: 47,537,079 · RAC: 4
> When I started the AP task it was using more than 60% CPU!?! I never saw anything like that here on Main, so I decided to test the beta apps here.

All Nvidia drivers since 27x.xx have suffered from the Nvidia OpenCL 100% CPU usage feature/bug, and 337.88 is no exception. You're using -use_sleep at the Main project; did you put it in your ap_cmdline.txt file for NV 7.04? If you didn't, you'll get high CPU usage.

Claggy
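For anyone hitting the same thing: a minimal ap_cmdline.txt for the NV 7.04 app that only enables the sleep workaround would contain the single switch below (add it alongside whatever tuning parameters you already use; without it, the OpenCL runtime busy-waits and burns a CPU core per GPU task):

```
-use_sleep
```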
Sutaru Tsureku · Joined: 6 Apr 07 · Posts: 7105 · Credit: 147,663,825 · RAC: 5
> AFAIK from Raistmer the 'rule of thumb', the -ffa_block & -ffa_block_fetch values for APv6 is to take /2 for APv7.

How could I have known that r2399 already has TWIN_FFA - and what it means? (-> values /2 for -ffa_block & -ffa_block_fetch) I couldn't find it in the opening message of the 'Lunatics Windows Installer v0.42 Release Notes' thread »here«. Or where is this interesting info available?
Claggy · Joined: 5 Jul 99 · Posts: 4654 · Credit: 47,537,079 · RAC: 4
> From where I could know it that r2399 already have TWIN_FFA - and what this means?

You won't find it in the 0.42 readme, or even in the OpenCL AP readmes, because the author didn't mention it; he also used the same command line parameters that were recommended in the 0.41 OpenCL AP readme.

Claggy
Raistmer · Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121
I (or we) don't know which app Eric will choose as the reference point. Some time ago Eric proposed to use the SLOWEST app as the reference point. If he goes ahead and implements this, for the AP subproject at least, I think everyone will be happy.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14649 · Credit: 200,643,578 · RAC: 874
> I (or we) don't know which app Eric will choose for reference point.

I might be wrong, but I don't think that Eric proposed that. I think he simply observed that the current BOINC server code operated that way, and that whatever knob Eric twiddled in an attempt to induce credit-rate parity between AP and MB, nothing changed.

Having said that, I don't think that the importance - or otherwise - of raw <rsc_fpops_est> on credit awards has been fully explored experimentally. IF the slowest app for AP v7 were the same speed as the base app here for v6 (not currently the case at Beta, so a big IF), then I'd be interested to see the outcome of halving <rsc_fpops_est> for AP here - but that may be too big and risky an experiment to carry out on the live project.
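To see why halving <rsc_fpops_est> would matter, here is the arithmetic as a simplified sketch. It assumes BOINC's published cobblestone definition (a 1 GFLOPS host earns 200 credits per day) and ignores CreditNew's later normalisation steps, so it only shows the starting claim, not the final award:

```python
# Credits per floating-point operation: a host doing 1e9 flops/s
# for 86400 s (one day) should earn 200 cobblestones.
COBBLESTONE_SCALE = 200.0 / (86400.0 * 1e9)

def base_credit(rsc_fpops_est):
    """Pre-normalisation claimed credit for one workunit.

    It is linear in the estimated operation count, so halving
    <rsc_fpops_est> halves the starting claim.
    """
    return rsc_fpops_est * COBBLESTONE_SCALE

ratio = base_credit(1e16) / base_credit(2e16)   # 0.5, by linearity
```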
Darth Beaver · Joined: 20 Aug 99 · Posts: 6728 · Credit: 21,443,075 · RAC: 3
Umm, guys, please don't change or play around with credit. Broken as it is, we are stuck with it, and a lot of people will get rather peed off if you change it again.
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
> I (or we) don't know which app Eric will choose for reference point.

The 'observation' in question did happen, but had nothing to do with 'fastest' or 'slowest'. It was Eric's observation that the normalising baseline, i.e. the app that receives exactly COBBLESTONE_SCALE credit, should be the 'least efficient' [which does not mean slowest], and I pointed out that the 'most efficient claiming' app was the normalising baseline, off by a factor of 3.3x peak, ~2x average [too efficient, by 'incorrect measurement']. That's for MultiBeam, because of a design omission in the BOINC Whetstone benchmark, since proven by SIMAP with its Android app in a different scenario with the opposite omission [and similarly confusing results in a different context]. The numbers for AP are around 2.25x peak and ~1.5x average. BOINC developers understand neither SIMD nor multithreading; that's consistently demonstrated through both design and implementation.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
> Umm guys please don't change or play around with credit ... a lot of people will get rather peed off if you change it again

I hope not, but did you seriously expect that? I'm almost sure they will take more care about credit this time, after the MB 6->7 disaster. If you go to the Beta site, you can see the credit per WU is relatively similar, but still a little less than the current rate. The question is whether that will continue when it goes to Main. Something similar happened in the past, and we all know what happened... the MB credit paid dropped.
Darth Beaver · Joined: 20 Aug 99 · Posts: 6728 · Credit: 21,443,075 · RAC: 3
> Hope not but did you seriusly expect that?

juan BFP, I don't expect anything to change. Just not sure what to expect; I just noticed some of the posts talking about it, so it's more a request that it not be changed. People have spent a lot of cash to build systems to get their RAC up to a certain level, and they will be really peed off if it is changed too much.
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
> Hope not but did you seriusly expect that?

I totally agree with you. Maybe you don't know, but I was one of those who raised the creditscrew problem in the past. At that time I crunched MB only, and my RAC dropped from >400K to about 250K, so I really know what you're talking about. That's exactly why I expect some more caution this time. Fingers crossed. :)
Raistmer · Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121
> Another possibility I've considered. I've never confirmed that the credits are really scaled to the least efficient CPU version for a platform. In theory, if I were to create a CPU version of SETI@home with no threading or SIMD using the Ooura FFT and release it under the plan class "calibration". After 100 results come back from that version, the credits of everything else should go up. In theory, of course.

http://setiathome.berkeley.edu/forum_thread.php?id=73935&postid=1502273
http://setiathome.berkeley.edu/forum_thread.php?id=73935&postid=1503115
http://setiathome.berkeley.edu/forum_thread.php?id=73935&postid=1503216

So if they are on CreditNew, they need to keep the slowest app for 'calibration'? ;)

http://setiathome.berkeley.edu/forum_thread.php?id=73935&postid=1504417
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.