Message boards : Number crunching : app_info for AP500, AP503, MB603 and MB608
MarkJ (Joined: 17 Feb 08, Posts: 1139, Credit: 80,854,192, RAC: 5)
I was going to post this under another thread in reply to someone's question, but the thread got locked just as I was about to post. Below are some instructions I typed up about how to have a combined app_info. Please take note of the disclaimer.

MarkJ

Setting up Seti@home to run cuda and non-cuda apps

You will need a copy of BOINC 6.6.15 (or later). This won't work with earlier versions of BOINC.

Disclaimer: BOINC 6.6.15 is a development version and may not function correctly. If you are not comfortable with editing an app_info, using optimised apps and running development versions of BOINC, then this is not for you. BOINC is very unforgiving of an incorrect app_info and will usually delete all tasks if you get it wrong. Do NOT use Internet Explorer to edit the xml files; it will stuff up your app_info. Use Notepad or another text editor.

Thanks go to Richard Haselgrove and Claggy for ideas in building this app_info.xml file.

Notes:
- Upgrade your BOINC client first and get it working before changing anything else.
- The app_info.xml below is based on a 32-bit Windows platform. If you are running on another platform you may need to add or amend the <platform> tags.
- My computers support the SSSE3 instruction set. SSE2, SSE3 or SSE4.1 may be more appropriate for you; you will need to amend the program names as appropriate.
- I've assumed that you have your cuda-capable card up and running and have the necessary nvidia drivers (minimum version 180.48).

Programs needed:

a) Optimised multibeam and optimised Astropulse
    AK_v8_win_SSSE3x.exe
    ap_5.00r103_SSE3.exe
    ap_5.03r112_SSE3.exe

b) Cuda multibeam and support libraries
    setiathome_6.08_windows_intelx86__cuda.exe
    cudart.dll
    cufft.dll
    libfftw3f-3-1-1a_upx.dll

1. Download and install BOINC 6.6.15. Get this working before changing anything else.
2. Empty your cache of Seti@home work. This is best achieved by setting the project to No new work and letting it finish off its tasks. Make sure they are all uploaded and reported; there should be none in your tasks list.
3. Download the optimised multibeam and astropulse apps (usually from the Lunatics web site) if you don't already have them.
4. Download the cuda multibeam app (from the Seti web site) if you don't already have it. If you run the stock cuda multibeam app you should already have these files in your projects\setiathome.berkeley.edu folder.
5. Disable network communications in BOINC, then shut BOINC down. Make sure it is fully shut down.
6. Browse your client_state.xml file (it's in the BOINC data directory) and look for the <p_fpops> entry. We need to use this number. Do NOT change this file.
7. Browse the BOINC log to get the estimated speed of your GPU. This is usually given at the top of the log in GFLOPS. My 9800GT was estimated at 60 GFLOPS.
8. For each of the apps, multiply the p_fpops value by the factor below and put the result into the appropriate <flops> entry in the app_info given below. For multibeam 608 you need the estimated GPU GFLOPS instead (a worked example follows this post):
    Astropulse 500: flops = p_fpops x 2.25
    Astropulse 503: flops = p_fpops x 2.6
    Multibeam 603: flops = p_fpops x 1.75
    Multibeam 608: flops = Est. GFLOPS x 0.2 (eg 60,000,000,000 x 0.2 = 12,000,000,000)
9. Make sure you have all the programs listed above in the projects\setiathome.berkeley.edu folder. If not, copy them there.
10. Save your app_info.xml in the projects\setiathome.berkeley.edu folder.
11. Start up BOINC. Check the messages tab for any [file error] messages. If there are any, shut BOINC down, check that you have the correct program names referenced, and go back to step 9.
12. If okay, enable new work for the Seti@home project.
13. Enable network communications again.
14. It should now download work of all types. If not, check your Seti@home preferences on the Seti web site: Astropulse, Astropulse_v5 and "Use GPU if available" should all be ticked. If you have a slower computer you may not get astropulse work units anyway.

    <app_info>
        <app>
            <name>astropulse</name>
        </app>
        <file_info>
            <name>ap_5.00r103_SSE3.exe</name>
            <executable/>
        </file_info>
        <app_version>
            <app_name>astropulse</app_name>
            <version_num>500</version_num>
            <flops>5306156897</flops>
            <file_ref>
                <file_name>ap_5.00r103_SSE3.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
        <app>
            <name>astropulse_v5</name>
        </app>
        <file_info>
            <name>ap_5.03r112_SSE3.exe</name>
            <executable/>
        </file_info>
        <app_version>
            <app_name>astropulse_v5</app_name>
            <version_num>503</version_num>
            <flops>6131559081</flops>
            <file_ref>
                <file_name>ap_5.03r112_SSE3.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
        <app>
            <name>setiathome_enhanced</name>
        </app>
        <file_info>
            <name>AK_v8_win_SSSE3x.exe</name>
            <executable/>
        </file_info>
        <file_info>
            <name>setiathome_6.08_windows_intelx86__cuda.exe</name>
            <executable/>
        </file_info>
        <file_info>
            <name>cudart.dll</name>
            <executable/>
        </file_info>
        <file_info>
            <name>cufft.dll</name>
            <executable/>
        </file_info>
        <file_info>
            <name>libfftw3f-3-1-1a_upx.dll</name>
            <executable/>
        </file_info>
        <app_version>
            <app_name>setiathome_enhanced</app_name>
            <version_num>603</version_num>
            <platform>windows_intelx86</platform>
            <flops>4127010920</flops>
            <file_ref>
                <file_name>AK_v8_win_SSSE3x.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
        <app_version>
            <app_name>setiathome_enhanced</app_name>
            <version_num>608</version_num>
            <platform>windows_intelx86</platform>
            <avg_ncpus>0.127970</avg_ncpus>
            <max_ncpus>0.127970</max_ncpus>
            <flops>12000000000</flops>
            <plan_class>cuda</plan_class>
            <file_ref>
                <file_name>setiathome_6.08_windows_intelx86__cuda.exe</file_name>
                <main_program/>
            </file_ref>
            <file_ref>
                <file_name>cudart.dll</file_name>
            </file_ref>
            <file_ref>
                <file_name>cufft.dll</file_name>
            </file_ref>
            <file_ref>
                <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
            </file_ref>
            <coproc>
                <type>CUDA</type>
                <count>1</count>
            </coproc>
        </app_version>
    </app_info>

BOINC blog
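A worked example of the step 8 arithmetic, as a small Python sketch. This is an illustration only: the multipliers are the ones MarkJ gives, the p_fpops value shown is roughly the one behind the <flops> numbers in his posted app_info, and 60 GFLOPS is his 9800GT estimate from step 7. Substitute your own numbers from client_state.xml and the BOINC log.

    # Step 8 of MarkJ's instructions: compute the <flops> values for the app_info.
    p_fpops = 2358291954.4        # your own <p_fpops> value from client_state.xml
    gpu_gflops = 60               # estimated GPU speed from the BOINC log, in GFLOPS

    flops = {
        "astropulse 500":       p_fpops * 2.25,
        "astropulse_v5 503":    p_fpops * 2.6,
        "multibeam 603":        p_fpops * 1.75,
        "multibeam 608 (cuda)": gpu_gflops * 1e9 * 0.2,
    }

    for app, value in flops.items():
        # drop the decimals, as MarkJ did, before pasting into the <flops> tag
        print(f"{app}: <flops>{int(value)}</flops>")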
ccappel (Joined: 27 Jan 00, Posts: 362, Credit: 1,516,412, RAC: 0)
This is great info. I nominate this thread to be made sticky.
perryjay (Joined: 20 Aug 02, Posts: 3377, Credit: 20,676,751, RAC: 0)
> This is great info. I nominate this thread to be made sticky.

I'll second that motion.

PROUD MEMBER OF Team Starfire World BOINC
skildude (Joined: 4 Oct 00, Posts: 9541, Credit: 50,759,529, RAC: 60)
I'm not so sure that running the development app is such a good idea for the general public. I'd say yes to sticky if 6.6.15 were the standard; currently 6.4.7 is. Mention of which OS this has worked on would be great as well...

In a rich man's house there is no place to spit but his face. Diogenes Of Sinope
ccappel (Joined: 27 Jan 00, Posts: 362, Credit: 1,516,412, RAC: 0)
OK, postpone my nomination to when 6.6.15 (or whichever becomes the stable version) is available for general release. :)
MarkJ (Joined: 17 Feb 08, Posts: 1139, Credit: 80,854,192, RAC: 5)
> I'm not so sure that running the development app is such a good idea for the general public. I'd say yes to sticky if the 6.6.15 were the standard. Currently 6.4.7 is. Mention of which OS this has worked on would be great as well...

I made it very clear that it relates to a development version in the disclaimer. While it would be nice if it worked under earlier versions of BOINC, unfortunately it doesn't.

Re: the OS, it says at the top: Windows 32-bit. To be more precise, XP Pro with SP3.

BOINC blog
Blurf (Joined: 2 Sep 06, Posts: 8964, Credit: 12,678,685, RAC: 0)
> This is great info. I nominate this thread to be made sticky.

Sorry guys, the moderators have been repeatedly asked by users to keep the stickies to a minimum wherever possible.
Charles Anspaugh (Joined: 11 Aug 00, Posts: 48, Credit: 22,715,083, RAC: 0)
<p_fpops>?
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14679, Credit: 200,643,578, RAC: 874)
> <p_fpops>?

"Floating point operations per second" - a measure of speed (CPU in this case). You can copy the value from the line with this tag in the first block of data in client_state.xml; alternatively, the same figure is shown on the 'host details' page on this website, in the line "Measured floating point speed 1830.43 million ops/sec". Notice that the speed shown on the website is in millions of <p_fpops>: you need to multiply those millions back in before using the number in Mark's formula.
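For reference, a minimal Python sketch of the two ways of getting the number Richard describes. The file path is an assumption (point it at your own BOINC data directory), the location of the tag inside the <host_info> block follows his "first block of data" description, and 1830.43 is just the example figure from his post.

    import xml.etree.ElementTree as ET

    # Option 1: read <p_fpops> straight from client_state.xml.
    # The path below is an assumption - adjust it to your BOINC data directory.
    state = ET.parse(r"C:\ProgramData\BOINC\client_state.xml")
    p_fpops = float(state.getroot().findtext("host_info/p_fpops"))
    print("p_fpops from client_state.xml:", p_fpops)

    # Option 2: start from the website's "Measured floating point speed",
    # which is quoted in millions of ops/sec, so multiply the millions back in.
    million_ops_per_sec = 1830.43
    print("p_fpops from website figure:", million_ops_per_sec * 1_000_000)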
MarkJ (Joined: 17 Feb 08, Posts: 1139, Credit: 80,854,192, RAC: 5)
> <p_fpops>?

I actually cut it from client_state and pasted it into Calculator under Windows and then did the multiplication. Less chance of errors that way :) You'll note that I dropped off the decimals in the app_info provided. I figure half a fpop isn't going to make any difference.

BOINC blog
john deneer (Joined: 16 Nov 06, Posts: 331, Credit: 20,996,606, RAC: 0)
Taken from the app_info.xml posted:

    <platform>windows_intelx86</platform>

Does this actually do anything? And should it be changed when running on the 64-bit version of Windows, or does it only have to be changed when running on Linux or OS X? I'm asking because I feel that after building up my courage for a week I am now ready to adopt 6.6.14 (or .15) on my rig :-) But it runs under x64, not x86.

Another question: if I don't want to run multibeam on the cpu, do I just remove the part describing the 6.03 application (AK_v8, that is), or are more drastic measures necessary?

Regards, John.
Fred W (Joined: 13 Jun 99, Posts: 2524, Credit: 11,954,210, RAC: 0)
> Taken from the app_info.xml posted:

The equivalent line from my installation:

    <platform>windows_x86_64</platform>

Please do change it - I trashed a lot of WU's because I forgot about this wrinkle :-(

With 6.6.14 and .15 you won't get any 6.03's anyway, even if you leave the AK_v8 line in there. It is all branded as MultiBeam, so the server uses the highest version number (i.e. the 6.08) and you only get CUDA-branded MB.

F.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14679, Credit: 200,643,578, RAC: 874)
> With 6.6.14 and .15 you won't get any 6.03's anyway even if you leave the AK_v8 line in there. It is all branded as MultiBeam so the server uses the highest version number (i.e. the 6.08) and you only get CUDA branded MB.

Not true. With 6.6.14 and above, you can receive (I have received) 603 work for AK_v8 as part of a standard work fetch from the project, as well as 608 CUDA work. You can also (and I have done this as a test, but I don't recommend it) ask for and receive AK_v8 work branded as 608, and CUDA work branded as 608, all in the same configuration.

The point is, the <plan_class>cuda</plan_class> line (in the 608 section in MarkJ's example) now serves to differentiate that section from the 603 section: MultiBeam and MultiBeam CUDA, although being identical tasks and datafiles, can now be tracked separately through the system and requested individually. That's why v6.6.14 and above is such a big upgrade, and has to be handled with such care.

(And Fred - it's also why I'm looking forward with such anticipation to the 608 --> 603 rebranding script for VLAR. One tester, at your service whenever called upon!)
Fred W (Joined: 13 Jun 99, Posts: 2524, Credit: 11,954,210, RAC: 0)
> With 6.6.14 and .15 you won't get any 6.03's anyway even if you leave the AK_v8 line in there. It is all branded as MultiBeam so the server uses the highest version number (i.e. the 6.08) and you only get CUDA branded MB.

My apologies, then, for providing duff info. I was going by my own experience, which is that I have never received a 603 WU since I fired up my graphics card. I bow to Richard's differing experience and assume I was just "unlucky" - but then I prefer to leave as much CPU time for AP as possible anyway (apart from the darned VLARs); seems much cleaner that way.

(And Richard - almost there. Just checking the automation on Vista and XP. Think on this, though. With (6.6.x) BOINC's propensity to process CUDA WU's in deadline order, a run of the script to re-brand the VLARs in the cache followed by a work fetch often brings down more VLARs that have earlier deadlines than the existing non-VLARs, so whatever the size of the cache some get crunched on the GPU anyway. Another effect of the same "bug" that I noticed immediately following Tuesday's outage was that CUDAs were downloaded with progressively closer deadlines, so that I eventually had 8 partially crunched "waiting to run" tasks which had been usurped by those that came after.)

F.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14679, Credit: 200,643,578, RAC: 874)
> ... I have never received a 603 WU since I fired up my graphics card ...

I only got the 603s with deliberate testing (by disabling AP allocation). But I got 'em.

Re VLARs - what threshold are you using for defining VLAR? I think you can cap it at 0.05, which is the definition of a 'true' VLAR. I believe Raistmer took the dividing line a bit higher for safety - something like 0.14. If you take the 0.05 line, then all VLARs will have exactly the same deadline interval after issue by the scheduler, which means that a 'new' VLAR will never queue-jump another VLAR: but you're right, a new VLAR could easily jump the next available non-VLAR.
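For anyone scripting their own VLAR handling, a minimal Python sketch of the threshold check being discussed. This is not Fred's actual script; it assumes the multibeam workunit file carries a <true_angle_range> value in its header, and the 0.05 / 0.14 cutoffs are simply the figures mentioned above.

    import re
    import sys

    # 0.05 is the 'true' VLAR cutoff mentioned above; Raistmer reportedly
    # uses something nearer 0.14 for safety.
    VLAR_THRESHOLD = 0.05

    def is_vlar(workunit_path, threshold=VLAR_THRESHOLD):
        """Return True if the workunit's true_angle_range is below the threshold.

        Assumes the workunit file contains a line like
        <true_angle_range>0.012345</true_angle_range> near the top.
        """
        with open(workunit_path, "r", errors="ignore") as f:
            header = f.read(8192)  # the field sits near the top of the file
        match = re.search(r"<true_angle_range>\s*([\d.eE+-]+)\s*</true_angle_range>", header)
        if not match:
            return False  # no angle range found; treat as non-VLAR
        return float(match.group(1)) < threshold

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, "VLAR" if is_vlar(path) else "non-VLAR")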
john deneer (Joined: 16 Nov 06, Posts: 331, Credit: 20,996,606, RAC: 0)
> The equivalent line from my installation:
> <platform>windows_x86_64</platform>
> Please do change it - I trashed a lot of WU's because I forgot about this wrinkle :-(

Thanks Fred, I will use that one as well, then.

The funny thing is: I'm at this moment running 6.6.15 with my 'old' app_info.xml that I had for 6.4.7. It has optimized versions of old and new astropulse and the stock cuda application in it. None of the fancy stuff that the proposed app_info has. I upgraded the old BOINC by installing 6.6.15 right over it, changing nothing else (well, I removed my cc_config because it had the n+1 cpus in it). When I started the new BOINC Manager it picked up the old astropulse work flawlessly and started asking for (and got, hooray) new cuda work. It worked right out of the box, somewhat to my surprise.

I'm bound to start tinkering with it, though. I'll probably end up using a copy of the app_info provided by MarkJ. A major reason to do so is that I also feel the need to do something about those annoying VLAR cudas, so I'll have a description of AK_v8 ready to get those things offloaded to the cpu when your fix is ready :-)

Regards, John.
john deneer (Joined: 16 Nov 06, Posts: 331, Credit: 20,996,606, RAC: 0)
> The equivalent line from my installation:
> <platform>windows_x86_64</platform>

I had the _x86_64 at both occurrences of <platform>, but it trashed all my cuda units immediately on startup. I got an error message indicating that there was no app version for the 64-bit cuda executable (?):

    20-Mar-2009 10:10:22 [SETI@home] [error] No app version for result: windows_x86_64 608 cuda
    20-Mar-2009 10:10:22 [SETI@home] [error] No app version for result: windows_x86_64 608 cuda
    etc.

It seems to think that I want my 608 to be handled by a 64-bit application, at least that's what I make of this message...

I then changed the <platform> in the cuda section back to windows_x86. Seems to work; at least I get no error messages and it is asking for work (but not getting any - according to the message on the home page the science database crashed, no work available).

Does this make sense? My AK_v8 is a 64-bit application and uses the x86_64 platform description; my cuda executable is a 32-bit application (I'm not aware of the existence of a 64-bit cuda executable?) and uses the x86 description. Is this as it is supposed to be, or am I just screwing things up, as usual :-)

Regards, John.
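Before restarting BOINC after editing, a quick listing of what the app_info actually declares can help catch exactly this kind of platform / plan_class mix-up. A minimal Python sketch under assumptions: the path below is an example and should point at your own BOINC data directory, and the check is only a convenience, not a substitute for reading the file carefully.

    import xml.etree.ElementTree as ET

    # Path is an assumption - adjust it to your own BOINC data directory.
    APP_INFO = r"C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\app_info.xml"

    tree = ET.parse(APP_INFO)
    for av in tree.getroot().iter("app_version"):
        name = av.findtext("app_name")
        version = av.findtext("version_num")
        platform = av.findtext("platform", default="(host default)")
        plan = av.findtext("plan_class", default="(none)")
        exe = av.findtext("file_ref/file_name", default="?")
        # One line per app_version: spot a 608/cuda entry with the wrong platform here.
        print(f"{name} {version}: platform={platform}, plan_class={plan}, main exe={exe}")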
Fred W (Joined: 13 Jun 99, Posts: 2524, Credit: 11,954,210, RAC: 0)
> The equivalent line from my installation:

After I had trashed my cache, I decided that the harm had already been done and left the platform description at x86_64 throughout the app_info. That's how it has been running quite happily for over a week now.

F.
Claggy (Joined: 5 Jul 99, Posts: 4654, Credit: 47,537,079, RAC: 4)
> The equivalent line from my installation:

I've left mine as windows_x86. I lost several caches' worth before the new scheduler was installed on Seti Main, while I tried to get GPU requests to work, and haven't run the cache down yet to try again.

Claggy
dyeman (Joined: 3 Apr 99, Posts: 23, Credit: 8,493,140, RAC: 0)
Well, this week's outage opened a good opportunity to empty some queues and install 6.6.17. I still had some AP V5s (and WCG work, which hopefully won't care). After the install the APs are estimated to finish in about 3 hours (I wish!) - I guess the estimates will equalise over time (or I've cocked up the flops values in the app_info). Without MB work available I downloaded some GPUGRID work and all seems to be running happily (AP, WCG and GPUGRID).