Message boards : Number crunching : Three crunching cards in one host -> how to configure the "app_config.xml" file?
The_Matrix (Joined: 17 Nov 03, Posts: 414, Credit: 5,827,850, RAC: 0)
Hey, soon I will have a small problem; actually, I already have one. I have configured the app_config.xml file to run 2 WUs on each GPU that is found. All GPUs are enabled in cc_config.xml. Now here's the problem: the third card I will install is much weaker than the other cards, like the Intel GPU I already "installed". How do I configure BOINC to use 1.0 GPUs for the third video card and, of course, the Intel GPU? The third card will be the NVIDIA GT 640 I already have, so it will be two NVIDIAs, one AMD, and the Intel GPU. Questions/answers?

Current config file:

<app_config>
  <app>
    <name>setiathome_v8</name>
    <version_num>800</version_num>
    <plan_class>cuda50</plan_class>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Greetings, hoping for help.

Edit: OK, it's crunching on 1 Intel GPU! Do I have to add a config for the CUDA 3.2 through 4.2 apps as well?
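As an aside, for BOINC to use every GPU in a mixed-vendor host like this one, cc_config.xml normally needs the <use_all_gpus> option; a minimal sketch of that file (only the one option shown, other options omitted) would be:

```xml
<!-- cc_config.xml: minimal sketch. <use_all_gpus> tells the BOINC client
     to compute on every usable GPU instead of only the most capable one. -->
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```

The client reads this file at startup, or when told to re-read its config files from the Manager.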
The_Matrix (Joined: 17 Nov 03, Posts: 414, Credit: 5,827,850, RAC: 0)
Newer config:

<app_config>
  <app>
    <name>setiathome_v8</name>
    <version_num>800</version_num>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>setiathome_v8</name>
    <plan_class>opencl_ati5_cat132</plan_class>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>setiathome_v8</name>
    <plan_class>opencl_ati_cat132</plan_class>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Hmm, will the <name> option in <gpu_versions> apply? That must be final; I don't know of any more options...
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)
I don't know where you got <version_num>800</version_num>, but that isn't an option for app_config.xml according to the documentation. I think what you want is something like this, where each plan_class is defined. Omitting the iGPU configuration should also work, as it is otherwise defined to run 1 instance in the app_info.xml.

<app_config>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>cuda50</plan_class>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app_version>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_ati_cat132</plan_class>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app_version>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_intel_gpu_sah</plan_class>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app_version>
</app_config>

However, a more simplified version could be telling all GPUs to run 2 tasks except the iGPU:

<app_config>
  <app>
    <name>setiathome_v8</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_intel_gpu_sah</plan_class>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>.22</cpu_usage>
    </gpu_versions>
  </app_version>
</app_config>

SETI@home classic workunits: 93,865 - CPU time: 863,447 hours. Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
The_Matrix (Joined: 17 Nov 03, Posts: 414, Credit: 5,827,850, RAC: 0)
Thanks, that's what I needed. The second version did it. But there is one thing left: the GT 640 has to crunch 1 GPU task, not 0.5 of one... Whoops... I don't have an "app_info.xml"; is that bad?
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)
"Thx, thats what i needed."

I didn't notice you were running the stock apps, but not having an app_info.xml isn't a problem. There is nothing that would be gained if you did; it doesn't offer any extra configuration options to help here. I'm not sure whether the CUDA app supports MultiBeam_<vendor>_config.xml, which allows configuring each device independently; I think it is only available for the OpenCL apps, as the ReadMe_x41zi.txt doesn't mention it. The CUDA app does offer a cuda.cfg for tuning each device, but I don't see an option there for the number of instances. Someone with more NV experience might know the trick to this.
The_Matrix (Joined: 17 Nov 03, Posts: 414, Credit: 5,827,850, RAC: 0)
I haven't got a clue right now, but thank you very much so far. The cuda.cfg file gives no option for the number of instances to run, and no MultiBeam config file is provided. I'll try Google later on. OK, after Googling: either nobody there is a crunching nerd like me, or they just don't post about it :D No further info found.
The_Matrix (Joined: 17 Nov 03, Posts: 414, Credit: 5,827,850, RAC: 0)
OK, I dug a little bit and found something half helpful: -disable_slot N in the following file: ap_cmdline_7.10_windows_intelx86__opencl_nvidia_100.txt. With that I can disable a PCIe slot/card number for AstroPulse at least, so I don't have to wait too long to crunch; the GT 640 will only run CUDA 3.2 through CUDA 5.0 anyway. Is that right?
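For anyone following along: those ap_cmdline_*.txt files simply hold extra switches appended to the AstroPulse app's command line, so (as a sketch, assuming N is the number of the device slot to exclude, as reported by BOINC at startup) the entire file contents would be a single line:

```text
-disable_slot N
```

Note that the switch has to go into the cmdline file matching the plan class of the app actually running on that device, which is exactly the pitfall hit later in this thread.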
The_Matrix (Joined: 17 Nov 03, Posts: 414, Credit: 5,827,850, RAC: 0)
For some reason the BOINC client reset itself and is again using 2 WUs per Intel GPU :'(
The_Matrix (Joined: 17 Nov 03, Posts: 414, Credit: 5,827,850, RAC: 0)
-disable_slot 0 in ap_cmdline_7.09_windows_intelx86__opencl_intel_gpu_100.txt. The other one was the wrong file... OK, I may have disabled AstroPulse on the Intel GPU with that; no more AP tasks have been downloaded for that GPU since. Cheers!
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.