Three crunching cards in one host -> how to configure the "app_config.xml" file?

The_Matrix
Volunteer tester

Joined: 17 Nov 03
Posts: 414
Credit: 5,827,850
RAC: 0
Germany
Message 1766481 - Posted: 20 Feb 2016, 14:52:41 UTC
Last modified: 20 Feb 2016, 15:43:17 UTC

Hey, I'll soon have a small problem; in fact I already have one.

I have configured the app_config.xml file so that every GPU that is found runs 2 WUs. All GPUs are enabled in the cc_config.xml file. Now here's the problem:
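
For reference, enabling all GPUs in cc_config.xml looks roughly like this (just a minimal sketch, not my exact file):

<cc_config>
<options>
<!-- use every detected GPU, not only the most capable one per vendor -->
<use_all_gpus>1</use_all_gpus>
</options>
</cc_config>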

The third card I'm going to install will be much weaker than the other cards, like the Intel GPU I already have "installed".

But how do I configure BOINC to use 1.0 GPUs per task on that third video card and, of course, on the Intel GPU?

The third card will be the NVIDIA GT 640 I already have, so it will be two NVIDIAs, one AMD and the Intel GPU.

Answers/questions?

Current config file:
<app_config>

<app>
<name>setiathome_v8</name>
<version_num>800</version_num>
<plan_class>cuda50</plan_class>
<max_concurrent>10</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>.22</cpu_usage>
</gpu_versions>
</app>

</app_config>



Greetings, hoping for help.

Edit:

OK, it's crunching on 1 Intel GPU!!! Do I have to add the config for the cuda 3.2 through 4.2 apps as well?
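
Maybe something like this, just repeating my cuda50 block with a different plan class (cuda42 as an example; only a guess, not tested):

<app>
<name>setiathome_v8</name>
<plan_class>cuda42</plan_class>
<max_concurrent>10</max_concurrent>
<gpu_versions>
<!-- 0.5 GPUs per task = 2 tasks per GPU, same values as the cuda50 entry -->
<gpu_usage>.5</gpu_usage>
<cpu_usage>.22</cpu_usage>
</gpu_versions>
</app>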
ID: 1766481
The_Matrix
Volunteer tester

Joined: 17 Nov 03
Posts: 414
Credit: 5,827,850
RAC: 0
Germany
Message 1766491 - Posted: 20 Feb 2016, 16:06:47 UTC - in response to Message 1766481.  
Last modified: 20 Feb 2016, 16:33:43 UTC

Newer config:
<app_config>

<app>
<name>setiathome_v8</name>
<version_num>800</version_num>
<max_concurrent>10</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>.22</cpu_usage>
</gpu_versions>
</app>

<app>
<name>setiathome_v8</name>
<plan_class>opencl_ati5_cat132</plan_class>
<max_concurrent>10</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>.22</cpu_usage>
</gpu_versions>
</app>

<app>
<name>setiathome_v8</name>
<plan_class>opencl_ati_cat132</plan_class>
<max_concurrent>10</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>.22</cpu_usage>
</gpu_versions>
</app>

</app_config>


Hmm, will the <name> option apply to each of the <gpu_versions> settings? This must be it; I don't know of any other options...
ID: 1766491
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1766512 - Posted: 20 Feb 2016, 17:12:57 UTC
Last modified: 20 Feb 2016, 17:20:43 UTC

I don't know where you got <version_num>800</version_num>, but that isn't an option for app_config.xml according to the documentation.

I think what you are wanting is something like this, where each plan_class is defined.
Omitting the iGPU configuration should also work, as it is otherwise defined to run 1 instance in the app_info.xml.
<app_config>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<plan_class>cuda50</plan_class>
		<max_concurrent>10</max_concurrent>
		<gpu_versions>
			<gpu_usage>.5</gpu_usage>
			<cpu_usage>.22</cpu_usage>
		</gpu_versions>
	</app_version>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<plan_class>opencl_ati_cat132</plan_class>
		<max_concurrent>10</max_concurrent>
		<gpu_versions>
			<gpu_usage>.5</gpu_usage>
			<cpu_usage>.22</cpu_usage>
		</gpu_versions>
	</app_version>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<plan_class>opencl_intel_gpu_sah</plan_class>
		<max_concurrent>10</max_concurrent>
		<gpu_versions>
			<gpu_usage>1</gpu_usage>
			<cpu_usage>.22</cpu_usage>
		</gpu_versions>
	</app_version>
</app_config>


However, a more simplified version could be telling all GPUs to run 2 tasks except the iGPU.
<app_config>
	<app>
		<name>setiathome_v8</name>
		<max_concurrent>10</max_concurrent>
		<gpu_versions>
			<gpu_usage>.5</gpu_usage>
			<cpu_usage>.22</cpu_usage>
		</gpu_versions>
	</app>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<plan_class>opencl_intel_gpu_sah</plan_class>
		<max_concurrent>10</max_concurrent>
		<gpu_versions>
			<gpu_usage>1</gpu_usage>
			<cpu_usage>.22</cpu_usage>
		</gpu_versions>
	</app_version>
</app_config>

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1766512
The_Matrix
Volunteer tester

Joined: 17 Nov 03
Posts: 414
Credit: 5,827,850
RAC: 0
Germany
Message 1766524 - Posted: 20 Feb 2016, 18:26:02 UTC
Last modified: 20 Feb 2016, 18:32:15 UTC

Thanks, that's what I needed.

The second description did it.

But there's one thing left:

The GT 640 has to crunch 1 GPU task, not 0.5 of one...

Whoops... I don't have an "app_info.xml". Is that bad?
ID: 1766524
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1766532 - Posted: 20 Feb 2016, 19:16:20 UTC - in response to Message 1766524.  

Thanks, that's what I needed.

The second description did it.

But there's one thing left:

The GT 640 has to crunch 1 GPU task, not 0.5 of one...

Whoops... I don't have an "app_info.xml". Is that bad?

I didn't notice you were using stock, but not having an app_info.xml isn't a problem. There isn't anything that would be gained if you did. It doesn't offer any extra configuration options to help.

I'm not sure if the CUDA app supports the MultiBeam_<vendor>_config.xml, which allows each device to be configured independently. I think it is only available for the OpenCL apps, as the ReadMe_x41zi.txt doesn't mention it.
The CUDA app does offer a cuda.cfg for tuning each device, but I don't see an option for the number of instances.

I think someone with more NV experience might know the trick to this.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1766532
The_Matrix
Volunteer tester

Joined: 17 Nov 03
Posts: 414
Credit: 5,827,850
RAC: 0
Germany
Message 1766542 - Posted: 20 Feb 2016, 19:59:22 UTC
Last modified: 20 Feb 2016, 20:34:32 UTC

Haven't got a clue right now, but thank you very much so far.

The cuda.cfg file gives no option for the number of tasks to run, and there is no MultiBeam config file present.

I'll try Google later on.

OK, after Googling: nobody out there is a crunching nerd like me, or they just don't post about it :D No further info found.
ID: 1766542
The_Matrix
Volunteer tester

Joined: 17 Nov 03
Posts: 414
Credit: 5,827,850
RAC: 0
Germany
Message 1766552 - Posted: 20 Feb 2016, 20:49:35 UTC
Last modified: 20 Feb 2016, 21:00:45 UTC

OK, I dug around a little bit and found something half helpful:

-disable_slot N

in the following file:

ap_cmdline_7.10_windows_intelx86__opencl_nvidia_100.txt

With that I can disable a card by its PCIe slot/card number, at least for AstroPulse,

so I don't have to wait too long to crunch; the GT 640 will only run the cuda 3.2 to cuda 5.0 apps.

Is that right?
ID: 1766552
The_Matrix
Volunteer tester

Joined: 17 Nov 03
Posts: 414
Credit: 5,827,850
RAC: 0
Germany
Message 1766574 - Posted: 20 Feb 2016, 22:28:34 UTC
Last modified: 20 Feb 2016, 22:28:42 UTC

For some reason the BOINC client has reset itself and is again running 2 WUs per Intel GPU :´(
ID: 1766574
The_Matrix
Volunteer tester

Joined: 17 Nov 03
Posts: 414
Credit: 5,827,850
RAC: 0
Germany
Message 1767291 - Posted: 24 Feb 2016, 17:18:30 UTC - in response to Message 1766552.  
Last modified: 24 Feb 2016, 17:20:23 UTC

-disable_slot 0

in the following file:


ap_cmdline_7.09_windows_intelx86__opencl_intel_gpu_100.txt

The file I mentioned before was the wrong one...

OK, I seem to have disabled AstroPulse on the Intel GPU with that; no more AP tasks have been downloaded for this GPU.
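
As far as I can tell, that cmdline file just holds extra command-line switches for the AstroPulse app, so the whole change is only this one line in it:

-disable_slot 0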

Cheers !
ID: 1767291
