CUDA Versions

Message boards : Number crunching : CUDA Versions
Darth Beaver
Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Australia
Message 1551689 - Posted: 3 Aug 2014, 1:43:16 UTC

Bill Greene, I looked at your computers and the four that show up do not have the specs you have posted.

Is the computer you are talking about hidden?

If so, please unhide it so people can help.
ID: 1551689
Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 1551838 - Posted: 3 Aug 2014, 13:27:24 UTC - in response to Message 1549702.  

Have you applied the "sample" CFGs to the empty .cfg files that stock SETI downloads?

It is possible that unless you do, everything will run at the default, lower GPU levels. Read the docs inside the sample file(s) to see what I mean.

I am being fed constant Cuda50 now. I suspect that the SETI scheduler feeds you a variety of CUDA files till it finds your "fastest" one, and then just feeds you that.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1551838
Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 1551842 - Posted: 3 Aug 2014, 13:45:08 UTC - in response to Message 1551025.  

for some reason I'm receiving 5-10 invalid results daily on that machine; from 10-20 per day have status of 'validation inconclusive'.


If none of the other diagnostics pan out, I would start by reducing your GPU tasks to one per GPU for, say, a week. If the invalid results go away, then try running 2 per GPU. If the invalid results continue to be mostly non-existent, I would stop there.

I am assuming you have set up your mbcuda*.cfg files, at least with the proposed "defaults" from the sample files?

I am running 2 on my GTX 750 Ti, but when I tried to go up to 3 the time taken scaled linearly, so there was no advantage and I reverted to 2.

This is my proposed "baseline" app_config.xml in the project directory if you need to go this far:

<app_config>
    <app>
        <name>astropulse_v6</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>0.49</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>setiathome_v7</name>
        <gpu_versions>
            <gpu_usage>1.00</gpu_usage>
            <cpu_usage>0.49</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


If your setup behaves like mine, this will not "officially" tie up any of your CPU cores while still feeding your GTX GPUs at full tilt. Apparently the Radeon GPUs are a little fussier, needing at least a dedicated CPU core to themselves.
A proud member of the OFA (Old Farts Association).
ID: 1551842
Bill Greene
Volunteer tester
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1551917 - Posted: 3 Aug 2014, 18:12:45 UTC - in response to Message 1551627.  

A few stderr output results for invalid WUs:

Task 3662584452

Stderr output

<core_client_version>7.2.42</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 2 CUDA device(s):
Device 1: GeForce GTX 480, 1536 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
pciBusID = 2, pciSlotID = 0
clockRate = 1401 MHz
Device 2: GeForce GTX 480, 1536 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
pciBusID = 1, pciSlotID = 0
clockRate = 1401 MHz
In cudaAcc_initializeDevice(): Boinc passed DevPref 1
setiathome_CUDA: CUDA Device 1 specified, checking...
Device 1: GeForce GTX 480 is okay
SETI@home using CUDA accelerated device GeForce GTX 480
pulsefind: blocks per SM 4 (Fermi or newer default)
pulsefind: periods per launch 100 (default)
Priority of process set to BELOW_NORMAL (default) successfully
Priority of worker thread set successfully

setiathome enhanced x41zc, Cuda 4.20

Detected setiathome_enhanced_v7 task. Autocorrelations enabled, size 128k elements.
Work Unit Info:
...............
WU true angle range is : 1.478522
re-using dev_GaussFitResults array for dev_AutoCorrIn, 4194304 bytes
re-using dev_GaussFitResults+524288x8 array for dev_AutoCorrOut, 4194304 bytes
Thread call stack limit is: 1k
cudaAcc_free() called...
cudaAcc_free() running...
cudaAcc_free() PulseFind freed...
cudaAcc_free() Gaussfit freed...
cudaAcc_free() AutoCorrelation freed...
cudaAcc_free() DONE.

Flopcounter: 17626900279864.070000

Spike count: 0
Autocorr count: 0
Pulse count: 0
Triplet count: 0
Gaussian count: 0
Worker preemptively acknowledging a normal exit.->
called boinc_finish
Exit Status: 0
boinc_exit(): requesting safe worker shutdown ->
boinc_exit(): received safe worker shutdown acknowledge ->
Cuda threadsafe ExitProcess() initiated, rval 0

</stderr_txt>
]]>

Task 3662388093 (this is really peculiar)

<core_client_version>7.2.42</core_client_version>
<![CDATA[
<stderr_txt>

</stderr_txt>
]]>

Task 3662307680

Stderr output

<core_client_version>7.2.42</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 2 CUDA device(s):
Device 1: GeForce GTX 480, 1536 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
pciBusID = 2, pciSlotID = 0
clockRate = 1401 MHz
Device 2: GeForce GTX 480, 1536 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
pciBusID = 1, pciSlotID = 0
clockRate = 1401 MHz
In cudaAcc_initializeDevice(): Boinc passed DevPref 1
setiathome_CUDA: CUDA Device 1 specified, checking...
Device 1: GeForce GTX 480 is okay
SETI@home using CUDA accelerated device GeForce GTX 480
pulsefind: blocks per SM 4 (Fermi or newer default)
pulsefind: periods per launch 100 (default)
Priority of process set to BELOW_NORMAL (default) successfully
Priority of worker thread set successfully

setiathome enhanced x41zc, Cuda 4.20

Detected setiathome_enhanced_v7 task. Autocorrelations enabled, size 128k elements.
Work Unit Info:
...............
WU true angle range is : 4.326643
re-using dev_GaussFitResults array for dev_AutoCorrIn, 4194304 bytes
re-using dev_GaussFitResults+524288x8 array for dev_AutoCorrOut, 4194304 bytes
Thread call stack limit is: 1k
cudaAcc_free() called...
cudaAcc_free() running...
cudaAcc_free() PulseFind freed...
cudaAcc_free() Gaussfit freed...
cudaAcc_free() AutoCorrelation freed...
cudaAcc_free() DONE.

Flopcounter: 16054554021039.889000

Spike count: 9
Autocorr count: 0
Pulse count: 0
Triplet count: 3
Gaussian count: 0
Worker preemptively acknowledging a normal exit.->
called boinc_finish
Exit Status: 0
boinc_exit(): requesting safe worker shutdown ->
boinc_exit(): received safe worker shutdown acknowledge ->
Cuda threadsafe ExitProcess() initiated, rval 0

</stderr_txt>
]]>

On the surface (for me) the two above that completed appear normal when compared to a valid result. However, I'm not well enough versed to understand specifics. I have no thoughts about the incomplete result above.

Hadn't considered DPC latency as a potential cause. Will take a look at that, though I remain puzzled about how latency in a deferred procedure call manifests as an invalid result. But as you indicate, all sorts of anomalies might surface, so it's well worth taking a look. Thanks for the suggestion.
ID: 1551917
Bill Greene
Volunteer tester
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1551923 - Posted: 3 Aug 2014, 18:33:24 UTC - in response to Message 1551689.  

Bill Greene, I looked at your computers and the four that show up do not have the specs you have posted.

Is the computer you are talking about hidden?

If so, please unhide it so people can help.


Actually, I'm attempting to resolve two issues: one with a dual 480 GPU config, and the other a build under way consisting of dual 780s. Both configs have SSDs, but the 480 config on which I'm getting (some) invalid results is an AMD-based system (see computer 7257197). With a few exceptions, I believe invalid WU results first appear as 'Validation inconclusive' before becoming invalid, though I haven't traced one to see. Given that, and what I see in the stderr outputs (again, with few exceptions), I might suspect the wingman if the wingman were always the same, something I don't know but suspect is not the case. I'd appreciate any comments on the stderr results in a separate posting just sent.
ID: 1551923
Bill Greene
Volunteer tester
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1551970 - Posted: 3 Aug 2014, 21:23:02 UTC - in response to Message 1551842.  

for some reason I'm receiving 5-10 invalid results daily on that machine; from 10-20 per day have status of 'validation inconclusive'.


If none of the other diagnostics pan out, I would start by reducing your GPU tasks to one per GPU for, say, a week. If the invalid results go away, then try running 2 per GPU. If the invalid results continue to be mostly non-existent, I would stop there.

I am assuming you have set up your mbcuda*.cfg files, at least with the proposed "defaults" from the sample files?

I am running 2 on my GTX 750 Ti, but when I tried to go up to 3 the time taken scaled linearly, so there was no advantage and I reverted to 2.

This is my proposed "baseline" app_config.xml in the project directory if you need to go this far:

<app_config>
    <app>
        <name>astropulse_v6</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>0.49</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>setiathome_v7</name>
        <gpu_versions>
            <gpu_usage>1.00</gpu_usage>
            <cpu_usage>0.49</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


If your setup behaves like mine, this will not "officially" tie up any of your CPU cores while still feeding your GTX GPUs at full tilt. Apparently the Radeon GPUs are a little fussier, needing at least a dedicated CPU core to themselves.


I originally reconfigured app_info.xml (as produced by Lunatics) about a year back to place 3 WUs on each of the 480s, Tom. That ran fine without errors or Validation inconclusive results until about February, when I installed the SSD, which ran fine for a couple of months. Then, after an SSD glitch and recovery, I began to receive the Validation inconclusive and invalid results. Going back to the HD did not resolve the issue, so I came back to the SSD and have been living with the errors since early June.

Certainly, I will adjust the count parameter down in app_info.xml to see the effect, but I would like somehow to get back to a count allowing 3 WUs per GPU as before. The only .cfg file I see is mbcuda.cfg, which seems to contain only text when opened in WordPad; i.e., the numbered mbcuda*.cfg's aren't there. I assume this is the empty mbcuda.cfg you referenced. The app_config.xml file you mention also does not exist. You may assume from this that I've not touched the .cfg files (samples or otherwise) and am not aware of their role. It would appear that I've much more research to do.

Appreciate the leads ...
ID: 1551970
Darth Beaver
Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Australia
Message 1552046 - Posted: 4 Aug 2014, 2:00:48 UTC

I was wondering, did you need to load any drivers for the SSD?

I had a problem similar to yours when I tried to cheat and use a RAM drive: it started to give me errors (invalids), not all the time, about 1 in 20. It also slowed the times down.

I realized that the RAM drive software was the problem, as it had a driver that I think used a USB-type interface, since the software came with an external drive I have.

So I'm thinking you only got the problem when you installed the SSD, right? Maybe there was a problem with the driver when it installed, or it is buggy, so I would try uninstalling any software that you had to install with the drive and stop using it to see what happens.

Also, most people find running 3 units on the GPU gives no advantage and can slow your RAC. My GPU is the GTX 650, and this is what happens if I do more than 2: I got a 20%+ increase when I stopped doing 3 and went back to 2. You also put the card under undue stress for no real gain.
ID: 1552046
Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 1552051 - Posted: 4 Aug 2014, 2:20:20 UTC - in response to Message 1551970.  
Last modified: 4 Aug 2014, 2:29:05 UTC

This all assumes a stock Seti setup. I am not sure if these files show up when you use the Lunatics setup.

The only .cfg file I see is mbcuda.cfg which seems to contain only text when opened by WordPad, i.e., numbered mbcuda*.cfg's aren't there. I assume this is the empty mbcuda.cfg you referenced. Appreciate the leads ...


You should look for: mbcuda-7.00-cudaXX.cfg, where the XX is the CUDA type: 22(?) to 50.

You should also look for: mbcuda-7.00-cudaXX.cfg.sample, which will have examples with documentation of what to do. Using the defaults from the general part at the top of the file is working fine for me. Just delete two or three semicolons.

Each time the SETI scheduler sent me a new-to-me CudaXX file type, it would also ship down an empty file with the same CUDA number, as well as the sample file. If you select the file and then go to Properties, you can tell it you want to use Notepad to edit them. I would NOT use WordPad, since it might add invisible word-processing codes to the file.

My impression is you need to set each CudaXX separately. I suspect that the mbcuda.cfg file you spoke of either works or doesn't work (sorry, that sounded like the "when you come to a fork in the road, take it" joke). If it is empty, it can't hurt (I suspect) to paste the contents of one of the sample files in there and delete the appropriate semicolons, as well as the bottom part of the file, which is specific to a GPU (but is commented out, so it shouldn't matter). It wasn't clear from the docs I have read whether mbcuda.cfg still works.

But the individual ones do appear to make a difference. I had some "older" GPUs (pre-Fermi, G210) and they ran a lot faster when I changed the appropriate parms to suit.
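For anyone following along, the general section at the top of the sample looks roughly like this (a sketch from memory, so treat the exact key names and value ranges as an assumption and check them against your own .sample file). Deleting the leading semicolon activates a line:

```ini
; mbcuda.cfg (sketch; verify key names against your .sample file)
[mbcuda]
; app process priority: belownormal / normal / abovenormal / high
processpriority = abovenormal
; pulsefind: concurrent blocks per multiprocessor
pfblockspersm = 8
; pulsefind: pulse periods processed per kernel launch
pfperiodsperlaunch = 200
; per-GPU overrides live in sections keyed by PCI location,
; e.g. [bus2slot0], and are commented out in the sample
```

The stderr logs earlier in the thread show the built-in defaults ("blocks per SM 4", "periods per launch 100"), so you can confirm from a task's stderr whether your settings actually took effect.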

HTH,

Tom
A proud member of the OFA (Old Farts Association).
ID: 1552051
Bill Greene
Volunteer tester
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1552496 - Posted: 5 Aug 2014, 5:06:57 UTC - in response to Message 1552046.  

I was wondering, did you need to load any drivers for the SSD?

I had a problem similar to yours when I tried to cheat and use a RAM drive: it started to give me errors (invalids), not all the time, about 1 in 20. It also slowed the times down.

I realized that the RAM drive software was the problem, as it had a driver that I think used a USB-type interface, since the software came with an external drive I have.

So I'm thinking you only got the problem when you installed the SSD, right? Maybe there was a problem with the driver when it installed, or it is buggy, so I would try uninstalling any software that you had to install with the drive and stop using it to see what happens.

Also, most people find running 3 units on the GPU gives no advantage and can slow your RAC. My GPU is the GTX 650, and this is what happens if I do more than 2: I got a 20%+ increase when I stopped doing 3 and went back to 2. You also put the card under undue stress for no real gain.


You may have hit on something there, Glenn. It has been a while and I don't recall if drivers were required, but I suspect they were. It came with download instructions referencing an online Kingston executable that I had to run to have the SSD recognized and formatted. The new system has a similar SSD, so I will look into driver requirements there as the build proceeds (parts still coming in). It's very possible there is a conflict somewhere between SETI/BOINC's treatment of results and the SSD drivers.

But I also want to test the lesser count of 2 to see if that eliminates the invalids. While the WUs appear to be executing properly with the count at 3, the GPUs may be bouncing between WUs, perhaps creating errors on some. Certainly, I'm creating interrupts when using the computer for other purposes, such as this dialog. To repeat, though, the SSD did provide an approximate 5% RAC increase on this system in spite of the invalids (5-10 a day).

Appreciate the thoughts. I'll post results of whatever actions I take.
ID: 1552496
Bill Greene
Volunteer tester
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1552498 - Posted: 5 Aug 2014, 5:13:36 UTC - in response to Message 1552051.  

This all assumes a stock Seti setup. I am not sure if these files show up when you use the Lunatics setup.

The only .cfg file I see is mbcuda.cfg which seems to contain only text when opened by WordPad, i.e., numbered mbcuda*.cfg's aren't there. I assume this is the empty mbcuda.cfg you referenced. Appreciate the leads ...


You should look for: mbcuda-7.00-cudaXX.cfg, where the XX is the CUDA type: 22(?) to 50.

You should also look for: mbcuda-7.00-cudaXX.cfg.sample, which will have examples with documentation of what to do. Using the defaults from the general part at the top of the file is working fine for me. Just delete two or three semicolons.

Each time the SETI scheduler sent me a new-to-me CudaXX file type, it would also ship down an empty file with the same CUDA number, as well as the sample file. If you select the file and then go to Properties, you can tell it you want to use Notepad to edit them. I would NOT use WordPad, since it might add invisible word-processing codes to the file.

My impression is you need to set each CudaXX separately. I suspect that the mbcuda.cfg file you spoke of either works or doesn't work (sorry, that sounded like the "when you come to a fork in the road, take it" joke). If it is empty, it can't hurt (I suspect) to paste the contents of one of the sample files in there and delete the appropriate semicolons, as well as the bottom part of the file, which is specific to a GPU (but is commented out, so it shouldn't matter). It wasn't clear from the docs I have read whether mbcuda.cfg still works.

But the individual ones do appear to make a difference. I had some "older" GPUs (pre-Fermi, G210) and they ran a lot faster when I changed the appropriate parms to suit.

HTH,

Tom


Thanks, Tom. I have been considering going back to the stock SETI and may yet, depending on results from a few other tests. It would be nice to know if the stock version also produces errors, but your suggestion about modifying the empty mbcuda.cfg also has merit. Your notes will definitely come in handy on either change. I certainly have plenty to work with, given the inputs from you and Glenn.
ID: 1552498
Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 1552716 - Posted: 5 Aug 2014, 22:18:56 UTC - in response to Message 1552498.  

You're welcome. Let me know if you can't find a "sample" file and I will post the info you need to paste into the mbcuda.cfg file.
A proud member of the OFA (Old Farts Association).
ID: 1552716
Darth Beaver
Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Australia
Message 1552737 - Posted: 5 Aug 2014, 23:02:36 UTC

No worries, Bill, you're welcome.
ID: 1552737
Bill Greene
Volunteer tester
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1553305 - Posted: 7 Aug 2014, 17:19:29 UTC - in response to Message 1551842.  

for some reason I'm receiving 5-10 invalid results daily on that machine; from 10-20 per day have status of 'validation inconclusive'.


If none of the other diagnostics pan out, I would start by reducing your GPU tasks to one per GPU for, say, a week. If the invalid results go away, then try running 2 per GPU. If the invalid results continue to be mostly non-existent, I would stop there.

I am assuming you have set up your mbcuda*.cfg files, at least with the proposed "defaults" from the sample files?

I am running 2 on my GTX 750 Ti, but when I tried to go up to 3 the time taken scaled linearly, so there was no advantage and I reverted to 2.

This is my proposed "baseline" app_config.xml in the project directory if you need to go this far:

<app_config>
    <app>
        <name>astropulse_v6</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>0.49</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>setiathome_v7</name>
        <gpu_versions>
            <gpu_usage>1.00</gpu_usage>
            <cpu_usage>0.49</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


If your setup behaves like mine, this will not "officially" tie up any of your CPU cores while still feeding your GTX GPUs at full tilt. Apparently the Radeon GPUs are a little fussier, needing at least a dedicated CPU core to themselves.


Brought the 780 system online last night with one 780 active. Trying to use stock SETI, but I have some questions. How is the number of GPU WUs increased? I didn't find the app_config.xml file, but did see the empty mbcuda*.cfg files and samples containing instructions. I'll follow your cut-and-paste-with-Notepad approach (eliminating semicolons where appropriate) to engage the proper constructs.

I assume the number of WUs per GPU could be increased by adjusting the <gpu_usage> parameter down, i.e., adjusting to 0.5 would assign 2 WUs, etc. I find the 0.49 CPU usage parameter peculiar. If I interpret it correctly, 0.49 equates to approximately 1/2 a CPU (core?) assigned to each WU. That seems high, especially since you have mentioned that CPU workload is unaffected for streaming GPU WUs. I guess I could use a short tutorial on how to build and engage the app_config.xml file.

In the meantime, I'm going to bring the other 780 online; might as well get it to work as well. I've built several machines over the years, but this has been the most interesting, using the Cooler Master chassis. Impressive the way they go about hiding cables.
ID: 1553305
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1553333 - Posted: 7 Aug 2014, 18:29:52 UTC - in response to Message 1553305.  

Brought the 780 system online last night with one 780 active. Trying to use stock SETI, but I have some questions. How is the number of GPU WUs increased? I didn't find the app_config.xml file, but did see the empty mbcuda*.cfg files and samples containing instructions. I'll follow your cut-and-paste-with-Notepad approach (eliminating semicolons where appropriate) to engage the proper constructs.

I assume the number of WUs per GPU could be increased by adjusting the <gpu_usage> parameter down, i.e., adjusting to 0.5 would assign 2 WUs, etc. I find the 0.49 CPU usage parameter peculiar. If I interpret it correctly, 0.49 equates to approximately 1/2 a CPU (core?) assigned to each WU. That seems high, especially since you have mentioned that CPU workload is unaffected for streaming GPU WUs. I guess I could use a short tutorial on how to build and engage the app_config.xml file.

Please would helpers referring new users to optional files give them the link to the documentation too?

http://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration
ID: 1553333
Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 1553401 - Posted: 7 Aug 2014, 22:55:11 UTC - in response to Message 1553305.  

I assume the number of WUs per GPU could be increased by adjusting the <gpu_usage> parameter down, i.e., adjusting to 0.5 would assign 2 WUs, etc. I find the 0.49 CPU usage parameter peculiar. If I interpret it correctly, 0.49 equates to approximately 1/2 a CPU (core?) assigned to each WU. That seems high, especially since you have mentioned that CPU workload is unaffected for streaming GPU WUs. I guess I could use a short tutorial on how to build and engage the app_config.xml file.


After you take a look at the documentation link, here is a bit more detail.

The GPU ratio is: 1 = 1 task, 0.50 = 2 tasks, etc.
For the CPU ratio, any number below 1.0 will not tie up a CPU core while feeding the GPU. The right value has varied all over the place depending on the hardware I am using.

For my mid-grade Xeon (3.2 GHz) with the GTX 750 Ti, it runs very nicely with all 8 "CPUs" crunching baseline SETI WUs (and the occasional Astropulse) while feeding the GTX 750 Ti at full tilt with cpu = 0.49.

I don't think it will make a difference, other than tying up 2 CPUs, if you go gpu = 0.50 and cpu = 1.0; I don't think it will go any faster.

The 0.49 CPU ratio works for me. It gives me the illusion that I am running 8 CPUs (4 + hyperthreading) completely untouched by feeding the GPU. Your mileage may vary, especially since you have 2 GPUs.
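To make that concrete, here is a sketch of the 2-tasks-per-GPU variant of the baseline app_config.xml posted earlier in the thread (only setiathome_v7 shown; adjust astropulse_v6 the same way if you run it on the GPU):

```xml
<app_config>
    <app>
        <name>setiathome_v7</name>
        <gpu_versions>
            <!-- 0.5 of a GPU per task, so BOINC runs 2 tasks on each GPU -->
            <gpu_usage>0.5</gpu_usage>
            <!-- below 1.0, so no whole CPU core is reserved per GPU task -->
            <cpu_usage>0.49</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```

Save it in the project directory, then use the manager's "Read config files" option (or restart BOINC) to apply it.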

Use the GPU utility to confirm that the GPU's are feeling well fed.

HTH,
Tom
A proud member of the OFA (Old Farts Association).
ID: 1553401
Darth Beaver
Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Australia
Message 1553573 - Posted: 8 Aug 2014, 8:42:19 UTC - in response to Message 1553401.  

Richard, I believe the app_info.xml installs with Lunatics.

As Tom says, no difference in speed.
ID: 1553573
Darth Beaver
Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Australia
Message 1553574 - Posted: 8 Aug 2014, 8:48:02 UTC - in response to Message 1553333.  

If you're not sure about app_config.xml, load Lunatics.
Sorry, I can't comment on building a config file; others will help there.
ID: 1553574
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1553584 - Posted: 8 Aug 2014, 9:21:53 UTC - in response to Message 1553573.  

Richard, I believe the app_info.xml installs with Lunatics.

As Tom says, no difference in speed.

App_info.xml is certainly created and installed by Lunatics, and is designed to exactly match and support the applications that the user chooses from the range offered and deployed by the installer. I write the installer, and I played a large part in developing that mechanism.

My remarks were about app_config.xml, which users are welcome to write for themselves. app_config.xml (which is much simpler to write and maintain) overrides the default settings in app_info.xml, so that configurations like the number of tasks to be run on the GPU can be applied safely, without the risk of damaging app_info.xml by inexperienced editing. For that reason, we - I - have designed the installer to leave app_config.xml in place when replacing all the other application files and building a fresh app_info.xml.
ID: 1553584
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1553619 - Posted: 8 Aug 2014, 12:57:38 UTC - in response to Message 1553584.  

Richard, I believe the app_info.xml installs with Lunatics.

As Tom says, no difference in speed.

App_info.xml is certainly created and installed by Lunatics, and is designed to exactly match and support the applications that the user chooses from the range offered and deployed by the installer. I write the installer, and I played a large part in developing that mechanism.

My remarks were about app_config.xml, which users are welcome to write for themselves. app_config.xml (which is much simpler to write and maintain) overrides the default settings in app_info.xml, so that configurations like the number of tasks to be run on the GPU can be applied safely, without the risk of damaging app_info.xml by inexperienced editing. For that reason, we - I - have designed the installer to leave app_config.xml in place when replacing all the other application files and building a fresh app_info.xml.

One of the risks of making a mistake in app_info.xml is BOINC abandoning all of your work or removing the optimized apps, which can happen even when you think you know what you are doing.
Messing up an app_config.xml, I haven't found a way for BOINC to do anything bad to me... yet. Possibly leaving out a decimal point and running 10 tasks instead of 1.0 might be the worst that could happen.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1553619
Darth Beaver
Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Australia
Message 1553909 - Posted: 8 Aug 2014, 21:33:55 UTC

Sorry, Richard, my messages were to Bill; I got confused with the names. But thanks for the explanation.
ID: 1553909
©2025 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.