Posts by MadMaC

1) Message boards : Number crunching : Got an AP WU for my NVIDIA GPU! (Message 1279270)
Posted 2 Sep 2012 by Profile MadMaC
Post:
[quote]
Seeing as you are running under anonymous platform you did update your app_info file and get the apps Claggy linked to didn't you? By the way it's an OpenCL app not a cuda one, so nothing to do with the cuda_fermi plan class.



Just checking: doesn't the Lunatics installer update app_info.xml? I wanted to check, otherwise it means I have to go off and edit mine...
2) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1279192)
Posted 2 Sep 2012 by Profile MadMaC
Post:
What an absolutely brilliant bit of software - hats off to you my man, and thx for the effort :-)
3) Message boards : Number crunching : Is there an issue with reporting work?? (Message 1276601)
Posted 28 Aug 2012 by Profile MadMaC
Post:
The site is out of date, as I upgraded to 7.0.28 this morning when I arrived, thinking that might be the issue.
I downgraded to 6.10.58 a while back when there was the work fetch issue, as 6.10.58 was much better at getting WUs than the newer code.
I'm now on the newest version available.

Fred, Richard, the <max_tasks_reported> fix you both suggested has worked just fine and I am reporting as we speak, so I am happy. I will see how the new client works out; if it causes issues, I will run down all my WUs and then install 6.10.58 again, but for now I'm willing to give it a go.

Many thanks guys for your prompt replies and help - it is appreciated
4) Message boards : Number crunching : Is there an issue with reporting work?? (Message 1276587)
Posted 28 Aug 2012 by Profile MadMaC
Post:
Just got back from 3 weeks' leave and, when checking my rig, noticed that it hasn't been reporting work, or at least 90% of my work, as I have nearly 5,500 tasks completed and ready to report.

I keep getting
28/08/2012 08:39:18 | SETI@home | Scheduler request failed: HTTP internal server error

Is there an issue reporting work at the moment? When I check other people's stats, they seem to be getting work through OK.

I'm on the latest client...
5) Message boards : Number crunching : Slow gpu comlpetion times (Message 1069094)
Posted 21 Jan 2011 by Profile MadMaC
Post:
Fred's rescheduler app isn't seeing them as VLARs; maybe it is just a bunch of slow ones..
As Mark said, someone has to crunch them; I just wish it wasn't me...
6) Message boards : Number crunching : Slow gpu comlpetion times (Message 1069036)
Posted 21 Jan 2011 by Profile MadMaC
Post:
I'm seeing some really slow completion times on my Fermis, and have been since last night.

Look at the GPU usage: averaging 10% with 3 WUs per card..
It's the same if I drop to one WU per card.



Have I just got a dodgy batch, or is anyone else seeing this?
7) Message boards : Number crunching : Is the GTX 295 still the best card for CUDA? (Better than a GTX 470?) (Message 1050400)
Posted 22 Nov 2010 by Profile MadMaC
Post:
Yes, the 295 is still the card to beat. Although on some projects, ATI is a killer.

Not here, though.

Two 295's in one rig are a powerful couple of crunchers.

I have two 465s in another rig, and they are rather lame.

Very disappointed in them, I be.



Mark, you can run up to 3 WUs per card on Fermi - that will bump your RAC up :-)
8) Message boards : Number crunching : Oh no - my PC has gone! (Message 1039163)
Posted 7 Oct 2010 by Profile MadMaC
Post:
We donate our used machines to charity, and if I bought it I'd have to take it off site, and there is no way I would get any more machines past the missus!
As for moving to another case, nothing is standard about a Dell, so it would be a nightmare job..
I have already asked for my old machine back, but they want lower power draw from all machines, so no hope there, and it is known that I have SETI set to use 100% of everything, so I'm a prime candidate for saving the money :-)

Looks like I will be putting the 250 on the bay; it's been a good little card, and a faithful machine - 3 million is not to be sniffed at..

I will check out the other cards, but I doubt they will be worth it..
9) Message boards : Number crunching : Oh no - my PC has gone! (Message 1039090)
Posted 7 Oct 2010 by Profile MadMaC
Post:
Damn PC upgrades!

Work have replaced my 3-year-old desktop with a nice new shiny Dell Optiplex 380.
Not only am I dropping from a dual-core Pentium D to a dual-core Celeron, my Nvidia GTS 250 no longer fits!
I do have a PCI-E slot in the machine, but it will need to be a half-height card to fit in the chassis.
Do Nvidia do any half-decent half-height cards suitable for crunching SETI?

Sad to see the old machine go; it had crunched 3.2 million credits for me and was a trusty workhorse.
10) Message boards : Number crunching : Application: Ghost Detector - find out how big your Ghost Army is (Message 1035554)
Posted 24 Sep 2010 by Profile MadMaC
Post:
I'm sure they will get it sorted eventually; apart from their server resources, there is no real harm done.



Looks like I have a few, just keep on crunching, they will disappear eventually :-)
11) Message boards : Number crunching : Quick fundraiser for SETI's new server (Message 1034449)
Posted 19 Sep 2010 by Profile MadMaC
Post:
I will still try and contribute to the warranty next month, but for now...

Thank you for your gift of $10.00 via Mastercard on 09/19/2010.
Your gift was assigned to the following areas:

SETI@home - $10.00

Your confirmation number is:
63018
12) Message boards : Number crunching : Quick fundraiser for SETI's new server (Message 1033955)
Posted 18 Sep 2010 by Profile MadMaC
Post:
If you can hang on until the end of the month (payday for me) I am happy to help...
13) Message boards : Number crunching : GTX460 or 465, what's faster @ S@H CUDA? (Message 1033917)
Posted 18 Sep 2010 by Profile MadMaC
Post:
The Point of View 465s are in fact BIOS-limited 470s:

http://forums.overclockers.co.uk/showthread.php?t=18181584
14) Message boards : Number crunching : Orbit at Home is now giving out WUs! (Message 1030823)
Posted 4 Sep 2010 by Profile MadMaC
Post:
Nein Sutaru. This is homebase for most Setizens who listen. Have a backup project.


In this house, backup projects are forbidden. Anyone under my roof crunching anything other than SETI will be thrown out onto the streets, with her/his computer and other belongings.

It's SETI or nothing, until ET is found, interrogated, and dealt with according to galactic law.



I have

Einstein@home
Milkyway@home
lhc@home
Rosetta

My GPUs crunch SETI 24/7, but my CPUs share resources...
SETI is and always will be my primary project, but for all the crunching there have been no results, and I have no idea whether the data has actually achieved anything, so I do other projects to try and maximise my contribution to science in case SETI draws a blank.
15) Message boards : Number crunching : Panic Mode On (38) Server problems (Message 1030789)
Posted 4 Sep 2010 by Profile MadMaC
Post:
Ran the cache dry to install v0.37 on a C2D with a GT 240.

Asked for new work and was promptly hit with around 120 ghosts.

Bye bye with a detach/reattach; will try again tomorrow. :(


Got Orbit to keep it busy till then...



How can you tell if you have a ghost unit?
16) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030422)
Posted 3 Sep 2010 by Profile MadMaC
Post:
Might have spoken too soon.

I'm seeing my GPU usage drop back down for extended periods, even though everything seems to be OK.

Is this normal for 3 cards running 3 WUs per card?

17) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030415)
Posted 3 Sep 2010 by Profile MadMaC
Post:
OK, it's early days, but I think after 2 days of testing I might have got to the bottom of my GPU workunits taking 40-50 mins.
I went down the whole reducing-clock-speeds route, which did diddly squat..
Watching Task Manager, all 4 CPUs were maxed out at 100%, and the clue came from watching the load time for the CPU to pass a WU to the GPU: on some WUs it was taking 2 mins before I saw any sign of crunching. This, coupled with some earlier comments about GPU usage levels (mine were all over the place!), made me think that the CPU was possibly the cause.
Playing around with the <avg_ncpus> value, it would seem that the long workunit times were down to a CPU bottleneck: one CPU couldn't keep the GPUs fed fast enough.
After playing around with multiple WU/card combos, I have set the following:

<avg_ncpus>0.223333</avg_ncpus>
This means I have 2 CPUs crunching and 2 feeding the GPUs.

I lose an additional processor, but in 12 hrs I have not had a WU take longer than 28:49 :-) at stock clocks.
Task Manager is showing average CPU usage of around 92-95%; that is with 2 CPU cores crunching and 2 feeding the GPUs, which are running 3 WUs each.
My GPU usage is also steady at around 90%, though it does dip to the 60% mark every now and then.

This means I have slightly more playing around to do with the <avg_ncpus> value, as there is 5% headroom, which is wasted at the moment.
I will of course be ramping the GPU clocks right back up to the 800s as soon as I can..

Thank god for that - it was really bugging me!
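For anyone wanting to reproduce the arithmetic: <avg_ncpus> is a per-GPU-task value, so it works out as the total CPU cores you want reserved for feeding, divided by the number of concurrent GPU tasks. A minimal sketch of that calculation (the card and task counts here are the ones from this post, not a general recommendation):

```python
def avg_ncpus(reserved_cpu_cores, gpu_cards, tasks_per_card):
    """Per-task <avg_ncpus>: reserved cores spread across all concurrent GPU tasks."""
    total_gpu_tasks = gpu_cards * tasks_per_card
    return reserved_cpu_cores / total_gpu_tasks

# 2 cores feeding 3 cards running 3 WUs each -> roughly the 0.223333 used above
value = avg_ncpus(2, 3, 3)
print(round(value, 6))  # 0.222222
```

The 0.223333 actually used is just this value nudged up slightly so BOINC rounds the reservation to a full 2 cores.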
18) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030052)
Posted 1 Sep 2010 by Profile MadMaC
Post:
Thanks Tim..

This machine is purely used for crunching..
I don't think they are all from the same range..



I will keep an eye on it for the next day and see how things go...
19) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030044)
Posted 1 Sep 2010 by Profile MadMaC
Post:
OK, here is another - at least it isn't the 480 this time

<result>
<name>01jn10ac.760.68450.16.10.91_1</name>
<final_cpu_time>324.201300</final_cpu_time>
<final_elapsed_time>3667.743784</final_elapsed_time>
<exit_status>0</exit_status>
<state>4</state>
<platform>windows_intelx86</platform>
<version_num>608</version_num>
<plan_class>cuda</plan_class>
<fpops_cumulative>95706580000000.000000</fpops_cumulative>
<stderr_out>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 3 CUDA device(s):
Device 1: GeForce GTX 480, 1503 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
clockRate = 810000
Device 2: GeForce GTX 470, 1248 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 14
clockRate = 810000
Device 3: GeForce GTX 470, 1248 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 14
clockRate = 810000
setiathome_CUDA: CUDA Device 2 specified, checking...
Device 2: GeForce GTX 470 is okay
SETI@home using CUDA accelerated device GeForce GTX 470
Priority of process raised successfully
Priority of worker thread raised successfully
size 8 fft, is a freaky powerspectrum
size 16 fft, is a cufft plan
size 32 fft, is a cufft plan
size 64 fft, is a cufft plan
size 128 fft, is a cufft plan
size 256 fft, is a freaky powerspectrum
size 512 fft, is a freaky powerspectrum
size 1024 fft, is a freaky powerspectrum
size 2048 fft, is a cufft plan
size 4096 fft, is a cufft plan
size 8192 fft, is a cufft plan
size 16384 fft, is a cufft plan
size 32768 fft, is a cufft plan
size 65536 fft, is a cufft plan
size 131072 fft, is a cufft plan

) _ _ _)_ o _ _
(__ (_( ) ) (_( (_ ( (_ (
not bad for a human... _)

Multibeam x32f Preview, Cuda 3.0

Work Unit Info:
...............
WU true angle range is : 0.419844

Flopcounter: 33576273578467.793000

Spike count: 2
Pulse count: 1
Triplet count: 0
Gaussian count: 0
called boinc_finish

I can't see any heavy processes running, but maybe it could be one of two things?

Either I haven't allowed enough CPU to each GPU
(currently set to <avg_ncpus>0.111120</avg_ncpus>),

or it's that I have a quad-core Phenom II, the GPUs take a total of 1.00008 CPU cores, and I crunch a mixture of MW@home, Einstein@home and Rosetta on the remaining 3 cores, so the CPU is pretty much maxed out all the time.
20) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030037)
Posted 1 Sep 2010 by Profile MadMaC
Post:
Not sure if I am going down the right road here, but taking the name of a task that took 56:53, I then searched client_state.xml, and below is every reference I found for that task....

task = 01jn10ac.760.68450.16.10.141



<file_info>
<name>01jn10ac.760.68450.16.10.141</name>
<nbytes>375194.000000</nbytes>
<max_nbytes>0.000000</max_nbytes>
<md5_cksum>9eba64fc7c47fbe267c18ee2ffcf4c11</md5_cksum>
<status>1</status>
<url>http://boinc2.ssl.berkeley.edu/sah/download_fanout/97/01jn10ac.760.68450.16.10.141</url>
</file_info>


<file_info>
<name>01jn10ac.760.68450.16.10.141_0_0</name>
<nbytes>23643.000000</nbytes>
<max_nbytes>65536.000000</max_nbytes>
<md5_cksum>c8b1eca16e1cb3faa0f36919b32ed4ac</md5_cksum>
<generated_locally/>
<status>1</status>
<upload_when_present/>
<url>http://setiboincdata.ssl.berkeley.edu/sah_cgi/file_upload_handler</url>
<persistent_file_xfer>
<num_retries>0</num_retries>
<first_request_time>1283370201.441504</first_request_time>
<next_request_time>1283370201.441504</next_request_time>
<time_so_far>0.000000</time_so_far>
<last_bytes_xferred>0.000000</last_bytes_xferred>
</persistent_file_xfer>
<signed_xml>
<name>01jn10ac.760.68450.16.10.141_0_0</name>
<generated_locally/>
<upload_when_present/>
<max_nbytes>65536</max_nbytes>
<url>http://setiboincdata.ssl.berkeley.edu/sah_cgi/file_upload_handler</url>
</signed_xml>
<xml_signature>
c0700120c10e4b2b270e774536f4ef042e8e6856f68f47729bda526b73ad4c16
6b499873b789a5f25cc9e8d79a85943af1a78a177f05418a23b3abfac5e0a6e2
9cf00bbb0128e3ca2250641f4e3f45e05b9d1366ea2f06d3a2e396cf50b964b6
48e8b94b8a4f13d6551742cbb220b86215531266d4ff82bf066399a5a9fb5152


<workunit>
<name>01jn10ac.760.68450.16.10.141</name>
<app_name>setiathome_enhanced</app_name>
<version_num>608</version_num>
<rsc_fpops_est>4082840316064.331500</rsc_fpops_est>
<rsc_fpops_bound>40828403160643.312000</rsc_fpops_bound>
<rsc_memory_bound>33554432.000000</rsc_memory_bound>
<rsc_disk_bound>33554432.000000</rsc_disk_bound>
<file_ref>
<file_name>01jn10ac.760.68450.16.10.141</file_name>
<open_name>work_unit.sah</open_name>
</file_ref>
</workunit>




</result>
<result>
<name>01jn10ac.760.68450.16.10.141_0</name>
<final_cpu_time>327.508500</final_cpu_time>
<final_elapsed_time>3413.123220</final_elapsed_time>
<exit_status>0</exit_status>
<state>4</state>
<platform>windows_intelx86</platform>
<version_num>608</version_num>
<plan_class>cuda</plan_class>
<fpops_cumulative>95706580000000.000000</fpops_cumulative>
<stderr_out>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 3 CUDA device(s):
Device 1: GeForce GTX 480, 1503 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
clockRate = 810000
Device 2: GeForce GTX 470, 1248 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 14
clockRate = 810000
Device 3: GeForce GTX 470, 1248 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 14
clockRate = 810000
setiathome_CUDA: CUDA Device 1 specified, checking...
Device 1: GeForce GTX 480 is okay
SETI@home using CUDA accelerated device GeForce GTX 480
Priority of process raised successfully
Priority of worker thread raised successfully
size 8 fft, is a freaky powerspectrum
size 16 fft, is a cufft plan
size 32 fft, is a cufft plan
size 64 fft, is a cufft plan
size 128 fft, is a cufft plan
size 256 fft, is a freaky powerspectrum
size 512 fft, is a freaky powerspectrum
size 1024 fft, is a freaky powerspectrum
size 2048 fft, is a cufft plan
size 4096 fft, is a cufft plan
size 8192 fft, is a cufft plan
size 16384 fft, is a cufft plan
size 32768 fft, is a cufft plan
size 65536 fft, is a cufft plan
size 131072 fft, is a cufft plan

) _ _ _)_ o _ _
(__ (_( ) ) (_( (_ ( (_ (
not bad for a human... _)

Multibeam x32f Preview, Cuda 3.0

Work Unit Info:
...............
WU true angle range is : 0.419844

Flopcounter: 33576273578419.793000

Spike count: 0
Pulse count: 0
Triplet count: 0
Gaussian count: 0
called boinc_finish
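The manual search above can be scripted. A minimal sketch, assuming a well-formed client_state.xml with <client_state> as the root (the file contents and task name below are a toy stand-in, not a real client state):

```python
import xml.etree.ElementTree as ET

def find_task_entries(xml_text, task_name):
    """Tags of every top-level entry (file_info, workunit, result...) whose <name> starts with task_name."""
    root = ET.fromstring(xml_text)
    hits = []
    for elem in root:
        name = elem.findtext("name", default="")
        if name.startswith(task_name):
            hits.append(elem.tag)
    return hits

# Tiny self-contained example standing in for a real client_state.xml
sample = """<client_state>
  <file_info><name>01jn10ac.760.68450.16.10.141</name></file_info>
  <workunit><name>01jn10ac.760.68450.16.10.141</name></workunit>
  <result><name>01jn10ac.760.68450.16.10.141_0</name></result>
  <result><name>some.other.task_0</name></result>
</client_state>"""

print(find_task_entries(sample, "01jn10ac.760.68450.16.10.141"))
# ['file_info', 'workunit', 'result']
```

Matching on the name prefix also picks up the per-result entries like _0 and _0_0, which is what the manual search above was doing by hand.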


©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.