Posts by MadMaC


21) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030029)
Posted 1326 days ago by Profile MadMaC
OK, cheers for that - will see what I can find..

edit

I have 18 slot folders!!

Folder 0 is seti
1 is MW@home
2 is seti

example output from stderr in folder 0

setiathome_CUDA: Found 3 CUDA device(s):
Device 1: GeForce GTX 480, 1503 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
clockRate = 810000
Device 2: GeForce GTX 470, 1248 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 14
clockRate = 810000
Device 3: GeForce GTX 470, 1248 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 14
clockRate = 810000
setiathome_CUDA: CUDA Device 1 specified, checking...
Device 1: GeForce GTX 480 is okay
SETI@home using CUDA accelerated device GeForce GTX 480
Priority of process raised successfully
Priority of worker thread raised successfully
size 8 fft, is a freaky powerspectrum
size 16 fft, is a cufft plan
size 32 fft, is a cufft plan
size 64 fft, is a cufft plan
size 128 fft, is a cufft plan
size 256 fft, is a freaky powerspectrum
size 512 fft, is a freaky powerspectrum
size 1024 fft, is a freaky powerspectrum
size 2048 fft, is a cufft plan
size 4096 fft, is a cufft plan
size 8192 fft, is a cufft plan
size 16384 fft, is a cufft plan
size 32768 fft, is a cufft plan
size 65536 fft, is a cufft plan
size 131072 fft, is a cufft plan

) _ _ _)_ o _ _
(__ (_( ) ) (_( (_ ( (_ (
not bad for a human... _)

Multibeam x32f Preview, Cuda 3.0

Work Unit Info:
...............
WU true angle range is : 0.423779

will carry on looking at the others..

OK - I realise that each slot represents a task in progress - how can I link this to one of the tasks that took 57 mins?
I presume that will be in client_state.xml? What do I need to look for to identify one of the long-running tasks in client_state.xml?
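For anyone else digging through this later: the usual trick is to read the `<active_task>` entries in client_state.xml, which pair a slot number with a task (result) name. A minimal sketch, assuming the `<slot>` and `<result_name>` child elements found in typical BOINC client_state files (check your own copy; the sample data below is invented):

```python
# Sketch: map BOINC slot numbers to task names by parsing client_state.xml.
# The <active_task> layout (with <result_name> and <slot> children) is an
# assumption based on typical BOINC client_state files.
import xml.etree.ElementTree as ET

SAMPLE = """<client_state>
  <active_task_set>
    <active_task>
      <result_name>01ap10ab.12345.6789.10.10.123_1</result_name>
      <slot>0</slot>
    </active_task>
    <active_task>
      <result_name>de_nbody_1234_567</result_name>
      <slot>1</slot>
    </active_task>
  </active_task_set>
</client_state>"""

def slot_map(xml_text):
    """Return {slot_number: result_name} for every active task."""
    root = ET.fromstring(xml_text)
    return {
        int(task.findtext("slot")): task.findtext("result_name")
        for task in root.iter("active_task")
    }

print(slot_map(SAMPLE))
```

Once you have the slot-to-name map, you can match the name against the tasks list on the website to find which slot held the 57-minute task.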
22) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030019)
Posted 1326 days ago by Profile MadMaC
You may have been swamped with a bunch of VLAR resends like I was after the prior outage was over - probably ghosts from a while back timing out, since new tasks don't issue VLARs to GPUs (well, not supposed to, anyway).

Here, I clock the 480 @ 801 MHz; 'normal' mid angle ranges take 12-13 mins running 2 at a time. When VLARs hit, they were more like an hour or more for two.

I decided to just leave the machine to munch through them, since it could handle them OK ... Not much fun, though I seem to be through the worst of them now ;)

Jason


Using Fred's rescheduler, it shows me as having 14 VLARs, all assigned to the CPU. I assumed that meant I had no VLARs on the GPU?
I had both my 470s clocked to 875 @ 1037 mV, but the 480 won't go above 750 without producing errors.
23) Message boards : Number crunching : Concerned over cudu wu completion times?? (Message 1030015)
Posted 1326 days ago by Profile MadMaC
Is this normal or a symptom of a problem?

I'm seeing some CUDA WUs take an hour to complete!
I have no VLARs or VHARs assigned to the GPU.
GPU 1 = GTX 480
GPUs 2 & 3 = GTX 470

They were clocked a lot higher, but I backed the clocks off as temps were getting a bit high.
Is there an easy way to find out which card is crunching the longer units, to see if it's the same card each time?
GPU usage is generally around 85-90% on average..

24) Message boards : Number crunching : discount bang-per-buck Seti@home cruncher (Message 1029868)
Posted 1327 days ago by Profile MadMaC
A GTX 460 can do 3 WUs in 30-ish mins when overclocked to 800 MHz; prices are around £140 for the basic model here over the pond from you...
Might be a better idea than the 8800GTs..

Also, even allowing for overclocking, you would be better off filling the machine with GPUs rather than going for the i7, as even a lowly 8800GT will beat an i7 for PPD.
25) Message boards : Number crunching : Spit gpu's accross projects (Message 1027432)
Posted 1336 days ago by Profile MadMaC
OK - thanks for letting me know...
26) Message boards : Number crunching : Spit gpu's accross projects (Message 1027391)
Posted 1336 days ago by Profile MadMaC
I know about editing cc_config to use one GPU or the other, but is there a way to split the usage?

eg.

GPU 1 crunching seti
GPU 2 crunching einstein

I was wondering if this is possible...
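Editor's note for later readers: newer BOINC clients did gain an `<exclude_gpu>` option in cc_config.xml that keeps a project off a given device, which achieves this split. A hypothetical sketch (device numbers and the exact layout should be checked against your client version's documentation):

```xml
<!-- Hypothetical cc_config.xml sketch: later BOINC clients support
     <exclude_gpu> inside <options> to keep a project off a device. -->
<cc_config>
  <options>
    <!-- keep SETI@home off device 1, so it runs on device 0 only -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <!-- keep Einstein@Home off device 0, so it runs on device 1 only -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```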

27) Message boards : Number crunching : whats your oldest pending task? (Message 1027380)
Posted 1336 days ago by Profile MadMaC
cool - understood

Thx
28) Message boards : Technical News : Mordent (Aug 19 2010) (Message 1027370)
Posted 1336 days ago by Profile MadMaC
Thanks for the update Matt

At least with the 3 day outage you are getting some time to work on these things now, however frustrating they might be!!

good luck
29) Message boards : Number crunching : whats your oldest pending task? (Message 1027366)
Posted 1336 days ago by Profile MadMaC
I was interested to see this on my rigs, but then got fed up clicking through all my pending - when you have 500K of pending, clicking through each page at 20 a time becomes too tiresome..

Try editing the number in the address bar ;)
You could advance at any rate with this "feature" ;)


Thank you for that - I tried playing around with the offset value in the address bar, but it's still showing 20 tasks at a time.

I tried


http://setiathome.berkeley.edu/results.php?userid=79730&offset=220&show_names=0&state=2

then

http://setiathome.berkeley.edu/results.php?userid=79730&offset=1-220&show_names=0&state=2

Which bit should I change?
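For the record: the `offset` parameter is the index of the first result shown, and the page size stays fixed at 20, so the trick is to jump forward by bumping `offset` (0, 20, 40, ... or bigger jumps), not to write a range like `1-220`. A quick sketch generating page URLs, using the userid from the post above:

```python
# Sketch: the results page shows 20 tasks starting at `offset`,
# so paging forward means stepping offset (offset=0, 20, 40, ...).
BASE = ("http://setiathome.berkeley.edu/results.php"
        "?userid=79730&offset={offset}&show_names=0&state=2")

def page_urls(n_pages, page_size=20):
    """Build URLs for the first n_pages of results."""
    return [BASE.format(offset=i * page_size) for i in range(n_pages)]

for url in page_urls(3):
    print(url)
```

Larger `page_size` steps skip ahead faster, at the cost of not seeing the skipped pages.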
30) Message boards : Number crunching : New rescheduler (Message 1027327)
Posted 1336 days ago by Profile MadMaC
I use the Lunatics rescheduler - v1.9 for moving VLARs from GPU to CPU and v1.7 for moving VLARs back to GPU - that was from when we had the VLAR storm and that was all I could get!
Is there any advantage to using Fred's tool over the Lunatics version, or am I better off sticking to what I know..
31) Message boards : Number crunching : whats your oldest pending task? (Message 1027320)
Posted 1336 days ago by Profile MadMaC
I was interested to see this on my rigs, but then got fed up clicking through all my pending - when you have 500K of pending, clicking through each page at 20 a time becomes too tiresome..
32) Message boards : Number crunching : Unexplained cpu errors (Message 1025868)
Posted 1341 days ago by Profile MadMaC
Thanks for the input - I don't think it was heat-related, as the rig is in an air-conditioned room.
I have rebooted the box and will see how it goes running one task..
33) Message boards : Number crunching : Unexplained cpu errors (Message 1025807)
Posted 1341 days ago by Profile MadMaC
Not sure why this is happening...
I can't check the ones in the image as uploads aren't working, but browsing through tasks for the machine I can see errors stating:

<core_client_version>6.10.56</core_client_version>
<![CDATA[
<message>
- exit code -12 (0xfffffff4)
</message>
<stderr_txt>






</stderr_txt>
]]>



The machine in question is

http://setiathome.berkeley.edu/results.php?hostid=5424775

I'm suspending CPU calculations for now and apologise to my wingmen :-(
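An aside on that "exit code -12 (0xfffffff4)" line: the two numbers are the same 32-bit value read as signed versus unsigned (two's complement), so the hex form isn't extra information. A quick check in plain Python, nothing BOINC-specific:

```python
# Sketch: exit code -12 and 0xfffffff4 are the same 32-bit bit pattern,
# interpreted as signed vs. unsigned (two's complement).
def to_unsigned32(signed):
    """Reinterpret a signed 32-bit value as unsigned."""
    return signed & 0xFFFFFFFF

def to_signed32(unsigned):
    """Reinterpret an unsigned 32-bit value as signed."""
    return unsigned - 0x100000000 if unsigned >= 0x80000000 else unsigned

print(hex(to_unsigned32(-12)))   # 0xfffffff4
print(to_signed32(0xFFFFFFF4))   # -12
```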
34) Message boards : SETI@home Science : SETI - The science behind the search (Message 1025275)
Posted 1343 days ago by Profile MadMaC

Even if we don't find anything, that in itself is interesting in many ways.

Also, we are likely to find many other unexpected things along the way...


Happy crunchin',
Martin




But are we??? I'm curious, as after nearly ten years of crunching I have no idea what has been achieved..

http://www.bbc.co.uk/news/science-environment-10959590

Has SETI discovered anything by accident while searching for ET? After all these years and millions of WUs crunched, what has been the result?
I'm not ranting - my credit shows I'm serious - but I'm curious as to what they might have found as a by-product of the search for ET. If so, where would it be announced? Have they found anything?
35) Message boards : Number crunching : Ghost work units (Message 1023920)
Posted 1349 days ago by Profile MadMaC
Nice one..
Cheers for the explanation..
36) Message boards : Number crunching : Ghost work units (Message 1023909)
Posted 1349 days ago by Profile MadMaC
I'm not sure if I have any, but my pending is shooting through the roof, so I'm guessing that I do. Surely the best thing to do is just wait, and then the units will get resent out and we will get the credit for them, just a bit later than usual?
Not quite sure I understand the concept, to be honest...
37) Message boards : Number crunching : testing and benchmarking GPU performance (Message 1021269)
Posted 1358 days ago by Profile MadMaC
I want to find out how far I can push my 2 Fermi cards (470 & 480). I'm interested in finding out which works best, 3 WUs a card or 2 WUs a card, as it's hard to tell at the moment with credit so screwed up from the outage.

Are there test WUs I can download and use to test performance?
What is the best way to benchmark performance?
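One credit-free way to compare 2-up against 3-up is raw throughput: time a few similar-angle-range WUs at each setting and compare tasks completed per hour. A toy calculation (the minute figures here are invented for illustration only):

```python
# Sketch: compare GPU settings by throughput (WUs/hour) rather than credit.
# The batch times below are made-up example numbers, not measurements.
def wus_per_hour(concurrent, minutes_per_batch):
    """Tasks finished per hour when `concurrent` WUs complete together."""
    return concurrent * 60.0 / minutes_per_batch

two_up = wus_per_hour(2, 24)    # e.g. 2 WUs finishing together in 24 min
three_up = wus_per_hour(3, 33)  # e.g. 3 WUs finishing together in 33 min
print(two_up, three_up)
```

Whichever setting yields the higher figure on comparable WUs is the better one for that card, regardless of what credit says.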
38) Message boards : Number crunching : Running SETI@home on an nVidia Fermi GPU (Message 1021130)
Posted 1358 days ago by Profile MadMaC
How do you configure the flops value if you are running more than one type of Fermi card? I have a 470 and a 480 in the same system..
The last thing I read advised against using flops for the Fermi cards - has this now changed, and is it now the way to go??

Thx
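For context on why mixed cards are awkward here: in anonymous-platform setups the flops value sits inside an `<app_version>` block in app_info.xml, so one figure covers every card running that app version; with a mixed 470/480 box you could only set a compromise value, not a per-card one. A hypothetical fragment (element layout follows common BOINC app_info files; the numbers are invented):

```xml
<!-- Hypothetical app_info.xml fragment; all values illustrative only. -->
<app_version>
  <app_name>setiathome_enhanced</app_name>
  <version_num>608</version_num>
  <plan_class>cuda_fermi</plan_class>
  <!-- one flops estimate per app version, shared by every card using it -->
  <flops>1.0e11</flops>
</app_version>
```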
39) Message boards : Number crunching : Fermi app claimed credits? (Message 1019807)
Posted 1363 days ago by Profile MadMaC
They have done away with the claimed credit column, what you are seeing is the CPU time.


I'm a muppet - I just noticed that - doh

Still a massive difference, though, between various WUs.
I'd expect one value range for the shorties and one for the longer WUs.
40) Message boards : Number crunching : Fermi app claimed credits? (Message 1019801)
Posted 1363 days ago by Profile MadMaC
Just been looking at the results for my Fermi card, and the claimed and granted credit are way out.

http://setiathome.berkeley.edu/results.php?hostid=5424775&offset=0&show_names=0&state=3

Is this normal - do I need to revert back to stock?
Slightly concerned at the differences...

Anyone else seeing the same??



Copyright © 2014 University of California