Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again)

Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again)
Profile Bernie Vine
Volunteer moderator
Volunteer tester
Joined: 26 May 99
Posts: 9949
Credit: 103,452,613
RAC: 328
United Kingdom
Message 1794188 - Posted: 7 Jun 2016, 8:10:46 UTC

Stephen, rather than take this thread off topic, I have sent you a PM.


Looking at these results, I think I might take a stab at it, but how much configuring does it need? Or is it fairly automated?


I just ran the installer; the only thing I changed was to select the SoG option instead of CUDA.

I have to say that today the graph has started a downturn, but it is only one day, so we will see.

My second machine is still crunching the CUDA backlog.
ID: 1794188
Rasputin42
Volunteer tester

Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1794229 - Posted: 7 Jun 2016, 11:00:32 UTC

Is there an updated list of all cuda-app parameters?
"-poll", for example, is not listed.
Is there an equivalent to "-sbs xxx"?
ID: 1794229
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1794231 - Posted: 7 Jun 2016, 11:12:44 UTC - in response to Message 1794229.  
Last modified: 7 Jun 2016, 11:13:17 UTC

Is there an updated list of all cuda-app parameters?
"-poll", for example, is not listed.
Is there an equivalent to "-sbs xxx"?


No. The 'supported' parameters are given in the readme and sample. The -poll option is a vestigial one that I maintained but never promoted, due to circumstances and tester sentiment at the time it was made. Since a small number of people find it useful, it will most likely be exposed in mbcuda.cfg and the readmes in due course.
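(For readers unfamiliar with where such settings live: mbcuda.cfg is a plain INI-style file that sits alongside the CUDA executable. The fragment below is illustrative only; the keys shown are the ones commonly documented in the Lunatics readme, and -poll had not yet been exposed there at the time of this post.)

```ini
; mbcuda.cfg - illustrative fragment only; the readme shipped with the
; installer is the authoritative list of supported keys.
[mbcuda]
; process priority of the app (documented values include abovenormal)
processpriority = abovenormal
; pulse-finding tuning knobs
pfblockspersm = 8
pfperiodsperlaunch = 200
```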
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1794231
Rasputin42
Volunteer tester

Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1794232 - Posted: 7 Jun 2016, 11:16:54 UTC - in response to Message 1794231.  

Thanks, jason_gee.
ID: 1794232
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1794621 - Posted: 9 Jun 2016, 4:47:05 UTC - in response to Message 1793903.  

. . . Hello Richard,

. . . You haven't mentioned it, but is there any chance that SSE4.1 support has been added in 0.45 Beta? I need to deploy it on my system with the GT730, which is Core2 Duo based. It might help things along if SSE4.1 is available.

Stephen
ID: 1794621
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 64943
Credit: 55,293,173
RAC: 49
United States
Message 1794631 - Posted: 9 Jun 2016, 5:15:11 UTC
Last modified: 9 Jun 2016, 5:23:42 UTC

My main problem, even though I have the sleep command running, is that I have a gpu wu 'running HP' and a gpu wu 'waiting to run'. It's probably nothing, but I thought I'd mention it.

I have 3 cpu and 3 gpu wu's running, plus some SoG has been downloaded.

I'm also getting this, note the days figure:

SETI@home 8.00 setiathome_v8 (cuda42) 25se10ad.23501.12750.7.34.2_1 00:49:30 (00:00:10) 0.35 0.001 3372d,22:39:51 71.0 °C 0.04C + 0.33NV Running Pegasus


This one is going to go 'Waiting to run'; how can I stop this from happening?

Help...

The days figure is growing...
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1794631
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6324
Credit: 106,370,077
RAC: 121
Russia
Message 1794633 - Posted: 9 Jun 2016, 5:24:56 UTC - in response to Message 1794631.  

Re-read the ReadMe and set the -instances_per_device N parameter correctly for running a few tasks per GPU.
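(For the OpenCL/SoG builds, these switches normally go in the app's command-line text file next to the executable; the exact file name varies by build, and something like mb_cmdline_win_x86_SSE3_OpenCL_NV.txt is an assumption here. A minimal example for two tasks per GPU, with -sbs, the single buffer size in MB, shown only as an illustration:)

```
-use_sleep -instances_per_device 2 -sbs 192
```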
ID: 1794633
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 64943
Credit: 55,293,173
RAC: 49
United States
Message 1794645 - Posted: 9 Jun 2016, 6:41:38 UTC - in response to Message 1794633.  

Like so?

-use_sleep -instances_per_device N : 3
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1794645
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14509
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1794655 - Posted: 9 Jun 2016, 7:27:06 UTC - in response to Message 1794621.  

. . . Hello Richard,

. . . You haven't mentioned it, but is there any chance that SSE4.1 support has been added in 0.45 Beta?

No, it hasn't - no developer has supplied me with any updated CPU applications since the v0.44 launch to support SaH v8.

I need to deploy it on my system with the GT730, which is Core2 Duo based. It might help things along if SSE4.1 is available.

Stephen

You don't strictly 'need' it. SIMD hardware support is cumulative - extra capabilities are added to newer CPU designs, but the old ones are never removed. There are one or two gaps where Intel and AMD followed different pathways for a while, but during that phase of development, the incremental steps were relatively small.

Sure, SSSE3 and SSE4.1 would be 'nice to have', but your Core2 Duo will get along pretty well with SSE3 until the developers can catch their breath and regroup.
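(Richard's "cumulative" point can be sketched as a toy selection routine: since each new SIMD level keeps all the older ones, the newest flag a CPU reports determines the best build it can run. This is illustrative only; the level list and the best_build name are made up here, and the real installer's selection logic is not shown.)

```python
# Illustrative only: SIMD feature levels are cumulative, so the newest
# flag a CPU reports determines the best app build it can run.
LEVELS = ["sse", "sse2", "sse3", "ssse3", "sse4_1", "avx"]  # oldest -> newest

def best_build(cpu_flags):
    """Return the newest SIMD level present in cpu_flags, or 'generic'."""
    supported = [lvl for lvl in LEVELS if lvl in cpu_flags]
    return supported[-1] if supported else "generic"

# An early Core2 Duo (Conroe) reports SSSE3 but not SSE4.1:
print(best_build({"sse", "sse2", "sse3", "ssse3"}))
```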
ID: 1794655
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 64943
Credit: 55,293,173
RAC: 49
United States
Message 1794656 - Posted: 9 Jun 2016, 7:28:32 UTC

I'm still getting a wu running HP, plus 2 waiting to run; one of the waiting ones is at 1824 days. I did increase instances to 6 after I saw this.

SETI@home 8.00 setiathome_v8 (cuda42) 25se10ad.23501.18885.7.34.238_0 00:15:36 (00:02:38) 16.88 49.103 00:12:24 0.04C + 0.33NV Waiting to run
SETI@home 8.00 setiathome_v8 (cuda42) 28jl10ad.25081.1712.4.31.22_0 00:16:52 (00:00:09) 0.92 44.008 00:21:28 47.0 °C 0.04C + 0.33NV Running High P.
SETI@home 8.00 setiathome_v8 31mr10ac.9315.18897.13.40.150_0 00:09:57 (00:09:56) 99.84 10.399 01:32:12 53.3 °C Running
SETI@home 8.00 setiathome_v8 (cuda42) 27au10af.11161.25016.3.30.252_0 00:26:23 (00:01:02) 3.94 0.001 1824d,22:37:36 0.04C + 0.33NV Waiting to run

The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1794656
Rasputin42
Volunteer tester

Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1794660 - Posted: 9 Jun 2016, 7:55:50 UTC

I have noticed that when running 2 instances of SoG, it runs both tasks for a while and then one makes no progress any more. It finishes the other and starts a new one, which it continues to process. The first one is still making no progress, but the elapsed time keeps going. If I suspend all other tasks, it finally makes progress again and eventually finishes.
I have used -instances_per_device 2.
Any suggestions?
ID: 1794660
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 64943
Credit: 55,293,173
RAC: 49
United States
Message 1794665 - Posted: 9 Jun 2016, 8:18:06 UTC

Wait for something that works better. I get the same thing, only the time to completion becomes days instead of minutes. I tried 3, then 6, then 33, with the same result. I can't figure this out, so I just went back to cuda42. I give up.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1794665
Profile Brent Norman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1794666 - Posted: 9 Jun 2016, 8:48:01 UTC - in response to Message 1794660.  

Hey Rasputin42,

Are those bc5 tasks? I have noticed that those run more than 3 times slower, so it appears they are stalled, but they are still running.

And GPU temps are really low when running them.
ID: 1794666
Rasputin42
Volunteer tester

Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1794680 - Posted: 9 Jun 2016, 11:08:26 UTC - in response to Message 1794666.  
Last modified: 9 Jun 2016, 11:09:43 UTC

Hey Brent,
They are not running slower; they do not run at all (after the initial 30% or so).
Elapsed time is progressing, but the percentage stays exactly the same.
ID: 1794680
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 33451
Credit: 79,922,639
RAC: 80
Germany
Message 1794688 - Posted: 9 Jun 2016, 12:10:11 UTC
Last modified: 9 Jun 2016, 12:11:20 UTC

The 720M is simply too slow to run multiple instances.
The LowPerformancePath is active, so -use_sleep is activated as well.

Try only one instance.
With each crime and every kindness we birth our future.
ID: 1794688
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 21019
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1794690 - Posted: 9 Jun 2016, 12:22:57 UTC

With any "new" GPU installation it is worth running only one task at a time for a few days, just to see what the thing will do in the base situation; then step up to two for a few more days; finally, if that is OK, push up to three. As Mike says, I very much doubt that a GTX720M is up to running more than one task at a time.
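(A low-effort way to follow this one-two-three ramp is BOINC's standard app_config.xml in the project directory: gpu_usage 1.0 means one task per GPU, 0.5 means two, 0.33 three. The figures below are just a starting sketch, not tuned values.)

```xml
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <!-- 1.0 = one task per GPU; drop to 0.5 after a few stable days -->
      <gpu_usage>1.0</gpu_usage>
      <!-- fraction of a CPU core reserved per GPU task (a scheduling hint) -->
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```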
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1794690
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 21019
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1794692 - Posted: 9 Jun 2016, 12:35:01 UTC - in response to Message 1794631.  

It sounds as though the system has gone into "thrash mode": it is trying to run too many tasks at a time on the GPU and failing, so BOINC thinks tasks are getting near their deadline, pushes them up the priority tree, and others get stuck in waiting mode. Trying to run too many tasks at a time will make the GPU's task scheduling struggle, particularly with the current crop of low angle tasks. You would probably do better to drop back to one task at a time on the GPU, even though you are used to running two or three higher angle tasks.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1794692
Rasputin42
Volunteer tester

Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1794712 - Posted: 9 Jun 2016, 13:53:11 UTC

Well, they are all "fresh" tasks. That card runs 3 cuda50 tasks with no problem, but I was only running 2 at once. It runs one SoG task in about 26 min.

As far as I know, if there is enough memory it will run multiple instances, just less efficiently if you run too many.
ID: 1794712
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 21019
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1794723 - Posted: 9 Jun 2016, 14:17:38 UTC

How many times must this be said: the critical thing with GPUs is not MEMORY, but the number of GPU "cores" and their management.
There is probably enough memory to support half a dozen tasks, but if you try to run more than a couple of tasks (particularly SoG), the GPU's internal task manager will be struggling seriously long before you reach that number.

Another thing to consider is that the current data from the servers is dominated by guppi tasks (from the GBT), for which CUDA is not best suited. My GTX960 rig would quite happily run three "normal" Arecibo MB tasks, but when I tried running three guppis at once it started to sweat; it is much happier running two of them. That is quite a hit!
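(The two-versus-three observation above is really a throughput calculation. With made-up timings, not measurements from any real card, the arithmetic looks like this:)

```python
def tasks_per_hour(n_at_once, minutes_per_task):
    """Aggregate throughput when n_at_once tasks run concurrently."""
    return n_at_once * 60.0 / minutes_per_task

# Illustrative guppi timings only: 30 min alone, 50 min each when doubled
# up, 85 min each with three at once (slower per task as the GPU's
# internal scheduler starts to struggle).
for n, t in [(1, 30.0), (2, 50.0), (3, 85.0)]:
    print(f"{n} at once: {tasks_per_hour(n, t):.2f} tasks/hour")
```

With these numbers, two at once wins: 2.4 tasks/hour versus 2.0 running singly and roughly 2.1 with three, which is the "quite a hit" above.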
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1794723
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1794730 - Posted: 9 Jun 2016, 15:12:57 UTC - in response to Message 1794631.  

My main problem, even though I have the sleep command running, is that I have a gpu wu 'running HP' and a gpu wu 'waiting to run'. It's probably nothing, but I thought I'd mention it.

I have 3 cpu and 3 gpu wu's running, plus some SoG has been downloaded.

I'm also getting this, note the days figure:

SETI@home 8.00 setiathome_v8 (cuda42) 25se10ad.23501.12750.7.34.2_1 00:49:30 (00:00:10) 0.35 0.001 3372d,22:39:51 71.0 °C 0.04C + 0.33NV Running Pegasus


This one is going to go 'Waiting to run'; how can I stop this from happening?

Help...

The days figure is growing...



. . Can you monitor the memory usage on your GPU card? If there is insufficient memory, the app can exit a task, leaving it in the "Waiting to run" state.
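(On an NVIDIA card, one way to watch this is nvidia-smi's CSV query output, e.g. `nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits`, which prints the two values in MiB. A small sketch that parses one such line, with a made-up sample reading rather than real output:)

```python
def parse_gpu_mem(line):
    """Parse 'used, total' MiB values from one line of nvidia-smi CSV output."""
    used, total = (int(field.strip()) for field in line.split(","))
    return used, total, used / total

sample = "712, 1024"   # hypothetical reading from a card with 1 GiB of memory
used, total, frac = parse_gpu_mem(sample)
print(f"GPU memory: {used} MiB / {total} MiB ({frac:.0%} used)")
```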
ID: 1794730