RTX 2080

Message boards : Number crunching : RTX 2080
Chris Oliver
Avatar

Send message
Joined: 4 Jul 99
Posts: 69
Credit: 123,082,767
RAC: 73,758
United Kingdom
Message 1984968 - Posted: 13 Mar 2019, 18:38:21 UTC

Hello, can anybody recommend any good mb cmdline suggestions for an RTX 2080 ??

Thanks.
ID: 1984968 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 9849
Credit: 924,673,033
RAC: 1,507,677
United States
Message 1984973 - Posted: 13 Mar 2019, 19:37:54 UTC - in response to Message 1984968.  

I would say push it to the max. Any command line parameters for the 1080Ti would be a good match for the RTX 2080.

-sbs 1024 -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64

was the command line I used on my 1080 Ti. I experimented with -sbs 2048 but could never definitively say it was beneficial, and I think Raistmer commented that 1024 was enough to push the array to max performance. You could probably add -hp for high priority to the mix too.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1984973 · Report as offensive
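For anyone new to the anonymous-platform setup: with the SoG builds, flags like the ones above typically go in the <cmdline> element of the matching <app_version> in app_info.xml. A sketch only; the version_num and plan_class shown are typical examples, not a copy of anyone's actual file:

```xml
<!-- Illustrative app_info.xml fragment: tuning flags live in <cmdline>.
     version_num and plan_class are example values, not prescriptive. -->
<app_version>
    <app_name>setiathome_v8</app_name>
    <version_num>800</version_num>
    <plan_class>opencl_nvidia_SoG</plan_class>
    <cmdline>-sbs 1024 -period_iterations_num 1 -tt 1500 -high_perf</cmdline>
    <!-- file_ref entries for the app binary and libraries omitted -->
</app_version>
```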
Ian&Steve C.
Avatar

Send message
Joined: 28 Sep 99
Posts: 1795
Credit: 760,664,733
RAC: 2,547,642
United States
Message 1984981 - Posted: 13 Mar 2019, 20:29:10 UTC - in response to Message 1984973.  

I would say push it to the max. Any command line parameters for the 1080Ti would be a good match for the RTX 2080.

-sbs 1024 -period_iterations_num 1 -tt 1500 -high_perf -high_prec_timer -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64

was the command line I used on my 1080 Ti. I experimented with -sbs 2048 but could never definitively say it was beneficial, and I think Raistmer commented that 1024 was enough to push the array to max performance. You could probably add -hp for high priority to the mix too.


+1, a 1080 Ti setup would work wonders. I looked for my old 1080 Ti SoG config to post, but I must have deleted it. It was pretty much the same as what Keith has posted.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1984981 · Report as offensive
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 3545
Credit: 209,531,096
RAC: 526,408
United States
Message 1984983 - Posted: 13 Mar 2019, 20:38:23 UTC
Last modified: 13 Mar 2019, 20:39:30 UTC

The fastest Windows box on the Leaderboard is getting GPU processing times in the same ballpark as yours:

https://setiathome.berkeley.edu/show_host_detail.php?hostid=5613876

While I would hope that two RTX 2080s would process faster than two GTX 1080 Tis under Windows, I am not certain.

If you are running any kind of BOINC CPU processing, you need to set it to 90% of available CPUs. Since you have a lot of CPU cores/threads available, I am presuming you are using them for some kind of processing, even if it is not Seti. You can use your "local configuration" options so you don't change the settings for your other computers.

Tom
Oh NO.... I lost my tagline....
ID: 1984983 · Report as offensive
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 16 Mar 07
Posts: 8124
Credit: 495,462,140
RAC: 382,562
Panama
Message 1984989 - Posted: 13 Mar 2019, 21:35:03 UTC
Last modified: 13 Mar 2019, 21:40:45 UTC

As a suggestion, you should limit the power usage of the 2080 to a lower level (180-200 W; Vyper could tell you what he uses). You lose a little processing power, but it will preserve the VRMs and the rest of the GPU components.
Remember that SETI stresses the GPU a lot more than gaming does.
On Windows with SoG you should consider running 2, or maybe even 3, GPU WUs at a time.
Testing is needed to find the best option on your particular host.
And don't forget this baby needs a CPU core to feed its hunger!
ID: 1984989 · Report as offensive
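The power cap juan suggests can be set with nvidia-smi. A sketch, assuming a Linux box and GPU index 0; the 200 W figure is just the example value from the post, and the allowed range varies by card and BIOS:

```shell
# Enable persistence mode so the setting sticks between app runs (Linux).
sudo nvidia-smi -pm 1
# Cap GPU 0 at 200 W; check the card's allowed range first with -q -d POWER.
sudo nvidia-smi -i 0 --power-limit=200
# Verify the enforced limit.
nvidia-smi -q -d POWER
```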
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 9849
Credit: 924,673,033
RAC: 1,507,677
United States
Message 1984997 - Posted: 13 Mar 2019, 22:24:26 UTC - in response to Message 1984989.  

It wouldn't hurt to reduce the power, because I see he is running two tasks per GPU. So the one task I looked at that ran for 6 minutes in reality took an adjusted 3 minutes, which is very respectable for the Windows SoG app. The card certainly has the horsepower and can crank up the daily production rate with 2 tasks per GPU.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1984997 · Report as offensive
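For reference, running two tasks per GPU as discussed here is normally configured with an app_config.xml in the SETI@home project directory. A minimal sketch: gpu_usage 0.5 means two tasks share each GPU (0.33 would give roughly three), and BOINC has to re-read config files or restart to pick it up:

```xml
<!-- Minimal app_config.xml sketch: two tasks per GPU for SETI v8. -->
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```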
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 3545
Credit: 209,531,096
RAC: 526,408
United States
Message 1985004 - Posted: 13 Mar 2019, 23:17:48 UTC - in response to Message 1984997.  

It wouldn't hurt to reduce the power, because I see he is running two tasks per GPU. So the one task I looked at that ran for 6 minutes in reality took an adjusted 3 minutes, which is very respectable for the Windows SoG app. The card certainly has the horsepower and can crank up the daily production rate with 2 tasks per GPU.


Sorry, I missed that he was running 2 per GPU. Where exactly do you read that? I have looked at tasks and it isn't jumping out at me.

Tom
Oh NO.... I lost my tagline....
ID: 1985004 · Report as offensive
Chris Oliver
Avatar

Send message
Joined: 4 Jul 99
Posts: 69
Credit: 123,082,767
RAC: 73,758
United Kingdom
Message 1985005 - Posted: 13 Mar 2019, 23:19:06 UTC

Thanks, guys, for the input. I will experiment with the advice you have posted and see if I can get any more out of the 2080 setup.
ID: 1985005 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 9849
Credit: 924,673,033
RAC: 1,507,677
United States
Message 1985011 - Posted: 13 Mar 2019, 23:43:02 UTC - in response to Message 1985004.  

It wouldn't hurt to reduce the power, because I see he is running two tasks per GPU. So the one task I looked at that ran for 6 minutes in reality took an adjusted 3 minutes, which is very respectable for the Windows SoG app. The card certainly has the horsepower and can crank up the daily production rate with 2 tasks per GPU.


Sorry, I missed that he was running 2 per GPU. Where exactly do you read that? I have looked at tasks and it isn't jumping out at me.

Tom

From any of his reported tasks stderr.txt outputs. One of his parameters is:

Number of app instances per device set to:2

Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1985011 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 9849
Credit: 924,673,033
RAC: 1,507,677
United States
Message 1985012 - Posted: 13 Mar 2019, 23:46:52 UTC - in response to Message 1985005.  

Thanks, guys, for the input. I will experiment with the advice you have posted and see if I can get any more out of the 2080 setup.

The problem is pairing the aggressive settings for the 2080 with the 1060. It might be too much. From what I have gleaned from your parameter set, you have it pretty well configured already.

I wish there was a way to input separate parameter command lines for specific devices. But sadly, no.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1985012 · Report as offensive
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 16 Mar 07
Posts: 8124
Credit: 495,462,140
RAC: 382,562
Panama
Message 1985014 - Posted: 14 Mar 2019, 0:06:18 UTC - in response to Message 1985012.  
Last modified: 14 Mar 2019, 0:10:02 UTC

I wish there was a way to input separate parameter command lines for specific devices. But sadly, no.

Unless you run 2 separate clients on the same host. It works perfectly and allows you to optimize each client for each GPU, but it is definitely something for advanced users only.
ID: 1985014 · Report as offensive
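The two-client setup juan describes usually relies on the BOINC client's --allow_multiple_clients option, with a separate data directory and GUI RPC port per instance. A Linux command sketch; the paths, ports, and the ACCOUNT_KEY placeholder are illustrative:

```shell
# Two independent BOINC clients, each with its own data directory and RPC port.
mkdir -p ~/boinc_a ~/boinc_b
boinc --dir ~/boinc_a --allow_multiple_clients --gui_rpc_port 31416 &
boinc --dir ~/boinc_b --allow_multiple_clients --gui_rpc_port 31417 &
# Attach each instance separately; each gets its own host ID on the project.
boinccmd --host localhost:31417 --project_attach \
    http://setiathome.berkeley.edu/ ACCOUNT_KEY
```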
Ian&Steve C.
Avatar

Send message
Joined: 28 Sep 99
Posts: 1795
Credit: 760,664,733
RAC: 2,547,642
United States
Message 1985016 - Posted: 14 Mar 2019, 0:19:33 UTC - in response to Message 1985014.  

I wish there was a way to input separate parameter command lines for specific devices. But sadly, no.

Unless you run 2 separate clients on the same host. It works perfectly and allows you to optimize each client for each GPU, but it is definitely something for advanced users only.


Just curious: if you run 2 separate instances of BOINC, does it then count as 2 separate hosts? Or does it sum the work together into a single host?
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1985016 · Report as offensive
Profile Brent Norman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Send message
Joined: 1 Dec 99
Posts: 2766
Credit: 569,452,386
RAC: 884,868
Canada
Message 1985018 - Posted: 14 Mar 2019, 0:41:48 UTC - in response to Message 1985016.  

2 separate hosts. You can't run the same host ID simultaneously.
ID: 1985018 · Report as offensive
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 16 Mar 07
Posts: 8124
Credit: 495,462,140
RAC: 382,562
Panama
Message 1985019 - Posted: 14 Mar 2019, 0:53:41 UTC - in response to Message 1985018.  
Last modified: 14 Mar 2019, 1:16:55 UTC

2 separate hosts. You can't run the same host ID simultaneously.

What you could do, since the two IDs are similar, is merge both hosts into one from time to time and create a new one to continue working.

This is tedious work and requires, besides a lot of babysitting, a well-trained user to avoid messing up your crunching totals.

I worked that way a few years ago, before any rescheduler was available, to download more than 100 WUs per GPU: I created 4 fake host IDs on each real host, which allowed me to download up to 400 WUs per fake host, or 1600 WUs for the real host.

You could use the same trick to make each client run a different configuration for each of the GPUs on the host, since each client has its own complete set of data files.

<edit> I was looking at my old message box, and I was running that way around Aug 2014. Back then I had 18 CPUs and 42 GPUs using the fake-ID trick; I don't remember how many real hosts I had at the time (maybe 4 or 5)... a few 670s, 690s and 780s, each running with its own client and configuration. That was a nice fleet, with a RAC of about 500K. LOL
ID: 1985019 · Report as offensive
Chris Oliver
Avatar

Send message
Joined: 4 Jul 99
Posts: 69
Credit: 123,082,767
RAC: 73,758
United Kingdom
Message 1985166 - Posted: 14 Mar 2019, 20:54:02 UTC - in response to Message 1985012.  


The problem is pairing the aggressive settings for the 2080 with the 1060. It might be too much. From what I have gleaned from your parameter set, you have it pretty well configured already.

I wish there was a way to input separate parameter command lines for specific devices. But sadly, no.


Hello Keith, the cmdline settings you posted do seem to be pushing the 2080 to do more, and the 1060 is coping as long as I run no more than 2 WUs per GPU at once.

Thanks again guys for all your input.
ID: 1985166 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 9849
Credit: 924,673,033
RAC: 1,507,677
United States
Message 1985172 - Posted: 14 Mar 2019, 21:28:01 UTC - in response to Message 1985166.  

Good to hear, Chris. I always thought two per GPU was the sweet spot for the SoG app. Like you, I didn't see many issues with that command line when I was running a 1060 paired with 1080s and a 1080 Ti.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1985172 · Report as offensive
Profile Freewill Project Donor
Avatar

Send message
Joined: 19 May 99
Posts: 88
Credit: 199,255,876
RAC: 557,592
United States
Message 1985272 - Posted: 15 Mar 2019, 10:29:14 UTC

Hi All,

For the new cards, such as the RTX 2080, can anyone confirm whether they run under the special sauce Linux app? How about the GTX 1660 Ti? I'm looking to upgrade one card in my main cruncher, so it would be running with 3 others in the GTX 10x0 family.

Thanks!
Roger
ID: 1985272 · Report as offensive
Ian&Steve C.
Avatar

Send message
Joined: 28 Sep 99
Posts: 1795
Credit: 760,664,733
RAC: 2,547,642
United States
Message 1985278 - Posted: 15 Mar 2019, 11:11:37 UTC - in response to Message 1985272.  

Hi All,

For the new cards, such as the RTX 2080, can anyone confirm whether they run under the special sauce Linux app? How about the GTX 1660 Ti? I'm looking to upgrade one card in my main cruncher, so it would be running with 3 others in the GTX 10x0 family.

Thanks!
Roger


Yes, confirmed. I have some 2070s running, and Keith and Vyper have 2080s running. Vyper also has a 1660 Ti, I believe.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1985278 · Report as offensive
Profile Freewill Project Donor
Avatar

Send message
Joined: 19 May 99
Posts: 88
Credit: 199,255,876
RAC: 557,592
United States
Message 1985282 - Posted: 15 Mar 2019, 11:37:58 UTC - in response to Message 1985278.  

Hi All,

For the new cards, such as the RTX 2080, can anyone confirm whether they run under the special sauce Linux app? How about the GTX 1660 Ti? I'm looking to upgrade one card in my main cruncher, so it would be running with 3 others in the GTX 10x0 family.

Thanks!
Roger


Yes, confirmed. I have some 2070s running, and Keith and Vyper have 2080s running. Vyper also has a 1660 Ti, I believe.


That's great! Thanks for the quick reply.
ID: 1985282 · Report as offensive
Profile Freewill Project Donor
Avatar

Send message
Joined: 19 May 99
Posts: 88
Credit: 199,255,876
RAC: 557,592
United States
Message 1985318 - Posted: 15 Mar 2019, 15:55:21 UTC - in response to Message 1985282.  

How does the run time of a 2070 or 2080 compare to a 1080 Ti with the special app? Does it just scale with the streaming multiprocessor count?
ID: 1985318 · Report as offensive
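Run time is unlikely to scale cleanly with SM count, because Turing SMs have 64 FP32 cores versus 128 on Pascal. A rough first-order guide is peak FP32 throughput (2 FLOPs per core per cycle times boost clock), using the published reference specs; actual crunch times also depend on memory bandwidth, drivers, and the app:

```python
# Rough FP32 throughput estimate: 2 FLOPs/core/cycle * cores * boost clock.
# Core counts and boost clocks are NVIDIA's published reference specs.
cards = {
    "GTX 1080 Ti": (3584, 1.582),   # (CUDA cores, boost clock in GHz)
    "RTX 2080":    (2944, 1.710),
    "RTX 2070":    (2304, 1.620),
    "GTX 1660 Ti": (1536, 1.770),
}

def tflops(name):
    """Peak single-precision TFLOPS for a card in the table."""
    cores, ghz = cards[name]
    return 2 * cores * ghz / 1000.0

for name in cards:
    print(f"{name}: {tflops(name):.2f} TFLOPS")
```

By this naive measure the RTX 2080 (about 10.1 TFLOPS) actually sits slightly below the GTX 1080 Ti (about 11.3 TFLOPS), despite being the newer card, so per-task times in the same ballpark would not be surprising.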