GPU FLOPS: Theory vs Reality

Mike · Special Project $75 donor · Volunteer tester
Joined: 17 Feb 01 · Posts: 34363 · Credit: 79,922,639 · RAC: 80 · Germany
Message 1963474 - Posted: 5 Nov 2018, 17:57:14 UTC - in response to Message 1963461.  
Last modified: 5 Nov 2018, 18:03:20 UTC

Thanks Mike,

Applied these settings; now blc01 tasks complete in about 210 seconds.
Without this tweak, completion times were around 300 seconds. Great improvement.

Edit: the completion time above holds only if one GPU task is running and there are no CPU tasks.
If the CPU starts to crunch, GPU completion time increases and overall GPU utilization decreases. It looks like the other threads are also using CPU core 0, the core the GPU task runs on.

Regards


How many instances are running on the CPU?

Add -no_cpu_lock


With each crime and every kindness we birth our future.
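
A side note for readers unfamiliar with these flags: as I understand them, -cpu_lock pins an app instance to a fixed CPU core and -no_cpu_lock turns that pinning off. Below is a minimal Python illustration of the concept only, assuming Linux (os.sched_setaffinity is Linux-only); the real behaviour lives inside the special app itself, not in any external script.

```python
# Illustration of the core-pinning idea behind -cpu_lock / -no_cpu_lock.
# Linux-only: os.sched_setaffinity is not available on Windows or macOS.
import os

def pin_to_core(core: int) -> None:
    """Pin the calling process to a single CPU core (cpu_lock-style)."""
    os.sched_setaffinity(0, {core})

def unpin() -> None:
    """Allow the scheduler to use any core (the -no_cpu_lock behaviour)."""
    os.sched_setaffinity(0, set(range(os.cpu_count())))

if __name__ == "__main__":
    pin_to_core(0)  # several tasks all pinned to core 0 would contend,
    unpin()         # which matches the slowdown described in the post above
```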

StFreddy
Joined: 4 Feb 01 · Posts: 35 · Credit: 14,080,356 · RAC: 26 · Hungary
Message 1963480 - Posted: 5 Nov 2018, 18:28:45 UTC - in response to Message 1963474.  

Currently 8 threads (of 16) are available to BOINC; 1 thread is dedicated to the GPU task.
Added -no_cpu_lock; let's see if it helps.

Mike · Special Project $75 donor · Volunteer tester
Joined: 17 Feb 01 · Posts: 34363 · Credit: 79,922,639 · RAC: 80 · Germany
Message 1963622 - Posted: 6 Nov 2018, 13:28:55 UTC - in response to Message 1963480.  

Currently 8 threads (of 16) are available to BOINC; 1 thread is dedicated to the GPU task.
Added -no_cpu_lock; let's see if it helps.


Let it run a few hours so I can check the results.


With each crime and every kindness we birth our future.

StFreddy
Joined: 4 Feb 01 · Posts: 35 · Credit: 14,080,356 · RAC: 26 · Hungary
Message 1963767 - Posted: 7 Nov 2018, 18:36:53 UTC - in response to Message 1963622.  
Last modified: 7 Nov 2018, 18:41:15 UTC

Completion times have now stabilized around 220 seconds for blc01 tasks.
If I get ~50 credits per task, that works out to roughly 800 credits/hour at an average power draw of about 140 W.
Not as bad as I thought.
I didn't expect this much throughput/efficiency based on the diagram Shaggie76 posted on the previous page.
Thanks Mike and Tom for the tips.
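
A quick back-of-the-envelope check of the figures in the post above, as a sketch: the 220 s runtime, ~50 credits/task, and 140 W numbers come from the post; everything else is derived from them.

```python
# Verify the ~800 credits/hour claim and derive an efficiency figure.
runtime_s = 220           # seconds per blc01 task (from the post)
credits_per_task = 50     # approximate credit per task (from the post)
avg_power_w = 140         # average power draw in watts (from the post)

tasks_per_hour = 3600 / runtime_s                      # ~16.4 tasks/hour
credits_per_hour = tasks_per_hour * credits_per_task   # ~818 credits/hour
kwh_per_hour = avg_power_w / 1000                      # 0.14 kWh each hour
credits_per_kwh = credits_per_hour / kwh_per_hour      # ~5800 credits/kWh

print(f"{credits_per_hour:.0f} credits/hour, {credits_per_kwh:.0f} credits/kWh")
```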

Tom M · Volunteer tester
Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
Message 1963865 - Posted: 8 Nov 2018, 6:14:52 UTC

I was wondering: is there a good source of reviews that would let me compare the performance levels of the different cards sold under the same basic name, like "GTX 1060"?

An example might be the Gigabyte GTX 1060 6GB, which one gamer review claimed had more overclocking headroom (manual or automatic, I don't remember) than the other brands reviewed at the same time.

Tom
A proud member of the OFA (Old Farts Association).

-= Vyper =- · Volunteer tester
Joined: 5 Sep 99 · Posts: 1652 · Credit: 1,065,191,981 · RAC: 2,537 · Sweden
Message 1963891 - Posted: 8 Nov 2018, 11:14:05 UTC - in response to Message 1963865.  

I was wondering: is there a good source of reviews that would let me compare the performance levels of the different cards sold under the same basic name, like "GTX 1060"?

An example might be the Gigabyte GTX 1060 6GB, which one gamer review claimed had more overclocking headroom (manual or automatic, I don't remember) than the other brands reviewed at the same time.

Tom


Look here; click on a card to see the variants etc.

https://www.techpowerup.com/gpu-specs/?sort=name#1060

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group

Tom M · Volunteer tester
Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
Message 1965803 - Posted: 17 Nov 2018, 12:32:21 UTC

I have been wondering about my GTX 1060 3GB cards. They have 1,152 cores.

The GTX 1070 has 1,920 cores and the GTX 1070 Ti has 2,432 cores (at least that is what Google came up with).

Would it be fair to guess that two GTX 1060 3GB cards give at least the Seti production of a single GTX 1070?

That is what I think I am seeing.

Tom
A proud member of the OFA (Old Farts Association).
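
A naive core-count check of that guess, as a sketch only: it ignores clock speeds, memory bandwidth, and architecture, which is exactly why measured graphs like Shaggie76's are the better guide.

```python
# Compare two GTX 1060 3GB cards against one GTX 1070 by CUDA core count
# alone. Core counts are the ones quoted in the post above.
cores = {
    "GTX 1060 3GB": 1152,
    "GTX 1070":     1920,
    "GTX 1070 Ti":  2432,
}

two_1060s = 2 * cores["GTX 1060 3GB"]      # 2304 cores in total
ratio = two_1060s / cores["GTX 1070"]      # = 1.2, at least a 1070's worth
print(f"Two 1060 3GB cards: {two_1060s} cores, {ratio:.2f}x a single GTX 1070")
```

By raw core count the guess holds with ~20% to spare, though per-card overheads and clocks can eat into that margin.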

Grant (SSSF) · Volunteer tester
Joined: 19 Aug 99 · Posts: 13841 · Credit: 208,696,464 · RAC: 304 · Australia
Message 1965939 - Posted: 17 Nov 2018, 23:13:38 UTC - in response to Message 1963891.  

I was wondering: is there a good source of reviews that would let me compare the performance levels of the different cards sold under the same basic name, like "GTX 1060"?

For Seti work, Shaggie's graphs are the best indicators available.
Grant
Darwin NT

Oddbjornik · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor · Volunteer tester
Joined: 15 May 99 · Posts: 220 · Credit: 349,610,548 · RAC: 1,728 · Norway
Message 1966754 - Posted: 23 Nov 2018, 14:10:30 UTC

I have been waiting for a new scan, with data for the RTX 20xx cards. Are we getting there?

Keith Myers · Special Project $250 donor · Volunteer tester
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1966762 - Posted: 23 Nov 2018, 15:48:03 UTC - in response to Message 1966754.  

Last time a new card appeared in the project, it took half a year before there were enough of them for a statistical analysis by Shaggie76.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)

Ian&Steve C.
Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640 · United States
Message 1966781 - Posted: 23 Nov 2018, 18:21:40 UTC - in response to Message 1966754.  

I have been waiting for a new scan, with data for the RTX 20xx cards. Are we getting there?


2070 = 1080ti

2080 = even faster

2080ti = even fasterer

:)

IMO the 2070 is the best bang for the buck, the 2080 if you want to splurge a bit; the 2080 Ti is too expensive for the increase you get.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

Tom M · Volunteer tester
Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
Message 1966783 - Posted: 23 Nov 2018, 18:25:23 UTC - in response to Message 1966754.  

I have been waiting for a new scan, with data for the RTX 20xx cards. Are we getting there?


Are you looking specifically for the watts/efficiency graph or the "how the heck fast is it" graph?

Tom
A proud member of the OFA (Old Farts Association).

Oddbjornik · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor · Volunteer tester
Joined: 15 May 99 · Posts: 220 · Credit: 349,610,548 · RAC: 1,728 · Norway
Message 1966799 - Posted: 23 Nov 2018, 20:00:05 UTC - in response to Message 1966783.  

I have been waiting for a new scan, with data for the RTX 20xx cards. Are we getting there?


Are you looking specifically for the watts/efficiency graph or the "how the heck fast is it" graph?

Tom

Well, I guess the underlying question is how well an RTX 2070 would perform compared to a GTX 1080.

I read Ian&Steve's post (above), took a look at their 2070 host, and then I just ordered one, so we'll see.

The first step is to put Linux on the old host that currently runs the GTX 680.

Tom M · Volunteer tester
Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
Message 1966843 - Posted: 24 Nov 2018, 5:40:22 UTC - in response to Message 1966799.  

Well, I guess the underlying question is how well an RTX 2070 would perform compared to a GTX 1080.

I read Ian&Steve's post (above), took a look at their 2070 host, and then I just ordered one, so we'll see.

The first step is to put Linux on the old host that currently runs the GTX 680.


Ian&Steve C. has both in production, and the RTX 2070 runs nearly neck and neck with his GTX 1080 Tis on the same OS and apps. The 1080 Ti tasks I have looked at (a limited sample) seem to show less variability: the RTX 2070 reaches the same top speeds but turns in some slower tasks than the 1080 Ti.

However, the more data across different motherboard/CPU combinations, the better. :)

Tom
A proud member of the OFA (Old Farts Association).

-= Vyper =- · Volunteer tester
Joined: 5 Sep 99 · Posts: 1652 · Credit: 1,065,191,981 · RAC: 2,537 · Sweden
Message 1967234 - Posted: 26 Nov 2018, 9:53:56 UTC

This is my host.

https://setiathome.berkeley.edu/show_host_detail.php?hostid=8600449

It's a host that only runs work on the GPU (no CPU tasks whatsoever), and I've lowered the maximum power draw with nvidia-smi to 185 W.
Why? Because if I let it go all the way to 260 W+ it only improves runtimes by about 2-3 seconds, and in my opinion that's not worth it in heat, noise, or watts, and the card's longevity might suffer as well.

For the moment the RAC is still climbing, albeit slowly. The only thing stopping the host from getting more is that I play some games on it in Windows and then reboot to Linux, but over 100K from a single GPU is not too shabby anyway.

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group
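
For reference, a minimal sketch of the power-capping step described above, assuming an NVIDIA driver with nvidia-smi on the PATH and root privileges (-pm and -pl are standard nvidia-smi flags; the 185 W figure is Vyper's):

```python
# Apply a board power cap the way the post describes.
# Requires root and an NVIDIA driver providing nvidia-smi.
import subprocess

def set_power_limit(gpu_index: int, watts: int) -> None:
    """Cap one GPU's board power via nvidia-smi."""
    # Persistence mode keeps driver state (including the cap) loaded.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pm", "1"], check=True)
    # -pl sets the software power limit in watts, clamped to the board's range.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

if __name__ == "__main__":
    set_power_limit(0, 185)  # the 185 W cap from the post above
```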

Tom M · Volunteer tester
Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
Message 1967240 - Posted: 26 Nov 2018, 11:07:23 UTC - in response to Message 1967234.  

This is my host.

https://setiathome.berkeley.edu/show_host_detail.php?hostid=8600449

It's a host that only runs work on the GPU (no CPU tasks whatsoever), and I've lowered the maximum power draw with nvidia-smi to 185 W. [...]


So if you are not using the system under Linux for anything besides Seti crunching, are the CPU cores just idling?

Tom
A proud member of the OFA (Old Farts Association).

juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor · Volunteer tester
Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799 · Panama
Message 1967247 - Posted: 26 Nov 2018, 13:47:50 UTC

For example, using the Linux CUDA 10 builds and the 1070 as the base unit for comparison:

A) An RTX 2080 is about 2x faster than a 1070 and uses only about 25% more electric power, as posted by Vyper, if you limit it to 185 W.

The only problem from my POV is the initial cost: a new RTX 2080 is in the range of 800 US$, while you could buy a used 1070 for around 250 US$ on eBay.

B) A 1080 Ti produces 50% more than the 1070 but uses about 50% more power to do it. You could buy one used for around 500-600 US$ on eBay.

C) The 1060 produces about half of what the 1070 does and uses around 70% of the power (hard to tell exactly, there are so many models), and you could buy one for around 150 US$ on eBay (the 3 GB model at least).

D) Other GPUs like the 750 Ti etc. are clear winners in the cost vs. production equation, but they don't run the latest CUDA 10 builds, so IMHO I prefer not to invest in them for a new system.

So you just need to do the math: initial cost vs. production vs. electric bill, and choose the one that best fits your particular needs.

My 0.02 cents.
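
A sketch of that math in Python. The relative production figures and used prices are the rough ones from the post; the wattages are illustrative assumptions derived from the post's percentages, and electricity is priced at the $0.25/kWh figure Vyper quotes below.

```python
# Rough first-year cost per unit of GTX 1070-equivalent production.
# Speeds and prices from the post; watts are assumptions, not measurements.
PRICE_PER_KWH = 0.25  # USD

#          (production vs 1070, price USD, assumed watts)
cards = {
    "GTX 1070 (used)":    (1.0, 250, 150),
    "RTX 2080 (new)":     (2.0, 800, 185),
    "GTX 1080 Ti (used)": (1.5, 550, 220),
    "GTX 1060 (used)":    (0.5, 150, 105),
}

for name, (speed, price, watts) in cards.items():
    year_kwh = watts / 1000 * 24 * 365          # energy for a year of 24/7
    total = price + year_kwh * PRICE_PER_KWH    # purchase plus power, year one
    print(f"{name:20s} ${total / speed:6.0f} per 1070-equivalent unit")
```

With these assumed numbers the used 1070 wins year one narrowly, and the RTX 2080 overtakes it by year two as the purchase price amortizes against its better credits-per-watt.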

-= Vyper =- · Volunteer tester
Joined: 5 Sep 99 · Posts: 1652 · Credit: 1,065,191,981 · RAC: 2,537 · Sweden
Message 1967249 - Posted: 26 Nov 2018, 14:16:46 UTC - in response to Message 1967240.  
Last modified: 26 Nov 2018, 14:17:37 UTC

So if you are not using the system under Linux for anything besides Seti crunching, are the CPU cores just idling?

Tom


That's correct. Besides some light web surfing etc., I only use the Linux side for S@H crunching!

I only use the CPU to feed the GPU, and -nobs doesn't improve times that much either, so I think I'm going to remove that option as well and let the CPU idle as much as it can to reduce power (and it works, from what I can see on the invoice from our energy company).
From my perspective I tend to keep my running costs as low as possible. Many of my machines are placed at other locations, which is fine with the people there; I only have two machines at home due to the silly power prices here in Sweden.

For the time being the kWh price is $0.25 (2.27 SEK) with all sorts of taxes etc., and in the wintertime it only gets higher. I prefer to shell out cash on potent hardware and then tune it down, because the last few percent of GPU performance cost a lot of watts.

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group

Tom M · Volunteer tester
Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
Message 1967250 - Posted: 26 Nov 2018, 15:26:45 UTC - in response to Message 1967247.  


D) Other GPUs like the 750 Ti etc. are clear winners in the cost vs. production equation, but they don't run the latest CUDA 10 builds, so IMHO I prefer not to invest in them for a new system.


I agree about not planning to use GTX 750 Tis in a new system. However, a 750 Ti will run under Linux with the CUDA 9.1 builds. So if you already have them and some unused slots... :)

Tom
A proud member of the OFA (Old Farts Association).

Ian&Steve C.
Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640 · United States
Message 1967254 - Posted: 26 Nov 2018, 16:31:18 UTC - in response to Message 1967247.  

For example, using the Linux CUDA 10 builds and the 1070 as the base unit for comparison:

A) An RTX 2080 is about 2x faster than a 1070 and uses only about 25% more electric power, as posted by Vyper, if you limit it to 185 W.

The only problem from my POV is the initial cost: a new RTX 2080 is in the range of 800 US$, while you could buy a used 1070 for around 250 US$ on eBay. [...]


Vyper has a 2080 Ti, not a 2080, just FYI. It's much more expensive than a 2080: a 2080 Ti runs ~1200 USD at most places.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours
