Posts by Shaggie76

1) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1909892)
Posted 18 days ago by Profile Shaggie76
Post:
I wonder how badly you think mismatched cards in a system affect your numbers.
For instance my 4x1070 computer is actually a 1070/980/1060/1060.

The script omits all multi-GPU systems for exactly this reason.
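If you're curious, the filter is blunt: any host reporting more than one GPU is dropped before any stats are computed. A minimal sketch of the idea (the "gpus" field here is hypothetical -- the real script parses BOINC host listings):

    # Keep only single-GPU hosts so per-card stats stay clean.
    def keep_host(host):
        return len(host.get("gpus", [])) == 1

    hosts = [
        {"id": 1, "gpus": ["GTX 1070"]},             # kept
        {"id": 2, "gpus": ["GTX 1070", "GTX 980"]},  # dropped: mixed cards
    ]
    single_gpu = [h for h in hosts if keep_host(h)]
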
2) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1909711)
Posted 19 days ago by Profile Shaggie76
Post:
Average of Median 60% Credit/Hour from my last run:

NVIDIA GeForce GTX 1070: 908.8006478
NVIDIA GeForce GTX 1070 Ti: 941.2174884

So approximately 3.6% more credit from a wide sampling of computers and tasks.
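
(For anyone wondering what "Average of Median 60%" means: roughly, sort the per-task credit/hour samples, drop the top and bottom 20% as outliers, and average the middle 60% window. A simplified sketch -- the real script's windowing may differ in detail:)

    import statistics

    def median60(rates):
        # Middle-60% window: sort, drop the top and bottom 20%,
        # then average what's left (fall back to all samples if few).
        rates = sorted(rates)
        n = len(rates)
        window = rates[int(n * 0.2):int(n * 0.8)] or rates
        return statistics.mean(window)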

From your data below, average power:

NVIDIA GeForce GTX 1070: 111.5 W
NVIDIA GeForce GTX 1070 Ti: 117.7 W

So approximately 5.6% more power in this specific case.

I noticed your 1070s were rated for 166 W, not the stock 150 W -- so maybe they're clocked a bit higher and drawing a bit more power too?
3) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1909613)
Posted 20 days ago by Profile Shaggie76
Post:
A new scan shows the 1070 Tis doing a tiny bit more work than regular 1070s, but they probably use more power than it's worth.

4) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1905934)
Posted 9 Dec 2017 by Profile Shaggie76
Post:
After all the server problems last week I didn't want to put strain on the servers by running my script to crawl through the database. Maybe if they stay up this week I'll try again.
5) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1903225)
Posted 28 Nov 2017 by Profile Shaggie76
Post:
I ran another scan today and there still aren't enough 1070 Tis in circulation.
6) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1900619)
Posted 12 Nov 2017 by Profile Shaggie76
Post:
I ran a few scans over the last two weeks and combined the data to cover some rarer cards that might not otherwise have enough valid results to chart. There aren't enough Pascal Titans yet, but this scan has some RX Vega parts in the mix (sadly, Vega does not seem well optimized for SETI by default). There aren't enough 1070 Ti hosts yet either (2 by my count).

I took a stab at hooking up data for Intel IGPs -- the TDP values are probably wrong, so don't pay too much attention to the credit-per-watt-hour (CPWH) chart for them -- I was mostly curious about throughput, and it's pretty abysmal, as you'd expect. I probably won't include them again since the CPWH is misleading.

To keep the charts manageable I've started omitting some of the older-generation cards; I assume that since you're probably looking at this data to help guide setting up machines, you're unlikely to be worried about vintage parts, but I could be wrong.

7) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1899570)
Posted 6 Nov 2017 by Profile Shaggie76
Post:
Still not interested, sorry.
8) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1899198)
Posted 4 Nov 2017 by Profile Shaggie76
Post:
I don't know for sure, but I suspect mining tasks use integer arithmetic while SETI@home uses floating-point math; AMD cards might offer better integer performance per watt, which could explain the miners' enthusiasm.
9) Message boards : Number crunching : GPU Advice 980 vs RX 480 vs 1070 (Message 1898682)
Posted 2 Nov 2017 by Profile Shaggie76
Post:
My latest benchmarks might help you decide -- the RX 480 has been disappointing since it shipped -- I'd hoped its performance would improve, but it hasn't, really.

I'm keen to measure, but I'm guessing the 1070 Ti won't perform much better than the 1070 because its memory bandwidth per core is lower -- even the 1080 isn't that much faster than a stock 1070.
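
Back-of-the-envelope, using public spec-sheet numbers rather than anything I've measured:

    # (CUDA cores, memory bandwidth in GB/s) from the spec sheets.
    cards = {
        "GTX 1070":    (1920, 256),
        "GTX 1070 Ti": (2432, 256),
        "GTX 1080":    (2560, 320),
    }
    for name, (cores, bw) in cards.items():
        print(f"{name}: {bw / cores * 1000:.0f} MB/s per core")
    # -> 1070: ~133, 1070 Ti: ~105, 1080: ~125
    # The Ti adds cores without adding bandwidth, so each core gets fed less.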
10) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1895859)
Posted 17 Oct 2017 by Profile Shaggie76
Post:
I was hoping for more Vega parts in circulation by now, but we aren't quite there yet: I require a certain number of completed work units per card to qualify, and then enough separate computers running the card for it to show up on the charts (the check is sketched after the list below). It's close:

C:\SETI>grep -i Vega GPUs.csv
7626762,Radeon RX Vega
7842719,Radeon RX Vega
7854642,Radeon RX Vega
8081803,Radeon RX Vega
8103729,Radeon RX Vega
8230810,Radeon RX Vega
8243334,Radeon RX Vega
8249242,Radeon RX Vega
8261851,Radeon RX Vega
8307472,Radeon RX Vega
8330537,Radeon RX Vega
8334662,Radeon RX Vega
8341269,Radeon Vega Frontier Edition
8344100,Radeon RX Vega
8344505,Radeon RX Vega
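
(The qualification check itself is simple -- something like the sketch below, reading host-id/card-name pairs like the ones above. The threshold is illustrative, not my script's actual value:)

    import csv
    from collections import defaultdict

    MIN_HOSTS = 10  # illustrative threshold only

    # Count distinct hosts per card model from "hostid,cardname" rows.
    hosts_per_card = defaultdict(set)
    with open("GPUs.csv", newline="") as f:
        for host_id, card in csv.reader(f):
            hosts_per_card[card].add(host_id)

    qualified = {card for card, hosts in hosts_per_card.items()
                 if len(hosts) >= MIN_HOSTS}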

I also fixed a bug in my code that was mixing up the two types of Titan X cards (the name check is sketched after the list below), but even now there aren't a lot of Pascal parts crunching for SETI yet either:

C:\SETI>grep -i Pascal GPUs.csv
4693382,TITAN X (Pascal)
6987408,TITAN X (Pascal)
7978195,TITAN X (Pascal)
8008690,TITAN X (Pascal)
8076145,TITAN X (Pascal)
8107587,TITAN X (Pascal)
8163371,TITAN X (Pascal)
8184679,TITAN X (Pascal)
8286999,TITAN X (Pascal)
8312198,TITAN X (Pascal)
8333199,TITAN X (Pascal)
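
(The name check is now roughly the following -- a simplified sketch, since the real script's matching may differ. The Maxwell card reports itself as "GeForce GTX TITAN X" while the Pascal one includes "(Pascal)" in its name:)

    def titan_x_flavor(name):
        # Disambiguate the two Titan X generations by their reported
        # name strings.
        if "TITAN X" in name.upper():
            return "TITAN X (Pascal)" if "(Pascal)" in name else "GTX TITAN X (Maxwell)"
        return name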
11) Message boards : Number crunching : Vega Frontier Edition - MB Options Tuning (Message 1894832)
Posted 12 Oct 2017 by Profile Shaggie76
Post:
Fascinating analysis! Thank you!
12) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1893499)
Posted 6 Oct 2017 by Profile Shaggie76
Post:
Host: 8341269

gfx901 (Anonymous)
    1052 Credit / Hour
     37% Core / Task
     403 Tasks
I'd have expected it to be a bit higher but maybe it's too soon to tell?
13) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1891126)
Posted 21 Sep 2017 by Profile Shaggie76
Post:
One possible explanation occurred to me for the new RX 580 scores being lower than expected: if these cards recently replaced an older, slower card, my scripts might misconstrue results from the old card as coming from the RX 580 -- to ease pressure on the SETI servers I fetch host information and only a summary of the task stats (digging into each task to handle this would take 20x more server queries).

I might also dig through and see if I can find some Vega parts for some preliminary results since I'm curious.
14) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1891052)
Posted 21 Sep 2017 by Profile Shaggie76
Post:
Evidently I haven't updated my old thread in so long that it had to be locked so here's a fresh thread.

I ran another scan today and a few things are new: the RX 570s and RX 580s are on the charts now -- surprisingly, they aren't running quite as fast as the RX 480s -- it might be luck, but there are over a dozen hosts and over 2000 tasks counted for each, so I'm not sure this is just sampling error (in contrast, the RX 480 stats in this scan cover almost 150 hosts and over 19000 tasks, so I'm pretty confident there).

There aren't enough Vega parts in circulation for them to qualify for stats - I'll run another scan in a month or so and see if there are enough then.

15) Message boards : Number crunching : Vega Frontier Edition (Message 1889454)
Posted 13 Sep 2017 by Profile Shaggie76
Post:
I just analyzed that host with my script and it showed:
Hawaii (Anonymous)
    1004 Credit / Hour
     57% Core / Task

How many tasks are you running concurrently on that card? I can't tell from the stdout.
16) Message boards : Number crunching : So who is going to be a guinea-pig this time?? (Message 1883059)
Posted 10 Aug 2017 by Profile Shaggie76
Post:
I have a Haswell-E 5960X (8 cores/16 threads) for my daily duties, and the only dev benchmarks I've seen so far (AnandTech's Chrome builds) were barely faster on Threadripper despite 2x the threads. Our builds regularly saturate all 16 threads, and I assume Chrome's do too, so I'm suddenly much less interested in being an early adopter.
17) Message boards : Number crunching : For the betterment of BOINC (Message 1880007)
Posted 24 Jul 2017 by Profile Shaggie76
Post:
I'm not sure if it's a BOINC problem or a SETI@home problem, but the Tuesday blackouts would be painless if high-throughput hosts were allowed to buffer more than 100 tasks. This topic has been flogged plenty and there have been some good ideas suggested -- it would be nice to pick one and get it done.

I don't know about anyone else, but I've had problems with CPU-sharing with some applications and have resorted to using the exclusive list to pause BOINC when needed; if the exclusive application list were split so that GPU tasks could continue and only CPU tasks paused in this situation, you might get more work done on some systems.
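
(For reference, cc_config.xml already has per-app switches along these lines -- quoting from memory, so check the BOINC docs; the program names below are just examples:)

    <cc_config>
      <options>
        <!-- suspend ALL BOINC work while this program runs -->
        <exclusive_app>photoshop.exe</exclusive_app>
        <!-- suspend only GPU work while this program runs -->
        <exclusive_gpu_app>game.exe</exclusive_gpu_app>
      </options>
    </cc_config>

What's missing is the inverse of <exclusive_gpu_app>: pause only the CPU tasks and let the GPU keep crunching.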

FWIW I'm not personally concerned about the credit system; I'll admit that, from the data I've analyzed comparing GPU throughput across different types of task, there seem to be some serious inconsistencies, but this doesn't seem to matter to the science getting done. Nevertheless, since a lot of people seem upset about it, it might be better for the project's public relations to find something that makes people happy.

I would volunteer coding effort, but frankly, after I farted around with some of the SETI code I was extremely discouraged by how my patches were received. I know the projects are separate, but I must admit a healthy amount of apprehension.
18) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1873223)
Posted 15 Jun 2017 by Profile Shaggie76
Post:
I'd like to stay a developer/experimenter/propeller-hat/tin-foil-hat escapee/a man, and let others make the political decisions. I release my code and you can do whatever you want with it.

This is totally fine (and appreciated!) -- I'm just a little vexed at people's enthusiasm for the glory of more internet points rather than for getting your version finished and certified for the stock set by checking their inconclusives and gathering diagnostics to make it conform.
19) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1873154)
Posted 15 Jun 2017 by Profile Shaggie76
Post:
Hey Shaggie, is there by chance any way to pull Linux CUDA results out of your dataset for a chart?

I'm guessing you mean Petri's special app and want to know just how much faster it is.

As I've said before, including the anonymous platform would defeat the purpose of this comparison; I deliberately filter for only the stock app running one job at a time so that you can make meaningful comparisons and get a sense of the relative performance and power consumption of each card.

The other problem with the anonymous platform is that it's not clear how many jobs are being run concurrently per card; the regular CUDA app only really performs if you double- or triple-job it, but the data I have to work with can't see the concurrency, so I can't tell whether a host is 'really slow' (because it's running concurrent tasks), really fast (because it's running Petri's app), or just normal (a Lunatics build). People running stock tend not to mess around with multiple jobs, so those who do are eliminated as outliers by the median window (plus there's a clue in the output from the OpenCL app that I can sometimes use to detect when they're doubling up, so I can reject them).

I'm also opposed to encouraging what I see as basically cheating -- if Petri's app isn't accurate enough for everybody to use, then the extra credit it awards those who use it comes at the extra validation cost borne by those of us running stock, who have to double- (and possibly triple-) check the work that it does.

When it's part of the stock app set I'll be happy to report on the relative performance of the OpenCL SoG vs CUDA apps (as I've done before).
20) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1872353)
Posted 11 Jun 2017 by Profile Shaggie76
Post:
It's been a while since I ran a scan, and there are enough 1080 Tis in circulation now to get a sense of how fast they are:

[chart: credit/hour results including the 1080 Ti]