SETI@home performance over time

Profile W

Joined: 23 May 16
Posts: 3
Credit: 2,566,777
RAC: 0
United States
Message 1892023 - Posted: 27 Sep 2017, 18:10:14 UTC
Last modified: 27 Sep 2017, 18:10:55 UTC

Is there any way to track the performance of the SETI@home network over time? What I'm looking for is basically a graph of average SETI@home performance in GigaFLOPS over the course of its existence. I'm especially curious to see how this has varied in recent years.

My memory may be deceiving me, but it seems the average performance has increased significantly over the past couple of years?
ID: 1892023
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1892026 - Posted: 27 Sep 2017, 18:50:09 UTC - in response to Message 1892023.  

I don't know of any. My other projects, Einstein and Milkyway, do show the number of teraflops and gigaflops that each application has crunched on their server status pages. SETI doesn't show any metric about FLOPS, just the number of tasks processed.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1892026
Harri Liljeroos
Joined: 29 May 99
Posts: 4087
Credit: 85,281,665
RAC: 126
Finland
Message 1892115 - Posted: 28 Sep 2017, 6:16:58 UTC

There is this: https://boincstats.com/en/stats/0/project/detail/overview

Not directly what you are asking for, but the credit and host numbers give you some idea.
ID: 1892115
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1892118 - Posted: 28 Sep 2017, 6:56:57 UTC - in response to Message 1892115.  

No, those pages actually do show the FLOPS for each project. Good find.
SETI@home
Average floating point operations per second	977,968.3 GigaFLOPS / 977.968 TeraFLOPS
Active hosts	155,233 (86.99%)

Einstein@Home
Average floating point operations per second	2,487,919.8 GigaFLOPS / 2,487.920 TeraFLOPS
Active hosts	51,210 (76.78%)


Interesting that Einstein, with a third the number of active hosts, generates 2.5x the FLOPS. I guess the apps used there are a lot more efficient than SETI's.
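
As a rough back-of-the-envelope check on that comparison, here is a minimal Python sketch using only the BOINCstats figures quoted above (the arithmetic is mine, not something the stats site reports directly):

# Per-host throughput implied by the BOINCstats figures quoted above.
projects = {
    "SETI@home":     (977_968.3,   155_233),  # (GigaFLOPS, active hosts)
    "Einstein@Home": (2_487_919.8, 51_210),
}

for name, (gflops, hosts) in projects.items():
    print(f"{name}: {gflops / hosts:.1f} GigaFLOPS per active host")

On those numbers it works out to roughly 6 GigaFLOPS per active SETI host versus roughly 49 per active Einstein host, which is what produces the headline 2.5x from a third of the hosts.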
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1892118
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1892122 - Posted: 28 Sep 2017, 7:34:26 UTC - in response to Message 1892118.  

Oh no, not you too, Keith...

The OpenCL apps on Einstein are not very efficient. They run hotter and use a lot more of the GPU than SETI. They are also stuck with a full CPU core for each work unit. Raistmer worked very hard to get the CPU usage down and improved the efficiency so that we could run more than one at a time without the systems crashing.

I know for a fact that my GPUs run at least 10C hotter there than here while doing a similar amount of work. The roughness of the application also means I cannot pause any work without the system locking up or crashing. The overclock is much lower there as well; if I try to match the settings I use here, the GPUs crash after the first series of work.

You can't compare the two applications. The work units are different, and so is the refinement of the OpenCL code.
ID: 1892122
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1892125 - Posted: 28 Sep 2017, 7:46:20 UTC - in response to Message 1892122.  
Last modified: 28 Sep 2017, 7:59:58 UTC

The OpenCL apps on Einstein are not very efficient. They run hotter and use a lot more of the GPU than SETI.

?
Running hotter and using more of the computing resource is usually an indication of increased efficiency of the application.
The AVX application for the CPU resulted in me having to use an aftermarket cooler due to the increase in power & heat from the extra work done. The SoG application results in higher power usage & temperatures (for a given fan speed) compared to CUDA due to the extra work done. ie, the applications are more efficient.


EDIT- I actually consider power usage to be a better indicator of work done than GPU & memory controller load combined. There have been a few early application versions that had very high GPU load, but low power usage- and they didn't actually crunch much work.
You can also run a lot of WUs all at the same time, maxing out GPU utilisation, but the amount of work actually done per hour drops off- and the power usage reflects that by dropping off as well (yes, there are bits of code that are designed just to max out a particular processor's power usage & not actually do anything useful; but they're a different kettle of fish altogether).
Grant
Darwin NT
ID: 1892125
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1892163 - Posted: 28 Sep 2017, 13:52:37 UTC - in response to Message 1892125.  

EDIT- I actually consider power usage to be a better indicator of work done than GPU & memory controller load combined.


So, because an application has to use more cycles and transfer data back and forth between it and the CPU, thereby using more energy and running hotter, it's more efficient?

I'm sorry, but that sounds wrong to me. If an application can do all of the processing on the GPU, without the constant need to transfer data back and forth across the bus to the CPU, and do it with fewer cycles on the GPU, that sounds more efficient to me.

I don't think Raistmer really wants to get involved in this discussion but one can never tell.
ID: 1892163
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1892304 - Posted: 29 Sep 2017, 4:24:04 UTC - in response to Message 1892163.  
Last modified: 29 Sep 2017, 4:24:33 UTC

EDIT- I actually consider power usage to be a better indicator of work done than GPU & memory controller load combined.

So, because an application has to use more cycles and transfer data back and forth between it and the CPU, thereby using more energy and running hotter, it's more efficient?

???
Not sure where you're going with this.
I was talking about the efficiency of the GPU application. The more work it does, the more power the GPU will use. GPU utilisation & memory controller load are only so-so indicators- the power consumption of the video card is the best indicator. Generally the higher the GPU utilisation and the higher the Memory controller load, then the more work the GPU is doing- this being reflected by the increased power used by the card and the improved crunching times.

CPU-GPU bus utilisation with the SoG application is way, way higher than it was with the CUDA application because the CPU is needed to keep the GPU fed with data.
Grant
Darwin NT
ID: 1892304
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1892313 - Posted: 29 Sep 2017, 5:33:53 UTC - in response to Message 1892304.  


I was talking about the efficiency of the GPU application. The more work it does, the more power the GPU will use.

this being reflected by the increased power used by the card and the improved crunching times.


I agree with improved crunching times. I judge the efficiency of the app based on the improvements made from one iteration to the next. Increasing throughput in the same amount of time is what should be the basis of efficiency. Being able to run more in parallel and cut down the average time vs 1 work unit per card is better efficiency. Power consumption is a necessary side effect.

Running more than 1 work unit per card will definitely increase the power requirement, but that doesn't mean the card is running more efficiently. I could run 5 work units at a time and vastly increase the power requirements and peg the GPU utilization at 100%, but the times to complete would be horrible.

Power usage shouldn't be used as a measure of efficiency, as I could say the Nvidia 900 series is more efficient than the 10x0 series because they require more power to crunch the same amount of work. See where I'm going with that?
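
To illustrate the distinction, a minimal Python sketch (the numbers below are purely hypothetical, invented for illustration, not measurements from any real card or application):

# Hypothetical figures for one card, made up purely to illustrate that higher
# power draw and pegged GPU utilisation do not by themselves mean more work
# per hour, or more work per unit of energy.
configs = [
    # (label, tasks completed per hour, average card power in watts)
    ("1 WU at a time",  20.0, 150.0),
    ("5 WUs at a time", 18.0, 220.0),
]

for label, tasks_per_hour, watts in configs:
    tasks_per_kwh = tasks_per_hour / (watts / 1000.0)  # energy efficiency
    print(f"{label}: {tasks_per_hour:.0f} tasks/hour, "
          f"{tasks_per_kwh:.0f} tasks/kWh at {watts:.0f} W")

Throughput per hour and work per kWh are the efficiency questions; how much power the card happens to draw is a separate matter.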

Lastly, I'm going to steal this from Gary, someone I don't always agree with, but in this case he states what I was thinking as well.

the problems to be solved are different, with different data sets, different algorithms, different memory requirements and different degrees of parallelism that can be taken advantage of, etc


So we shouldn't be trying to compare the two OpenCL apps because it's like comparing apples to oranges.
ID: 1892313
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1892315 - Posted: 29 Sep 2017, 5:47:54 UTC - in response to Message 1892313.  
Last modified: 29 Sep 2017, 5:49:31 UTC

Power usage shouldn't be used as a measure of efficiency, as I could say the Nvidia 900 series is more efficient than the 10x0 series because they require more power to crunch the same amount of work. See where I'm going with that?

Yep, and you're not getting what was being talked about.
For a given card- the more work it does the more power it will use. You yourself agreed that you can run heaps of WUs, but throughput will plummet, as does the power consumption.
There have been times where the GPU utilisation has been high, but throughput low.
But for every application I've used- the more WUs per hour it does, the more power the video card uses. For a given card.
Between different cards, all you can do is compare the number of WUs done per hour, and the power used to provide that output.
But for a given card- the application that results in the greatest power usage, also produces the most work.

Lastly, I'm going to steal this from Gary, someone I don't always agree with, but in this case he states what I was thinking as well.
the problems to be solved are different, with different data sets, different algorithms, different memory requirements and different degrees of parallelism that can be taken advantage of, etc

So we shouldn't be trying to compare the two OpenCL apps because it's like comparing apples to oranges.

True, but the fact remains that the more power a video card uses, the more work it is doing (as long as the programmer hasn't screwed up and made a power virus of course).
No, you can't directly compare applications that are processing different types of data in different ways. However, if one application results in higher power consumption on a given video card than another application, it's a pretty fair bet it's doing more work. ie, more efficient.
Grant
Darwin NT
ID: 1892315
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1892317 - Posted: 29 Sep 2017, 6:16:02 UTC - in response to Message 1892315.  

However, if one application results in higher power consumption on a given video card than another application, it's a pretty fair bet it's doing more work. ie, more efficient.


Again, this is a false assumption. You assume higher power consumption = efficiency. This isn't the case. Higher power consumption = higher power consumption. It isn't a measure of efficiency. A highly inefficient app can use just as much power or MORE than an efficient one.
ID: 1892317
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1892319 - Posted: 29 Sep 2017, 6:23:02 UTC - in response to Message 1892317.  

However, if one application results in higher power consumption on a given video card than another application, it's a pretty fair bet it's doing more work. ie, more efficient.


Again, this is a false assumption. You assume higher power consumption = efficiency. This isn't the case. Higher power consumption = higher power consumption. It isn't a measure of efficiency. A highly inefficient app can use just as much power or MORE than an efficient one.

Hence my point about work being done being important as well.
Grant
Darwin NT
ID: 1892319
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1892320 - Posted: 29 Sep 2017, 6:57:31 UTC

And I want to bring the discussion back to the OP's question: how many GigaFLOPS has SETI done over time? I believe FLOPS is an industry-standard measurement of compute performance. So if Einstein has done 2,487 TeraFLOPS of compute with only 51K hosts and SETI has done 977 TeraFLOPS of compute with 155K hosts, doesn't that imply that Einstein is the more efficient project and has the more efficient apps? They have done more work with fewer hosts.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1892320
Harri Liljeroos
Joined: 29 May 99
Posts: 4087
Credit: 85,281,665
RAC: 126
Finland
Message 1892322 - Posted: 29 Sep 2017, 7:13:29 UTC - in response to Message 1892118.  
Last modified: 29 Sep 2017, 7:14:27 UTC

No, those pages actually do show the FLOPS for each project. Good find.
SETI@home
Average floating point operations per second	977,968.3 GigaFLOPS / 977.968 TeraFLOPS
Active hosts	155,233 (86.99%)

Einstein@Home
Average floating point operations per second	2,487,919.8 GigaFLOPS / 2,487.920 TeraFLOPS
Active hosts	51,210 (76.78%)


Interesting that Einstein, with a third the number of active hosts, generates 2.5x the FLOPS. I guess the apps used there are a lot more efficient than SETI's.

What I meant was that the FLOPS value does not have a history on that page, so it doesn't answer the question the OP asked.

I don't know how the FLOPS value is derived from the tasks on Einstein and here, but it might have something to do with the credit granted per unit of crunching time, which is a lot higher on Einstein than here.
ID: 1892322
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1892328 - Posted: 29 Sep 2017, 8:05:13 UTC - in response to Message 1892322.  

No, those pages actually do show the FLOPS for each project. Good find.
SETI@home
Average floating point operations per second	977,968.3 GigaFLOPS / 977.968 TeraFLOPS
Active hosts	155,233 (86.99%)

Einstein@Home
Average floating point operations per second	2,487,919.8 GigaFLOPS / 2,487.920 TeraFLOPS
Active hosts	51,210 (76.78%)


Interesting that Einstein, with a third the number of active hosts, generates 2.5x the FLOPS. I guess the apps used there are a lot more efficient than SETI's.

What I meant was that the FLOPS value does not have a history on that page, so it doesn't answer the question the OP asked.

I don't know how the FLOPS value is derived from the tasks on Einstein and here, but it might have something to do with the credit granted per unit of crunching time, which is a lot higher on Einstein than here.

No, FLOPS is an industry standard that has no other definition than floating-point operations per second. It has nothing to do with credit or crunching time. It is a measure of the amount of work done.
Wikipedia entry for FLOPS
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1892328
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1892329 - Posted: 29 Sep 2017, 8:14:38 UTC

Read a little further in the Wikipedia entry.
Distributed computing records
Distributed computing uses the Internet to link personal computers to achieve more FLOPS:

As of October 2016, the Folding@home network has over 100 petaFLOPS of total computing power.[38][39] It was the first computing project of any kind to cross the 1, 2, 3, 4, and 5 native petaFLOPS milestones. This level of performance is primarily enabled by the cumulative effort of a vast array of powerful GPU and CPU units.[40]
As of July 2014, the entire BOINC network averages about 5.6 petaFLOPS.[41]
As of July 2014, SETI@Home, employing the BOINC software platform, averages 681 teraFLOPS.[42]
As of July 2014, Einstein@Home, a project using the BOINC network, is crunching at 492 teraFLOPS.[43]
As of July 2014, MilkyWay@Home, using the BOINC infrastructure, computes at 471 teraFLOPS.[44]
As of January 2017, GIMPS, is searching for Mersenne primes and sustaining 300 teraFLOPS.[45]


So this entry for July 2014 gives a single point baseline for the current teraflops entries that I copied from BoincStats project statistics. Can't figure out a proper trend line without a third datapoint.
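
For what it's worth, a minimal Python sketch of the growth implied by just those two figures -- the July 2014 Wikipedia value and the BOINCstats value quoted earlier in the thread. With only two data points this is an interpolation, not a proper trend:

from datetime import date

# Two data points: Wikipedia (July 2014) and BOINCstats (late September 2017).
t0, tflops_2014 = date(2014, 7, 1), 681.0
t1, tflops_2017 = date(2017, 9, 28), 978.0

years = (t1 - t0).days / 365.25
growth = tflops_2017 / tflops_2014
annual = growth ** (1 / years) - 1  # compound annual growth rate

print(f"{growth:.2f}x over {years:.1f} years, roughly {annual:.0%} per year")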
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1892329
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1892332 - Posted: 29 Sep 2017, 9:12:52 UTC - in response to Message 1892328.  

No, FLOPS is an industry standard that has no other definition than floating point operation per second. It has nothing to do with credit or crunching time. It is a measure of the amount of work done.
Wikipedia entry for FLOPS
True - FLOPS is an industry standard measurement.

But false in context: BOINC doesn't follow industry standard measurement techniques. In fact, BOINC doesn't measure FLOPS at all, ever.

Any value you see anywhere for the FLOPS performance of a BOINC project is reverse-engineered from the credit awarded - whether that's on a project web site, an independent statistics site like BOINCstats, or on BOINC's own front page.
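
For reference, a minimal sketch (Python) of that reverse-engineering, assuming the standard BOINC Cobblestone definition of one credit as 1/200 of a day on a 1 GFLOPS machine; the credit-per-day figure used below is purely illustrative:

# Turn a project's total recent average credit (RAC, credits per day) into the
# "GigaFLOPS" figure stats sites display, assuming 1 credit = 1/200 GFLOPS-day.
CREDITS_PER_GFLOPS_DAY = 200.0

def rac_to_gflops(total_rac_per_day: float) -> float:
    return total_rac_per_day / CREDITS_PER_GFLOPS_DAY

# Illustrative only: ~195.6 million credits/day comes out as roughly
# 978,000 GigaFLOPS, i.e. ~978 TeraFLOPS.
print(f"{rac_to_gflops(195_600_000):,.0f} GigaFLOPS")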

If you want an accurate estimate of SETI's FLOPS performance, listen to Eric Korpela giving an on-the-record talk to an industry standard technical audience like NASA.
ID: 1892332
Harri Liljeroos
Joined: 29 May 99
Posts: 4087
Credit: 85,281,665
RAC: 126
Finland
Message 1892344 - Posted: 29 Sep 2017, 11:34:46 UTC - in response to Message 1892332.  

No, FLOPS is an industry standard that has no other definition than floating-point operations per second. It has nothing to do with credit or crunching time. It is a measure of the amount of work done.
Wikipedia entry for FLOPS
True - FLOPS is an industry standard measurement.

But false in context: BOINC doesn't follow industry standard measurement techniques. In fact, BOINC doesn't measure FLOPS at all, ever.

Any value you see anywhere for the FLOPS performance of a BOINC project is reverse-engineered from the credit awarded - whether that's on a project web site, an independent statistics site like BOINCstats, or on BOINC's own front page.

If you want an accurate estimate of SETI's FLOPS performance, listen to Eric Korpela giving an on-the-record talk to an industry standard technical audience like NASA.

That's how I remembered it.
ID: 1892344
Profile W

Joined: 23 May 16
Posts: 3
Credit: 2,566,777
RAC: 0
United States
Message 1892496 - Posted: 29 Sep 2017, 21:38:51 UTC - in response to Message 1892329.  

Read a little further in the Wikipedia entry.
Distributed computing records
Distributed computing uses the Internet to link personal computers to achieve more FLOPS:

As of October 2016, the Folding@home network has over 100 petaFLOPS of total computing power.[38][39] It was the first computing project of any kind to cross the 1, 2, 3, 4, and 5 native petaFLOPS milestones. This level of performance is primarily enabled by the cumulative effort of a vast array of powerful GPU and CPU units.[40]
As of July 2014, the entire BOINC network averages about 5.6 petaFLOPS.[41]
As of July 2014, SETI@Home, employing the BOINC software platform, averages 681 teraFLOPS.[42]
As of July 2014, Einstein@Home, a project using the BOINC network, is crunching at 492 teraFLOPS.[43]
As of July 2014, MilkyWay@Home, using the BOINC infrastructure, computes at 471 teraFLOPS.[44]
As of January 2017, GIMPS, is searching for Mersenne primes and sustaining 300 teraFLOPS.[45]


So this entry for July 2014 gives a single point baseline for the current teraflops entries that I copied from BoincStats project statistics. Can't figure out a proper trend line without a third datapoint.


Thanks! This and the boincstats page are definitely useful -- at the very least they do show a decent increase over time. Unless boincstats keeps old readouts in an XML file, though, it's probably not possible to find the whole history. I mean, someone could maybe figure it out by applying the boincstats algorithms to old records from seti@home. But that would require seti@home to have kept all those records... not sure it'd be worth the effort anyway.

And right, I wouldn't be too surprised if the estimate from boincstats were off by a little bit from the true FLOPS. But that's ok, I'm really just looking for an estimate.
ID: 1892496
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1892520 - Posted: 29 Sep 2017, 23:20:52 UTC - in response to Message 1892496.  

I'm really just looking for an estimate.

Given that the FLOPS are calculated back from the credit, and CreditNew is beyond screwed, it's going to be a very, very, very poor estimate.
Grant
Darwin NT
ID: 1892520