Message boards : Number crunching : SETI@home performance over time
W Send message Joined: 23 May 16 Posts: 3 Credit: 2,566,777 RAC: 0 |
Is there any way to track the performance of the SETI@home network over time? What I'm looking for is basically a graph of average SETI@home performance in GigaFLOPS over the lifetime of the project. I'm especially curious to see how this has varied over recent years. My memory may be deceiving me, but it seems the average performance has increased significantly in the past couple of years? |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Don't know of any. My other projects, Einstein and MilkyWay, do show the number of teraflops and gigaflops that each application has crunched on their server status pages. Seti doesn't show any metric about FLOPS, just the number of tasks processed. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Harri Liljeroos Send message Joined: 29 May 99 Posts: 4090 Credit: 85,281,665 RAC: 126 |
There is this: https://boincstats.com/en/stats/0/project/detail/overview Not directly what you are asking for, but the credit and host numbers give you some idea. |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
No, those pages actually do show the FLOPS figures for each project. Good find. SETI@home: average floating point operations per second 977,968.3 GigaFLOPS (977.968 TeraFLOPS), active hosts 155,233 (86.99%). Einstein@Home: average floating point operations per second 2,487,919.8 GigaFLOPS (2,487.920 TeraFLOPS), active hosts 51,210 (76.78%). Interesting that Einstein, with 1/3 the number of active hosts, generates 2.5x the FLOPS. Guess the apps used there are a lot more efficient than SETI's. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
Oh no, not you too, Keith... The OpenCL apps on Einstein are not very efficient. They run hotter and use a lot more of the GPU than Seti's. They're also stuck using a full CPU core for each work unit. Raistmer worked very hard to get the CPU usage down and improved the efficiency so that we could run more than 1 at a time without the systems crashing. I know for a fact that my GPUs run a minimum of 10C hotter there than they do here while doing a similar amount of work. The roughness of the application also means I cannot pause any work without the system locking up or crashing. Overclocks are much lower there as well; if I try to match my settings from here, the GPUs crash after the first series of work. You can't compare the 2 applications. The work units are different, and the refinement of the OpenCL is as well. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
The OpenCL apps on Einstein are not very efficient. They run hotter and use a lot more of the GPU than Seti.

? Running hotter and using more of the computing resource is usually an indication of increased efficiency of the application. The AVX application for the CPU resulted in me having to use an aftermarket cooler due to the increase in power & heat from the extra work done. The SoG application results in higher power usage & temperatures (for a given fan speed) compared to CUDA due to the extra work done. ie, the applications are more efficient. EDIT- I actually consider power usage to be a better indicator of work done than the GPU & memory controller load combined. There have been a few early application versions that had very high GPU load but low power usage- and they didn't actually crunch much work. You can also run a lot of WUs all at the same time, maxing out GPU utilisation, but the amount of work actually done per hour drops off- and the power usage reflects that by dropping off as well (yes, there are bits of code that are designed just to max out a particular processor's power usage & not actually do anything useful; but they're a different kettle of fish altogether). Grant Darwin NT |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
EDIT- I actually consider power usage to be a better indicator of work done than the GPU & memory controller load combined.

So, because an application has to use more cycles and transfer data back and forth between it and the CPU, thereby using more energy and running hotter, it's more efficient? I'm sorry, but that sounds wrong to me. If an application can do all of the processing on the GPU, without the constant need to transfer the data back and forth across the bus to the CPU, and do it with fewer cycles on the GPU, that sounds more efficient to me. I don't think Raistmer really wants to get involved in this discussion, but one can never tell. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
EDIT- I actually consider power usage to be a better indicator of work done than the GPU & memory controller load combined.

??? Not sure where you're going with this. I was talking about the efficiency of the GPU application. The more work it does, the more power the GPU will use. GPU utilisation & memory controller load are only so-so indicators- the power consumption of the video card is the best indicator. Generally, the higher the GPU utilisation and the higher the memory controller load, the more work the GPU is doing- this being reflected in the increased power used by the card and the improved crunching times. CPU-GPU bus utilisation with the SoG application is way, way higher than it was with the CUDA application because the CPU is needed to keep the GPU fed with data. Grant Darwin NT |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
this being reflected by the increased power used by the card and the improved crunching times.

I agree with improved crunching times. I judge the efficiency of the app based on the improvements made from one iteration to the next. Increasing throughput in the same amount of time is what should be the basis of efficiency. Being able to run more in parallel and cut down the average time vs 1 work unit per card is better efficiency. Power consumption is a necessary side effect. Running more than 1 work unit per card will definitely increase the power requirement, but that doesn't mean the card is running more efficiently. I could run 5 work units at a time and vastly increase the power requirements and peg the GPU utilization at 100%, but the times to complete would be horrible. Power usage shouldn't be used as a measure of efficiency, as by that reasoning I could say the Nvidia 900 series is more efficient than the 10x0 series because they require more power to crunch the same amount of work. See where I'm going with that? Lastly, I'm going to steal this from Gary. Someone I don't always agree with, but in this case he states what I was thinking as well.

the problems to be solved are different, with different data sets, different algorithms, different memory requirements and different degrees of parallelism that can be taken advantage of, etc

So we shouldn't be trying to compare the 2 OpenCL apps, because it's like comparing apples to oranges. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
Power usage shouldn't be used as a measure of efficiency, as by that reasoning I could say the Nvidia 900 series is more efficient than the 10x0 series because they require more power to crunch the same amount of work. See where I'm going with that?

Yep, and you're not getting what was being talked about. For a given card- the more work it does, the more power it will use. You yourself agreed that you can run heaps of WUs, but throughput will plummet, as does the power consumption. There have been times where the GPU utilisation has been high, but throughput low. But for every application I've used- the more WUs per hour it does, the more power the video card uses. For a given card. Between different cards, all you can do is compare the number of WUs done per hour, and the power used to provide that output. But for a given card- the application that results in the greatest power usage also produces the most work.

Lastly, I'm going to steal this from Gary. Someone I don't always agree with, but in this case he states what I was thinking as well.

True, but the fact remains that the more power a video card uses, the more work it is doing (as long as the programmer hasn't screwed up and made a power virus, of course). No, you can't directly compare applications that are processing different types of data in different ways. However, if one application results in higher power consumption on a given video card than another application, it's a pretty fair bet it's doing more work. ie, more efficient. Grant Darwin NT |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
However, if one application results in higher power consumption on a given video card than another application, it's a pretty fair bet it's doing more work. ie, more efficient.

Again, this is a false assumption. You assume higher power consumption = efficiency. This isn't the case. Higher power consumption = higher power consumption. It isn't a measure of efficiency. A highly inefficient app can use just as much or more power than an efficient one. |
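The distinction being argued here - power draw versus work done per unit of energy - can be put in numbers. A minimal sketch, where both apps and all figures are hypothetical, chosen only to illustrate the point:

```python
# Sketch: why power draw alone can't measure efficiency.
# All numbers below are hypothetical, for illustration only.

def efficiency_wu_per_kwh(wu_per_hour, watts):
    """Work units completed per kWh of energy consumed."""
    return wu_per_hour / (watts / 1000.0)

# Hypothetical app A: modest power draw, good throughput.
app_a = efficiency_wu_per_kwh(wu_per_hour=6.0, watts=150)  # 40.0 WU/kWh
# Hypothetical app B: draws more power but finishes fewer WUs.
app_b = efficiency_wu_per_kwh(wu_per_hour=5.0, watts=250)  # 20.0 WU/kWh

# App B runs "hotter" yet does less work per joule: higher power
# consumption does not by itself imply higher efficiency.
assert app_b < app_a
```

Both throughput and power are needed: either number alone can mislead in either direction.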
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
However, if one application results in higher power consumption on a given video card than another application, it's a pretty fair bet it's doing more work. ie, more efficient.

Hence my point about the work being done being important as well. Grant Darwin NT |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
And I want to bring the discussion back to the OP's question: how many GigaFLOPS has SETI done over time? I believe FLOPS is an industry-standard measurement of compute performance. So if Einstein has done 2,487 TeraFLOPS of compute with only 51K hosts and SETI has done 977 TeraFLOPS of compute with 155K hosts, doesn't that imply that Einstein is the more efficient project and has the more efficient apps? They have done more work with fewer hosts. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
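Dividing the BoincStats figures quoted earlier gives the per-host comparison being described here. Note that this reflects each project's hardware mix and credit scaling as much as application efficiency, which is the point debated in the rest of the thread:

```python
# Per-host throughput from the BoincStats figures quoted above.
seti_gflops, seti_hosts = 977_968.3, 155_233
einstein_gflops, einstein_hosts = 2_487_919.8, 51_210

seti_per_host = seti_gflops / seti_hosts              # ~6.3 GFLOPS per active host
einstein_per_host = einstein_gflops / einstein_hosts  # ~48.6 GFLOPS per active host

print(f"SETI:     {seti_per_host:.1f} GFLOPS/host")
print(f"Einstein: {einstein_per_host:.1f} GFLOPS/host")
print(f"Ratio:    {einstein_per_host / seti_per_host:.1f}x")
```

So the average Einstein host reports several times the throughput of the average SETI host, but nothing in these two numbers says how much of that comes from the apps versus the hosts or the credit scale.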
Harri Liljeroos Send message Joined: 29 May 99 Posts: 4090 Credit: 85,281,665 RAC: 126 |
No those pages actually do show the flops used for each project. Good find.

What I meant was that the FLOPS value has no history on that page, so it doesn't answer the question the OP asked. I don't know how the FLOPS value is derived from the tasks on Einstein and here, but it might have something to do with the credit given per crunching time, which is a lot higher on Einstein than here. |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
No those pages actually do show the flops used for each project. Good find.

No, FLOPS is an industry standard that has no definition other than floating-point operations per second. It has nothing to do with credit or crunching time. It is a measure of the amount of work done. Wikipedia entry for FLOPS Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Read a little further in the Wikipedia entry.

Distributed computing records
Distributed computing uses the Internet to link personal computers to achieve more FLOPS:
- As of October 2016, the Folding@home network has over 100 petaFLOPS of total computing power.[38][39] It was the first computing project of any kind to cross the 1, 2, 3, 4, and 5 native petaFLOPS milestones. This level of performance is primarily enabled by the cumulative effort of a vast array of powerful GPU and CPU units.[40]
- As of July 2014, the entire BOINC network averages about 5.6 petaFLOPS.[41]
- As of July 2014, SETI@Home, employing the BOINC software platform, averages 681 teraFLOPS.[42]
- As of July 2014, Einstein@Home, a project using the BOINC network, is crunching at 492 teraFLOPS.[43]
- As of July 2014, MilkyWay@Home, using the BOINC infrastructure, computes at 471 teraFLOPS.[44]
- As of January 2017, GIMPS, searching for Mersenne primes, is sustaining 300 teraFLOPS.[45]

So this entry for July 2014 gives a single-point baseline for the current teraflops figures that I copied from BoincStats project statistics. Can't figure out a proper trend line without a third datapoint. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
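Two data points do at least allow a crude annualized growth estimate for SETI (681 teraFLOPS in July 2014 versus the ~978 teraFLOPS BoincStats figure quoted earlier). The gap between the two readings is an assumption here - the snapshot date isn't stated - so treat the result as a ballpark only:

```python
# Rough annualized growth between the two SETI data points.
# Assumption: the BoincStats snapshot is about 3 years after the
# July 2014 Wikipedia figure; adjust `years` to the actual gap.
tf_2014 = 681.0    # teraFLOPS, July 2014 (Wikipedia)
tf_now = 977.97    # teraFLOPS, BoincStats snapshot
years = 3.0        # assumed gap between the two readings

growth = (tf_now / tf_2014) ** (1 / years) - 1
print(f"~{growth:.1%} per year")  # ~12.8% per year under these assumptions
```

With only two points there is no way to tell whether that growth was steady or lumpy, which is the "third datapoint" problem.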
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
No, FLOPS is an industry standard that has no other definition than floating point operations per second. It has nothing to do with credit or crunching time. It is a measure of the amount of work done.

True - FLOPS is an industry standard measurement. But false in context: BOINC doesn't follow industry standard measurement techniques. In fact, BOINC doesn't measure FLOPS at all, ever. Any value you see anywhere for the FLOPS performance of a BOINC project is reverse-engineered from the credit awarded - whether that's on a project web site, an independent statistics site like BOINCstats, or on BOINC's own front page. If you want an accurate estimate of SETI's FLOPS performance, listen to Eric Korpela giving an on-the-record talk to an industry-standard technical audience like NASA. |
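Richard's point in concrete terms: BOINC's credit unit (the "cobblestone") is defined so that a host sustaining 1 GFLOPS earns 200 credits per day, so stats sites can only invert that definition rather than measure FLOPS directly. A minimal sketch of the back-calculation, where the daily-credit figure is inferred from the quoted FLOPS number, not an official statistic:

```python
# Sketch of the credit -> FLOPS back-calculation described above.
# BOINC's cobblestone is defined so that a 1 GFLOPS host earns
# 200 credits per day; stats sites invert that definition.

CREDITS_PER_DAY_PER_GFLOPS = 200.0

def gflops_from_daily_credit(credit_per_day):
    """Implied average GFLOPS given a project's total credit/day."""
    return credit_per_day / CREDITS_PER_DAY_PER_GFLOPS

# Working backwards: the ~977,968 GFLOPS quoted earlier implies
# roughly 195.6 million credits awarded per day (inferred figure).
print(gflops_from_daily_credit(195_593_660))  # 977968.3
```

So the "GigaFLOPS" on the stats pages is only as trustworthy as the credit-award mechanism behind it.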
Harri Liljeroos Send message Joined: 29 May 99 Posts: 4090 Credit: 85,281,665 RAC: 126 |
No, FLOPS is an industry standard that has no other definition than floating point operations per second. It has nothing to do with credit or crunching time. It is a measure of the amount of work done.

True - FLOPS is an industry standard measurement.

That's how I remembered it. |
W Send message Joined: 23 May 16 Posts: 3 Credit: 2,566,777 RAC: 0 |
Read a little further in the Wikipedia entry.

Thanks! This and the boincstats page are definitely useful -- at the very least they do show a decent increase over time. Unless boincstats keeps old readouts in an XML file, though, it's probably not possible to recover the whole history. I mean, someone could maybe figure it out by applying the boincstats algorithms to old records from seti@home. But that would require seti@home to have kept all those records... not sure it'd be worth the effort anyway. And right, I wouldn't be too surprised if the estimate from boincstats were off by a little bit from the true FLOPS. But that's OK, I'm really just looking for an estimate. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
I'm really just looking for an estimate.

Given that the FLOPS are calculated back from the credit, and CreditNew is beyond screwed, it's going to be a very, very, very poor estimate. Grant Darwin NT |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.