Questions and Answers :
GPU applications :
Is there a way to automagically get average clock time for each Seti task?
Tom M Send message Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
When I am analyzing the assorted SETI tasks, I take the "end to end" seconds a task sat on a CPU/GPU and compute an average for that particular named task on that particular named PC. The SETI website stats appear to measure the time from download to upload (turnaround), which is confounded by the size of your backlog, so those numbers are always higher than the "wall clock time" I can see. So, is there a website that provides task averages by application name (e.g. cuda50, cuda22, opencl_ati_???, etc.) per computer for the time spent processing (basically the amount of time it spent on the CPU)?

Thanks, Tom

A proud member of the OFA (Old Farts Association).
BilBg Send message Joined: 27 May 07 Posts: 3720 Credit: 9,385,827 RAC: 0 |
I'm not sure what you are asking, but try this and see if it is what you want: http://wuprop.boinc-af.org/results/delai.py

- ALF - "Find out what you don't do well ..... then don't do it!" :)
Tom M Send message Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
That is an idea, but I am not being clear. Consider: http://setiathome.berkeley.edu/results.php?hostid=7354511&offset=40&show_names=0&state=0&appid= which should be a page for one of my computers showing "Run time" in column 6 from the left side of the page. I want an average of that number, grouped by the application in column 9. Currently I am guesstimating with a calculator. I would be happier if I could get those numbers automagically.

And I need it one PC at a time, or the results are useless for judging whether I have that PC set up for the best it can be (I have some really clunky ones).

Thanks for the reply.

Tom

A proud member of the OFA (Old Farts Association).
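The per-application averaging Tom describes can be scripted rather than done on a calculator. A minimal sketch, assuming the Run time and Application columns have already been copied out of the results page into (application, seconds) pairs; the sample data below is made up, not real SETI@home numbers:

```python
from collections import defaultdict

def average_runtime_by_app(tasks):
    """Average runtime (seconds) per application name.

    tasks: iterable of (application, runtime_seconds) pairs,
    e.g. copied from a host's results page.
    """
    totals = defaultdict(lambda: [0.0, 0])  # app -> [sum, count]
    for app, seconds in tasks:
        totals[app][0] += seconds
        totals[app][1] += 1
    return {app: s / n for app, (s, n) in totals.items()}

# Hypothetical sample rows (application, run time in seconds):
sample = [
    ("cuda50", 610.0), ("cuda50", 590.0),
    ("opencl_ati", 900.0), ("opencl_ati", 1100.0),
]
print(average_runtime_by_app(sample))
# {'cuda50': 600.0, 'opencl_ati': 1000.0}
```

The same grouping works per host by running it once per results page.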
BilBg Send message Joined: 27 May 07 Posts: 3720 Credit: 9,385,827 RAC: 0 |
You may instead look here: http://setiathome.berkeley.edu/host_app_versions.php?hostid=7354511

Average processing rate: XX GFLOPS

If N is the number of simultaneously running tasks of this type, then GFLOPS * N = total 'power' of this app on this system.

(Averaging the Run time column will also pull the 'wrong' tasks into the average, e.g. those that finish in 5 seconds. 'Average processing rate' (APR) only averages the 'good' tasks.)

- ALF - "Find out what you don't do well ..... then don't do it!" :)
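BilBg's rule of thumb is simple arithmetic; a tiny sketch, with the APR value and task count as placeholders:

```python
def total_app_power(apr_gflops, simultaneous_tasks):
    """Approximate total throughput of an app on a host:
    Average processing rate (APR) times the number of tasks
    of that type running at once."""
    return apr_gflops * simultaneous_tasks

# e.g. an app reporting an APR of 25 GFLOPS, running 2 tasks at a time:
print(total_app_power(25.0, 2))  # 50.0 GFLOPS total
```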
Tom M Send message Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
> (averaging the Runtime column will put in average also the 'wrong' tasks, e.g. those that finish in 5 seconds)

Sorry, I don't see how that answers my original question. I want an average "run time" per application per computer. If I were getting a lot of outliers, e.g. those that finish in 5 seconds, then I would understand that they skew the averages; I am not getting very many of those. The GFLOPS is a lovely number, but it isn't measuring the average run time, which is what I am trying to get at without doing it manually.

Thanks for the reply.

Tom

A proud member of the OFA (Old Farts Association).
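If the handful of near-instant results did start to matter, they could be dropped before averaging. A minimal sketch; the 30-second cutoff is an arbitrary guess, not a SETI@home convention:

```python
from statistics import mean

def trimmed_average(runtimes, min_seconds=30.0):
    """Average runtimes while skipping obviously-bad results,
    e.g. tasks that errored out after a few seconds.
    The min_seconds threshold is a guess; tune it per host."""
    good = [r for r in runtimes if r >= min_seconds]
    return mean(good) if good else 0.0

# Two normal tasks plus one 5-second 'wrong' task:
print(trimmed_average([600.0, 620.0, 5.0]))  # 610.0
```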
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
Are you assuming that all workunits are created roughly the same, and that there's never a variation in the workunits sent out such that some can have a runtime of 5 seconds?
Tom M Send message Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
Nope, I am not assuming a 5-second runtime is reasonable. As a first approximation of gauging how things are running, I wanted an average processing time for each application. I have been using Windows' trusty calculator, but if I want an average over 40+ tasks of an application, it would be a whole lot nicer if someone had come up with a website that does that.

I agree that once you have enough of a track record, looking at the megaflops for each application gives you an excellent idea of which ones are being processed "faster". I just wanted average elapsed time. And since nothing is based on it (like whatever we are "earning"), I don't really care if it is slightly skewed by outliers. If I were getting a lot of errors/outliers, that would be different; I'm not, so I don't really worry about it.

Thanks for talking about this with me.

Tom

A proud member of the OFA (Old Farts Association).
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
5-second runtime workunits are not outliers. They can be quite common, depending on the angle range of the dish while recording. They come and go in batches, as do workunits of various runtime lengths. This is why runtime is a bad measurement when there is so much variance.

[Edit] At the least, if you're going to record runtime lengths, also record the Angle Range, so that when you compare runtimes you're comparing them against similar Angle Ranges. Otherwise the runtimes are meaningless.
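OzzFan's suggestion of comparing runtimes only within similar Angle Ranges can also be sketched as a grouping step. This is illustrative only; the bucket width and the sample (angle range, seconds) pairs are made-up values:

```python
from collections import defaultdict
from statistics import mean

def mean_runtime_by_ar_bucket(tasks, bucket_width=0.1):
    """Group runtimes into angle-range buckets so only like
    workunits are averaged together.

    tasks: iterable of (angle_range, runtime_seconds) pairs.
    bucket_width: size of each angle-range bin (a guess).
    """
    buckets = defaultdict(list)
    for ar, seconds in tasks:
        key = round(ar // bucket_width * bucket_width, 3)
        buckets[key].append(seconds)
    return {b: mean(v) for b, v in buckets.items()}

# Two tasks with similar angle ranges land in the same bucket:
print(mean_runtime_by_ar_bucket([(0.42, 600.0), (0.44, 700.0)]))
# {0.4: 650.0}
```

Comparing per-bucket averages between hosts would then be an apples-to-apples comparison, which is the point OzzFan is making.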
Tom M Send message Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
The only thing I am using "run time" for is to get an early read on how fast my GPU is processing workunits. And like any self-respecting user, I prefer automated calculations to doing it by hand. I am looking for rough guidance that doesn't take 2-3 weeks to show up and stabilize like the "flops" number on the SETI@home website does. So I was asking. I don't need really precise results; I need moderately precise results, fast, in an automated way. Apparently I am not going to get them, though.

Tom
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
I thought it was obvious from my responses that you're not going to get what you're asking for, because no one has spent time developing a gauge as meaningless as runtime without Angle Range. I understand you're looking for more immediate stats to help you in your quest for optimization: you want to gauge which of your changes have the most effect, and you don't want to wait weeks for stabilization. (BTW - RAC is not flops counting. It used to be, but that all went out the window with something called CreditNew.)

The problem with the instant-gratification approach is that it misses the bigger picture of the performance problem. I pose this scenario to you: say you build a couple of systems and gauge their performance in the short term with runtime as your litmus test. You find that one particular build performs solidly better than the others, so you decide to build another. You take the same measurements on the new build, but the runtimes are off from your first build, so you think something is wrong with it. You spend countless hours trying to figure out what is wrong. After many hair-pulling moments and lots of posts asking questions on the forums, you see one repeated theme, and it all comes back to: did you measure Angle Range with your runtimes? If not, then your initial runtimes, and nearly any other runtime you use, will be meaningless.

Gotta take a step back from the instant-gratification measurements a bit. Take your time optimizing. Real science takes time and lots of observation. You'll thank me later.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.