Message boards : Number crunching : Flops, fpops, and all that jazz!
gizbar (Joined: 7 Jan 01, Posts: 586, Credit: 21,087,774, RAC: 0)
Hi everybody, I've been reading with interest the debate over VLAR killing, rescheduling, and cherry-picking across two threads now, and Richard Haselgrove said in Cherry-picking + Mass VLARkill usage: "Remind me - have you set up a proper flop-balanced working app_info, along the lines of MarkJ's app_info for AP503, AP505, MB603 and MB608? (Don't just blindly copy the figures in that thread - much water, and many CUDA v2.3 DLLs, have flowed under the bridge since July.) But if you do that, and get it well-matched to your own hardware, most times BOINC will feel the need for another task in its cache pretty much every time it finishes an old one. Coupled with an automatic rescheduler or script, you can achieve most of what you want: my current list, with ten tasks ready to report across three CUDA-enabled hosts, is pretty typical - and no abused buttons in sight!" My question is this: I want to update my app_info file to include this information. Is the information in that thread still valid, even though we've had new versions of BOINC with a different way of calculating GPU flops? And has the slight bug with the rescheduler app been resolved, i.e. the one where BOINC doesn't restart properly? I want to automate it, but I need to be sure that it can safely be left on its own to get on with it; I currently run it manually to ensure a good restart. All and any information gratefully received. regards, Gizbar. A proud GPU User Server Donor!
Sutaru Tsureku (Joined: 6 Apr 07, Posts: 7105, Credit: 147,663,825, RAC: 5)
I have these flops entries in the app_info.xml on both machines, but only for CPU MB & CUDA MB, no AP. I used the GFLOPS reported by BOINC v6.6.x. If you use the GFLOPS reported by BOINC v6.10.x, your GPU will appear to have more 'performance'. IIRC, you also have the GIGABYTE GTX260(-216) SOC; mine showed 117 GFLOPS (with v6.6.x) = 117,000,000,000 flops (Message 950396). Because of CUDA v2.3 you need to multiply your GPU's flops by 0.5 (Message 926518 in the thread you mentioned above). So I have 58500000000 in my app_info.xml. Also have a look at my profile for Fred's nice program (eFMer Priority), which can increase the priority of the CUDA tasks ('set high' and refresh rate '1' sec.). I run it with those settings on both machines and everything is fine. ____________ [Optimized project applications, to increase your PC performance (double RAC)!] [Overview of abbreviations often used in the forum, and their meanings.]
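Sutaru's arithmetic above can be sketched as follows. The 117 GFLOPS figure and the 0.5 CUDA v2.3 multiplier are taken from the post; the function name is made up for illustration and is not part of any BOINC tool.

```python
# Sketch of the flops-entry calculation described above.
# Values are from the post; flops_entry() is a hypothetical helper.

def flops_entry(est_gflops, multiplier=0.5):
    """Convert an est.GFLOPS figure (BOINC v6.6.x) into an app_info.xml
    <flops> value, applying the CUDA v2.3 correction multiplier."""
    return int(est_gflops * 1_000_000_000 * multiplier)

# GIGABYTE GTX260-216 SOC, reported at 117 GFLOPS under BOINC v6.6.x:
value = flops_entry(117)
print(f"<flops>{value}</flops>")  # -> <flops>58500000000</flops>
```

The printed line is the form the value takes inside the app version's entry in app_info.xml.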
Sutaru Tsureku (Joined: 6 Apr 07, Posts: 7105, Credit: 147,663,825, RAC: 5)
Ahh.. I remember.. I use nVidia driver 190.x; that version shows slightly higher GFLOPS than 191.x and 195.x. Also, the MB 6.08 CUDA V12 app shows a shader speed of 1512, although the shaders actually run at 1500. So to be accurate, you should run a benchmark with v6.6.x.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14649, Credit: 200,643,578, RAC: 874)
The basic principle is still sound. But here are some update notes, complete as far as I know.

1) Astropulse: the thread I linked contained updated details for Astropulse v5.05 / optimisation r168. These are still the most up-to-date available, so no change needed.

2) CUDA: if you are using the CUDA v2.3 DLLs (and if you're using optimised applications and nVidia driver v190.38 or later, there's no reason why you shouldn't), you'll find it is faster than this guideline suggests, much faster. A reasonable starting guess would be twice as fast, so the last line of point 8 should read: Multibeam 608 = Est.Gflops x 0.4

3) BOINC: if you are running BOINC v6.10.14 or later, your GPU speed will be reported as 'GFLOPS peak' instead of 'est. GFLOPS'. The difference is significant: the speed of the card is actually unchanged, but the reported figure has been multiplied by (exactly) 5.6, so you need to reverse that change by dividing by 5.6. The new line becomes: Multibeam 608 = Gflops peak x 0.072 (rounded up, but this isn't an exact science, so x 0.07 would do to get you started).

Addendum: Sutaru's suggestion of a 0.5 multiplier for BOINC below v6.10.14 (est.GFLOPS), or 0.09 for BOINC v6.10.14 (GFLOPS peak), is also a perfectly reasonable starting point. You may find that this works better when there is a high preponderance of VHAR in the mix (CUDA 2.3 works particularly well at those ARs), while 0.4 is closer for mid-AR.

The acid test: allow enough time for everything to settle down and a stable DCF to be established. Then watch the estimates shown in BOINC Manager for unstarted tasks as a routine, boring, nothing-special sort of task reaches 100% and exits. If the estimates don't change, nirvana: you've cracked it. Small changes don't matter, but if there's a big jump, it's head-scratching time.

Addendum 2: talking of benchmarks, in MarkJ's point 7 you need the value of <p_fpops>, the floating-point benchmark. If you have any doubt that your current benchmark is accurate (I've seen some dodgy ones when the benchmark runs automatically immediately after upgrading BOINC), I suggest you do a manual benchmark run at a quiet time before using the <p_fpops> figure.
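The arithmetic in Richard's point 3 can be sketched like this. The 5.6 reporting factor and the 0.4 starting multiplier are from the post; the function name is illustrative only, not part of BOINC.

```python
# Sketch of point 3 above: under BOINC v6.10.14+ the reported figure is
# 'GFLOPS peak' = est.GFLOPS x 5.6, so the Multibeam 608 multiplier becomes
# 0.4 / 5.6, roughly 0.072.  mb608_flops_from_peak() is a hypothetical helper.

def mb608_flops_from_peak(peak_gflops):
    """<flops> value for Multibeam 608 from a 'GFLOPS peak' figure."""
    est_gflops = peak_gflops / 5.6       # undo the v6.10.14+ reporting change
    return int(est_gflops * 0.4 * 1e9)   # CUDA v2.3 starting guess: x 0.4

combined = 0.4 / 5.6
print(round(combined, 3))  # ~0.071, i.e. the 'x 0.072' figure once rounded up
```

In other words, the two corrections collapse into a single multiplier on the reported 'GFLOPS peak' number.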
Sutaru Tsureku (Joined: 6 Apr 07, Posts: 7105, Credit: 147,663,825, RAC: 5)
Haa.. or.. take the GFLOPS reported by v6.10.x = X. Then X (x 1,000,000,000) / 5.58 x 0.5 should give the same flops entry as with v6.6.x. I calculated it with my values: v6.10.x shows 653 GFLOPS, v6.6.x shows 117 GFLOPS, so v6.10.x shows x 5.58 higher GFLOPS than v6.6.x. EDIT: or follow Richard's hint, posted in the meantime.. ;-)
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14649, Credit: 200,643,578, RAC: 874)
"V6.10.x show x 5.58 higher GFLOPS than V6.6.x." Only v6.10.14 onwards. And the factor is exactly 5.6, from the source code (changeset 19310).
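Sutaru's two reported figures can be used to check how close the observed ratio is to the exact 5.6 factor Richard quotes from the source code. The numbers below are from the posts above; this is just an illustration.

```python
# Observed GFLOPS for the same card under two BOINC versions (from the posts):
peak_gflops_v610 = 653   # BOINC v6.10.x 'GFLOPS peak'
est_gflops_v66 = 117     # BOINC v6.6.x 'est. GFLOPS'

ratio = peak_gflops_v610 / est_gflops_v66
print(round(ratio, 2))   # ~5.58, close to the exact 5.6 factor
```

The small discrepancy is consistent with Sutaru's note that different driver versions report slightly different GFLOPS.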
gizbar (Joined: 7 Jan 01, Posts: 586, Credit: 21,087,774, RAC: 0)
Sorry, I should have been more specific in my OP. Thanks for all the replies so far. I'm running BOINC 6.10.25 64-bit, Lunatics v0.2 with VLAR kill, Rescheduler 1.9 (run manually) and nVidia 191.07 64-bit. I run all types of work: MB, AP, CUDA. Gotta go to work now, so I will read this thread properly later and see where that leads me. regards, Gizbar.
Lint trap (Joined: 30 May 03, Posts: 871, Credit: 28,092,319, RAC: 0)
"And has the slight bug with the rescheduler app been resolved? ie. the one where Boinc doesn't restart properly?" I've been using ReScheduler 1.9 since it came out. Mostly I use it manually, but very occasionally in auto mode, and I don't recall ever having any problems with this version. I don't know if it makes any difference, but I'm still running BOINC 6.6.41. Martin
gizbar (Joined: 7 Jan 01, Posts: 586, Credit: 21,087,774, RAC: 0)
Update: it's the weekend, and now I have time to do some investigating. Except that, for some reason, BOINC has gone mental! What has happened is that BOINC has downloaded work and then thrown a wobbly: a) it has gone into high-priority mode and is running tasks with a deadline of 22/01/10, even though there are tasks on the list with an earlier deadline; b) it's pre-empting tasks again, by which I mean that it has started tasks, then paused them and swapped to new tasks. It did the same on CUDA, although that has seemingly resolved itself now. There are approximately 26 tasks waiting to run that have been started, 4 CPU tasks running and 1 CUDA task running. I don't want to start inserting flops and fpops before I'm happy that BOINC is behaving itself. I did run the benchmarks this morning, but then noticed that this 'high-priority' mode had started. Is it best just to leave BOINC to do its thing? Or can I try to update the flops and fpops? Advice, thoughts, and answers please. regards, Gizbar.
Sutaru Tsureku (Joined: 6 Apr 07, Posts: 7105, Credit: 147,663,825, RAC: 5)
Haa.. maybe you've found a new BUG in BOINC v6.10.25.. ;-) I would use the latest recommended version; I run v6.10.18 and it's fine. http://boinc.berkeley.edu/download_all.php You can start inserting the flops entries now. You will be surprised: the estimated/remaining times of all WUs will be different from before. BOINC needs to crunch a few CUDA WUs (it's the fastest) and after a short time you will see good estimated/remaining times.
gizbar (Joined: 7 Jan 01, Posts: 586, Credit: 21,087,774, RAC: 0)
I'm not sure it's the BOINC version. I just checked the other rig downstairs, and that has been doing the same thing, but to a lesser extent. It is a lot slower than this one, so it has a correspondingly smaller cache. It only got a few tasks pushed to high priority, but it was throwing a wobbly on CUDA units. I had to reschedule a couple of VLARs and reboot the system to clear it. regards, Gizbar.
kittyman (Joined: 9 Jul 00, Posts: 51468, Credit: 1,018,363,574, RAC: 1,004)
Hi everybody, The 'bug' depends on the stability of your rig, as best I can determine. I run it manually myself. It has not been a problem for some time now... I noticed the tick when I first started running resched... but I have not had any problems since, so I might try the auto setting again. "Freedom is just Chaos, with better lighting." Alan Dean Foster
Sutaru Tsureku (Joined: 6 Apr 07, Posts: 7105, Credit: 147,663,825, RAC: 5)
"Not sure it's the Boinc version. Just checked the other rig downstairs, and that has been doing the same thing, but to a lesser extent... Had to reschedule a couple of vlars, and reboot system to clear." It's up to you.. ;-) You could (un-)install to the current recommended version. Sure, that version could also be 'buggy' (if not, why would they publish a new DEV version..? ;-)), but I have had this version (v6.10.18) for a few weeks.. maybe months, and it's fine. You should not overload BOINC with too many WUs. I have only a 3-day WU cache on my PCs; that's ~2,000 WUs on my GPU cruncher (~700 MB). If I went higher, BOINC would go 'crazy' and also do what you described.
hiamps (Joined: 23 May 99, Posts: 4292, Credit: 72,971,319, RAC: 0)
"Not sure it's the Boinc version. Just checked the other rig downstairs, and that has been doing the same thing, but to a lesser extent... Had to reschedule a couple of vlars, and reboot system to clear." I had that happen with the current version; that was what made me try the .24 and .25, hoping it was fixed. It still happens once in a while, about as often as with the .18. Official Abuser of Boinc Buttons... And no good credit hound!
Lint trap (Joined: 30 May 03, Posts: 871, Credit: 28,092,319, RAC: 0)
"The 'bug' depends on the stability of your rig, as best I can determine." If a system can pass Intel's burn-in test it should be good. Warning: don't run the max test until you've passed the easier configs, and don't run the burn-in test at all if you are not sure of your CPU cooling. This program ran much warmer than Prime95 ever did on my OC'd CPU. Google "IntelBurnTest" (no quotes). Martin [edited...]
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.