Questions and Answers : Getting started
How can dont_use_dcf be set to TRUE?
Author | Message |
---|---|
red-ray Joined: 24 Jun 99 Posts: 308 Credit: 9,029,848 RAC: 0 |
Since the "Short estimated runtimes - don't panic" change, the DCF on this system has been even more erratic than usual. I just checked the latest source and noted it is now:

void PROJECT::update_duration_correction_factor(ACTIVE_TASK* atp) {

Setting dont_use_dcf to TRUE looks like just what is needed, given the range of GPU speeds this system has. So far I have failed to find a config option to set dont_use_dcf to TRUE; is there one? After looking at struct SCHEDULER_REPLY I suspect not, and that it needs to be set by the server. Assuming this is the case, will SETI@home be setting dont_use_dcf to TRUE, and if so, when? The sooner the better as far as I am concerned: at the moment the DCF on this system is so high that it has stopped asking for new GPU tasks. It will process what it has cached in 3 days at most, yet it is configured to cache 20 days of work. I guess I could edit and recompile BOINC, but I would prefer not to. |
John McLeod VII Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
> Since the Short estimated runtimes - don't panic change on this system the DCF has been even more erratic than usual. I just checked the latest source and noted it is now: ...

Unfortunately, not using DCF has the problem that the runtimes will never get corrected from the (usually bad) initial estimates by the project.

BOINC WIKI |
red-ray Joined: 24 Jun 99 Posts: 308 Credit: 9,029,848 RAC: 0 |
> Unfortunately, not using DCF has the problem that the runtimes will never get corrected from the (usually bad) initial estimates by the project.

That may well be true, but it is not what I asked about. If it is true, why does dont_use_dcf exist? It would be good if the questions raised were answered. At the moment I typically get the following:

13/04/2012 01:13:57 | SETI@home | [dcf] DCF: 0.895822->5.451914, raw_ratio 5.451914, adj_ratio 6.085934
13/04/2012 01:13:58 | SETI@home | Starting task 06ja12ab.20391.15609.15.10.64_1 using setiathome_enhanced version 610 (cuda_fermi) in slot 3
13/04/2012 01:13:58 | SETI@home | Starting task 06ja12ab.20391.15609.15.10.68_1 using setiathome_enhanced version 610 (cuda_fermi) in slot 8
13/04/2012 01:24:32 | SETI@home | Sending scheduler request: To report completed tasks.
13/04/2012 01:24:32 | SETI@home | Reporting 1 completed tasks, requesting new tasks for CPU and no new GPU tasks.

Then, as the DCF is much too high, I get:

13/04/2012 01:48:27 | SETI@home | [dcf] DCF: 4.766372->4.722570, raw_ratio 0.386168, adj_ratio 0.081019
13/04/2012 01:52:03 | SETI@home | [dcf] DCF: 4.722570->4.677359, raw_ratio 0.201464, adj_ratio 0.042660

so the DCF takes forever and a day to correct itself. |
red-ray Joined: 24 Jun 99 Posts: 308 Credit: 9,029,848 RAC: 0 |
> Unfortunately, not using DCF has the problem that the runtimes will never get corrected from the (usually bad) initial estimates by the project.

From http://setiathome.berkeley.edu/forum_thread.php?id=67685&nowrap=true#1217891 clearly others have different views. Currently, as soon as one of my slow GPUs finishes, I typically get:

16-Apr-2012 09:55:32 [SETI@home] Computation for task 06ja12ab.624.15754.5.10.197_0 finished

This means I can't keep AP WUs in my cache at all. Is there any way I can set dont_use_dcf to TRUE? My GPU speeds are as follows (GPU CUDA information from SIV64X - System Information Viewer V4.29 Beta-00, nVidia CUDA V4.10):

Bus-Numb-Fun | Device Name | Peak GFLOPS | Compute Capability | Compute CPUs | Threads | Memory Clock | Compute Clock | Level 2 Cache | Threads per Block | Registers per Block | CUDA API | Cache Config | Current Memory | Total Memory | Stack Size | FIFO Size | Heap Size
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
[3 - 00 - 0] | GeForce GTX 460 | 1025.5 | V2.01 | 7 | 1,536 | 1.90GHz | 1.53GHz | 512KB | 1,024 | 32,768 | V3.20 | None | 411MB | 1.00GB | 1KB | 1MB | 8MB
[4 - 00 - 0] | GeForce GTX 460 | 1025.5 | V2.01 | 7 | 1,536 | 1.90GHz | 1.53GHz | 512KB | 1,024 | 32,768 | V3.20 | None | 391MB | 1.00GB | 1KB | 1MB | 8MB
[5 - 00 - 0] | GeForce GT 430 | 268.8 | V2.01 | 2 | 1,536 | 700MHz | 1.40GHz | 128KB | 1,024 | 32,768 | V3.20 | None | 359MB | 512MB | 1KB | 1MB | 8MB
[7 - 00 - 0] | GeForce GT 520 | 155.5 | V2.01 | 1 | 1,536 | 667MHz | 1.62GHz | 64KB | 1,024 | 32,768 | V3.20 | None | 358MB | 512MB | 1KB | 1MB | 8MB

NVIDIA CUDA Driver, Version 285.62, C:\Windows\system32\nvcuda.dll V5.02.3790.1830 |
John McLeod VII Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
> Unfortunately, not using DCF has the problem that the runtimes will never get corrected from the (usually bad) initial estimates by the project.

You really don't want to do that, as all tasks would then have a very bad initial estimate with no attempt at correction.

BOINC WIKI |
red-ray Joined: 24 Jun 99 Posts: 308 Credit: 9,029,848 RAC: 0 |
> You really don't want to do that as all tasks then have a very bad initial estimate with no attempt at correction.

No. All the estimates are quite good 'till the DCF jumps from about one to five or more. Clearly you have never run a system with multiple GPUs that have rather different speeds. I feel you would do well to read http://boinc.berkeley.edu/svn/trunk/boinc/checkin_notes, in particular the section below, especially the note that the client-side mechanism is counterproductive:

David 22 Mar 2012
- client/server: add optional <dont_use_dcf/> to schedule reply. If set, client won't use DCF for this project. Make this the default in server code; we now do runtime estimation entirely on the server side, and the client-side mechanism is counterproductive.

sched/
    sched_types.cpp,h
client/
    client_types.cpp,h
    scheduler_op.cpp,h
    work_fetch.cpp
    cs_scheduler.cpp
    cpu_sched.cpp |
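The checkin note above suggests the flag travels in the scheduler reply as an empty XML tag. A reply carrying it might then look roughly like this; the fragment is illustrative, with all surrounding elements elided:

```xml
<scheduler_reply>
    <!-- other reply elements elided -->
    <dont_use_dcf/>
</scheduler_reply>
```

Since the note says this is made the default in the server code, a project would get the behaviour simply by upgrading its server software, which is presumably what the original question about SETI@home's timeline comes down to.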
John McLeod VII Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
> You really don't want to do that as all tasks then have a very bad initial estimate with no attempt at correction.

I do know that the DCF values for different projects range from near zero to near infinity. Even for S@H they run from 0.1 to 10, depending on the machine. Not what I would consider a great estimate.

BOINC WIKI |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.