How can dont_use_dcf be set to TRUE?

red-ray
Joined: 24 Jun 99
Posts: 308
Credit: 9,029,848
RAC: 0
United Kingdom
Message 1217444 - Posted: 12 Apr 2012, 23:21:37 UTC
Last modified: 12 Apr 2012, 23:50:01 UTC

Since the "Short estimated runtimes - don't panic" change on this system, the DCF has been even more erratic than usual. I just checked the latest source and noted it now reads:

void PROJECT::update_duration_correction_factor(ACTIVE_TASK* atp) {
    if (dont_use_dcf) return;

Setting dont_use_dcf to TRUE looks like just what is needed given the range of GPU speeds this system has. So far I have failed to find a config option to set dont_use_dcf to TRUE; is there one? After looking at struct SCHEDULER_REPLY I suspect not, and that it needs to be set by the server.

Assuming this is the case, will SETI@home be setting dont_use_dcf to TRUE, and if so, when?

The sooner the better as far as I am concerned: at the moment the DCF on this system is so high that it has stopped asking for new GPU tasks. It will process what it has cached in 3 days at most, yet it is configured to cache 20 days of work.

I guess I could edit and recompile BOINC but would prefer not to do so.
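
For reference, a minimal sketch of what that early return implies. This is hypothetical, not the actual BOINC source: the Project struct and estimated_runtime() are invented names for illustration. With dont_use_dcf set, the stored factor is never updated, so estimates come straight from the server's values.

```cpp
// Hypothetical sketch, not the real BOINC code: Project and
// estimated_runtime() are invented names for illustration only.
struct Project {
    bool dont_use_dcf = false;
    double duration_correction_factor = 1.0;
};

// The server's raw runtime estimate is scaled by the DCF unless the
// flag disables the client-side correction entirely.
double estimated_runtime(const Project& p, double server_estimate) {
    return p.dont_use_dcf ? server_estimate
                          : server_estimate * p.duration_correction_factor;
}
```

With a DCF of 5, a one-hour server estimate balloons to five hours; with the flag set it stays at one hour, which is the behaviour being asked for.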
ID: 1217444
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1217453 - Posted: 13 Apr 2012, 0:07:52 UTC - in response to Message 1217444.  

Since the "Short estimated runtimes - don't panic" change on this system, the DCF has been even more erratic than usual. I just checked the latest source and noted it now reads:

void PROJECT::update_duration_correction_factor(ACTIVE_TASK* atp) {
    if (dont_use_dcf) return;

Setting dont_use_dcf to TRUE looks like just what is needed given the range of GPU speeds this system has. So far I have failed to find a config option to set dont_use_dcf to TRUE; is there one? After looking at struct SCHEDULER_REPLY I suspect not, and that it needs to be set by the server.

Assuming this is the case, will SETI@home be setting dont_use_dcf to TRUE, and if so, when?

The sooner the better as far as I am concerned: at the moment the DCF on this system is so high that it has stopped asking for new GPU tasks. It will process what it has cached in 3 days at most, yet it is configured to cache 20 days of work.

I guess I could edit and recompile BOINC but would prefer not to do so.

Unfortunately, not using DCF has the problem that the runtime estimates will never be corrected from the project's (usually bad) initial estimates.


BOINC WIKI
ID: 1217453
red-ray
Joined: 24 Jun 99
Posts: 308
Credit: 9,029,848
RAC: 0
United Kingdom
Message 1217458 - Posted: 13 Apr 2012, 0:19:33 UTC - in response to Message 1217453.  
Last modified: 13 Apr 2012, 0:55:21 UTC

Unfortunately, not using DCF has the problem that the runtime estimates will never be corrected from the project's (usually bad) initial estimates.

That may well be true, but it is not what I asked about. If it is true, why does dont_use_dcf exist?

It would be good if the questions raised were answered.

At the moment I typically get the following.

13/04/2012 01:13:57 | SETI@home | [dcf] DCF: 0.895822->5.451914, raw_ratio 5.451914, adj_ratio 6.085934
13/04/2012 01:13:58 | SETI@home | Starting task 06ja12ab.20391.15609.15.10.64_1 using setiathome_enhanced version 610 (cuda_fermi) in slot 3
13/04/2012 01:13:58 | SETI@home | Starting task 06ja12ab.20391.15609.15.10.68_1 using setiathome_enhanced version 610 (cuda_fermi) in slot 8
13/04/2012 01:24:32 | SETI@home | Sending scheduler request: To report completed tasks.
13/04/2012 01:24:32 | SETI@home | Reporting 1 completed tasks, requesting new tasks for CPU

and no new GPU tasks. Then, as the DCF is much too high, I get:

13/04/2012 01:48:27 | SETI@home | [dcf] DCF: 4.766372->4.722570, raw_ratio 0.386168, adj_ratio 0.081019
13/04/2012 01:52:03 | SETI@home | [dcf] DCF: 4.722570->4.677359, raw_ratio 0.201464, adj_ratio 0.042660

so the DCF takes forever and a day to correct itself.
ID: 1217458
red-ray
Joined: 24 Jun 99
Posts: 308
Credit: 9,029,848
RAC: 0
United Kingdom
Message 1219088 - Posted: 16 Apr 2012, 10:26:21 UTC - in response to Message 1217453.  

Unfortunately, not using DCF has the problem that the runtime estimates will never be corrected from the project's (usually bad) initial estimates.

From http://setiathome.berkeley.edu/forum_thread.php?id=67685&nowrap=true#1217891 clearly others have different views.

Currently, as soon as one of my slow GPUs finishes, I typically get:

16-Apr-2012 09:55:32 [SETI@home] Computation for task 06ja12ab.624.15754.5.10.197_0 finished
16-Apr-2012 09:55:32 [SETI@home] [dcf] DCF: 0.858760->5.460562, raw_ratio 5.460562, adj_ratio 6.358659
16-Apr-2012 09:55:32 [SETI@home] Restarting task ap_23my11ae_B5_P0_00252_20120415_13800.wu_0 using astropulse_v6 version 601 in slot 6
16-Apr-2012 09:55:32 [SETI@home] Restarting task ap_23my11ae_B6_P1_00072_20120415_20338.wu_0 using astropulse_v6 version 601 in slot 3
16-Apr-2012 09:55:32 [SETI@home] Restarting task ap_23my11ae_B6_P1_00144_20120415_20338.wu_0 using astropulse_v6 version 601 in slot 5

This means I can't keep AP WUs in my cache at all. Is there any way I can set dont_use_dcf to TRUE?

My GPU speeds are:
[GPU CUDA Information] <- SIV64X - System Information Viewer V4.29 Beta-00 RED::ray

              nVidia CUDA V4.10    Peak  Compute   CPUs and    Memory   Compute   Level 2   Threads  Registers CUDA   Cache    Current    Total   Stack   FIFO   Heap
|Bus-Numb-Fun| Device Name       GFLOPS  Capability Threads     Clock     Clock     Cache  per Block per Block API    Config    Memory   Memory    Size   Size   Size

[3 - 00 - 0]  GeForce GTX 460    1025.5  V2.01      7 1,536   1.90GHz   1.53GHz     512KB     1,024    32,768  V3.20  None       411MB   1.00GB     1KB    1MB    8MB
[4 - 00 - 0]  GeForce GTX 460    1025.5  V2.01      7 1,536   1.90GHz   1.53GHz     512KB     1,024    32,768  V3.20  None       391MB   1.00GB     1KB    1MB    8MB
[5 - 00 - 0]  GeForce GT 430      268.8  V2.01      2 1,536    700MHz   1.40GHz     128KB     1,024    32,768  V3.20  None       359MB    512MB     1KB    1MB    8MB
[7 - 00 - 0]  GeForce GT 520      155.5  V2.01      1 1,536    667MHz   1.62GHz      64KB     1,024    32,768  V3.20  None       358MB    512MB     1KB    1MB    8MB

        NVIDIA CUDA Driver, Version 285.62  C:\Windows\system32\nvcuda.dll V5.02.3790.1830

ID: 1219088
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1219103 - Posted: 16 Apr 2012, 11:03:20 UTC - in response to Message 1219088.  

Unfortunately, not using DCF has the problem that the runtime estimates will never be corrected from the project's (usually bad) initial estimates.

From http://setiathome.berkeley.edu/forum_thread.php?id=67685&nowrap=true#1217891 clearly others have different views.

Currently, as soon as one of my slow GPUs finishes, I typically get:

16-Apr-2012 09:55:32 [SETI@home] Computation for task 06ja12ab.624.15754.5.10.197_0 finished
16-Apr-2012 09:55:32 [SETI@home] [dcf] DCF: 0.858760->5.460562, raw_ratio 5.460562, adj_ratio 6.358659
16-Apr-2012 09:55:32 [SETI@home] Restarting task ap_23my11ae_B5_P0_00252_20120415_13800.wu_0 using astropulse_v6 version 601 in slot 6
16-Apr-2012 09:55:32 [SETI@home] Restarting task ap_23my11ae_B6_P1_00072_20120415_20338.wu_0 using astropulse_v6 version 601 in slot 3
16-Apr-2012 09:55:32 [SETI@home] Restarting task ap_23my11ae_B6_P1_00144_20120415_20338.wu_0 using astropulse_v6 version 601 in slot 5

This means I can't keep AP WUs in my cache at all. Is there any way I can set dont_use_dcf to TRUE?

My GPU speeds are:
[GPU CUDA Information] <- SIV64X - System Information Viewer V4.29 Beta-00 RED::ray

              nVidia CUDA V4.10    Peak  Compute   CPUs and    Memory   Compute   Level 2   Threads  Registers CUDA   Cache    Current    Total   Stack   FIFO   Heap
|Bus-Numb-Fun| Device Name       GFLOPS  Capability Threads     Clock     Clock     Cache  per Block per Block API    Config    Memory   Memory    Size   Size   Size

[3 - 00 - 0]  GeForce GTX 460    1025.5  V2.01      7 1,536   1.90GHz   1.53GHz     512KB     1,024    32,768  V3.20  None       411MB   1.00GB     1KB    1MB    8MB
[4 - 00 - 0]  GeForce GTX 460    1025.5  V2.01      7 1,536   1.90GHz   1.53GHz     512KB     1,024    32,768  V3.20  None       391MB   1.00GB     1KB    1MB    8MB
[5 - 00 - 0]  GeForce GT 430      268.8  V2.01      2 1,536    700MHz   1.40GHz     128KB     1,024    32,768  V3.20  None       359MB    512MB     1KB    1MB    8MB
[7 - 00 - 0]  GeForce GT 520      155.5  V2.01      1 1,536    667MHz   1.62GHz      64KB     1,024    32,768  V3.20  None       358MB    512MB     1KB    1MB    8MB

        NVIDIA CUDA Driver, Version 285.62  C:\Windows\system32\nvcuda.dll V5.02.3790.1830


You really don't want to do that, as all tasks would then have a very bad initial estimate with no attempt at correction.


BOINC WIKI
ID: 1219103
red-ray
Joined: 24 Jun 99
Posts: 308
Credit: 9,029,848
RAC: 0
United Kingdom
Message 1219112 - Posted: 16 Apr 2012, 11:22:16 UTC - in response to Message 1219103.  
Last modified: 16 Apr 2012, 11:49:03 UTC

You really don't want to do that, as all tasks would then have a very bad initial estimate with no attempt at correction.

No. All the estimates are quite good until the DCF jumps from about one to five or more.

Clearly you have never run a system with multiple GPUs that have rather different speeds.

I feel you would do well to read http://boinc.berkeley.edu/svn/trunk/boinc/checkin_notes, in particular the section quoted below, especially the statement that the client-side mechanism is counterproductive.

David  22 Mar 2012
    - client/server: add optional <dont_use_dcf/> to schedule reply.
        If set, client won't use DCF for this project.
        Make this the default in server code;
        we now do runtime estimation entirely on the server side,
        and the client-side mechanism is counterproductive.

    sched/
        sched_types.cpp,h
    client/
        client_types.cpp,h
        scheduler_op.cpp,h
        work_fetch.cpp
        cs_scheduler.cpp
        cpu_sched.cpp
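
Going by that checkin note, the flag travels in the scheduler reply rather than in any client config file. A hypothetical minimal fragment is sketched below; the surrounding element name and layout are assumptions for illustration, not copied from a real reply, and only the <dont_use_dcf/> tag itself comes from the checkin note.

```xml
<!-- Hypothetical minimal scheduler reply fragment; only the
     dont_use_dcf tag is taken from the 22 Mar 2012 checkin note. -->
<scheduler_reply>
    <dont_use_dcf/>
</scheduler_reply>
```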

ID: 1219112
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1219332 - Posted: 16 Apr 2012, 22:01:28 UTC - in response to Message 1219112.  

You really don't want to do that, as all tasks would then have a very bad initial estimate with no attempt at correction.

No. All the estimates are quite good until the DCF jumps from about one to five or more.

Clearly you have never run a system with multiple GPUs that have rather different speeds.

I feel you would do well to read http://boinc.berkeley.edu/svn/trunk/boinc/checkin_notes, in particular the section quoted below, especially the statement that the client-side mechanism is counterproductive.

David  22 Mar 2012
    - client/server: add optional <dont_use_dcf/> to schedule reply.
        If set, client won't use DCF for this project.
        Make this the default in server code;
        we now do runtime estimation entirely on the server side,
        and the client-side mechanism is counterproductive.

    sched/
        sched_types.cpp,h
    client/
        client_types.cpp,h
        scheduler_op.cpp,h
        work_fetch.cpp
        cs_scheduler.cpp
        cpu_sched.cpp


I do know that the DCF values for different projects range from near zero to near infinity. Even for S@H they run from 0.1 to 10 depending on the machine. Not what I would consider a great estimate.


BOINC WIKI
ID: 1219332



 
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.