Message boards : Number crunching : How do I get AP and/or MB WU for CPU?
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
I have ...
OzzFan | Joined: 9 Apr 02 | Posts: 15691 | Credit: 84,761,841 | RAC: 28
The system isn't very intelligent as of yet. Here's a simplistic breakdown: if a user supports CUDA (enabled in their preferences, with the appropriate driver and CUDA hardware), then all MultiBeam work will be sent as SETI v6.08 to be crunched on the GPU. In that case, the only thing that can run on the CPU is a non-CUDA application, meaning that standard MultiBeam v6.03 cannot run, so AstroPulse or another project must be run on the CPU. The problem lies in the type of work (i.e. MultiBeam or AstroPulse), not the version of the application.

The solution would require SETI@Home to treat CUDA as a third application type alongside MultiBeam and AstroPulse. Each application would only be able to validate against its own kind (CUDA with CUDA, AP with AP, and MB with MB). The problem with this solution is that the CUDA app overclaims credit because BOINC/the OS can't count GPU time accurately, and credit is a derivation of FPOPs (which are estimated from run time), so all CUDA apps would be receiving far more credit than they should. At least with the current setup, CUDA validating with CPU work decreases the odds of CUDA validating with CUDA, thus correcting the claimed/granted credit problem.

If you do find a solution, I'm sure David Anderson would love to implement the idea in the default BOINC code so that everyone can benefit from the much-asked-for "feature". At present, there is no easy way to make MultiBeam workunits run on the CPU and/or the GPU while CUDA is in progress. To keep your other cores busy, you'll have to run AstroPulse, run another project, or come up with a very handy idea that every CUDA user would be thankful for.
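To make the credit point concrete, here's a rough sketch - not the project's actual credit code, and the Cobblestone constant plus all task numbers below are invented for the example. Benchmark-based claims multiply a benchmark by measured run time, so any error in the measured time feeds straight into the claim:

def claimed_credit(benchmark_gflops, measured_seconds):
    # Nominal Cobblestone scale: roughly 200 credits per day of work on a
    # 1 GFLOPS reference host (illustrative constant, not the exact formula).
    return 200.0 * benchmark_gflops * measured_seconds / 86400.0

true_seconds = 1200.0                              # the task really took 20 minutes
accurate = claimed_credit(2.0, true_seconds)       # ~5.6 credits on a 2 GFLOPS host
inflated = claimed_credit(2.0, 3 * true_seconds)   # GPU time misread 3x too high
ratio = inflated / accurate                        # -> 3.0: the claim scales 1:1 with the timing error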
Joined: 4 May 08 | Posts: 417 | Credit: 6,440,287 | RAC: 0
If you want some AP, do you have them allowed in your Resource Share and Graphics Preferences? Note: if you change your preferences you must: 1) save the changes, and 2) click Update in BOINC Manager / Projects to activate the changes (or wait until the next time your client contacts the servers).
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14690 | Credit: 200,643,578 | RAC: 874
Ozz, I don't think that's quite right. I think you're describing the v6.4.5 / v6.4.7 class of BOINC clients, which did indeed have that restriction: but Steven is running v6.6.20.

v6.6.20 has been the 'recommended' version (which new users will receive if they don't go searching for an alternative) for almost three weeks. Whilst it still has some rough edges, and development work is continuing, one feature which it certainly does have is the ability to fetch and run MB tasks (setiathome_enhanced) for both the CPU and the GPU.

Steven, I'm not quite clear from your post whether you have 'not much' work for the CPU, or 'no work at all'. If it's 'not much', then I suggest you monitor it for 24 hours, and post again: the servers seem a bit sluggish and unwilling to send work before today's maintenance. If you're getting no work at all, check your preferences and project options, both here on the website and in BOINC Manager. Also, have a look at the new 'properties' page in BOINC Manager (command button, left side of 'Projects' tab). Look in particular at any lines relating to 'backoff' (you should be able to clear those with a manual project update), or negative work fetch priority (the equivalent of the old 'Long Term Debt').

As for converting GPU work for CPU use - yes, that's perfectly possible, and people are working on experimental scripts to do exactly that - see the discussion threads about VLAR. But it's tricky, and you can easily lose whatever little work you have already! Since you say you have scripting skills, you might like to join the development push to get a safe, reliable script for all to use?

The basic premise is simple: you only have to modify one file. But it's a biggie - client_state.xml - and you have to safely shut down the BOINC core client (whether it's running as a service or under a user account) while you do it. Each WU you want to 'flip' has entries for both the <workunit> and the <result>. In both sections (matched by the WU name), you have to change the <version_num> from 608 to 603: and additionally, in the <result> section, you have to remove the entire <plan_class>cuda</plan_class> line. Oh, and do make sure you've got the v6.03 CPU application loaded and ready to run!
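For illustration, the edit for one task looks roughly like this - field layout abridged, the WU name is just an example borrowed from later in this thread, and real <result> blocks carry many more fields:

Before:

    <result>
        <name>01fe09ab.888.2526.11.8.116_1</name>
        <version_num>608</version_num>
        <plan_class>cuda</plan_class>
        ...
    </result>

After:

    <result>
        <name>01fe09ab.888.2526.11.8.116_1</name>
        <version_num>603</version_num>
        ...
    </result>

The matching <workunit> block gets the same 608 -> 603 change, but has no <plan_class> line to remove.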
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
> The system isn't very intelligent as of yet. Here's a simplistic breakdown: ...

This work unit is one of my pending ones that I processed on my CUDA host, but my wingman's host is not CUDA-capable. So unless this situation is an error on the part of the scheduler when it sent this work unit to these two incompatible hosts, work units can be validated on different types of hosts, and thus with different types of applications.

I think the solution is that the scheduler needs to be smart enough to give some AP work units to hosts that have multiple CPUs plus a GPU and are allowed to do work on the CPUs. See this post where I describe the situation with my two hosts: the faster, CUDA-capable host gets only MB (CUDA) WU, while the slower, non-CUDA host gets only AP WU. Also, if I were to set options to disallow use of the GPU and disallow AstroPulse, I would be able to process MultiBeam on the CPU. So why can't I get some non-CUDA MB WU to keep my CPUs busy without disallowing use of the GPU?

> If you want some AP, do you have them allowed in your Resource Share and Graphics Preferences?

Yes, I have set "Yes" for all three types of applications. (See this post where I describe the options that I have in place.)
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14690 | Credit: 200,643,578 | RAC: 874
Of course it is possible for a setiathome_enhanced task processed by a CPU application to validate against a setiathome_enhanced task processed by a CUDA application on a different computer. At Beta, it's even possible to do both halves on the same host - once with the CPU and once with CUDA!

As to why your machines aren't fetching CPU work, I'm not sure. We might need to have a look at the host and project sections of your client_state.xml.

I did start having a look at your tasks in progress, to see if I could find any AP, but gave up after 800 - all downloaded in the last four days. 800? That's at least 10 days' work on an 8800GT - my 9800GTs do about 75 per day. And most of the tasks seem to have been downloaded between 07:00 UTC and 08:00 UTC - isn't that when the daily quota is reset? Are you getting 'reached daily quota' messages in your log?

With a 10-day cache, and even one 7-day 'shorty' in the list, you will be running in 'High Priority' mode - and BOINC won't fetch any new work from a project in deadline danger. I <think> CUDA deadline pressure would inhibit CPU fetch as well, which isn't strictly logical - that may be something else we need to ask them to look at in the debugging cycle for v6.6.20.

In the meantime, have a read of my *** WARNING *** and turn it down, man, turn it down!
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
... I do have some work for the CPUs, all AP, which was fetched a while ago, before I installed 6.6.20, when I was using a non-CUDA-capable version of BOINC. Since I updated BOINC to 6.6.20, I have not been sent any AP work at all.

> ...check your preferences and project options, both here on the website and in BOINC manager. Also, have a look at the new 'properties' page in BOINC Manager (command button, left side of 'Projects' tab). Look in particular at any lines relating to 'backoff' (should be able to clear those with a manual project update), or negative work fetch priority (the equivalent of the old 'Long Term Debt').

All three types of WU from SETI are enabled on the web site, and I didn't see any local options in the BOINC Manager that would prevent non-CUDA WU from being accepted. The properties page has...
> As for converting GPU work for CPU use - yes, that's perfectly possible, and people are working on experimental scripts to do exactly that - see the discussion threads about VLAR. But it's tricky, and you can easily lose whatever little work you have already!

Yes, yes, yes. 8^D This I can do and am willing to try on at least a few of the CUDA WU to see if it works.

> Since you say you have scripting skills, you might like to join the development push to get a safe, reliable script for all to use?

My scripting skills, though great, are in a language (Rexx) that sadly has fallen into disuse :-((, so I can write a script for myself, and for those two or three of you out there with a Rexx interpreter that you can dust off. :-/ I am learning Python and expect to be up to speed in a short time, but perhaps there is another language that is in preference this week :^D?

> The basic premise is simple: you only have to modify one file. But it's a biggie - client_state.xml - and you have to safely shut down the BOINC core client (whether it's running as a service or under a user account) while you do it. Each WU you want to 'flip' has entries for both the <workunit> and the <result>. In both sections (matched by the WU name), you have to change the <version_num> from 608 to 603: and additionally, in the <result> section, you have to remove the entire <plan_class>cuda</plan_class> line.

Sounds good. Will give it a try and let you know what happens.

> Oh, and do make sure you've got the v6.03 CPU application loaded and ready to run!

Ratz. Thought I still had the 6.03 app, but just looked in

C:\Documents and Settings\All Users\Application Data\BOINC\projects\setiathome.berkeley.edu

and found these application files:

ap_graphics_5.00_windows_intelx86.exe
ap_graphics_5.03_windows_intelx86.exe
astropulse_5.00_windows_intelx86.exe
astropulse_5.03_windows_intelx86.exe
setiathome_6.08_windows_intelx86__cuda.exe

How do I get the 6.03 app loaded, locked, cocked, and ready to fire?
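Here's a first rough Python sketch of that flip, for anyone following along - untested scaffolding, not a vetted tool. It assumes BOINC is completely shut down, the v6.03 CPU app is already installed, and the data directory is the one quoted above; the WU name at the bottom is just an example from this thread. Edit client_state.xml at your own risk, and keep the backup it writes:

# flip_cuda_to_cpu.py - sketch only: re-tag selected CUDA (v6.08) tasks as
# CPU (v6.03) tasks by editing client_state.xml, per the recipe above.
import re
import shutil

STATE = r"C:\Documents and Settings\All Users\Application Data\BOINC\client_state.xml"

def flip(text, tag, wu_name):
    """Within each <tag>...</tag> block whose <name> starts with wu_name
    (result names carry an _N suffix), change 608 -> 603 and drop any
    <plan_class> line. Plain string scanning; these blocks don't nest."""
    open_t, close_t = "<%s>" % tag, "</%s>" % tag
    out, pos = [], 0
    while True:
        i = text.find(open_t, pos)
        if i < 0:
            out.append(text[pos:])
            return "".join(out)
        j = text.find(close_t, i) + len(close_t)
        block = text[i:j]
        if "<name>" + wu_name in block:        # beware prefix collisions
            block = block.replace("<version_num>608</version_num>",
                                  "<version_num>603</version_num>")
            block = re.sub(r"[ \t]*<plan_class>\s*cuda\s*</plan_class>\r?\n?",
                           "", block, flags=re.IGNORECASE)
        out.append(text[pos:i])
        out.append(block)
        pos = j

if __name__ == "__main__":
    shutil.copy2(STATE, STATE + ".bak")        # always keep a backup first
    with open(STATE, "r") as f:
        text = f.read()
    for wu in ["01fe09ab.888.2526.11.8.116"]:  # example WU name only
        for section in ("workunit", "result"):
            text = flip(text, section, wu)
    with open(STATE, "w") as f:
        f.write(text)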
Joined: 16 Jan 06 | Posts: 1145 | Credit: 3,936,993 | RAC: 0
If you're just looking for the 603 exe, you can get it at setiathome_6.03_windows_intelx86.exe - though it's been a long time since I've used stock, so I think you might need

setigraphics_6.03_windows_intelx86.exe
setiathome-6.03_AUTHORS
setiathome-6.03_COPYING
setiathome-6.03_README
seti_603.jpg

as well.
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14690 | Credit: 200,643,578 | RAC: 874
Byron, did you just delete your reply and post again? I got a 'wrong thread' error when I tried to post a reply. Never mind.

Unfortunately, your technique won't work - with the stock app, you need to get the file signature block into client_state.xml too. The best way will be to turn down the cache, deselect AP, and wait for the app to download naturally, complete with all the client_state references (and the .jpg file), when it's needed. Let's not even think about the app_info alternative for this one yet - it's complicated enough already. And I'd like to check my hunch about an excessive CUDA cache inhibiting CPU fetch - that might be a bug.
Joined: 16 Jan 06 | Posts: 1145 | Credit: 3,936,993 | RAC: 0
> Byron, did you just delete your reply and post again? I got a 'wrong thread' error when I tried to post a reply.

No, I just noticed I screwed up the links and was trying to fix them while you were writing this. Walked away for a minute and didn't get back with the edit till you'd written that it wouldn't work.
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
> Of course it is possible for a setiathome_enhanced task processed by a CPU application to validate against a setiathome_enhanced task processed by a CUDA application on a different computer. At Beta, it's even possible to do both halves on the same host - once with the CPU and once with CUDA!

As it turns out, there are exactly 800 work unit files in

C:\Documents and Settings\All Users\Application Data\BOINC\projects\setiathome.berkeley.edu

14 of which are AP. I don't expect any of these to be returned past their deadlines. Four days ago I was updating the NVIDIA control panel, and in the process, all of the CUDA WU on my comp disappeared. It is likely that some of the "In Progress" WU that are shown for my host on the web site will not be returned by me, due to the data being MIA.

Yep, I did read your warning, and did turn down the cache to 1 day using "Maintain enough work for an additional 1 days"; however I haven't noticed a large reduction in the amount of work being requested. I have 4 AP tasks and one CUDA running right now, and none are in 'High Priority' mode. Yesterday I got a bunch of CUDA WU at 11pm local time (PDT), none are AP, and none have been requested since 4/27/2009 11:12:20 PM.

4/27/2009 10:57:01 PM SETI@home Sending scheduler request: To fetch work.
4/27/2009 10:57:01 PM SETI@home Reporting 17 completed tasks, requesting new tasks
4/27/2009 10:57:03 PM SETI@home Started upload of 06fe09ae.4024.5798.14.8.255_0_0
4/27/2009 10:57:05 PM SETI@home Finished upload of 06fe09ae.4024.5798.14.8.255_0_0
4/27/2009 10:57:11 PM SETI@home Scheduler request completed: got 14 new tasks
4/27/2009 10:57:13 PM SETI@home Started download of 01fe09ab.888.2526.11.8.116
4/27/2009 10:57:13 PM SETI@home Started download of 01fe09ae.8114.1708.5.8.26
...
4/27/2009 10:57:26 PM SETI@home Sending scheduler request: To fetch work.
4/27/2009 10:57:26 PM SETI@home Reporting 1 completed tasks, requesting new tasks
4/27/2009 10:57:29 PM SETI@home Finished download of 01fe09ae.8114.1708.5.8.16
4/27/2009 10:57:29 PM SETI@home Finished download of 01fe09ab.888.2526.11.8.147
...
4/27/2009 10:57:31 PM SETI@home Scheduler request completed: got 19 new tasks
4/27/2009 10:57:34 PM SETI@home Started download of 01fe09ab.888.2526.11.8.145
4/27/2009 10:57:34 PM SETI@home Started download of 01fe09ac.2197.1299.6.8.69
...
4/27/2009 10:57:47 PM SETI@home Sending scheduler request: To fetch work.
4/27/2009 10:57:47 PM SETI@home Requesting new tasks
...
4/27/2009 10:57:53 PM SETI@home Scheduler request completed: got 1 new tasks
4/27/2009 10:57:54 PM SETI@home Finished download of 01fe09ae.8114.1708.5.8.33
4/27/2009 10:57:54 PM SETI@home Finished download of 01fe09ac.2197.1299.6.8.68
...
4/27/2009 11:12:20 PM SETI@home Sending scheduler request: To fetch work.
4/27/2009 11:12:20 PM SETI@home Requesting new tasks
4/27/2009 11:12:26 PM SETI@home Scheduler request completed: got 1 new tasks
4/27/2009 11:12:28 PM SETI@home Started download of 01fe09ae.8114.2526.5.8.42
4/27/2009 11:12:30 PM SETI@home Finished download of 01fe09ae.8114.2526.5.8.42
...
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
> As to why your machines aren't fetching CPU work, I'm not sure. We might need to have a look at the host and project sections of your client_state.xml

Do you want me to post those sections here?
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14690 | Credit: 200,643,578 | RAC: 874
> As to why your machines aren't fetching CPU work, I'm not sure. We might need to have a look at the host and project sections of your client_state.xml

I don't think we've quite got to that stage yet. What might be useful would be to create a cc_config.xml file with

<cc_config>
  <log_flags>
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>

(instructions on the linked page - plain text, .xml extension, in the BOINC Data folder; it can be loaded while BOINC is running from the 'Advanced' menu).

From the new figures about cache contents and left-over work, I doubt your rig will need any new work at all for a while. But when it does, Work Fetch Debug (or [wfd], as it will show in your message log) will help to sort out what's going wrong (or right).

For information: BOINC v6.6.20 manages your cache in two separate calculations: one for CUDA, and one for everything else. You'll see from [wfd] that BOINC will ask for CUDA work, CPU work, or both together. What it won't do is request AP/CPU or MB/CPU separately: that decision is up to the project, governed by your preferences.
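(If you'd rather skip the menu, the 6.6.x installers also ship a command-line tool that can tell a running client to re-read the file - assuming a default install and that boinccmd is on your path:)

boinccmd --read_cc_config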
OzzFan | Joined: 9 Apr 02 | Posts: 15691 | Credit: 84,761,841 | RAC: 28
> The system isn't very intelligent as of yet. Here's a simplistic breakdown: ...

I'd just like to point out that I was talking about a theoretical solution which would require different apps that would not be able to cross-validate - but as it currently stands, all CUDA workunits can validate with CPU units because the theoretical scenario I suggested is not in effect. However, my theoretical solution is not necessary, as I was unaware that they fixed the problem with v6.6.20. I'm using v6.6.23 myself, but I don't use CUDA.
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
> ...What might be useful would be to create a cc_config.xml file with

I'll give that a try next time I sit down at that host, and post here what I get.
Fred W | Joined: 13 Jun 99 | Posts: 2524 | Credit: 11,954,210 | RAC: 0
> ...What might be useful would be to create a cc_config.xml file with

Notepad.

F.
Joined: 9 Jun 99 | Posts: 15184 | Credit: 4,362,181 | RAC: 3
> Yep, I did read your warning, and did turn down the cache to 1 day using "Maintain enough work for an additional 1 days"; however I haven't noticed a large reduction in the amount of work being requested.

Check that you're editing the preferences in the correct venue - the same one the computer runs in.
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
> What might be useful would be to create a cc_config.xml file with

Created the cc_config.xml file:

<cc_config>
  <log_flags>
    <work_fetch_debug>1</work_fetch_debug>
    <task>0</task>
  </log_flags>
  <options>
    <ncpus>4</ncpus>
  </options>
</cc_config>

Requested a read of the file, and an update for the S@H project, and got these messages:

4/28/2009 6:14:52 PM Re-reading cc_config.xml
4/28/2009 6:14:52 PM [work_fetch_debug] Request work fetch: Core client configuration
4/28/2009 6:14:54 PM [wfd] ------- start work fetch state -------
4/28/2009 6:14:54 PM [wfd] target work buffer: 777600.86 sec
4/28/2009 6:14:54 PM [wfd] CPU: shortfall 0.00 nidle 0.00 est. delay 799263.62 RS fetchable 110.00 runnable 110.00
4/28/2009 6:14:54 PM SETI@home [wfd] CPU: fetch share 1.00 debt 0.00 backoff dt 0.00 int 0.00
4/28/2009 6:14:54 PM [wfd] CUDA: shortfall 0.00 nidle 0.00 est. delay 799263.62 RS fetchable 110.00 runnable 110.00
4/28/2009 6:14:54 PM SETI@home [wfd] CUDA: fetch share 1.00 debt 0.00 backoff dt 0.00 int 0.00
4/28/2009 6:14:54 PM SETI@home [wfd] overall_debt 0
4/28/2009 6:14:54 PM [wfd] ------- end work fetch state -------
4/28/2009 6:14:54 PM [wfd] No project chosen for work fetch
4/28/2009 6:14:59 PM [work_fetch_debug] Request work fetch: project updated by user
... (at 6:14:59 PM the same work fetch state block repeats, with est. delay 799262.90 and again no project chosen) ...
4/28/2009 6:14:59 PM SETI@home Sending scheduler request: Requested by user.
4/28/2009 6:14:59 PM SETI@home Reporting 13 completed tasks, not requesting new tasks
4/28/2009 6:15:04 PM SETI@home Scheduler request completed: got 0 new tasks
4/28/2009 6:15:04 PM [work_fetch_debug] Request work fetch: RPC complete
4/28/2009 6:15:10 PM [wfd] ------- start work fetch state -------
4/28/2009 6:15:10 PM [wfd] target work buffer: 777600.86 sec
4/28/2009 6:15:10 PM [wfd] CPU: shortfall 0.00 nidle 0.00 est. delay 799263.20 RS fetchable 0.00 runnable 110.00
4/28/2009 6:15:10 PM SETI@home [wfd] CPU: fetch share 0.00 debt 0.00 backoff dt 0.00 int 0.00 (comm deferred)
4/28/2009 6:15:10 PM [wfd] CUDA: shortfall 0.00 nidle 0.00 est. delay 799263.20 RS fetchable 0.00 runnable 110.00
4/28/2009 6:15:10 PM SETI@home [wfd] CUDA: fetch share 0.00 debt 0.00 backoff dt 0.00 int 0.00 (comm deferred)
4/28/2009 6:15:10 PM SETI@home [wfd] overall_debt 0
4/28/2009 6:15:10 PM [wfd] ------- end work fetch state -------
4/28/2009 6:15:10 PM [wfd] No project chosen for work fetch
4/28/2009 6:15:16 PM [work_fetch_debug] Request work fetch: Project backoff ended
... (from 6:15:20 PM to 6:25:27 PM the same work fetch state block repeats roughly once a minute: shortfall 0.00, fetch share 1.00, est. delay slowly falling from 799263.79 to 798243.08, and 'No project chosen for work fetch' every time) ...
4/28/2009 6:26:16 PM [work_fetch_debug] Request work fetch: application exited
4/28/2009 6:26:19 PM SETI@home Started upload of 27dc08aa.16271.2935.9.8.27_0_0
4/28/2009 6:26:19 PM [wfd] ------- start work fetch state -------
4/28/2009 6:26:19 PM [wfd] target work buffer: 777600.86 sec
4/28/2009 6:26:19 PM [wfd] CPU: shortfall 0.00 nidle 0.00 est. delay 800252.79 RS fetchable 110.00 runnable 110.00
4/28/2009 6:26:19 PM SETI@home [wfd] CPU: fetch share 1.00 debt 0.00 backoff dt 0.00 int 0.00
4/28/2009 6:26:19 PM [wfd] CUDA: shortfall 0.00 nidle 0.00 est. delay 800252.79 RS fetchable 110.00 runnable 110.00
4/28/2009 6:26:19 PM SETI@home [wfd] CUDA: fetch share 1.00 debt 0.00 backoff dt 0.00 int 0.00
4/28/2009 6:26:19 PM SETI@home [wfd] overall_debt 0
4/28/2009 6:26:19 PM [wfd] ------- end work fetch state -------
4/28/2009 6:26:19 PM [wfd] No project chosen for work fetch
4/28/2009 6:26:21 PM SETI@home Finished upload of 27dc08aa.16271.2935.9.8.27_0_0
Joined: 24 Mar 08 | Posts: 2333 | Credit: 3,428,296 | RAC: 0
> > Yep, I did read your warning, and did turn down the cache to 1 day using "Maintain enough work for an additional 1 days"; however I haven't noticed a large reduction in the amount of work being requested.
>
> Check that you're editing the preferences in the correct venue - the same one the computer runs in.

No, I'm not using separate venues. Only one venue is defined.
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14690 | Credit: 200,643,578 | RAC: 874
> 4/28/2009 6:14:54 PM [wfd] ------- start work fetch state -------
> 4/28/2009 6:14:54 PM [wfd] target work buffer: 777600.86 sec
> 4/28/2009 6:14:54 PM [wfd] CPU: shortfall 0.00 nidle 0.00 est. delay 799263.62 RS fetchable 110.00 runnable 110.00
> 4/28/2009 6:14:54 PM SETI@home [wfd] CPU: fetch share 1.00 debt 0.00 backoff dt 0.00 int 0.00
> 4/28/2009 6:14:54 PM [wfd] CUDA: shortfall 0.00 nidle 0.00 est. delay 799263.62 RS fetchable 110.00 runnable 110.00
> 4/28/2009 6:14:54 PM SETI@home [wfd] CUDA: fetch share 1.00 debt 0.00 backoff dt 0.00 int 0.00
> 4/28/2009 6:14:54 PM SETI@home [wfd] overall_debt 0
> 4/28/2009 6:14:54 PM [wfd] ------- end work fetch state -------
> 4/28/2009 6:14:54 PM [wfd] No project chosen for work fetch

Yes, that's working as it should. At the moment, "shortfall" is zero for both CPU and CUDA: you've got enough work in your cache to at least match your requested cache size. Now we just have to wait while you work through the backlog until it starts to feel hungry again, and see what happens.