Message boards :
Technical News :
Chaos at the Greasy Spoon (May 24 2007)
Author | Message |
---|---|
Odysseus Send message Joined: 26 Jul 99 Posts: 1808 Credit: 6,701,347 RAC: 6 |
More specifically, the minimum deadline for any project should be 10 days + the shortest WU deadline length. Sorry, I don’t follow you. What’s the difference between “(minimum) deadline” and “(shortest) WU deadline length”? Having a minimum deadline of slightly over ten days would make sense, ensuring that it would fall outside every client’s connection interval (‘hacked’ preferences aside, if that’s even possible). I don’t see why the excess needs to be any more than the time it takes to download a batch of work, though: something like 868,000 seconds ought to be plenty. From the project’s POV shorter deadlines have the advantage of quicker turnaround of WUs that need to be reissued, getting them out of the working database sooner. That has to be balanced against the increased requesting/reporting traffic from clients that are unable to fill large caches. Are any statistics available on the distribution of CI settings in the crunching population? Doubtless quite a few are still using the default of 0.1 day. |
zombie67 [MM] Send message Joined: 22 Apr 04 Posts: 758 Credit: 27,771,894 RAC: 0 |
More specifically, the minimum deadline for any project should be 10 days + the shortest WU deadline length. No difference. I don't understand what you are asking. Having a minimum deadline of slightly over ten days would make sense, ensuring that it would fall outside every client’s connection interval (‘hacked’ preferences aside, if that’s even possible). I don’t see why the excess needs to be any more than the time it takes to download a batch of work, though: something like 868,000 seconds ought to be plenty. Here is the logic BOINC uses: http://boinc-wiki.ath.cx/index.php?title=Work_Buffer A new work-request scheduler was introduced in v5.8.xx, using: Computational deadline = report deadline - (Work Buffer size + 1 day + "switch between projects every N hours") A task is in deadline trouble if Computational deadline < 0.9 * report deadline. A project, even if only one task is in deadline trouble, is blocked from asking for work until it is no longer in deadline trouble, or at least one CPU is out of work. So I had it wrong. The minimum needs to be even longer. With a deadline of 14 days, the longest setting you can use for "connect" is just over 6 days. You still cannot get anything larger to work properly. Dublin, California Team: SETI.USA |
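The v5.8.xx rule quoted above can be sketched in a few lines. This is only an illustration of the wiki's wording, not actual BOINC source code; all durations are in days and the names are made up:

```python
# Sketch of the v5.8.xx work-request rule described above.
# All durations are in days; names are illustrative, not BOINC source code.

def computational_deadline(report_deadline, work_buffer, switch_hours):
    # Computational deadline = report deadline
    #   - (Work Buffer size + 1 day + "switch between projects every N hours")
    return report_deadline - (work_buffer + 1.0 + switch_hours / 24.0)

def in_deadline_trouble(report_deadline, work_buffer, switch_hours):
    # Per the wiki wording: trouble if the computational deadline
    # falls below 90% of the report deadline.
    dc = computational_deadline(report_deadline, work_buffer, switch_hours)
    return dc < 0.9 * report_deadline

# A 14-day deadline with a 6-day cache and hourly project switching:
print(computational_deadline(14, 6, 1))  # ~6.96 days left to compute
print(in_deadline_trouble(14, 6, 1))     # True: client stops asking for work
```

Note that under this reading of the rule even a fairly small cache trips the 90% test, which is what the discussion below goes on to probe.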
B0BHILL Send message Joined: 19 Jul 03 Posts: 23 Credit: 203,166 RAC: 0 |
More chaos has arrived! Been getting a report that no work is available for some time now. I guess once again when the work units I am processing now are finished then I will be idle until sometime tomorrow. Is anyone else experiencing this? |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
More chaos has arrived! Been getting a report that no work is available for some time now. I guess once again when the work units I am processing now are finished then I will be idle until sometime tomorrow. Is anyone else experiencing this? Hang in there, one of my rigs just downloaded new work a few minutes ago. I think it may have to do with some of the feeder issues they have had from time to time as of late, because there is plenty of work available. If you are getting an error message when BOINC requests work, you may want to reboot to reset anything that may be hung up. If it is connecting to the server OK, but just says 'no work from project', just keep trying the update button once in a while and I think you will get some eventually. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Kim Vater Send message Joined: 27 May 99 Posts: 227 Credit: 22,743,307 RAC: 0 |
Some kind of network/server problem shows on the Gigabit graph? http://fragment1.berkeley.edu/newcricket/grapher.cgi?target=%2Frouter-interfaces%2Finr-250%2Fgigabitethernet2_3;view=Octets;ranges=d Kiva Greetings from Norway Crunch3er & AK-V8 Inside |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Some kind of network/server problem shows on the Gigabit graph? Yeah, the ol' Cricket is jumpin' around a bit again. Sumpthin's afoot or afoul. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
kazman Send message Joined: 23 Jul 99 Posts: 58 Credit: 24,873,897 RAC: 2 |
All my rigs just got through about 5 minutes ago. May just be a small hiccup. |
kazman Send message Joined: 23 Jul 99 Posts: 58 Credit: 24,873,897 RAC: 2 |
Correction on my last. Uploads and reporting OK; no new downloads, getting 'no new work' messages. |
Odysseus Send message Joined: 26 Jul 99 Posts: 1808 Credit: 6,701,347 RAC: 6 |
More specifically, the minimum deadline for any project should be 10 days + the shortest WU deadline length. If they’re the same, you seem to be saying D = 10 + D above, which doesn’t make sense. Computational deadline = report deadline - (Work Buffer size + 1 day + "switch between projects every N hours") (Here “deadline” must mean the time interval allowed rather than the epoch of cutoff.) So I had it wrong. The minimum needs to be even longer. With a deadline of 14 days, the longest setting you can use for "connect" is just over 6 days. You still cannot get anything larger to work properly. I was wrong as well (although I believe what I said would have been true of some older BOINC clients). But I don’t see where the [[Work_Buffer]] article got that 6 days (or any of the other figures in the table). Assuming “Work Buffer size” indeed means the CI, with a two-week deadline and project-switching every hour, in order to avoid “deadline trouble” I get Dc ≥ 0.9·Dr 14 d – (CI + 1.04 d) ≥ 12.6 d CI ≤ 0.36 d Where have I gone wrong? With a CI of ten days, we have Dc ≥ 0.9·Dr Dr – 11.04 d ≥ 0.9·Dr Dr ≥ 110.4 d implying that users with a ten-day CI will only stay out of EDF on projects with deadlines of nearly four months or longer! Again, have I screwed up my derivation somewhere, or is the Wiki description a bit garbled? |
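Odysseus's two inequalities are easy to check numerically. A small sketch of the same algebra (assuming, as he does, that "Work Buffer size" means the connect interval CI, all units in days):

```python
# Numeric check of the algebra above (all units in days).
# The wiki's threshold requires Dc >= 0.9 * Dr, with
# Dc = Dr - (CI + 1 + switch), which rearranges to CI <= 0.1*Dr - 1 - switch.

SWITCH = 1.0 / 24.0  # switching projects every hour

def max_ci(report_deadline):
    # Largest connect interval that keeps Dc >= 0.9 * Dr.
    return 0.1 * report_deadline - 1.0 - SWITCH

def min_deadline(ci):
    # Smallest report deadline that tolerates a given connect interval.
    return (ci + 1.0 + SWITCH) / 0.1

print(round(max_ci(14), 2))        # 0.36 -- matches CI <= 0.36 d
print(round(min_deadline(10), 1))  # 110.4 -- matches Dr >= 110.4 d
```

Both figures in the post fall out of the rearrangement directly, so the derivation itself is sound; the question is whether the wiki's "0.9 * report deadline" threshold is what BOINC actually uses.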
zombie67 [MM] Send message Joined: 22 Apr 04 Posts: 758 Credit: 27,771,894 RAC: 0 |
If they’re the same, you seem to be saying D = 10 + D above, which doesn’t make sense. Ah, I see now. Yes, I wrote gibberish. I meant to write 10 days + the length of time to crunch a WU. But like I said, that was wrong. Needs to be longer. I was wrong as well (although I believe what I said would have been true of some older BOINC clients). But I don’t see where the [[Work_Buffer]] article got that 6 days (or any of the other figures in the table). Assuming “Work Buffer size” indeed means the CI, with a two-week deadline and project-switching every hour, in order to avoid “deadline trouble” I get This is what JM7 wrote on the mailing list: "Computation deadline = report deadline - ( connect every X + switch projects every X, + 1 day). If a task cannot be completed within 90% of the time between now and the computation deadline it is in deadline trouble. S@H has tasks that have about a 4 day deadline. 4 - (2 + 1 + .05) = 0.95 days. If the time before getting started + the time to compute > 0.95 days, the task is in deadline trouble." This seems to match the wiki. And I think you're right. Something's not right with the formula, but I believe it is what BOINC uses. Dublin, California Team: SETI.USA |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30651 Credit: 53,134,872 RAC: 32 |
Correction on my last. Uploads and reporting OK, No new downloads- Getting-No new work messages. Same here - no work from project |
Steve Gladden Send message Joined: 22 Jul 02 Posts: 1 Credit: 3,181,428 RAC: 0 |
Sadness!!! The cores are cooling down. I'm starting to shivv'r |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30651 Credit: 53,134,872 RAC: 32 |
Correction on my last. Uploads and reporting OK, No new downloads- Getting-No new work messages. Ah, getting data now. Whatever the glitch was, seems to be over. |
B0BHILL Send message Joined: 19 Jul 03 Posts: 23 Credit: 203,166 RAC: 0 |
For what it is worth, I am processing a full six pack, so the problem was fleeting. |
David Emigh Send message Joined: 13 Mar 06 Posts: 7 Credit: 36,459 RAC: 0 |
For what it is worth, I am processing a full six pack, so the problem was fleeting. LOL I know that feeling :D |
W-K 666 Send message Joined: 18 May 99 Posts: 19062 Credit: 40,757,560 RAC: 67 |
If they’re the same, you seem to be saying D = 10 + D above, which doesn’t make sense. A 3.0 GHz P4 HT computer downloads an AR = 3.12 unit at 16:35 on Thursday afternoon. The report deadline is 4 days 8 hrs 10 mins away. The computer is crunching two units (elapsed time, progress, time remaining): AR=0.394 00:31:20 30.4% 01:11:40 AR=0.443 01:11:50 75.6% 00:23:10 There are already two other units in the work cache, with 15- and 21-day reporting deadlines. The computer is in an office, and is only on Monday to Thursday from 08:00 to 17:00 and on Friday 08:00 to 14:00. The last unit of this AR took approx 4,000 sec. Connect to network is 1 day. % of time BOINC client is running - 96.8792% While BOINC running, % of time work is allowed - 68.8305% Average CPU efficiency - 0.893249 What formula would you use to ensure the unit is returned on time? |
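For the throughput part of W-K 666's question, one rough approach is to divide the CPU time by the three availability factors he lists. This is only a sketch under that assumption, not the scheduler's actual formula, and it ignores the office-hours schedule and the two tasks already running:

```python
# Rough wall-clock estimate for the scenario above: scale the ~4,000 s of
# CPU time by the measured availability factors. Illustrative only.

on_fraction     = 0.968792  # % of time the BOINC client is running
active_fraction = 0.688305  # while running, % of time work is allowed
cpu_efficiency  = 0.893249  # average CPU efficiency

cpu_seconds = 4000  # last unit of this AR took approx 4,000 sec

wall_seconds = cpu_seconds / (on_fraction * active_fraction * cpu_efficiency)
print(round(wall_seconds))  # roughly 6,700 s, i.e. a bit under two hours
```

So the unit itself needs under two hours of machine uptime; the hard part of the question is that the machine is off evenings and weekends, which a real scheduler would have to account for on top of this.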
Ingleside Send message Joined: 4 Feb 03 Posts: 1546 Credit: 15,832,022 RAC: 13 |
This is what JM7 wrote on the mailing list: Well, I wrote the WIKI description, so it's possible I've misinterpreted JM7's post, or I'm not very good at explaining things... Still, let's try to use the same example as JM7, with a 4-day deadline and 2-day cache size: Computational deadline = report deadline - (Work Buffer size + 1 day + "switch between projects every N hours") = 4 days - (2 days + 1 day + 1/24 days) = 0.958 days So far, so good... So for the 2nd part... If the client isn't blocked from asking for work, and the project manages to supply enough work, the "time to compute" >= "Work Buffer size". Since the server can't give out a fraction of a wu, most of the time "time to compute" is larger than the cache size... Meaning you have in this example: 0 days "time before getting started" + 2 days "time to compute" > 0.958 days => you're in deadline trouble => client blocked from asking for work until not in deadline trouble any longer, or a cpu is idle... If I haven't made a mistake in my usage of 90%, after some re-formatting I've got this formula: "Max Cache-size" = 0.9 * "deadline" / 1.9 - 0.9 / (24*1.9) - 0.9/1.9 = 0.473684 * "deadline" - 0.493421 From this, it's easy to calculate that with a 4.3-day deadline, max cache-size is... 1.54 days. Also, cache-size 10 days => "deadline" = 22.153 days "I make so many mistakes. But then just think of all the mistakes I don't make, although I might." |
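Ingleside's rearranged formula and both of his examples check out numerically. A quick sketch (days throughout, helper names hypothetical): if the 90% is applied to the time remaining before the computational deadline, a full cache of C days must satisfy C <= 0.9 * (D - C - 1 - 1/24), which rearranges to his expression:

```python
# Ingleside's formula, checked against his two examples. Derivation: a full
# cache takes about cache_size days to compute, so staying out of trouble
# needs  cache <= 0.9 * (deadline - cache - 1 - 1/24),  which rearranges to:

def max_cache(deadline):
    # Max cache size (days) that stays out of deadline trouble.
    return 0.9 * deadline / 1.9 - 0.9 / (24 * 1.9) - 0.9 / 1.9

def min_deadline(cache):
    # Inverse: smallest deadline that tolerates a given cache size.
    return cache * 1.9 / 0.9 + 1.0 + 1.0 / 24.0

print(round(max_cache(4.3), 2))    # 1.54 days, as in the post
print(round(min_deadline(10), 3))  # 22.153 days, as in the post
```

Note this applies the 90% to the computational deadline (JM7's wording) rather than to the report deadline (the wiki's wording), which is why it gives different limits from the derivation earlier in the thread.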
arr25b Send message Joined: 19 Nov 05 Posts: 16 Credit: 14,839,632 RAC: 0 |
For what it is worth, I am processing a full six pack, so the problem was fleeting. Wish I was; I can only d/l one WU per machine, whereas 2 days ago I could have two running on each PC and one ready to start. Any ideas?? Have tried the usual suspects, detaching and reattaching, flushing DNS, but to no avail |
archae86 Send message Joined: 31 Aug 99 Posts: 909 Credit: 1,582,816 RAC: 0 |
Wish I was; I can only d/l one WU per machine, whereas 2 days ago I could have two running on each PC and one ready to start. Today (Monday, May 28, 2007) is a major holiday in the USA. Based on the appearance of the Cricket Graph I imagine waiting until the working day tomorrow in California is your best option. |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
Wish I was; I can only d/l one WU per machine, whereas 2 days ago I could have two running on each PC and one ready to start. Today (Monday, May 28, 2007) is a major holiday in the USA. Based on the appearance of the Cricket Graph I imagine waiting until the working day tomorrow in California is your best option. The database server was down and back up again today. I'm guessing they came in and fixed the problem. |