Message boards : Number crunching : Panic Mode On (35) Server problems
Author | Message |
---|---|
arkayn Send message Joined: 14 May 99 Posts: 4438 Credit: 55,006,323 RAC: 0 |
|
nemesis Send message Joined: 12 Oct 99 Posts: 1408 Credit: 35,074,350 RAC: 0 |
kind of empty around here Arkayn... we need a disaster to liven this place up!!! |
perryjay Send message Joined: 20 Aug 02 Posts: 3377 Credit: 20,676,751 RAC: 0 |
Yeah, too bad crunching for SETI is so dull, never anything to panic over! :-) PROUD MEMBER OF Team Starfire World BOINC |
kittyman Send message Joined: 9 Jul 00 Posts: 51469 Credit: 1,018,363,574 RAC: 1,004 |
Yeah, too bad crunching for SETI is so dull, never anything to panic over! :-) The kitties usually only panic when the kibble bowl gets empty...and I got a couple of them empty right now. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65786 Credit: 55,293,173 RAC: 49 |
[image] |
JohnDK Send message Joined: 28 May 00 Posts: 1222 Credit: 451,243,443 RAC: 1,127 |
Cool pic |
kittyman Send message Joined: 9 Jul 00 Posts: 51469 Credit: 1,018,363,574 RAC: 1,004 |
Anybody have a clue what the bandwidth cycles on the Cricket Graph are all about? I don't think I have ever seen such a well defined pattern before.... "Freedom is just Chaos, with better lighting." Alan Dean Foster |
SciManStev Send message Joined: 20 Jun 99 Posts: 6653 Credit: 121,090,076 RAC: 0 |
It certainly is an interesting pattern! Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
perryjay Send message Joined: 20 Aug 02 Posts: 3377 Credit: 20,676,751 RAC: 0 |
That's all us CPU crunchers all on the same schedule. Finish one, turn it in, get another....crunch for a couple of hours.... rinse and repeat! :-) PROUD MEMBER OF Team Starfire World BOINC |
SciManStev Send message Joined: 20 Jun 99 Posts: 6653 Credit: 121,090,076 RAC: 0 |
Heck, I'm just finishing VLAR's. GPU's are idle. I wish there was a way to switch VLAR's back to the GPU, as with the build I am using I can finish a VLAR on the GPU at 40 something minutes, and 1h06m on the CPU. At least that way I could drop below the 20 unit limit, and get some more real GPU work. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
hiamps Send message Joined: 23 May 99 Posts: 4292 Credit: 72,971,319 RAC: 0 |
Heck, I'm just finishing VLAR's. GPU's are idle. I wish there was a way to switch VLAR's back to the GPU, as with the build I am using I can finish a VLAR on the GPU at 40 something minutes, and 1h06m on the CPU. At least that way I could drop below the 20 unit limit, and get some more real GPU work. Same here. Official Abuser of Boinc Buttons... And no good credit hound! |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
Heck, I'm just finishing VLAR's. GPU's are idle. I wish there was a way to switch VLAR's back to the GPU, as with the build I am using I can finish a VLAR on the GPU at 40 something minutes, and 1h06m on the CPU. At least that way I could drop below the 20 unit limit, and get some more real GPU work. If you can find a copy of ReSchedule V1.7, it can shift VLARs back to the GPU. But BE CAREFUL !!!!! V1.7 relabels them for the V6.08 CUDA client, and if you're not using a V6.08-based program such as the Lunatics clients, BOINC will trash the lot. I think the workarounds worked out by Mad_Mac and Richard Haselgrove in the Fermi thread may help, but I haven't tried them. No responsibility accepted for trashed WUs :-) Brodo |
SciManStev Send message Joined: 20 Jun 99 Posts: 6653 Credit: 121,090,076 RAC: 0 |
Heck, I'm just finishing VLAR's. GPU's are idle. I wish there was a way to switch VLAR's back to the GPU, as with the build I am using I can finish a VLAR on the GPU at 40 something minutes, and 1h06m on the CPU. At least that way I could drop below the 20 unit limit, and get some more real GPU work. Thank you for the information, but I think I'll just wait for tomorrow to see if I can reload. I will archive your post, so I always have access to it. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Anybody have a clue what the bandwidth cycles on the Cricket Graph are all about? I don't think I have ever seen such a well defined pattern before.... There appears to be a correlation with when the splitters are boosting the "Results ready to send" count and when they're idle. Compare Scarecrow's graphs, though it's hard to really match the time scales. It's a case where sampling the server status once an hour isn't quite enough to pin down the relationship, but my guess is that the high-rate download bursts occur just after the splitters have stopped for a while. Or it could just be coincidence... Joe |
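Josef's guess that the download bursts trail splitter activity could in principle be checked by cross-correlating the two hourly series. A minimal Python sketch with invented sample data (the bandwidth series below is constructed to lag the splitter series by two hours; real values would come from Scarecrow's graphs and the Cricket log):

```python
from statistics import mean, pstdev

def corr_at_lag(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    pairs = [(a, y[t + lag]) for t, a in enumerate(x) if 0 <= t + lag < len(y)]
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    mx, my = mean(xs), mean(ys)
    return sum((a - mx) * (b - my) for a, b in pairs) / (len(pairs) * sx * sy)

# Invented hourly samples: splitter output (results added per hour)
# and download bandwidth (Mbit/s).
splitter  = [50, 80, 90, 20, 0, 0, 40, 85, 95, 10, 0, 0]
bandwidth = [30, 20, 50, 80, 90, 20, 5, 5, 40, 85, 95, 10]

# Find the lag (in hours) at which the two series line up best.
best_lag = max(range(-3, 4), key=lambda k: corr_at_lag(splitter, bandwidth, k))
print(best_lag)  # 2: here, bandwidth follows the splitters by two hours
```

With only hourly samples the lag estimate is coarse, which is exactly Josef's caveat; finer-grained server-status sampling would be needed to pin the relationship down.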
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
It does seem that the spikes are evenly-spaced, half an hour wide, with a 4-hour gap between. What it actually means.. good luck guessing. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving-up) |
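The evenly spaced spikes Cosmic_Ocean describes could be quantified with a simple autocorrelation: the lag with the strongest self-similarity gives the spike spacing. A sketch using an invented half-hour-resolution series with one narrow spike every 4 hours:

```python
def autocorr(series, lag):
    """Unnormalised autocorrelation of a zero-meaned series at a given lag."""
    m = sum(series) / len(series)
    c = [v - m for v in series]
    return sum(c[t] * c[t + lag] for t in range(len(series) - lag))

# 48 hours at 2 samples/hour: a 10 Mbit/s baseline with a 90 Mbit/s,
# half-hour-wide spike every 8 samples (4 hours).  All values invented.
series = [90 if t % 8 == 0 else 10 for t in range(96)]

# Search lags from 1 hour to 8 hours for the strongest repeat.
best = max(range(2, 17), key=lambda lag: autocorr(series, lag))
print(best / 2)  # 4.0: the spikes repeat every 4 hours
```

Against the real Cricket data this would only confirm the period, not explain it, but it turns "good luck guessing" into a number.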
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
Anybody have a clue what the bandwidth cycles on the Cricket Graph are all about? I don't think I have ever seen such a well defined pattern before.... And "ready to send" doesn't drop to zero when splitters are idle, but bandwidth load drops hugely nevertheless. Correlation w/o explanation it seems... What about any correlations with AP task fetch rate? If some AP-only hosts come together to take their 20 AP tasks share, they could max bandwidth also... But oscillation period looks too small for single AP completion time... Any more ideas ? |
Miep Send message Joined: 23 Jul 99 Posts: 2412 Credit: 351,996 RAC: 0 |
If you look closely, the upload spike precedes the download spike, so it's very likely a case of reporting and getting the next bunch, related to the current cap on WUs. As for the spacing of the oscillation - some BOINC internal scheduler setting? Report every 4 hours? Machines set to always connected with a small cache, which wouldn't do the 'report one, get one' associated with clients trying to fill a >20 WU cache? Alas, the wild speculations caused by not enough data. Anybody got a host that falls into the pattern? Carola ------- I'm multilingual - I can misunderstand people in several languages! |
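Carola's "report every 4 hours" idea is easy to toy-model: if many hosts share the same reporting phase, their traffic piles into narrow bursts, while the same number of staggered hosts produces a flat line. A sketch with invented fleet sizes and bin widths:

```python
HOURS = 24
BINS_PER_HOUR = 2            # half-hour bins, like the Cricket graph
PERIOD = 4 * BINS_PER_HOUR   # each host reports every 4 hours
N_HOSTS = 1000

def traffic(phases):
    """Count scheduler reports per half-hour bin over one day."""
    bins = [0] * (HOURS * BINS_PER_HOUR)
    for phase in phases:
        for t in range(phase, len(bins), PERIOD):
            bins[t] += 1
    return bins

# In-phase fleet: every host reports in the same half-hour slot.
spiky = traffic([0] * N_HOSTS)
# Staggered fleet: phases spread evenly across the 4-hour period.
flat = traffic([i % PERIOD for i in range(N_HOSTS)])

print(max(spiky), max(flat))  # 1000 125: same total traffic, very different peaks
```

Both fleets generate identical total traffic; only the phase alignment decides whether the graph shows half-hour spikes every 4 hours or a steady load.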
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
For my own host it's something like 1.5-2h per task (if not shorties), but it has 4 cores so the pattern is blurred. And because it asks for work almost always now, each completed task gets reported almost instantly. All this gives absolutely no match to the bandwidth pattern... |
Hellsheep Send message Joined: 12 Sep 08 Posts: 428 Credit: 784,780 RAC: 0 |
Okay, so I'm thinking there is some massive internal communication going on. The patterns are consistent with something running at intervals. It's not related to uploading or downloading work. I'm thinking maybe some internal communication between the servers, perhaps for science purposes - sending data from one place to another. - Jarryd |
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
Some type of farm or cluster where the internet connection is allowed 4 hourly? Claggy |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.