Panic Mode On (75) Server problems?
Link · Joined: 18 Sep 03 · Posts: 834 · Credit: 1,807,369 · RAC: 0
"If everyone who holds a ten day cache dropped to something more reasonable, there'd be more WUs to share around..."

No, the splitters stop when about 250,000 WUs are ready to send, so once they run out of tapes (for whatever reason), there are usually no more than 250,000 WUs to send out, regardless of what people have in their caches. Larger caches actually force the servers to generate a larger work buffer, which is then stored in the cache of each client, so if the servers are down, the clients can still do a lot of work for the project and return it after the outage. If we all had just a one-day cache, processing for S@H would have stopped completely after about 24 hours; with larger caches we process the WUs like nothing happened, and the current servers are powerful enough to catch up and restore our caches a while after the outage anyway.
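[Editor's note: the throttling Link describes can be pictured as a simple high-water-mark check. Below is a minimal Python sketch of that idea; the names, the function, and the exact threshold test are illustrative assumptions, not the actual SETI@home server code.]

```python
# Minimal sketch of ready-to-send throttling with a high-water mark.
# READY_HIGH_WATER mirrors the ~250,000 figure quoted above; the names
# and structure are hypothetical, not the real splitter implementation.

READY_HIGH_WATER = 250_000

def splitter_should_run(ready_to_send: int, tapes_available: bool) -> bool:
    """Splitters pause when the ready-to-send queue is full, or when
    there is no raw data ("tapes") left to split."""
    return tapes_available and ready_to_send < READY_HIGH_WATER

# The point of the post: client cache sizes never push the queue past
# the cap. A ten-day cache just drains the same ~250,000-WU buffer
# more slowly into idleness during an outage than a one-day cache would.
```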
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
My single-core machine was down to one MB task left of its 2.5-day cache, but on the first scheduler contact after it all came back up, it reported all the completed tasks and got 8 new ones to fill the cache back up. The main cruncher is AP-only and reported what it completed during the outage, and hasn't gotten any new APs yet. Just a little less than a day before I run out. Not worried, nor complaining. I'll get more work eventually.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
Noticed the network traffic has dropped off, so I had a look in my messages tab, and it's getting a lot of "Project has no tasks available" and "No tasks sent" messages. It's trying to get work; it's just not receiving it.

Grant
Darwin NT
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
The main reason the bandwidth dropped off is that the AP splitters are not running at the moment.

"Time is simply the mechanism that keeps everything from happening all at once."
.clair. · Joined: 4 Nov 04 · Posts: 1300 · Credit: 55,390,408 · RAC: 69
As of now, there is only one AP splitter running, and not a lot of data for it to chew on. Looks like it will be `that` kind of evening on this side of the pond. `More coal in the boiler, the ship is slowing down`
rob smith · Joined: 7 Mar 03 · Posts: 22538 · Credit: 416,307,556 · RAC: 380
...and by the time I read your post, the coal in the AP splitting boiler had run out...

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Dimly Lit Lightbulb 😀 · Joined: 30 Aug 08 · Posts: 15399 · Credit: 7,423,413 · RAC: 1
I missed some Astropulses being split? Oh man, I'm currently crunching the last one in my cache. Time to panic, methinks.

Member of the People Encouraging Niceness In Society club.
SciManStev · Joined: 20 Jun 99 · Posts: 6658 · Credit: 121,090,076 · RAC: 0
My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6,900 WUs on board, which seems a wee bit high for a 5-day cache. Even with that amount, I should have no trouble crunching them. Not really a panic, but I am doing a Spock raised eyebrow...

Steve
Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group.
GPUUG Website
Fred J. Verster · Joined: 21 Apr 04 · Posts: 3252 · Credit: 31,903,643 · RAC: 0
"I missed some Astropulses being split? Oh man, I'm currently crunching the last one in my cache. Time to panic, methinks."

Well, you'll survive; you also have 4 times the throughput of my hosts... Since I'm still running BOINC 7.00.25, almost all SETI MB and Astropulse WUs are run High Priority... MW, too. Merely cosmetic? I've just crunched a few on ATI 5870 GPUs, 3 to 4 hours runtime: 9,779.41 s run time, 6,056.19 s CPU time, pending*, AstroPulse v6, anonymous platform (ATI GPU). The high CPU time is due to the high % of blanking. *Checked: no 2nd result yet (I'm waiting for my {wing-}man). I could do with some more AstroPulse work, though. Panic, no way; besides, that's what Back-Up Projects can be used for ;-).
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
"My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6,900 WUs on board, which seems a wee bit high for a 5-day cache. Even with that amount, I should have no trouble crunching them."

Oh good, they must be splitting shorties again - I could do with a few of them.
arkayn · Joined: 14 May 99 · Posts: 4438 · Credit: 55,006,323 · RAC: 0
"My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6,900 WUs on board, which seems a wee bit high for a 5-day cache. Even with that amount, I should have no trouble crunching them."

I am at 2,600 WUs with my single GTX 560 and a 5-day cache.
Fred J. Verster · Joined: 21 Apr 04 · Posts: 3252 · Credit: 31,903,643 · RAC: 0
"My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6,900 WUs on board, which seems a wee bit high for a 5-day cache. Even with that amount, I should have no trouble crunching them."

I've crunched 1,784 SETI@home Enhanced (MB) WUs, with 831 pending* (reported and waiting for a canonical result), and 3 Astropulse WUs reported. Less than 1,000 a.t.m. on my i7-2600 + 2x ATI 5870 GPUs, running 12 at a time. I'd have to set a larger cache, but I'm never out of work, so I let it be. Most of the regular posters, volunteer developers, and testers have an almost 10-fold throughput, since they use CUDA (Fermi/Kepler) and OpenCL 1.2 (ATI-AMD SDK 2.4)! More or less. Look at these results: Astropulse WU 99732719.
Kevin Olley · Joined: 3 Aug 99 · Posts: 906 · Credit: 261,085,289 · RAC: 572
"My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6,900 WUs on board, which seems a wee bit high for a 5-day cache. Even with that amount, I should have no trouble crunching them."

I've had the same: a run of "fast" regular WUs, then a few crunchy ones. It upsets (increases) the estimated completion time, and with a larger cache it's enough to cause BOINC to panic. Sometimes it will jump in and out of high priority mode: if you have got a bunch of longer-running WUs, when one finishes it will kick into high priority mode and start doing a bunch of shorties; then, as the estimated completion time drops, it will start back on the longer-running ones until it completes another one and kicks back into high priority mode again. There does not seem to be a lot of VLARs or VHARs around, but there seems to be a lot of variation (runtime-wise) in the regular WUs.

Kevin
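[Editor's note: Kevin's description matches BOINC's earliest-deadline-first "panic" check. Here is a rough Python sketch of that behaviour under a simplified model; it is not the actual BOINC client source, and all names are invented for illustration.]

```python
# Simplified model of BOINC's high-priority trigger: walk the queue in
# deadline order and see whether, at current runtime estimates, any
# task would finish after its deadline. Invented names; not BOINC code.

from dataclasses import dataclass

@dataclass
class Task:
    est_hours: float        # estimated remaining crunch time
    deadline_hours: float   # hours left until the task's deadline

def needs_high_priority(queue: list[Task]) -> bool:
    elapsed = 0.0
    for t in sorted(queue, key=lambda t: t.deadline_hours):
        elapsed += t.est_hours
        if elapsed > t.deadline_hours:
            return True     # projected deadline miss -> panic mode
    return False

# A few long "crunchy" WUs inflate the runtime estimates across the
# queue, tipping this check to True; finishing shorties deflates them
# again, which produces the in-and-out oscillation described above.
```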
.clair. · Joined: 4 Nov 04 · Posts: 1300 · Credit: 55,390,408 · RAC: 69
"My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6,900 WUs on board, which seems a wee bit high for a 5-day cache. Even with that amount, I should have no trouble crunching them."

Yup, I am crunching shorties as well; most of the names end in VLAR, and this 7970 munches them, yum yum :¬)
.clair. · Joined: 4 Nov 04 · Posts: 1300 · Credit: 55,390,408 · RAC: 69
"There does not seem to be a lot of VLARs or VHARs around..."

VLARs? I nicked them. Get yer 'ands off my VLARs, they're mine, all mine :¬)
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
I picked up some APs earlier. I was getting kind of close to an empty cache again there... but I'm good for another ~1 day or so now. I would love to fill the 10-day cache up again, though.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
mckeand · Joined: 27 Jun 99 · Posts: 1 · Credit: 1,561,465 · RAC: 0
Is this thing on? Not getting any work; I just set this up today, 6-5-12. If this is not the place to ask, please point me to the right place, eh? Just visited the VLA in New Mexico on Highway 60. Most impressive and very interesting. We will go back for the tour. I am only 90 miles from Roswell, NM; do you want me to go look around there? I have heard rumors...

Peace,
Bob
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
So before the maintenance, I got 16 APs. No more since it came back up, as there aren't any tapes available to split. C'mon, tapes...

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 66359 · Credit: 55,293,173 · RAC: 49
Let's see, I've got nearly a 4-day cache now, but of course BOINC 6.10.58 x64 isn't reporting unless I do an update with BoincTasks 1.33; then it's 64 at a time.

Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST