Panic Mode On (75) Server problems?
rob smith · Joined: 7 Mar 03 · Posts: 22528 · Credit: 416,307,556 · RAC: 380
Geeze, we're almost at 7 million MB results out in the field. I can't even remember the last time we were at such a high number. I wonder if the boys and gals in the lab are deliberately trying to find out how high it can go. If that's the case, just hold tight, it could be a rough ride.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
Now, MB results out in the field: 7,001,148. It's been a very long time since we had that many MB results out in the field. A lot of little ones went out yesterday or overnight.
arkayn · Joined: 14 May 99 · Posts: 4438 · Credit: 55,006,323 · RAC: 0
According to the news on the main page, we will be down for almost 2 days again, beginning in about 1 hour and 30 minutes. Everything will be down, a total blackout, website and all, including the BOINC site. Nope, you will see me on Sunday.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13854 · Credit: 208,696,464 · RAC: 304
I hope they bring the splitters back online soon; the Ready to Send buffer is rapidly dwindling.
Grant
Darwin NT
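To put rough numbers on Grant's worry, here's a back-of-the-envelope sketch in Python. The buffer size and drain rate below are assumptions for illustration only; the post gives neither figure, and the real numbers live on the server status page.

```python
# Hypothetical estimate of when the Ready to Send buffer runs dry
# while the splitters are offline. Both figures below are assumed.

buffer_results = 250_000        # results currently Ready to Send (assumed)
demand_per_hour = 40_000        # results handed out per hour (assumed)
splitter_rate_per_hour = 0      # splitters offline, so nothing refills the buffer

net_drain = demand_per_hour - splitter_rate_per_hour
hours_left = buffer_results / net_drain
print(f"Buffer empty in roughly {hours_left:.1f} hours")  # ~6.2 hours with these inputs
```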
Horacio · Joined: 14 Jan 00 · Posts: 536 · Credit: 75,967,266 · RAC: 0
Now it's over... :( Does anybody have a spare WU to share with me? LOL
Mike · Joined: 17 Feb 01 · Posts: 34379 · Credit: 79,922,639 · RAC: 80
I have about 10 days left.
With each crime and every kindness we birth our future.
andybutt · Joined: 18 Mar 03 · Posts: 262 · Credit: 164,205,187 · RAC: 516
I have a couple of days left. Can't get any more, as I'm uploading 2500 and it's damn slow!
-BeNt- · Joined: 17 Oct 99 · Posts: 1234 · Credit: 10,116,112 · RAC: 0
I've got about 11 days' worth for my GPUs and 7 days for my CPU, so I should be good to go for a while. I'm sure the fellas will have the splitters back into shape and everyone filled by then... least I hope so. ;p
Traveling through space at ~67,000mph!
-BeNt- · Joined: 17 Oct 99 · Posts: 1234 · Credit: 10,116,112 · RAC: 0
Yep, just noticed I've got 8 in the download queue now. Wonderful work for a Thursday, guys!
Traveling through space at ~67,000mph!
Misfit · Joined: 21 Jun 01 · Posts: 21804 · Credit: 2,815,091 · RAC: 0
5/31/2012 5:31:05 PM | SETI@home | Scheduler request completed: got 0 new tasks
5/31/2012 5:31:05 PM | SETI@home | Project has no tasks available
Here I go again. And I have no days left since my cache is set to 0.1 days cuz I like to share. Too bad more people don't.
me@rescam.org
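Misfit's paste shows the BOINC event-log line format, which makes it easy to tally how often the scheduler comes back empty. A minimal sketch, assuming the messages have been saved to BOINC's usual stdoutdae.txt log file:

```python
import re

# Count "got 0 new tasks" scheduler replies in a saved BOINC event log.
# The line format matches the paste above; the log filename is BOINC's
# usual message log, but check your own data directory.

LINE_RE = re.compile(
    r"^(?P<when>\d+/\d+/\d{4} \d+:\d+:\d+ [AP]M) \| (?P<project>[^|]+) \| (?P<msg>.*)$"
)

dry_replies = 0
with open("stdoutdae.txt") as log:
    for line in log:
        m = LINE_RE.match(line.strip())
        if m and "got 0 new tasks" in m.group("msg"):
            dry_replies += 1

print(f"{dry_replies} scheduler replies with no work")
```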
Wedge009 · Joined: 3 Apr 99 · Posts: 451 · Credit: 431,396,357 · RAC: 553
I do wonder about that, too. I keep a one-day cache, which is normally enough to cover the weekly outage. If everyone who holds a ten-day cache dropped to something more reasonable, there'd be more WUs to share around... and I imagine that WUs would be validated far more quickly too. My 'pending' WU count hasn't gone below 1000 for weeks (it hit nearly 2800 after coming back from the outage).

Is it really that good for the project for WUs to be 'sat on' by hosts using huge caches for days before they're actually processed? Just a thought, not having a go at anyone at all.
Soli Deo Gloria
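Wedge009's intuition about validation can be made concrete: a workunit only validates once both wingmen return their results, so the pending time tracks the slower host's cache. A toy simulation, with an assumed mix of cache sizes (the real distribution across hosts is unknown):

```python
import random

# Toy model of why large caches delay validation: a WU is validated only
# after BOTH hosts return results, so the wait is governed by the slower
# host's cache. The cache-size mix below is assumed, purely for illustration.

random.seed(1)
cache_days = [0.1, 1, 3, 5, 10]              # hypothetical mix of host caches

def pending_time():
    a, b = random.choices(cache_days, k=2)   # two hosts draw the same WU
    return max(a, b)                         # validation waits for the slower one

trials = [pending_time() for _ in range(100_000)]
# The mean lands near 5.7 days, well above the 3.8-day average cache size,
# because the max of two draws is biased toward the big caches.
print(f"average days until a WU can validate: {sum(trials)/len(trials):.2f}")
```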
SciManStev · Joined: 20 Jun 99 · Posts: 6658 · Credit: 121,090,076 · RAC: 0
The way I see it is that we are all sharing whatever the servers can dish out. This is a huge project, with no telling how much data will need to be crunched before we get any results. Things are definitely heading in the right direction with SETI hardware, so I think the capability to deliver more work to us is increasing. The more work that gets sent out, the better our chances. I run a 5-day cache, and that is 5-6 thousand WUs. I crunch every one of them with few errors or invalids. In order for my rig to do as much science as it can, I don't let it run dry if I can help it.
Steve
Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group.
GPUUG Website
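Steve's own figures imply his rig's sustained throughput, which is the number that actually determines how many tasks a given cache setting holds. A quick check, using only the numbers in his post:

```python
# Working Steve's figures backwards: a 5-day cache holding 5,000-6,000 WUs
# implies this rig turns over roughly 1,000-1,200 WUs per day.

cache_days = 5
cache_tasks_low, cache_tasks_high = 5_000, 6_000

rate_low = cache_tasks_low / cache_days      # 1000 WUs/day
rate_high = cache_tasks_high / cache_days    # 1200 WUs/day
print(f"implied throughput: {rate_low:.0f}-{rate_high:.0f} WUs/day")
```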
Wedge009 · Joined: 3 Apr 99 · Posts: 451 · Credit: 431,396,357 · RAC: 553
I suppose I was just considering Folding@home's philosophy of timeliness of results vs the bulk of work that can be done. I appreciate the improvements in the hardware with regard to WU generation/transfer/validation/etc., but I was thinking that with smaller caches, WUs (from a global project perspective) could be processed and validated faster, freeing resources in the databases and other server processes and generally allowing S@h to run more efficiently. I admit I don't follow the forums here that thoroughly (some users have been less than friendly), so I don't know if this is an idea that's been raised previously.
Soli Deo Gloria
tbret · Joined: 28 May 99 · Posts: 3380 · Credit: 296,162,071 · RAC: 40
> I suppose I was just considering Folding@home's philosophy of timeliness of results vs the bulk of work that can be done. I appreciate the improvements in the hardware with regard to WU generation/transfer/validation/etc., but I was thinking that with smaller caches, WUs (from a global project perspective) could be processed and validated faster, freeing resources in the databases and other server processes and generally allowing S@h to run more efficiently.

You ought to hang out more! Yeah, it's talked about from time to time.

I'd rather have the work I turn in error out or validate quickly, but... I kind of like to have work to do when there is a power outage or a RAID corruption or something, too. Just over this little outage I had a computer run out of CPU work (but for some reason had plenty of GPU work in reserve). And for some reason we don't seem to always go FIFO, so the last thing you download may be the next thing you crunch while older tasks sit.

So... what do you do? I'm trying a happy 5-day medium, but apparently this whole misreporting issue is messing with the estimated run times, so what I set isn't necessarily what I get.
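tbret's "we don't seem to always go FIFO" observation is consistent with a client that orders work by deadline rather than by download time. A toy comparison with invented tasks; BOINC's real scheduling logic is considerably more involved than a single sort:

```python
from datetime import datetime, timedelta

# Illustration only: if the client runs work in deadline order rather than
# download order, a freshly downloaded task with a tight deadline jumps
# ahead of older tasks. All three tasks below are invented.

now = datetime(2012, 5, 31)
tasks = [
    # (name, downloaded, deadline)
    ("old_ap_task", now - timedelta(days=9),  now + timedelta(days=20)),
    ("old_mb_task", now - timedelta(days=6),  now + timedelta(days=14)),
    ("new_shorty",  now - timedelta(hours=2), now + timedelta(days=3)),
]

fifo_order = sorted(tasks, key=lambda t: t[1])      # oldest download first
deadline_order = sorted(tasks, key=lambda t: t[2])  # tightest deadline first

print("FIFO:    ", [t[0] for t in fifo_order])      # old_ap_task leads
print("deadline:", [t[0] for t in deadline_order])  # new_shorty runs first
```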
Misfit · Joined: 21 Jun 01 · Posts: 21804 · Credit: 2,815,091 · RAC: 0
The difference is you aren't bragging about having several days' worth available to crunch while complaining your uploads won't go through. For the people who do, those are not words of encouragement to those of us who are running on empty.
me@rescam.org
bill · Joined: 16 Jun 99 · Posts: 861 · Credit: 29,352,955 · RAC: 0
You choose to run on empty. Why should that be a concern to anyone else?
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
Come now, kitties...... The subject of cache sizes has long been debated. Fact remains, it is up to the individual users to choose what cache size they wish to try to maintain, within the choices made available by this project. Let's not stir the pot, please.
"Time is simply the mechanism that keeps everything from happening all at once."
Lionel · Joined: 25 Mar 00 · Posts: 680 · Credit: 563,640,304 · RAC: 597
Getting:
Scheduler request failed: HTTP Internal Server Error
I don't believe that this problem is fixed...
cheers
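When the scheduler throws intermittent 500s like Lionel's, the standard client-side remedy is to retry with exponential backoff, which is roughly what the BOINC client does on its own. A minimal sketch under that assumption; the URL is a placeholder, not the real scheduler endpoint:

```python
import random
import time
import urllib.request
from urllib.error import HTTPError, URLError

# Sketch of retrying a failing scheduler request with exponential backoff
# plus jitter. The endpoint below is a placeholder for illustration.

SCHEDULER_URL = "http://example.org/sah_cgi/cgi"   # placeholder, not the real URL

def request_work(max_tries=5):
    delay = 60.0                                   # start with a one-minute wait
    for attempt in range(1, max_tries + 1):
        try:
            with urllib.request.urlopen(SCHEDULER_URL, timeout=30) as resp:
                return resp.read()
        except (HTTPError, URLError) as err:       # e.g. HTTP 500 Internal Server Error
            print(f"attempt {attempt} failed: {err}")
            time.sleep(delay + random.uniform(0, delay / 2))
            delay *= 2                             # double the wait each time
    return None
```

The jitter term keeps a crowd of clients from hammering a struggling server in lockstep, which matters when thousands of hosts all notice the outage ending at once.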