Panic Mode On (75) Server problems?
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14690 · Credit: 200,643,578 · RAC: 874
Now, MB results out in the field: 7,001,148. It's been a very long time since we had that many MB results out in the field. A lot of little ones went out yesterday or overnight.
Joined: 14 May 99 · Posts: 4438 · Credit: 55,006,323 · RAC: 0
According to the news on the main page, we will be down for almost two days again, beginning in about an hour and a half. Everything will be down, a total blackout, website and all, including the BOINC site. Nope, you will see me on Sunday.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13903 · Credit: 208,696,464 · RAC: 304
I hope they bring the splitters back online soon; the Ready to Send buffer is rapidly dwindling. Grant, Darwin NT
Horacio · Joined: 14 Jan 00 · Posts: 536 · Credit: 75,967,266 · RAC: 0
Now it's over... :( Does anybody have a spare WU to share with me? LOL
Joined: 17 Feb 01 · Posts: 34451 · Credit: 79,922,639 · RAC: 80
I have about 10 days left. With each crime and every kindness we birth our future.
andybutt · Joined: 18 Mar 03 · Posts: 262 · Credit: 164,205,187 · RAC: 516
I have a couple of days left. Can't get any more, as I'm uploading 2500 and it's damn slow!
-BeNt- · Joined: 17 Oct 99 · Posts: 1234 · Credit: 10,116,112 · RAC: 0
I've got about 11 days' worth for my GPUs and 7 days for my CPU, so I should be good to go for a while. I'm sure the fellas will have the splitters back into shape and everyone filled by then... at least I hope so. ;p Traveling through space at ~67,000 mph!
Joined: 27 Apr 11 · Posts: 1932 · Credit: 17,952,639 · RAC: 0
|
-BeNt- · Joined: 17 Oct 99 · Posts: 1234 · Credit: 10,116,112 · RAC: 0
Yep, just noticed I've got 8 in the download queue now. Wonderful work for a Thursday, guys! Traveling through space at ~67,000 mph!
Joined: 21 Jun 01 · Posts: 21804 · Credit: 2,815,091 · RAC: 0
5/31/2012 5:31:05 PM | SETI@home | Scheduler request completed: got 0 new tasks
5/31/2012 5:31:05 PM | SETI@home | Project has no tasks available
Here I go again. And I have no days left, since my cache is set to 0.1 days because I like to share. Too bad more people don't. me@rescam.org
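A 0.1-day cache means the client asks the scheduler for almost nothing each contact. As a rough sketch of the arithmetic behind that setting (the function name and per-core accounting here are illustrative assumptions, not the actual BOINC work-fetch code):

```python
# Toy model of how a BOINC-style client might size a work request:
# ask for enough CPU-seconds to keep `cache_days` of work queued.
# This is a simplified assumption, not the real BOINC client logic.

def work_request_seconds(cache_days: float, ncpus: int, queued_seconds: float) -> float:
    """CPU-seconds of new work to request so the cache covers `cache_days`."""
    target = cache_days * 86400 * ncpus      # total CPU-seconds the cache should hold
    shortfall = target - queued_seconds      # work already queued counts against it
    return max(0.0, shortfall)

# A 0.1-day cache on a 4-core host with an empty queue:
print(work_request_seconds(0.1, 4, 0.0))     # 34560.0 CPU-seconds (~9.6 core-hours)

# A host that already holds more than its target requests nothing:
print(work_request_seconds(0.1, 4, 50000.0))  # 0.0
```

With a target that small, a single "Project has no tasks available" reply is enough to leave the host idle, which is why tiny caches run dry the moment the splitters stop.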
Wedge009 · Joined: 3 Apr 99 · Posts: 451 · Credit: 431,396,357 · RAC: 553
I do wonder about that, too. I keep a one-day cache, which is normally enough to cover the weekly outage. If everyone who holds a ten-day cache dropped to something more reasonable, there'd be more WUs to share around... and I imagine that WUs would be validated far more quickly too. My 'pending' WU count hasn't gone below 1000 for weeks (it hit nearly 2800 after coming back from the outage). Is it really that good for the project for WUs to be 'sat on' for days by hosts with huge caches before they're actually processed? Just a thought; not having a go at anyone at all. Soli Deo Gloria
Joined: 20 Jun 99 · Posts: 6659 · Credit: 121,090,076 · RAC: 0
The way I see it is that we are all sharing whatever the servers can dish out. This is a huge project, with no telling how much data will need to be crunched before we get any results. Things are definitely heading in the right direction with SETI hardware, so I think the capability to deliver more work to us is increasing. The more work that gets sent out, the better our chances. I run a 5-day cache, and that is 5-6 thousand WUs. I crunch every one of them with few errors or invalids. In order for my rig to do as much science as it can, I don't let it run dry if I can help it. Steve. Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website
Wedge009 · Joined: 3 Apr 99 · Posts: 451 · Credit: 431,396,357 · RAC: 553
I suppose I was just considering Folding@home's philosophy of timeliness of results vs. the bulk of work that can be done. I appreciate the improvements in the hardware with regard to WU generation/transfer/validation/etc., but I was thinking that with smaller caches, WUs (from a global project perspective) could be processed and validated faster, freeing resources in the databases and other server processes and generally allowing S@h to run more efficiently. I admit I don't follow the forums here that thoroughly (some users have been less than friendly), so I don't know if this is an idea that's been raised previously. Soli Deo Gloria
tbret · Joined: 28 May 99 · Posts: 3380 · Credit: 296,162,071 · RAC: 40
> I suppose I was just considering Folding@home's philosophy of timeliness of results vs. the bulk of work that can be done. I appreciate the improvements in the hardware with regard to WU generation/transfer/validation/etc., but I was thinking that with smaller caches, WUs (from a global project perspective) could be processed and validated faster, freeing resources in the databases and other server processes and generally allowing S@h to run more efficiently.

You ought to hang out more! Yeah, it's talked about from time to time. I'd rather have the work I turn in error out or validate quickly, but... I kind of like to have work to do when there is a power outage or a RAID corruption or something, too. Just over this little outage I had a computer run out of CPU work (but for some reason it had plenty of GPU work in reserve). And for some reason we don't seem to always go FIFO, so the last thing you download may be the next thing you crunch while older tasks sit. So... what do you do? I'm trying a happy 5-day medium, but apparently this whole misreporting issue is messing with the estimated run times, so what I set isn't necessarily what I get.
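The "we don't seem to always go FIFO" observation matches deadline-driven scheduling: a client that picks the task nearest its report deadline will happily run a fresh download ahead of older work. A minimal sketch of the difference, with made-up task names and simplified fields (not the actual BOINC scheduler):

```python
# Sketch of FIFO vs. deadline-driven task selection. The Task fields and
# the two sample tasks are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    downloaded: int   # arrival order (1 = oldest)
    deadline: int     # days until the report deadline

tasks = [
    Task("old_mb_unit", downloaded=1, deadline=30),
    Task("new_short_unit", downloaded=2, deadline=5),
]

fifo_pick = min(tasks, key=lambda t: t.downloaded)   # oldest download first
edf_pick = min(tasks, key=lambda t: t.deadline)      # tightest deadline first

print(fifo_pick.name)   # old_mb_unit
print(edf_pick.name)    # new_short_unit: the later download runs first
```

When estimated run times are wrong (the misreporting issue mentioned above), the computed deadlines shift too, so the pick order drifts even further from download order.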
Joined: 21 Jun 01 · Posts: 21804 · Credit: 2,815,091 · RAC: 0
The difference is you aren't bragging about having several days' worth available to crunch while complaining your uploads won't go through. For the people who do, those are not words of encouragement to those of us who are running on empty. me@rescam.org
bill · Joined: 16 Jun 99 · Posts: 861 · Credit: 29,352,955 · RAC: 0
You choose to run on empty. Why should that be a concern to anyone else?
kittyman · Joined: 9 Jul 00 · Posts: 51520 · Credit: 1,018,363,574 · RAC: 1,004
Come now, kitties...... The subject of cache sizes has long been debated. The fact remains, it is up to individual users to choose what cache size they wish to maintain within the choices made available by this project. Let's not stir the pot, please. "Time is simply the mechanism that keeps everything from happening all at once."
Lionel · Joined: 25 Mar 00 · Posts: 680 · Credit: 563,640,304 · RAC: 597
Getting `Scheduler request failed: HTTP Internal Server Error`. I don't believe that this problem is fixed ... cheers
Joined: 18 Sep 03 · Posts: 834 · Credit: 1,807,369 · RAC: 0
> If everyone who holds a ten-day cache dropped to something more reasonable, there'd be more WUs to share around...

No, the splitters stop when about 250,000 WUs are ready to send, so once they run out of tapes (for whatever reason), there are usually no more than 250,000 WUs to send out regardless of what the people have in their caches. Larger caches actually force the servers to generate a larger work buffer, which is then stored in the cache of each client, so if the servers are down, the clients can still do a lot of work for the project and return it after the outage. If we all had just a one-day cache, processing for S@H would have stopped completely after about 24 hours; with larger caches we process the WUs like nothing happened, and the current servers are powerful enough to catch up and restore our caches a while after the outage anyway.
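The splitter behaviour described above is a classic high-water-mark buffer. A toy model, using the ~250,000 figure from the post and an assumed resume threshold (the low-water mark and rates are illustrative, not the real server configuration):

```python
# Toy model of the splitter / ready-to-send buffer interaction: splitters
# pause at a high-water mark and resume once demand drains the buffer.
# HIGH_WATER comes from the post above; LOW_WATER and the rates are assumptions.

HIGH_WATER = 250_000
LOW_WATER = 200_000   # hypothetical resume point

def step(buffer: int, splitting: bool, split_rate: int, demand: int):
    """Advance one time step; returns (new_buffer, splitting)."""
    if splitting and buffer >= HIGH_WATER:
        splitting = False          # buffer full: splitters go idle
    elif not splitting and buffer < LOW_WATER:
        splitting = True           # buffer drained: splitters restart
    produced = split_rate if splitting else 0
    return max(0, buffer + produced - demand), splitting

# With no tapes to split (split_rate=0), heavy demand drains the buffer:
buf, on = 250_000, True
for _ in range(3):
    buf, on = step(buf, on, split_rate=0, demand=60_000)
print(buf)   # 70000: the Ready to Send buffer dwindles once splitters stop
```

This is why, once the tapes run out, total cache size across all hosts cannot exceed roughly what was already buffered: the cap is on the server side, not on how the clients divide it up.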
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
My single-core machine was down to one MB task left of its 2.5-day cache, but on the first scheduler contact after it all came back up, it reported all the completed tasks and got 8 new ones to fill the cache back up. My main cruncher is AP-only and reported what it completed during the outage, but hasn't gotten any new APs yet. Just a little less than a day before I run out. Not worried, nor complaining. I'll get more work eventually. Linux laptop record uptime: 1511d 20h 19m (ended due to the power brick giving up)
©2025 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.