Message boards : Number crunching : Panic Mode On (14) Server problems
kittyman (Joined: 9 Jul 00, Posts: 51477, Credit: 1,018,363,574, RAC: 1,004)
I stopped accepting work on my Core i7 so I can switch app_info's and apps. This machine goes through a couple hundred WUs in a matter of a couple hours. My kitties are doing just fine..... Their kibble caches were full before the crash. They are good to go for a week or so, full of P&V and AP...LOL.
"Time is simply the mechanism that keeps everything from happening all at once."
arkayn (Joined: 14 May 99, Posts: 4438, Credit: 55,006,323, RAC: 0)
perryjay (Joined: 20 Aug 02, Posts: 3377, Credit: 20,676,751, RAC: 0)
I got one "new" AP V5 (wu_5) and three MBs. All three MBs were VLARs; my Opt-app kicked them out. :(
PROUD MEMBER OF Team Starfire World BOINC
Zydor (Joined: 4 Oct 03, Posts: 172, Credit: 491,111, RAC: 0)
Poor VLARs *sniff* no one wants them (me included) - all lonely spinning around out there in the great void - at least they get to go to many places and see many things in their travels rofl :)
Regards Zy
Kinguni (Joined: 15 Feb 00, Posts: 239, Credit: 9,043,007, RAC: 0)
kittyman (Joined: 9 Jul 00, Posts: 51477, Credit: 1,018,363,574, RAC: 1,004)
Zydor wrote: Poor VLARs *sniff* no one wants them (me included) - all lonely spinning around out there in the great void - at least they get to go to many places and see many things in their travels rofl :)
'S just not right, dissing them VLARs...... If the app cannot handle it, fix the app. Cherry picking is not in good taste just to suit your end game. My rigs are all doing AP right now, but my preferences will take MB of any sort if the AP supply dwindles.....
"Time is simply the mechanism that keeps everything from happening all at once."
Zydor (Joined: 4 Oct 03, Posts: 172, Credit: 491,111, RAC: 0)
"I'll take the VLARs. I'm not picky."
See - there are still kind hearted souls in this world - there's still hope for the VLAR Liberation Front without extremist attacks on their CUDA brothers :)
Regards Zy
arkayn (Joined: 14 May 99, Posts: 4438, Credit: 55,006,323, RAC: 0)
perryjay (Joined: 20 Aug 02, Posts: 3377, Credit: 20,676,751, RAC: 0)
I'll do them when they come up on my CPU, but my poor little 8500GT Cuda just isn't up to them. Right now, both my cores are busy with AP V5s.
PROUD MEMBER OF Team Starfire World BOINC
Cosmic_Ocean (Joined: 23 Dec 00, Posts: 3027, Credit: 13,516,867, RAC: 13)
I'll do them on my new Opteron 2222s. I have some presently (after changing to an MB-only venue) and it looks like they should run pretty quickly. AP does pay more credits/second than MB, but MB typically grants credit faster. I'm not picky either. I would actually prefer a somewhat even mix of AP/MB like we used to get. Now it seems if you have ap_v5 selected... that's all you get.
Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
Bernie Vine (Joined: 26 May 99, Posts: 9958, Credit: 103,452,613, RAC: 328)
As I don't understand a word of the first sentence, OK, I will drop it. But my name is BERNIE and I hate it when people can't be bothered to READ what is in front of them. BYE BYE
Grant (SSSF) (Joined: 19 Aug 99, Posts: 13841, Credit: 208,696,464, RAC: 304)
My cache has finally refilled, which is surprising considering the length of the outage & the amount of traffic still flowing. I wasn't expecting to get much work for at least 12 hours after things got rolling again.
Grant Darwin NT
Grant (SSSF) (Joined: 19 Aug 99, Posts: 13841, Credit: 208,696,464, RAC: 304)
Usually anything above 85 Mbit/s means I can't get downloads to happen.
Grant Darwin NT
Niteryder (Joined: 1 Mar 99, Posts: 64, Credit: 22,663,988, RAC: 18)
The graphs are taking a dive again, pages are loading slowly, and no work is available.
Rudy (Joined: 23 Jun 99, Posts: 189, Credit: 794,998, RAC: 0)
Very slow page updates here also; must be the weekend.
Labbie (Joined: 19 Jun 06, Posts: 4083, Credit: 5,930,102, RAC: 0)
Seems to be working normally again. I was just able to report about 50 WUs too.
Calm Chaos Forum...Join Calm Chaos Now
Cosmic_Ocean (Joined: 23 Dec 00, Posts: 3027, Credit: 13,516,867, RAC: 13)
I noticed it started taking a while to load the pages a few hours ago, and then it just stopped loading them (without giving a server error). Checked the cricket graphs and the "in" was at 700 Kbit. Yes, below 1 megabit. When the database was down, we were still running about 3 Mbit (which I'm guessing was almost completely forum traffic, since scheduler requests sent no replies). Seems fine now. We're maxed out at the bit ceiling again.
Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
zoom3+1=4 (Joined: 30 Nov 03, Posts: 66282, Credit: 55,293,173, RAC: 49)
Cosmic_Ocean wrote: I noticed it started taking a while to load the pages a few hours ago, and then it just stopped loading them (without giving a server error). Checked the cricket graphs and the "in" was at 700 Kbit. Yes, below 1 megabit. When the database was down, we were still running about 3 Mbit (which I'm guessing was almost completely forum traffic, since scheduler requests sent no replies).
The SNAFU has receded. All is now good to go.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST
arkayn (Joined: 14 May 99, Posts: 4438, Credit: 55,006,323, RAC: 0)
Cosmic_Ocean (Joined: 23 Dec 00, Posts: 3027, Credit: 13,516,867, RAC: 13)
Alright, well, with everything working again, I have finally been able to prove something that I've been saying as an unfounded rumor. Turns out I was right.
Venue with MB, AP, AP_v5, allow for others, no GPU: nothing but ap_v5 WUs.
Venue with MB only, no GPU: plenty of MBs to be had.
2009-03-22 22:14:42|SETI@home|Sending scheduler request: To fetch work. Requesting 1094534 seconds of work, reporting 16 completed tasks
2009-03-22 22:14:47|SETI@home|Scheduler request succeeded: got 8 new tasks
2009-03-22 22:15:03|SETI@home|Sending scheduler request: To fetch work. Requesting 1037461 seconds of work, reporting 0 completed tasks
2009-03-22 22:15:08|SETI@home|Scheduler request succeeded: got 19 new tasks
2009-03-22 22:15:24|SETI@home|Sending scheduler request: To fetch work. Requesting 901757 seconds of work, reporting 0 completed tasks
2009-03-22 22:15:29|SETI@home|Scheduler request succeeded: got 15 new tasks
2009-03-22 22:15:44|SETI@home|Sending scheduler request: To fetch work. Requesting 794627 seconds of work, reporting 0 completed tasks
2009-03-22 22:15:49|SETI@home|Scheduler request succeeded: got 17 new tasks
2009-03-22 22:16:00|SETI@home|Sending scheduler request: To fetch work. Requesting 673232 seconds of work, reporting 0 completed tasks
2009-03-22 22:16:05|SETI@home|Scheduler request succeeded: got 2 new tasks
2009-03-22 22:16:21|SETI@home|Sending scheduler request: To fetch work. Requesting 658994 seconds of work, reporting 0 completed tasks
2009-03-22 22:16:26|SETI@home|Scheduler request succeeded: got 15 new tasks
2009-03-22 22:16:42|SETI@home|Sending scheduler request: To fetch work. Requesting 551662 seconds of work, reporting 0 completed tasks
2009-03-22 22:16:48|SETI@home|Scheduler request succeeded: got 18 new tasks
2009-03-22 22:17:04|SETI@home|Sending scheduler request: To fetch work. Requesting 422650 seconds of work, reporting 0 completed tasks
2009-03-22 22:17:09|SETI@home|Scheduler request succeeded: got 8 new tasks
2009-03-22 22:17:26|SETI@home|Sending scheduler request: To fetch work. Requesting 365525 seconds of work, reporting 0 completed tasks
2009-03-22 22:17:31|SETI@home|Scheduler request succeeded: got 8 new tasks
2009-03-22 22:17:46|SETI@home|Sending scheduler request: To fetch work. Requesting 308477 seconds of work, reporting 0 completed tasks
2009-03-22 22:17:52|SETI@home|Scheduler request succeeded: got 16 new tasks
2009-03-22 22:18:07|SETI@home|Sending scheduler request: To fetch work. Requesting 194249 seconds of work, reporting 0 completed tasks
2009-03-22 22:18:13|SETI@home|Scheduler request succeeded: got 18 new tasks
2009-03-22 22:18:29|SETI@home|Sending scheduler request: To fetch work. Requesting 65725 seconds of work, reporting 0 completed tasks
2009-03-22 22:18:35|SETI@home|Scheduler request succeeded: got 10 new tasks
2009-03-22 22:18:51|SETI@home|Sending scheduler request: To fetch work. Requesting 2846 seconds of work, reporting 0 completed tasks
2009-03-22 22:18:57|SETI@home|Scheduler request succeeded: got 1 new tasks
2009-03-22 22:19:08|SETI@home|Sending scheduler request: To fetch work. Requesting 1251 seconds of work, reporting 0 completed tasks
2009-03-22 22:19:14|SETI@home|Scheduler request succeeded: got 0 new tasks
2009-03-22 22:19:14|SETI@home|Message from server: No work sent
2009-03-22 22:19:14|SETI@home|Message from server: No work is available for SETI@home Enhanced
2009-03-22 22:20:14|SETI@home|Sending scheduler request: To fetch work. Requesting 1368 seconds of work, reporting 0 completed tasks
2009-03-22 22:20:19|SETI@home|Scheduler request succeeded: got 1 new tasks
True, that second-to-last request is misleading. It has already been proven that the scheduler can only assign what it has left from the last feeder query, so it appears I had pretty decent timing for the requests on all of them except that one. I'm calling this case closed. :p
Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
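For anyone who wants to tally a run like the one above from their own client messages, here is a minimal sketch that pairs each "Requesting N seconds of work" line with the "got M new tasks" reply that follows it. It assumes the plain message format shown in the log above; the default file name stdoutdae.txt is only the usual location of the BOINC client's message log and may differ on your setup.

```python
# Minimal sketch: pair each work request in a BOINC client message log with
# the reply that follows it and report how many tasks each request returned.
# Assumes the message format shown in the post above; "stdoutdae.txt" is only
# the usual default name for the client's message log.
import re
import sys

REQUEST = re.compile(r"Requesting (\d+) seconds of work")
GRANTED = re.compile(r"got (\d+) new tasks")

def summarize(lines):
    """Yield (seconds_requested, tasks_granted) for each request/reply pair."""
    pending = None
    for line in lines:
        m = REQUEST.search(line)
        if m:
            pending = int(m.group(1))
            continue
        m = GRANTED.search(line)
        if m and pending is not None:
            yield pending, int(m.group(1))
            pending = None

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "stdoutdae.txt"
    with open(path) as log:
        total = 0
        for seconds, tasks in summarize(log):
            total += tasks
            print(f"asked for {seconds:>8} s of work -> got {tasks:>2} tasks")
        print(f"total tasks received: {total}")
```

Run over the messages quoted above, it would show the requested work shrinking from roughly 1.09 million seconds down to a few thousand as the cache fills, along with the handful of requests where the feeder had little or nothing left to hand out.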