Panic Mode On (40) Server problems
perryjay Send message Joined: 20 Aug 02 Posts: 3377 Credit: 20,676,751 RAC: 0 |
This is good! I have dropped another 6000 pending units in the last hour. I think they are unclogging the system. :) My pendings have dropped by over half from 33,000 to around 16,000. All the work I have on my machine are at least _2s and above. I just finished a _7 and have a couple of _5 and _6s coming up. One problem though is that I've got about 50 new ghosts this time around. They did resend 20 of them to me as lost tasks but that still leaves a bunch for me especially since they are all other people's cast offs. I hope something triggers another resend lost tasks soon as I'd really like to get those done. PROUD MEMBER OF Team Starfire World BOINC |
Dirk Sadowski Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
http://setiathome.berkeley.edu Project is slow due to a database machine swap. The master boinc database machine (mork) is not operating properly. The task of serving the database has been moved to another machine (jocelyn). This temporary master database server does not have the capacity to run the project at full speed. Work distribution will be slow until the new server arrives. The purchase of this new machine was made possible by a very successful funding drive carried out by SETI@Home participants. 23 Oct 2010 17:37:39 UTC |
Klurt Send message Joined: 30 Nov 99 Posts: 23 Credit: 13,699,019 RAC: 0 |
Can I say something positive for a change? Downloads haven't been this quick in ages!! (Yes, I am getting a tiny bit of the new work.) |
soft^spirit Send message Joined: 18 May 99 Posts: 6497 Credit: 34,134,168 RAC: 0 |
I have to agree.. I am getting some work, and it is coming in and out really smoothly. Although I doubt anyone can really load up for an outage like this. Still.. it makes for a happy weekend. Janice |
SciManStev Send message Joined: 20 Jun 99 Posts: 6658 Credit: 121,090,076 RAC: 0 |
I have been getting just a little work as well, but I'm drinking beer, crunching Einstein, and having a good day! The uploads and downloads, as you said, are going very quickly if your scheduler request happens to nab any work. Usually it will grab a few GPU units, which is fine. They are processed very quickly, and Einstein is warming my CPU cores. Yep, no panic here! Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 66334 Credit: 55,293,173 RAC: 49 |
I have been getting just a little work as well, but I'm drinking beer, crunching Einstein, and having a good day! The uploads and downloads as you said are going very quickly if your scheduler request happens to nab any work. Usually it will grab a few GPU units, which is fine. They are processsed very quickly, and Einstein is warming my CPU cores. Yep, no panic here! I have enough for about 6 and 16 hours(cpu/gpu) work and I'm downloading more while I wait for a few weeks, Details for those who want to know more about what I mean are in this thread Here. Savoir-Faire is everywhere! The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Something nice is happening. Over night, my pendings dropped by 100,000, and my RAC shot up. Clearing the backlog has got to be a good thing, not so much for my RAC, but for the huge amount of space it takes up. Space freeing doesn't happen until after a canonical result has been assimilated, so today's validations aren't likely to make additional space before Monday at the earliest. The Assimilator queue had built up to over 1.66 million WUs, but it's gradually shrinking. When oscar is put into service perhaps assimilation will work somewhat faster, meanwhile thumper is getting the job done with a little margin. Joe |
SciManStev Send message Joined: 20 Jun 99 Posts: 6658 Credit: 121,090,076 RAC: 0 |
Something nice is happening. Over night, my pendings dropped by 100,000, and my RAC shot up. Clearing the backlog has got to be a good thing, not so much for my RAC, but for the huge amount of space it takes up. Even if delayed, this seems like a very positive step. I am very happy to see the dynamics shift, and things stop getting worse, but start getting better. I am very pleased in what I am seeing. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
Fred J. Verster Send message Joined: 21 Apr 04 Posts: 3252 Credit: 31,903,643 RAC: 0 |
That's why so many MB WUs were validated just in the last few hours. I also saw quite a number of ghosts, all due 23 October, last reported and validated. Second WU. |
Fred J. Verster Send message Joined: 21 Apr 04 Posts: 3252 Credit: 31,903,643 RAC: 0 |
It's about time I stop my 'what throughput or RAC can I achieve' game of selecting projects to suit my hardware. Well, now I know, but I'm not happy about it. SETI, Einstein and GPUgrid on my XP64 FERMI (GTX480) rig; Docking, Leiden, CPDN and SETI too on my 1st QUAD 2.4GHz + GTS250 host; and MW, Collatz C., Docking and (till now) DNETC, SETI and SETI Bêta on the 2nd QUAD Q6600 (ATI) + EAH4850 & EAH5870. And with SETI down for a few weeks, that gives me time to reconsider......... what projects I will support. Everyone has his own ideas of what is important and, considering the hardware available, chooses one or more projects. In the coming weeks there's also time to do your/our own hardware inspection/cleaning, run Disk Defrag and a full AV/trojan scan, BOINC included (switched off), etc. I'm really getting tired of the RAC rat race; I'm showing the effects of being a RAC junkie ;^). But really, this DNETC was an eye-opener: it pushed other projects aside and made use of all 4 CPU cores and both ATI cards. I found an option to switch off CPU use, so it sent hundreds of (ATI14) ATIx5xxxxxx WUs, done in 15 minutes, giving 3200 credits, at the expense of burning your card(s). (I was amazed when using a PCI-E x2 setting, which didn't slow it down much, well, 2 minutes less...) It used the GPUs to the very max; even with all settings below stock, the noise of the fans is annoying. I can use some time making up my mind what projects I'll support, also because of the ongoing increase in electricity prices a.t.m. Feel free, admins, to remove this part, cause it's part of a double post ;) |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
I just noticed some of the numbers on the server status page. Ready to Send is soaring, and results in the field are dropping surprisingly fast. I guess one good thing about this.. when everything comes back online, there will be PLENTY of work for everyone. I'm going to go ahead and predict 8 solid days at 94mbit on the network graph, assuming the new servers will handle that kind of stress for that long, which they certainly should. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
Why are the deleters disabled? We can't look at results anyway, so better to delete all assimilated results/tasks to free disk space before the transition, no? Also, completely disabling the download servers is not the way to go IMHO. Much better would be to just disable the splitters. Then resends would provide some small work traffic that should not harm the database server but would help to clean everything up. |
Link Send message Joined: 18 Sep 03 Posts: 834 Credit: 1,807,369 RAC: 0 |
Why are the deleters disabled? We can't look at results anyway, so better to delete all assimilated results/tasks to free disk space before the transition, no? Also db_purge.x86_64 should be enabled, to remove all validated workunits/results from the BOINC database before it is copied to the new server. |
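For context, db_purge runs as one of the project daemons listed in a BOINC project's config.xml. A minimal sketch of what such an entry might look like (the flag values here are illustrative choices, not the project's actual settings):

```xml
<!-- Hypothetical daemon entry in a BOINC project's config.xml.
     db_purge removes workunits/results that are completely finished
     (validated, assimilated, files deleted) from the database,
     optionally archiving them to flat files first. -->
<daemon>
  <cmd>db_purge.x86_64 -min_age_days 7 -gzip -d 2</cmd>
</daemon>
```

Purging finished rows is exactly what would shrink the database before a dump-and-copy to a new server.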
DJStarfox Send message Joined: 23 May 01 Posts: 1066 Credit: 1,226,053 RAC: 2 |
Agreed, they should disable the feeder.x86_64 and enable the deleters and purge. It makes no sense to assign work to clients if they can't download anything. |
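In stock BOINC, individual daemons can be switched off without stopping the whole project by marking them disabled in config.xml; a sketch under that assumption (the exact command lines are hypothetical):

```xml
<!-- Hypothetical excerpt: take the feeder out of rotation while
     leaving a file deleter running. <disabled> tells the project's
     start script to skip that daemon. -->
<daemon>
  <cmd>feeder.x86_64 -d 2</cmd>
  <disabled>1</disabled>
</daemon>
<daemon>
  <cmd>file_deleter -d 2</cmd>
</daemon>
```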
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
Agreed that should disable the feeder.x86_64, and enable the deleters and purge. Makes no sense to assign work to clients if they can't download anything. I'm not so sure; the feeder might need to be running for the scheduler to work. And then there's this changeset that appeared the other day: Changeset 22601 - scheduler/feeder: add a project config option <dont_send_jobs>. While there's no proof that this changeset has been applied here, it's the most obvious project that would require it. Claggy |
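Going by the changeset description, the new option would be a flag in the project's config.xml; a guess at what enabling it might look like (placement and syntax assumed from BOINC's usual config conventions):

```xml
<!-- Hypothetical: with <dont_send_jobs> set, the scheduler (and
     feeder) keep running, so clients can still contact the project
     and report results, but no new jobs are handed out. -->
<config>
  <dont_send_jobs>1</dont_send_jobs>
</config>
```

That would resolve the tension above: the feeder stays up for the scheduler's sake, yet no work is assigned that clients can't download.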
Link Send message Joined: 18 Sep 03 Posts: 834 Credit: 1,807,369 RAC: 0 |
Work is not assigned to anyone... otherwise the results ready to send would be about 0 by now with the splitters turned off. |
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
Work is not assigned to anyone... otherwise the results ready to send would be about 0 by now with the splitters turned off. Tasks to resend are assigned even now, via the lost-task resend feature. But it's impossible to download them. |
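The lost-task resend behaviour mentioned here is a standard BOINC scheduler option; a minimal sketch of how it is typically enabled in a project's config.xml:

```xml
<!-- When the scheduler notices a client no longer has a task that
     the database still shows as assigned to it (a "ghost"), it
     resends that task instead of waiting for the deadline to pass. -->
<config>
  <resend_lost_results/>
</config>
```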
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
I'm getting ever-closer to having a cold room. Only about a day and a half of APs left to crunch. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving-up) |
James Sotherden Send message Joined: 16 May 99 Posts: 10436 Credit: 110,373,059 RAC: 54 |
I'm getting ever-closer to having a cold room. Only about a day and a half of APs left to crunch. LOL, so that's why I had to jack the thermostat up a notch; I turned off the P4 and the i7. I did blow out all the dust bunnies. The i7 didn't seem to have much dust in it, first time I have blown it out in 18 months. Old James |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 66334 Credit: 55,293,173 RAC: 49 |
I'm getting ever-closer to having a cold room. Only about a day and a half of APs left to crunch. So that's where all the wind and cold air came from. ;) Savoir-Faire is everywhere! The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.