Panic Mode On (40) Server problems

Profile perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 1044012 - Posted: 23 Oct 2010, 16:57:04 UTC - in response to Message 1044006.  

This is good! I have dropped another 6000 pending units in the last hour. I think they are unclogging the system. :)

Steve


My pendings have dropped by over half, from 33,000 to around 16,000. All the work I have on my machine is _2s and above. I just finished a _7 and have a couple of _5s and _6s coming up.

One problem, though, is that I've got about 50 new ghosts this time around. They did resend 20 of them to me as lost tasks, but that still leaves a bunch, especially since they are all other people's cast-offs. I hope something triggers another lost-task resend soon, as I'd really like to get those done.



PROUD MEMBER OF Team Starfire World BOINC
Profile Sutaru Tsureku
Volunteer tester
Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1044027 - Posted: 23 Oct 2010, 18:20:21 UTC

http://setiathome.berkeley.edu

Project is slow due to a database machine swap.
The master boinc database machine (mork) is not operating properly. The task of serving the database has been moved to another machine (jocelyn). This temporary master database server does not have the capacity to run the project at full speed. Work distribution will be slow until the new server arrives. The purchase of this new machine was made possible by a very successful funding drive carried out by SETI@Home participants. 23 Oct 2010 17:37:39 UTC

Klurt
Joined: 30 Nov 99
Posts: 23
Credit: 13,699,019
RAC: 0
Netherlands
Message 1044063 - Posted: 23 Oct 2010, 20:17:52 UTC - in response to Message 1044027.  

Can I say something positive for a change?
Downloads haven't been this quick in ages!! (Yes, I am getting a tiny bit of the new work.)
Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1044065 - Posted: 23 Oct 2010, 20:21:21 UTC - in response to Message 1044063.  

I have to agree. I am getting some work, and it is coming in and out really smoothly, although I doubt anyone can really load up for an outage like this.

Still.. it makes for a happy weekend.
Janice
Profile SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6654
Credit: 121,090,076
RAC: 0
United States
Message 1044066 - Posted: 23 Oct 2010, 20:30:13 UTC
Last modified: 23 Oct 2010, 20:30:49 UTC

I have been getting just a little work as well, but I'm drinking beer, crunching Einstein, and having a good day! The uploads and downloads, as you said, are going very quickly if your scheduler request happens to nab any work. Usually it will grab a few GPU units, which is fine. They are processed very quickly, and Einstein is warming my CPU cores. Yep, no panic here!

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65971
Credit: 55,293,173
RAC: 49
United States
Message 1044071 - Posted: 23 Oct 2010, 20:45:38 UTC - in response to Message 1044066.  

I have been getting just a little work as well, but I'm drinking beer, crunching Einstein, and having a good day! The uploads and downloads, as you said, are going very quickly if your scheduler request happens to nab any work. Usually it will grab a few GPU units, which is fine. They are processed very quickly, and Einstein is warming my CPU cores. Yep, no panic here!

Steve

I have enough for about 6 and 16 hours (CPU/GPU) of work, and I'm downloading more while I wait for a few weeks. Details for those who want to know more about what I mean are in this thread Here.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Josef W. Segur
Volunteer developer
Volunteer tester
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1044076 - Posted: 23 Oct 2010, 21:05:15 UTC - in response to Message 1043983.  

Something nice is happening. Over night, my pendings dropped by 100,000, and my RAC shot up. Clearing the backlog has got to be a good thing, not so much for my RAC, but for the huge amount of space it takes up.

Steve

Space freeing doesn't happen until after a canonical result has been assimilated, so today's validations aren't likely to make additional space before Monday at the earliest. The Assimilator queue had built up to over 1.66 million WUs, but it's gradually shrinking. When oscar is put into service perhaps assimilation will work somewhat faster, meanwhile thumper is getting the job done with a little margin.
                                                                 Joe
Profile SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6654
Credit: 121,090,076
RAC: 0
United States
Message 1044078 - Posted: 23 Oct 2010, 21:12:22 UTC - in response to Message 1044076.  

Something nice is happening. Over night, my pendings dropped by 100,000, and my RAC shot up. Clearing the backlog has got to be a good thing, not so much for my RAC, but for the huge amount of space it takes up.

Steve

Space freeing doesn't happen until after a canonical result has been assimilated, so today's validations aren't likely to make additional space before Monday at the earliest. The Assimilator queue had built up to over 1.66 million WUs, but it's gradually shrinking. When oscar is put into service perhaps assimilation will work somewhat faster, meanwhile thumper is getting the job done with a little margin.
                                                                 Joe


Even if delayed, this seems like a very positive step. I am very happy to see the dynamics shift, and things stop getting worse, but start getting better. I am very pleased in what I am seeing.

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
Profile Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1044135 - Posted: 23 Oct 2010, 23:29:35 UTC - in response to Message 1044076.  

That's why so many MB WUs were validated just in the last few hours; I also saw quite a number of ghosts, all due 23 October.
Last reported and validated.
Second WU.


Profile Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1044798 - Posted: 29 Oct 2010, 17:34:08 UTC - in response to Message 1044135.  

It's about time I stopped chasing whatever throughput or RAC I can achieve by selecting projects to suit my hardware.
Well, now I know, but I'm not happy about it.
SETI, Einstein and GPUgrid on my XP64 FERMI (GTX480) rig; Docking, Leiden, CPDN and SETI too on my 1st QUAD 2.4GHz + GTS250 host; and MW, Collatz C., Docking and (till now) DNETC, SETI and SETI Beta on the 2nd QUAD Q6600 (ATI) + EAH4850 & EAH5870.

With SETI down for a few weeks, that gives me time to reconsider which projects I will support.
Everyone has their own ideas about what is important and, considering the hardware available, chooses one or more projects.

In the coming weeks there's also time to do your own hardware inspection and cleaning: run Disk Defrag and a full AV/trojan scan, BOINC included (switched off), etc.

I'm really getting tired of the RAC rat race; I'm showing the effects of being a RAC junkie ;^). But really, this DNETC was an eye-opener: it pushed other projects aside and made use of all 4 CPU cores and both ATI cards. I found an option to switch off CPU use, so it sent hundreds of (ATI14) ATIx5xxxxxx WUs, each done in 15 minutes and giving 3200 credits, at the expense of burning your card(s). (I was amazed that using a PCI-E x2 setting didn't slow it down, well, only 2 minutes less...) It used the GPUs to the very max; even with all settings below stock, the noise of the fans is annoying.

I can use some time making up my mind about which projects I'll support, also because of the ongoing increase in electricity prices a.t.m.
Feel free, admins, to remove this part, since it's part of a double post ;)

Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1044943 - Posted: 30 Oct 2010, 5:09:08 UTC

I just noticed some of the numbers on the server status page: Ready to Send is soaring, and Results in the Field is dropping surprisingly fast.

I guess one good thing about this: when everything comes back online, there will be PLENTY of work for everyone. I'm going to go ahead and predict 8 solid days at 94 Mbit on the network graph, assuming the new servers will handle that kind of stress for that long, which they certainly should.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1044967 - Posted: 30 Oct 2010, 7:54:02 UTC

Why are the deleters disabled? We can't look at results anyway, so better to delete all assimilated results/tasks to free disk space before the transition, no?

Also, completely disabling the download servers is not the way to go, IMHO. Much better would be just to disable the splitters. Then resends would provide some small work traffic that should not harm the database server but would help to clean everything up.
Profile Link
Joined: 18 Sep 03
Posts: 834
Credit: 1,807,369
RAC: 0
Germany
Message 1044996 - Posted: 30 Oct 2010, 12:32:13 UTC - in response to Message 1044967.  

Why are the deleters disabled? We can't look at results anyway, so better to delete all assimilated results/tasks to free disk space before the transition, no?

Also, db_purge.x86_64 should be enabled to remove all validated workunits/results from the BOINC database before it is copied to the new server.
DJStarfox
Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 1045002 - Posted: 30 Oct 2010, 13:42:50 UTC - in response to Message 1044996.  

Agreed; they should disable feeder.x86_64 and enable the deleters and purge. It makes no sense to assign work to clients if they can't download anything.
Claggy
Volunteer tester
Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1045005 - Posted: 30 Oct 2010, 13:55:13 UTC - in response to Message 1045002.  
Last modified: 30 Oct 2010, 13:57:03 UTC

Agreed; they should disable feeder.x86_64 and enable the deleters and purge. It makes no sense to assign work to clients if they can't download anything.


I'm not so sure; the feeder might need to be running for the scheduler to work. And then there's this changeset that appeared the other day: Changeset 22601

- scheduler/feeder: add a project config option <dont_send_jobs>.


If set, the feeder doesn't read jobs into shmem,
and the scheduler doesn't send jobs.
Intended for use when a project wants to process
a backlog of completed jobs and not issue more.


While there's no proof that this changeset has been applied here, this is the most obvious project that would require it.
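For reference, an option like the one the changeset describes would presumably be set in the project's config.xml. The following is only a sketch of how that might look, assuming the standard BOINC project config layout; whether SETI@home actually set it is, as noted above, unconfirmed:

```xml
<boinc>
  <config>
    <!-- Per changeset 22601 (hypothetical usage): the feeder loads no
         jobs into shared memory and the scheduler sends no jobs, while
         clients can still report completed work for processing. -->
    <dont_send_jobs/>
    <!-- ...other project config options... -->
  </config>
</boinc>
```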

Claggy
Profile Link
Joined: 18 Sep 03
Posts: 834
Credit: 1,807,369
RAC: 0
Germany
Message 1045007 - Posted: 30 Oct 2010, 13:55:55 UTC - in response to Message 1045002.  

Work is not assigned to anyone... otherwise the results ready to send would be about 0 by now with the splitters turned off.
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1045064 - Posted: 30 Oct 2010, 18:49:19 UTC - in response to Message 1045007.  

Work is not assigned to anyone... otherwise the results ready to send would be about 0 by now with the splitters turned off.

Tasks to resend are assigned even now, via the lost-task resend feature.
But it's impossible to download them.
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1045107 - Posted: 30 Oct 2010, 21:25:15 UTC

I'm getting ever-closer to having a cold room. Only about a day and a half of APs left to crunch.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
Profile James Sotherden
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1045123 - Posted: 30 Oct 2010, 23:07:53 UTC - in response to Message 1045107.  

I'm getting ever-closer to having a cold room. Only about a day and a half of APs left to crunch.


LOL, so that's why I had to jack the thermostat up a notch: I turned off the P4 and the i7. I did blow out all the dust bunnies. The i7 didn't seem to have much dust in it; first time I have blown it out in 18 months.

Old James
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65971
Credit: 55,293,173
RAC: 49
United States
Message 1045130 - Posted: 30 Oct 2010, 23:29:06 UTC - in response to Message 1045123.  

I'm getting ever-closer to having a cold room. Only about a day and a half of APs left to crunch.


LOL, so that's why I had to jack the thermostat up a notch: I turned off the P4 and the i7. I did blow out all the dust bunnies. The i7 didn't seem to have much dust in it; first time I have blown it out in 18 months.

So that's where all the wind and cold air came from. ;)
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.