Panic Mode On (95) Server Problems?
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
The Cricket just made a huge jump, this must mean something. They're transferring new data to be split. The Scheduler is still dead.
Grant Darwin NT
Claggy · Joined: 5 Jul 99 · Posts: 4654 · Credit: 47,537,079 · RAC: 4
"The Scheduler is still dead." And now my C2D T8100 has managed to contact the scheduler: http://setiathome.berkeley.edu/hosts_user.php?userid=35858 Not dead, just strangled.
Claggy
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
"Not dead, just strangled." How about mostly dead? One of my systems has been able to get a response about twice in the last couple of hours. The other one, No New Tasks or not, no luck. EDIT: until I made this post, that is. The usual perversity of nature and inanimate machines rears its head yet again.
Grant Darwin NT
Darth Beaver · Joined: 20 Aug 99 · Posts: 6728 · Credit: 21,443,075 · RAC: 3
Well, can't upload, so the project has crapped itself again... Maybe it's time to shut the whole thing down till they fix everything, as it's starting to get ridiculous.
betreger · Joined: 29 Jun 99 · Posts: 11416 · Credit: 29,581,041 · RAC: 66
If that is true, Einstein will get a moderate boost.
Mr. Kevvy · Joined: 15 May 99 · Posts: 3806 · Credit: 1,114,826,392 · RAC: 3,319
"Have a nice and fan noise free weekend folks." Nah, that's what Einstein@Home is there for. Spent too much on this farm to keep it idle. I did plan on hanging out there until Green Bank was online, but of course they had to go and have their first outage ever shortly after I went over... typical. Well, I'll hang out there again and see if my presence still proves as destabilizing. It would be wonderful if there was crosstalk between the two projects, as they both use Arecibo's data; Einstein is far smoother and less labour-intensive: i.e. no tiny work cache limit (you ask for two days' cache, you get two days' cache, not four hours' cache with your client hammering the scheduler every five minutes for more), it doesn't go offline for hours of planned maintenance every Tuesday and then stay unreachable for about as long afterwards while getting hammered with thousands of catch-up requests, and it hardly ever goes offline unplanned either. So ideally there would just be one set of Arecibo data and we'd search it for both ETI and pulsars or whatever else. But dream on, Mr. Kevvy. :^)
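A rough back-of-the-envelope sketch of the cache complaint above: with a small server-side in-progress cap, the work a fast host can actually hold is roughly the cap times the average task runtime, no matter how many days of cache the client asks for, and once the cap is hit the client can only top up as tasks finish. The numbers below are made-up examples for illustration, not SETI@home's actual limits.

```python
# Illustrative arithmetic only; the task limit, runtime, and parallelism
# below are assumptions, not the project's real settings.

def effective_cache_hours(task_limit, avg_task_minutes, parallel_tasks):
    """Hours of work a host can actually hold under a per-host in-progress cap."""
    wall_minutes = task_limit * avg_task_minutes / parallel_tasks
    return wall_minutes / 60.0

def contact_interval_minutes(avg_task_minutes, parallel_tasks):
    """Once the cap is hit, the client re-contacts the scheduler roughly
    once per completed task to top the cache back up."""
    return avg_task_minutes / parallel_tasks

if __name__ == "__main__":
    # Hypothetical fast GPU host: 100-task cap, ~10 min per task, 2 tasks at once.
    limit, per_task, parallel = 100, 10.0, 2
    print(f"Requested cache: 48 h; effective cache: "
          f"{effective_cache_hours(limit, per_task, parallel):.1f} h")
    print(f"Approx. scheduler contact interval: "
          f"{contact_interval_minutes(per_task, parallel):.0f} min")
```

With those assumed numbers the effective cache works out to about 8 hours and the contact interval to about 5 minutes, which is the kind of behaviour the post describes.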
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
"...It would be wonderful if there was crosstalk between the two projects..." There's been a fair bit of that in bursts... though mostly about BOINC design flaws and bugs.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
For a slight bit of "that was soooo two weeks ago," I pulled up the inr-304/8_34 graph again, and the data that is being sent to the servers on our normal inr-211/6_17 link... isn't coming from the lab. I know we used to have that old 100 Mbit link up there, but there's just over 300 Mbit being sent to the servers, so it can't be from that link either. Maybe it's from the off-site storage repository? *shrug* I'm hoping to see ~950 Mbit on the blue line for 18+ hours any day now, meaning the AP DB is fixed and being sent back down to the co-lo. But... dream on, right? :p
Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
Jimbocous · Joined: 1 Apr 13 · Posts: 1856 · Credit: 268,616,081 · RAC: 1,349
Definitely playing havoc with normal transactions. 322k results, but I can't get work downloads for any of my boxes, and uploads and scheduler access are spotty at best.
Gary Charpentier · Joined: 25 Dec 00 · Posts: 31014 · Credit: 53,134,872 · RAC: 32
Friday night, way past quitting time, and someone is using the link rather heavily. Might be getting new data from off campus, or maybe moving a database back into place from a location where it was fixed. Or, in the worst case, loading some backup material needed to fix a blowout in the database. Anyway, good work and it is appreciated, even if we don't know exactly what is going on right now.
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
I've got rigs starting to go cold here... I have sent messages to Eric, Matt, and Jeff. Don't know if anybody is on deck tonight yet, or if it could be fixable remotely. Looks like the furnace is going to have to kick in, as it's very cold here tonight. Meow.
"Time is simply the mechanism that keeps everything from happening all at once."
Dave Stegner · Joined: 20 Oct 04 · Posts: 540 · Credit: 65,583,328 · RAC: 27
"Total available channels on disk" from the SSP page keeps going up. Maybe the blue line is just the normal loading of more tapes.
Dave
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
"Total available channels on disk" from the SSP page keeps going up. Something is most assuredly not 'normal'. Stats on results received are 175 hours old; something is tied up and not updating normally. I am getting little other than 'cannot connect to server' errors for hours now, and most rigs have now gone cold apart from CPU work. Nothing on the SSP can be trusted right now, even if it is updating. Something is seriously borked, and the Cricket graphs, which are not connected to the SETI servers other than monitoring their traffic, confirm that work is not going outbound. Could the upload or transfer of data be stifling the ability to send out work? I dunno. I think something more than that is afoot right now. Meow.
"Time is simply the mechanism that keeps everything from happening all at once."
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
"Could the upload or transfer of data be stifling the ability to send out work?" Nope. The Scheduler had been playing up for over a week. Several hours prior to it dying completely, it played up more often than it had been.
Grant Darwin NT
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
"Could the upload or transfer of data be stifling the ability to send out work?" Granted. The thing I don't know is why the boyz in da lab had not done anything about it all week.
"Time is simply the mechanism that keeps everything from happening all at once."
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
"Could the upload or transfer of data be stifling the ability to send out work?" They were probably not aware of it. You could see it in the Cricket graphs, and if you looked at your log you could see the Scheduler failures mixed in with the successes, but overall work was still going out and results coming back in.
Grant Darwin NT
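For anyone wanting to do the log check Grant mentions, here is a minimal sketch that tallies scheduler successes against failures per project in the BOINC client's event log. It assumes the default log file name (stdoutdae.txt in the BOINC data directory) and the usual "Scheduler request completed/failed" message text; the exact wording can differ between client versions, so treat the pattern as an assumption.

```python
# Count scheduler request outcomes per project from the BOINC client log.
# Assumes default log name and message wording; adjust LOG and the regex
# to match your client version if needed.
import re
from collections import Counter

LOG = "stdoutdae.txt"  # path inside your BOINC data directory

pattern = re.compile(
    r"\[(?P<project>[^\]]+)\]\s+Scheduler request (?P<result>completed|failed)"
)

counts = Counter()
with open(LOG, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = pattern.search(line)
        if m:
            counts[(m.group("project"), m.group("result"))] += 1

for (project, result), n in sorted(counts.items()):
    print(f"{project}: {result} x {n}")
```

Run against a host that has been struggling, a high ratio of "failed" to "completed" lines over a week is the sort of slow degradation described above, even while work still trickles out.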
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
There was a brief burst of life there, then it died again. One system should be out of GPU work in 30 min or so, the other in a couple of hours.
Grant Darwin NT
Wiggo · Joined: 24 Jan 00 · Posts: 36850 · Credit: 261,360,520 · RAC: 489
If that blue line stays there much longer, then it'll be the AP server being reloaded with its data, as a normal transfer of new work files would be about done by now (or very, very shortly). ;-)
Cheers.
Wiggo · Joined: 24 Jan 00 · Posts: 36850 · Credit: 261,360,520 · RAC: 489
Well, my main rig now has 2-3 days of backup GPU work, and it won't be long before my 2nd rig has to go there as well, but it'll likely only grab a quarter of what the main rig did. :-(
[edit] The blue Cricket line is still going, so it's likely not just new work coming down from the hill.
Cheers.