Panic Mode On (109) Server Problems?

Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1913632 - Posted: 17 Jan 2018, 23:11:09 UTC

. . <smirk>

mb_splitter/ap_splitter: Reads tapes (or tape images on disk) containing raw telescope data and creates SETI@home (multi-beam) or Astropulse workunits for the BOINC/SETI@home clients. At least one needs to be running to produce work, and that's usually enough.
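
Purely to illustrate the idea in that description (this is not the real splitter code), here is a toy Python sketch that cuts a raw recording into overlapping, fixed-size chunks; the chunk size, overlap, and file names are invented placeholders:

    # Toy illustration only: cut a raw recording into overlapping,
    # fixed-size workunits. Sizes and names are invented placeholders,
    # NOT the real mb_splitter/ap_splitter parameters.
    CHUNK = 1024 * 1024       # hypothetical workunit payload, in bytes
    OVERLAP = CHUNK // 8      # hypothetical overlap so edge signals aren't lost

    def split_tape(path):
        """Yield successive overlapping chunks of a raw data file."""
        with open(path, "rb") as f:
            data = f.read()
        step = CHUNK - OVERLAP
        for start in range(0, max(1, len(data) - OVERLAP), step):
            yield data[start:start + CHUNK]

    for i, chunk in enumerate(split_tape("tape_image.raw")):
        with open("wu_%05d.dat" % i, "wb") as out:
            out.write(chunk)

The overlap is the important detail: adjacent workunits share a sliver of data so a signal falling on a chunk boundary is not lost.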

. . nuff said :)

Stephen

:)
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1913634 - Posted: 17 Jan 2018, 23:15:22 UTC

I noticed that gbt_splitter#4 got briefly pulled from duty. Now back. I wonder if it was hanging up the output. Really frustrating that my Linux box is still unable to get any appreciable work while my slow Windows7 boxes have mostly full caches.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Mr. Kevvy Crowdfunding Project Donor · Special Project $250 donor
Volunteer moderator
Volunteer tester
Joined: 15 May 99
Posts: 3848
Credit: 1,114,826,392
RAC: 3,319
Canada
Message 1913636 - Posted: 17 Jan 2018, 23:20:14 UTC - in response to Message 1913634.  
Last modified: 17 Jan 2018, 23:21:59 UTC

Really frustrating that my Linux box is still unable to get any appreciable work while my slow Windows7 boxes have mostly full caches.


I noticed that as well... I'm sure some would disagree, but I see it with my own eyes: even with a zero resource share, if E@H has minimal work (one WU per slot running and zero cached) and SETI@Home is empty, BOINC often stops asking for SETI@Home work; it has zero work units and just sits with no countdown timer to the next scheduler request, so I have to hit Update manually to remind it. Well, I did that with six fast Linux machines and they got zilch; the seventh was my wife's slower SoG Win7 box and presto... 130 tasks. Just the luck of the draw (I hope).
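
For anyone tired of hitting Update by hand, a minimal sketch of automating that prod in Python, assuming a local client with the stock boinccmd tool on the PATH; the project URL is just the usual master URL, and the task-count test is deliberately crude:

    # Automate the manual "Update" prod via the stock boinccmd tool.
    # Assumes a local client and boinccmd on the PATH.
    import subprocess, time

    PROJECT = "http://setiathome.berkeley.edu/"  # master URL as attached in the client

    def task_count():
        """Crude count of tasks in the client: boinccmd prints a 'name:' line per task."""
        out = subprocess.run(["boinccmd", "--get_tasks"],
                             capture_output=True, text=True).stdout
        return out.count("name: ")

    while True:
        if task_count() == 0:
            # Nudge the client into an immediate scheduler request.
            subprocess.run(["boinccmd", "--project", PROJECT, "update"])
        time.sleep(600)  # check every 10 minutes; don't hammer the scheduler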
Wiggo
Joined: 24 Jan 00
Posts: 37754
Credit: 261,360,520
RAC: 489
Australia
Message 1913638 - Posted: 17 Jan 2018, 23:42:36 UTC - in response to Message 1913634.  

I noticed that gbt_splitter#4 got briefly pulled from duty. Now back. I wonder if it was hanging up the output. Really frustrating that my Linux box is still unable to get any appreciable work while my slow Windows7 boxes have mostly full caches.

The guys are probably trying to break up the congregation of splitters working on the 1st GBT file in the list (when you zoom in, there still look to be half a dozen splitters working on that one file) and get them working on other files instead, and you just caught them in the act. ;-)

Cheers.
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1913642 - Posted: 17 Jan 2018, 23:59:39 UTC - in response to Message 1913634.  

I noticed that gbt_splitter#4 got briefly pulled from duty. Now back. I wonder if it was hanging up the output. Really frustrating that my Linux box is still unable to get any appreciable work while my slow Windows7 boxes have mostly full caches.


. . Ironic as it may be, for once I have the opposite experience. Both Linux boxes received nearly full caches (over many requests), while the Windows box got work right away for the GPU but struggled over many, many attempts before getting any work for the CPU. I had to reschedule WUs from the GPU queue.

Stephen

Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1913643 - Posted: 18 Jan 2018, 0:02:35 UTC

I finally had to shut off GPU work entirely for the project and wait through 4 request cycles before the servers finally decided to send me 40 CPU tasks on the Linux machine. I hadn't had any CPU tasks from Seti on that machine for 2 days until just now.
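
A local alternative to flipping the project-side "use GPU" preference (which may be what was actually done here) is to suspend GPU use with boinccmd, so the next scheduler requests ask for CPU work only; a minimal sketch:

    # Suspend GPU use locally; with the GPU suspended, subsequent
    # scheduler requests ask for CPU work only.
    import subprocess

    # The trailing "0" means the mode persists until changed again.
    subprocess.run(["boinccmd", "--set_gpu_mode", "never", "0"])

    # ...and later, restore normal GPU crunching:
    subprocess.run(["boinccmd", "--set_gpu_mode", "auto", "0"])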
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1913644 - Posted: 18 Jan 2018, 0:05:02 UTC - in response to Message 1913642.  

I noticed that gbt_splitter#4 got briefly pulled from duty. Now back. I wonder if it was hanging up the output. Really frustrating that my Linux box is still unable to get any appreciable work while my slow Windows7 boxes have mostly full caches.


. . Ironic as it may be, for once I have the opposite experience. Both Linux boxes received nearly full caches (over many requests), while the Windows box got work right away for the GPU but struggled over many, many attempts before getting any work for the CPU. I had to reschedule WUs from the GPU queue.

Stephen


I'm absolutely convinced that the project ignores Ryzen machines for work requests. Couldn't get work on the Windows Ryzen either.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Chris904395093209d Project Donor
Volunteer tester
Joined: 1 Jan 01
Posts: 112
Credit: 29,923,129
RAC: 6
United States
Message 1913655 - Posted: 18 Jan 2018, 0:52:12 UTC

The result creation rate is at 72 a second at the moment. That's the highest I've ever seen it. Most of my machines have a full cache, though I did have to kick-start a couple of them.

Results ready to send is at 0 and the number of results received in the last hour is at 55k, but I would imagine those numbers will return to normal in the next few hours.
~Chris

rob smith Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22739
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1913685 - Posted: 18 Jan 2018, 6:27:06 UTC

Hard to say exactly what the state of play is with the replica being over an hour behind the master....
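
For context, a "replica behind master" figure like that is normally read straight off the replica's MySQL status. A generic sketch of the check, not the project's actual monitoring; the host and credentials are placeholders:

    # Generic MySQL replication-lag check; host and credentials are
    # placeholders, and this is not SETI@home's actual monitoring code.
    import mysql.connector  # assumes mysql-connector-python is installed

    conn = mysql.connector.connect(host="replica.example.org",
                                   user="monitor", password="secret")
    cur = conn.cursor(dictionary=True)
    cur.execute("SHOW SLAVE STATUS")
    row = cur.fetchone()
    if row is None:
        print("This server is not configured as a replica.")
    else:
        # NULL here would mean replication has stopped entirely.
        print("Seconds behind master:", row["Seconds_Behind_Master"])
    conn.close()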
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
rob smith Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22739
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1913695 - Posted: 18 Jan 2018, 10:35:09 UTC

....from the number of tasks on my crunchers it looks as if the splitters are spluttering again :-(
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1913700 - Posted: 18 Jan 2018, 11:13:31 UTC - in response to Message 1913695.  

I'd allowed one of my single GPU machines to help Einstein for a while. I've just given it a prod to restore normal service, and got the full 100 at the first request. As somebody said, it's the luck of the draw.
JaundicedEye
Joined: 14 Mar 12
Posts: 5375
Credit: 30,870,693
RAC: 1
United States
Message 1913730 - Posted: 18 Jan 2018, 14:34:53 UTC

Caches are full and RAC is steadily dropping... everything back to normal and the Credit Screw is turning as usual.

"Sour Grapes make a bitter Whine." <(0)>
Mr. Kevvy Crowdfunding Project Donor · Special Project $250 donor
Volunteer moderator
Volunteer tester
Joined: 15 May 99
Posts: 3848
Credit: 1,114,826,392
RAC: 3,319
Canada
Message 1913739 - Posted: 18 Jan 2018, 16:00:29 UTC - in response to Message 1913730.  

Caches are full and RAC is steadily dropping...


Indeed... the lack of work accelerated the fall to where it was going to end up anyway; ripping off the proverbial bandage.
betreger Project Donor
Joined: 29 Jun 99
Posts: 11451
Credit: 29,581,041
RAC: 66
United States
Message 1913743 - Posted: 18 Jan 2018, 16:30:25 UTC

Those who use Einstein as a backup project should take note and have a sufficient cache next Tuesday or they may very well run out of work.
Einstein has decided to join the fun by stating
We are going to shut down the project next Tuesday, Jan 23rd at around 10 AM CET for an upgrade of our database backend systems to make them ready for the years to come. We're going to upgrade hardware parts, operating systems as well the databases themselves, which is why we need to shut down the entire project, including the BOINC backend and this very website.

We should have the pleasure of a double outage.
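
One way to build that cache ahead of time is BOINC's global_prefs_override.xml; a minimal sketch, assuming a stock client with boinccmd available (the data-directory path varies by platform and is a placeholder here):

    # Raise the work buffer ahead of a known outage by writing
    # global_prefs_override.xml and telling the client to re-read it.
    import subprocess

    OVERRIDE_PATH = "/var/lib/boinc-client/global_prefs_override.xml"  # placeholder path

    OVERRIDE = """<global_preferences>
        <work_buf_min_days>5.0</work_buf_min_days>
        <work_buf_additional_days>5.0</work_buf_additional_days>
    </global_preferences>
    """

    with open(OVERRIDE_PATH, "w") as f:
        f.write(OVERRIDE)

    # Tell the running client to pick up the new preferences immediately.
    subprocess.run(["boinccmd", "--read_global_prefs_override"])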
Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1859
Credit: 268,616,081
RAC: 1,349
United States
Message 1913778 - Posted: 18 Jan 2018, 18:42:44 UTC - in response to Message 1913743.  

Those who use Einstein as a backup project should take note and have a sufficient cache next Tuesday or they may very well run out of work.
Einstein has decided to join the fun by stating
We are going to shut down the project next Tuesday, Jan 23rd at around 10 AM CET for an upgrade of our database backend systems to make them ready for the years to come. We're going to upgrade hardware parts, operating systems as well the databases themselves, which is why we need to shut down the entire project, including the BOINC backend and this very website.

We should have the pleasure of a double outage.

Yeah, it is a shame about the scheduling. Any other day would have sufficed ...
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1913807 - Posted: 18 Jan 2018, 20:58:17 UTC - in response to Message 1913778.  

Can always choose another backup project. I'll have MilkyWay and GPUGrid.net as backups also. Though if I build a big enough cache of Einstein work, that shouldn't be an issue either.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
betreger Project Donor
Joined: 29 Jun 99
Posts: 11451
Credit: 29,581,041
RAC: 66
United States
Message 1913813 - Posted: 18 Jan 2018, 21:21:18 UTC - in response to Message 1913778.  

Yeah, it is a shame about the scheduling. Any other day would have sufficed ...

But a double outage is something to behold.
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1913833 - Posted: 18 Jan 2018, 23:01:11 UTC - in response to Message 1913807.  

Can always choose another backup project. I'll have MilkyWay and GPUGrid.net as backups also. Though if I build a big enough cache of Einstein work, that shouldn't be an issue either.

Last time I crossed swords with MilkyWay, it was appallingly badly managed. And GPUGrid is giving me RSI in the mouse-click finger, because I run it but have extreme difficulty snagging new work.
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1913844 - Posted: 19 Jan 2018, 0:05:46 UTC - in response to Message 1913833.  

Can always choose another backup project. I'll have MilkyWay and GPUGrid.net as backups also. Though if I build a big enough cache of Einstein work, that shouldn't be an issue either.
Last time I crossed swords with MilkyWay, it was appallingly badly managed. And GPUGrid is giving me RSI in the mouse-click finger, because I run it but have extreme difficulty snagging new work.

MilkyWay?? Badly managed?? Wow, very different experience here. MW is the most set-and-forget project I have run; I never have to micromanage it at all. I love the hard limit of 80 tasks per GPU at any one time: never a chance of getting too much work and never a chance of running out. I only crunch GPU tasks, so I ran the Binary Pulsar Search while it lasted and now run the Gamma-Ray Pulsar Search. The only issue I have seen with the project is the occasional bad work unit, which promptly gets tossed out. The servers seem to stay up for very long stretches, months at a time in fact.
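
A hard cap like that is simple to picture; a toy sketch of a per-GPU in-progress limit as a scheduler might apply it, using the 80-task figure from this post (everything else is hypothetical, not MilkyWay's actual server code):

    # Toy sketch of a hard per-GPU in-progress cap. Illustrative only.
    PER_GPU_LIMIT = 80

    def may_send_task(tasks_in_progress, gpus):
        """Allow another GPU task only while the host is under its cap."""
        return tasks_in_progress < PER_GPU_LIMIT * gpus

    assert may_send_task(159, 2)       # a 2-GPU host under its 160-task cap
    assert not may_send_task(160, 2)   # at the cap, the request is refused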

Yes, I have just recently joined GPUGrid.net, and its GPU work availability is very spotty and random. When tasks are made available, they are quickly gobbled up by many fingers bashing the update button. CPU work for Linux hosts has been available pretty much all the time, so there is no problem getting CPU work for the Linux host. We are asking the project scientists to make the CPU work available for Windows too.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1913850 - Posted: 19 Jan 2018, 0:27:43 UTC - in response to Message 1913844.  

Well, I found it necessary to make Post 58550. Read through to Post 58572, and note his titles.