The Server Issues / Outages Thread - Panic Mode On! (119)

AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2042204 - Posted: 1 Apr 2020, 5:49:01 UTC - in response to Message 2042202.  

Yeah. I'm guessing limits were removed. I'm above mine, but that's ok.
ID: 2042204
Kevin Olley

Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 2042206 - Posted: 1 Apr 2020, 5:53:15 UTC - in response to Message 2042204.  

Yeah. I'm guessing limits were removed. I'm above mine, but that's ok.


Not that I know of; my TR has 4 GPUs in it, so 750 is my limit.
Kevin


ID: 2042206
AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2042208 - Posted: 1 Apr 2020, 5:56:45 UTC - in response to Message 2042206.  
Last modified: 1 Apr 2020, 5:59:50 UTC

Is the limit on CPU 150 or 100? If it's 150, I'm just under my limit.
7  AstroPulse Files
576  SETI Files

Forgot I put another GPU in it.
ID: 2042208
Kevin Olley

Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 2042209 - Posted: 1 Apr 2020, 6:00:00 UTC - in response to Message 2042208.  

Is the limit on CPU 150 or 100?


150 CPU and 150 per GPU.

Multiple CPUs are limited to 150 in total.
Kevin


ID: 2042209
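The per-host cap Kevin describes (150 tasks for all CPUs combined, plus 150 per GPU) can be sketched as a one-liner. This is an illustration of the arithmetic quoted in the thread, not actual BOINC server code; the function name and defaults are made up:

```python
def host_task_limit(n_gpus: int, cpu_limit: int = 150, per_gpu_limit: int = 150) -> int:
    """Upper bound on in-progress tasks for one host.

    Assumes the limits quoted in this thread: 150 for all CPUs
    combined (regardless of core count) plus 150 per GPU.
    """
    return cpu_limit + n_gpus * per_gpu_limit

# Kevin's Threadripper with 4 GPUs: 150 + 4 * 150
print(host_task_limit(4))  # 750
```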
AllgoodGuy

Joined: 29 May 01
Posts: 293
Credit: 16,348,499
RAC: 266
United States
Message 2042214 - Posted: 1 Apr 2020, 6:03:54 UTC - in response to Message 2042209.  

I don't think I have a long enough Thunderbolt cable, or I'd pull another out of my closet :)
ID: 2042214
Stephen "Heretic"
Volunteer tester

Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2042232 - Posted: 1 Apr 2020, 8:48:00 UTC - in response to Message 2042202.  

One of mine was for 366 New tasks, hit the jackpot with that request:-)


. . OK, I think that is the current record for an observed WU request response. Has anyone had more?

Stephen

:)
ID: 2042232
Ville Saari

Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2042242 - Posted: 1 Apr 2020, 9:40:20 UTC - in response to Message 2042232.  

One of mine was for 366 New tasks, hit the jackpot with that request:-)
. . OK I think that is the current record for observed WU request response. Anyone had more?
My personal record is 322, and that happened yesterday. I had never before received anything over 300, so the scheduler buffer size may have been increased recently.
ID: 2042242
juan BFP
Volunteer tester

Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2042270 - Posted: 1 Apr 2020, 12:11:54 UTC
Last modified: 1 Apr 2020, 12:19:58 UTC

Mine was 300 IIRC, and we could definitely set Panic mode to OFF.

I know it's late for S@H now, but I was wondering: over the last few days the generation of new work has run with almost no problems. I left the host with a 110K cache last night, and when I woke up the cache was still at that level, even though the history file shows more than 2K WUs were crunched during the night. What was changed so that all the troubles with the DB size are simply gone, and everything now works fine from the client side? Did someone finally discover what was causing our nightmares? It would be nice to know. If you look at the SSP, the WU numbers are a lot higher than the ~20 MM that is supposed to be the DB servers' RAM limit.

Ian and others who have hungry hosts, did you see similar behavior on yours?
ID: 2042270
Ian&Steve C.

Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2042278 - Posted: 1 Apr 2020, 12:51:41 UTC - in response to Message 2042270.  

I finally ran out of work on my fastest host last night.

only about 5000 left on my other one.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2042278
juan BFP
Volunteer tester

Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2042282 - Posted: 1 Apr 2020, 13:10:14 UTC - in response to Message 2042278.  
Last modified: 1 Apr 2020, 13:16:01 UTC

I finally ran out of work on my fastest host last night.

only about 5000 left on my other one.

Sorry to hear that. I believed you were running on high caches too.
All the resends and close-deadline WUs I received have already been cleared (it was a lot, BTW).
The closest deadlines being crunched right now are 11 May for the GPU and 24 May for the CPU.
Even my high cache will surely be depleted a lot earlier than that.
ID: 2042282
Ian&Steve C.

Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2042286 - Posted: 1 Apr 2020, 13:22:33 UTC - in response to Message 2042282.  

I had about 15,000 on one host, but you know it's very fast :P, so that only lasts about 2 days.

The even faster host I had set to a limit of 1000 because I did not anticipate getting a large amount of new work, so it didn't have time to build a large cache.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2042286
juan BFP
Volunteer tester

Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2042288 - Posted: 1 Apr 2020, 13:26:51 UTC - in response to Message 2042286.  
Last modified: 1 Apr 2020, 13:28:19 UTC

I did not anticipate getting a large amount of new work

No one could anticipate that; we could only hope. But what a glorious last bang for the project.
The last supernova in the Setiverse as we know it.
My question remains unanswered, and probably will forever: what changed that makes everything work so well?
Definitely no panic in the last days.
ID: 2042288
Kevin Olley

Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 2042289 - Posted: 1 Apr 2020, 13:31:45 UTC

My TR is dry, apart from 1 ghost which I am leaving; it expires at the end of this month if anyone wants to catch it. The TR will soon be bothering those in the Einstein top 50 list.

This machine seems to be picking up the odd task.
Kevin


ID: 2042289
W-K 666
Volunteer tester

Joined: 18 May 99
Posts: 19041
Credit: 40,757,560
RAC: 67
United Kingdom
Message 2042304 - Posted: 1 Apr 2020, 14:07:51 UTC

Well, I am still getting d/loads; I got 16 tasks, all initial splits, 10 minutes ago:
01/04/2020 14:54:48 | SETI@home | Scheduler request completed: got 16 new tasks
ID: 2042304
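For anyone watching their own event log, grants like the line above can be tallied with a quick script. A minimal sketch: the regex targets the standard BOINC client "got N new tasks" message, while the function name and sample data are illustrative:

```python
import re

# Matches BOINC event-log lines such as:
# "01/04/2020 14:54:48 | SETI@home | Scheduler request completed: got 16 new tasks"
TASKS_RE = re.compile(r"Scheduler request completed: got (\d+) new tasks")

def count_new_tasks(log_lines):
    """Sum the new tasks granted across a batch of event-log lines."""
    return sum(int(m.group(1)) for m in map(TASKS_RE.search, log_lines) if m)

lines = [
    "01/04/2020 14:54:48 | SETI@home | Scheduler request completed: got 16 new tasks",
    "01/04/2020 15:10:02 | SETI@home | Scheduler request completed: got 0 new tasks",
]
print(count_new_tasks(lines))  # 16
```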
Ian&Steve C.

Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2042318 - Posted: 1 Apr 2020, 14:59:08 UTC

damn, Juan. You added quite a few more GPUs to your system, eh? ;P
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2042318
Keith Myers
Volunteer tester

Joined: 29 Apr 01
Posts: 13163
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2042319 - Posted: 1 Apr 2020, 15:11:08 UTC

I only have the daily driver that seems to have been getting fairly regular replacement work to keep close to the desired cache levels.

The others are dropping fairly fast.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2042319
juan BFP
Volunteer tester

Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2042322 - Posted: 1 Apr 2020, 15:16:35 UTC - in response to Message 2042318.  
Last modified: 1 Apr 2020, 15:21:05 UTC

damn, Juan. You added quite a few more GPUs to your system, eh? ;P

Cloaking Device Off.


The last remaining channel to split for MB is ready to start: 27mr20ad 50.20 GB (13)

It was a good ride. Congrats to all who made it possible.

I'm out for the AP part of the festivities.
ID: 2042322
Ville Saari

Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2042327 - Posted: 1 Apr 2020, 15:45:36 UTC

31mr11ai file has appeared there after Eric's announcement!
ID: 2042327
juan BFP
Volunteer tester

Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2042334 - Posted: 1 Apr 2020, 16:03:54 UTC - in response to Message 2042327.  
Last modified: 1 Apr 2020, 16:05:13 UTC

31mr11ai file has appeared there after Eric's announcement!

Yes, but that tape was already split; just look at the SSP.

27mr20ab	50.20 GB	 (14)
27mr20ac	50.20 GB	 (14) 	
27mr20ad	50.20 GB	 (13) 	
31mr11ai	50.20 GB	(done)

ID: 2042334
Ville Saari

Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2042347 - Posted: 1 Apr 2020, 17:13:42 UTC - in response to Message 2042334.  

31mr11ai file has appeared there after Eric's announcement!

Yes but that tape was already splited, just look the SSP.
It appeared there already split, instead of just being split before we noticed it?

Neither of my computers has any task split from it, which seems to support that this was the case. I guess it was added for the AP splitters only.

But the AP splitters haven't made any progress since yesterday. That same file has been in that state, with 7 channels done and another 7 being split, for a long time now. Also, my smaller machine has been asking for AP tasks for its CPU queue and has received nothing but one resend in more than 2 days.
ID: 2042347
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.