Panic Mode On (104) Server Problems?

Message boards : Number crunching : Panic Mode On (104) Server Problems?

Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1837112 - Posted: 21 Dec 2016, 0:55:17 UTC

That was a long one...
ID: 1837112
betreger Project Donor
Joined: 29 Jun 99
Posts: 11416
Credit: 29,581,041
RAC: 66
United States
Message 1837120 - Posted: 21 Dec 2016, 1:08:44 UTC - in response to Message 1837112.  

That was a long one...

I hope you enjoyed it.
ID: 1837120
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13855
Credit: 208,696,464
RAC: 304
Australia
Message 1837293 - Posted: 22 Dec 2016, 6:19:22 UTC - in response to Message 1837211.  

People have multi-day caches

Not since the server-side limits came into being.
That's barely enough to get through the outage. When more optimised applications become available, even slower crunchers with GPUs will run out of work under the present limits.
Grant
Darwin NT
ID: 1837293
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13855
Credit: 208,696,464
RAC: 304
Australia
Message 1837305 - Posted: 22 Dec 2016, 10:42:59 UTC - in response to Message 1837303.  

So are you saying that in Boinc Manager these settings are ineffective and don't work?

They do work; however, with the server-side limits it's not possible to store enough work to last more than about 6 hours or so. Those with more powerful hardware and better optimised applications can run out of work during the weekly maintenance.

So what are these "server side limits" you speak of? Do you have inside info that we don't know about?

100 WUs per CPU, 100 WUs per GPU.
They've probably been in place for a few years now, to help keep the load down on the servers.
Grant
Darwin NT
ID: 1837305
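
As a rough, back-of-the-envelope sketch of what those limits mean in practice (the task runtimes and concurrency below are assumed for illustration, not measured values):

# Rough estimate of how long a full cache lasts under the per-host
# limits described above (100 tasks for the CPU, 100 per GPU).
# The runtimes and concurrency here are illustrative assumptions only.

def cache_hours(task_limit, avg_task_minutes, concurrent_tasks):
    """Hours a full cache lasts if tasks finish at a steady rate."""
    tasks_per_hour = concurrent_tasks * 60.0 / avg_task_minutes
    return task_limit / tasks_per_hour

# A GPU running 2 tasks at once at ~8 minutes each (assumed)
print(f"GPU cache: {cache_hours(100, 8, 2):.1f} hours")   # ~6.7 hours
# A quad-core CPU at ~90 minutes per task (assumed)
print(f"CPU cache: {cache_hours(100, 90, 4):.1f} hours")  # ~37.5 hours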
rob smith Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22535
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1837307 - Posted: 22 Dec 2016, 11:28:32 UTC

The interpretation of the two "types" of cache has changed somewhat over the years.

The first "Store at least X days of work" is fairly straightforward - you may store up to X days of work, provided you stay inside the server-side limits as explained by Grant.
The second "Store up to an additional Y days of work" is the one that causes confusion... It DOES NOT mean what it says (and the legend really should be updated) - it means something like "run the cache down until you have X-Y days of work left, then fill it up again". Again the same server limits apply.

The net result is that if you set your cache to, say, 10 days, with the extra also set to 10 days, you will only call for new work when your cache is (almost) empty - the call will probably happen when it is down to a couple of tasks. Setting the extra days to a very small number (say 0.01 days) will prompt a cache top-up every time you complete and return a task, so your cache will stay at its maximum possible size.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1837307
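
A toy model of the behaviour rob describes (only a sketch of the description above, not BOINC's actual work-fetch code; the function and parameter names are made up for illustration):

# Toy model of the two cache settings as described above: with
# "Store at least X days" and "Store up to an additional Y days",
# the cache is allowed to drain to roughly X - Y days before the
# client asks for a top-up (subject to the server-side limits).

def should_request_work(days_left, store_days, extra_days):
    """Ask for more work once the cache drains below the refill point."""
    refill_point = max(store_days - extra_days, 0.0)
    return days_left <= refill_point

# X = 10, Y = 10: refill point is 0 days, so only ask when nearly empty
print(should_request_work(days_left=3.0, store_days=10, extra_days=10))    # False
# X = 10, Y = 0.01: refill point is 9.99 days, so top up after every task
print(should_request_work(days_left=9.5, store_days=10, extra_days=0.01))  # True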
Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34380
Credit: 79,922,639
RAC: 80
Germany
Message 1837315 - Posted: 22 Dec 2016, 12:53:24 UTC

So what are these "server side limits" you speak of? Do you have inside info that we don't know about?


Those settings are for multi-project crunchers who only want a small cache.
Some projects have rather short deadlines, so with a large cache some projects would either never get touched or would miss their deadlines.


With each crime and every kindness we birth our future.
ID: 1837315
Stephen "Heretic" Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1837352 - Posted: 22 Dec 2016, 17:43:20 UTC - in response to Message 1837307.  
Last modified: 22 Dec 2016, 17:47:42 UTC

The interpretation of the two "types" of cache has changed somewhat over the years.

The first "Store at least X days of work" is fairly straightforward - you may store up to X days of work, provided you stay inside the server-side limits as explained by Grant.
The second "Store up to an additional Y days of work" is the one that causes confusion... It DOES NOT mean what it says (and the legend really should be updated) - it means something like "run the cache down until you have X-Y days of work left, then fill it up again". Again the same server limits apply.

The net result is that if you set your cache to, say, 10 days, with the extra also set to 10 days, you will only call for new work when your cache is (almost) empty - the call will probably happen when it is down to a couple of tasks. Setting the extra days to a very small number (say 0.01 days) will prompt a cache top-up every time you complete and return a task, so your cache will stay at its maximum possible size.


. . Hi Rob,

. . Or just leave the "Extra days work" at 0. With the overall limits (SSLs), just set your work limit to 10 days and get the maximum cache; most full-time crunchers will get through those limits in a day or 3.

Stephen

.
ID: 1837352
kittyman Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51478
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1837353 - Posted: 22 Dec 2016, 17:47:48 UTC

Dunno if this is a panic yet or not.
Noticed the pfb splitters tearing through the 09 datasets very quickly.
They are almost all gone now.
Will know in the next hour or so if they are gonna do the same with the '15 and '16 data in the splitter cache.
Sent a heads up to Eric and Matt.

Meow?
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1837353
Stephen "Heretic" Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1837355 - Posted: 22 Dec 2016, 17:50:18 UTC - in response to Message 1837353.  

Dunno if this is a panic yet or not.
Noticed the pfb splitters tearing through the 09 datasets very quickly.
They are almost all gone now.
Will know in the next hour or so if they are gonna do the same with the '15 and '16 data in the splitter cache.
Sent a heads up to Eric and Matt.

Meow?


. . I have been getting a lot of low AR Arecibo work lately; might that have something to do with it?

Stephen

.
ID: 1837355
kittyman Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51478
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1837356 - Posted: 22 Dec 2016, 17:52:21 UTC - in response to Message 1837355.  

Dunno if this is a panic yet or not.
Noticed the pfb splitters tearing through the 09 datasets very quickly.
They are almost all gone now.
Will know in the next hour or so if they are gonna do the same with the '15 and '16 data in the splitter cache.
Sent a heads up to Eric and Matt.

Meow?


. . I have been getting a lot of low AR Arecibo work lately, might that have something to do with it?

Stephen

.

Nah, the splitting rate is too fast to be real. Kinda like what the AP splitters do when they know the work has already been done.
It's either the 09 data or some cockup with the splitters.
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1837356
Cruncher-American Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor

Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1837358 - Posted: 22 Dec 2016, 17:59:24 UTC - in response to Message 1837305.  


100 WUs per CPU, 100 WUs per GPU.


NOT per CPU. Max CPU WUs is a flat 100. My dual E5-2670 can attest to that.
On the other hand, GPU IS per GPU.
ID: 1837358
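
Put another way, a quick sketch of the limits as corrected here (a simple illustration based on this thread, not anything from the project's server code):

# Per-host task limits as described in this thread: a flat 100 CPU
# tasks per host regardless of core count, plus 100 tasks per GPU.

def host_task_limit(num_gpus):
    """Maximum tasks a host can hold under these server-side limits."""
    cpu_limit = 100              # flat per host, not per core or per CPU
    gpu_limit = 100 * num_gpus   # 100 per installed GPU
    return cpu_limit + gpu_limit

print(host_task_limit(num_gpus=0))  # 100
print(host_task_limit(num_gpus=2))  # 300, e.g. a dual-GPU rig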
Wiggo
Joined: 24 Jan 00
Posts: 36829
Credit: 261,360,520
RAC: 489
Australia
Message 1837364 - Posted: 22 Dec 2016, 18:46:07 UTC
Last modified: 22 Dec 2016, 18:47:55 UTC

Now let's have a look to see what having my caches set at 10 days results in.

My i5 3570K turns its 100 CPU task limit over every 1.95 days, and my i5 2500K takes 2.13 days, so that's as multi-day as my caches get (both only using 2 cores).

My GTX 660's go through theirs in just under a day while my GTX 1060's go through theirs in just half a day (only just barely getting through the last 3 outages).

10 days? Oh how I wish, and they're both only mid-range setups.

Cheers.
ID: 1837364
Stephen "Heretic" Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1837367 - Posted: 22 Dec 2016, 18:58:55 UTC - in response to Message 1837364.  

Now let's have a look to see what having my caches set at 10 days results in.

My i5 3570K turns its 100 CPU task limit over every 1.95 days, and my i5 2500K takes 2.13 days, so that's as multi-day as my caches get (both only using 2 cores).

My GTX 660's go through theirs in just under a day while my GTX 1060's go through theirs in just half a day (only just barely getting through the last 3 outages).

10 days? Oh how I wish, and they're both only mid-range setups.

Cheers.


Hey there Wiggo,

. . My CPUs are similar to yours, but my i5 is crunching on three cores so it gets through 100 tasks in just over a day (about 1.3). My C2D, on the other hand, crunching on only one core, would take about 8 days to get through its 100 tasks. But the interesting thing is that my humble GT730 will get through 100 tasks in less than 2 days (Arecibo work); guppies choke it up badly, and it would only process about a dozen or so of them per day. So averaged out, 100 mixed tasks would probably take it about 3 to 4 days. But that is a rather low-end card.

Stephen

.
ID: 1837367
Wiggo
Joined: 24 Jan 00
Posts: 36829
Credit: 261,360,520
RAC: 489
Australia
Message 1837372 - Posted: 22 Dec 2016, 19:20:53 UTC - in response to Message 1837367.  

Now let's have a look to see what having my caches set at 10 days results in.

My i5 3570K turns its 100 CPU task limit over every 1.95 days, and my i5 2500K takes 2.13 days, so that's as multi-day as my caches get (both only using 2 cores).

My GTX 660's go through theirs in just under a day while my GTX 1060's go through theirs in just half a day (only just barely getting through the last 3 outages).

10 days? Oh how I wish, and they're both only mid-range setups.

Cheers.


Hey there Wiggo,

. . My CPUs are similar to yours, but my i5 is crunching on three cores so it gets through 100 tasks in just over a day (about 1.3). My C2D, on the other hand, crunching on only one core, would take about 8 days to get through its 100 tasks. But the interesting thing is that my humble GT730 will get through 100 tasks in less than 2 days (Arecibo work); guppies choke it up badly, and it would only process about a dozen or so of them per day. So averaged out, 100 mixed tasks would probably take it about 3 to 4 days. But that is a rather low-end card.

Stephen

.

Even my old Athlon X4 630 with dual GTX 550Ti's turned in numbers of 3.4 days and 1.5 days respectively.

Cheers.
ID: 1837372
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1837410 - Posted: 23 Dec 2016, 0:15:51 UTC

Back when APs flowed pretty steady, I had no problem holding a 10-day cache. If I bumped up to 100% CPU usage instead of 50%, I could hit the server limit of 100 APs and that would be about 17 days of cache.

But there haven't been any/many of them floating around lately. I'm still on the fence about adding MB to my app_info or not.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1837410
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1837442 - Posted: 23 Dec 2016, 5:07:31 UTC

I'm back to receiving mostly BLCs again. Very few Arecibo tasks. Looking at the SSP, it looks as though the numbers are stuck. I dunno...
I'll post the numbers to see if they change:
04fe09ac	50.20 GB	 (3)
05ap09aa	50.20 GB	 (1)
20jl16ae	50.20 GB	 (14)
21ja16ae	50.20 GB	 (5)
21ja16af	50.20 GB	 (2)
ID: 1837442
kittyman Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51478
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1837462 - Posted: 23 Dec 2016, 8:23:43 UTC - in response to Message 1837442.  

I'm back to receiving mostly BLCs again. Very few Arecibo tasks. Looking at the SSP, it looks as though the numbers are stuck. I dunno...
I'll post the numbers to see if they change;
04fe09ac	50.20 GB	 (3)
05ap09aa	50.20 GB	 (1)
20jl16ae	50.20 GB	 (14)
21ja16ae	50.20 GB	 (5)
21ja16af	50.20 GB	 (2)

They appear to be moving along.............
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1837462
kittyman Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51478
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1837465 - Posted: 23 Dec 2016, 8:40:07 UTC - in response to Message 1837463.  

Well, as it stands right now, Seti is the most successful distributed computing project on the planet.
Amassing a power, as far as I know, greater than some of the biggest, baddest supercomputers on the planet.
And at no cost to the project other than supporting the infrastructure - which is, of course, a far lesser cost than obtaining and operating an actual supercomputer.
Even IF we do not succeed in finding what we are looking for, the search itself and the computing power harnessed are an amazing achievement.
And done on an amazingly tiny budget, as science projects go.

I am very proud to be a part of it. And I shall continue to be so as long as the project exists and I am able to do so.

I believe that each and every contributing user, past and present, has the right to be equally proud.
Some of us just went a tad beyond 'unused computing cycles'......LOL indeed we did.

Meow.
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1837465
Stephen "Heretic" Crowdfunding Project Donor* · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1837528 - Posted: 23 Dec 2016, 15:40:41 UTC - in response to Message 1837522.  

Well, as it stands right now, Seti is the most successful distributed computing project on the planet.
Amassing a power, as far as I know, greater than some of the biggest baddest supercomputers on the planet.

Not quite Mark :-)

With over 145,000 active computers in the system (1.4 million total) in 233 countries, as of 23 June 2013, SETI@home had the ability to compute over 668 teraFLOPS. For comparison, the Tianhe-2 computer, which as of 23 June 2013 was the world's fastest supercomputer, was able to compute 33.86 petaFLOPS (approximately 50 times greater).


. . But at those numbers it puts us in the ballpark :). The fastest supercomputer being only 50 times greater is a good achievement for us :).

Stephen

.
ID: 1837528
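
For what it's worth, the ratio quoted above checks out (a quick arithmetic check using the June 2013 figures from the post):

# Tianhe-2 at 33.86 petaFLOPS versus SETI@home at 668 teraFLOPS
tianhe2_tflops = 33.86 * 1000   # 33,860 teraFLOPS
seti_tflops = 668
print(f"{tianhe2_tflops / seti_tflops:.1f}x")  # ~50.7x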
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1837534 - Posted: 23 Dec 2016, 16:10:36 UTC

Steam is down, worldwide.

What do you mean, not the right server problems thread? ;-)
ID: 1837534