Message boards : Number crunching : Panic Mode On (104) Server Problems?
Zalster · Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242

That was a long one...
betreger · Joined: 29 Jun 99 · Posts: 11416 · Credit: 29,581,041 · RAC: 66

> That was a long one...

I hope you enjoyed it.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304

> People have multi-day caches

Not since the server-side limits came into being. Barely enough to get through the outage. When more optimised applications become available, even slower crunchers with GPUs will run out of work with the present limits.

Grant
Darwin NT
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304

> > > So are you saying that in Boinc Manager these settings are ineffective and don't work?
> >
> > They do work, however with the server-side limits it's not possible to store enough work to last more than about 6 hours or so. Those with more powerful hardware and better-optimised applications can run out of work during the weekly maintenance.
>
> So what are these "server side limits" you speak of? Do you have inside info that we don't know about?

100 WUs per CPU, 100 WUs per GPU. They've probably been in place for a few years now, to help keep the load down on the servers.

Grant
Darwin NT
rob smith · Joined: 7 Mar 03 · Posts: 22535 · Credit: 416,307,556 · RAC: 380

The interpretation of the two "types" of cache has changed somewhat over the years.

The first, "Store at least X days of work", is fairly straightforward: you may store up to X days of work, provided you stay inside the server-side limits as explained by Grant.

The second, "Store up to an additional Y days of work", is the one that causes confusion... It DOES NOT mean what it says (and the legend really should be updated). It means something like "run the cache down until you have X-Y days of work left, then fill it up again". Again, the same server limits apply.

The net result is that if you set your cache to, say, 10 days, with the extra also set to 10 days, you will only call for new work when your cache is (almost) empty - the call will probably happen when it is down to a couple of tasks. Setting the extra days to a very small number (say 0.01 days) will prompt a cache top-up every time you complete and return a task, so your cache will stay at its maximum possible size.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Mike · Joined: 17 Feb 01 · Posts: 34380 · Credit: 79,922,639 · RAC: 80

> So what are these "server side limits" you speak of? Do you have inside info that we don't know about?

Those settings are for multi-project crunchers who want only a small cache. Some projects have rather short deadlines, so with a large cache some projects would either never be touched or would miss their deadlines.

With each crime and every kindness we birth our future.
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
The interpretation of the two "types" of cache has changed somewhat over the years. . . Hi Rob, . . Or just leave the "Extra days work" at 0. With the overall limits (SSLs) just set your work limit to 10 days and get the max cache, most full time crunchers will get through those limits in a day or 3. Stephen . |
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004

Dunno if this is a panic yet or not. Noticed the pfb splitters tearing through the 09 datasets very quickly. They are almost all gone now. Will know in the next hour or so if they are gonna do the same with the '15 and '16 data in the splitter cache. Sent a heads up to Eric and Matt.

Meow?

"Time is simply the mechanism that keeps everything from happening all at once."
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Dunno if this is a panic yet or not. . . I have been getting a lot of low AR Arecibo work lately, might that have something to do with it? Stephen . |
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004

> Dunno if this is a panic yet or not.

Nah, the splitting rate is too fast to be real. Kinda like what the AP splitters do when they know the work has already been done. It's either the 09 data or some cockup with the splitters.

"Time is simply the mechanism that keeps everything from happening all at once."
Cruncher-American · Joined: 25 Mar 02 · Posts: 1513 · Credit: 370,893,186 · RAC: 340

NOT per CPU. The max CPU WUs is a flat 100 per host, no matter how many cores. My dual E5-2670 can attest to that. On the other hand, the GPU limit IS per GPU.
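[Editor's note: rough arithmetic for how long those limits last, per the thread: a flat 100 tasks for all CPU cores combined, plus 100 tasks per GPU. The per-task runtimes in this sketch are illustrative assumptions, not measured values.]

```python
# Illustrative arithmetic only; the runtimes below are assumptions.
def cache_hours(limit: int, task_minutes: float, concurrent: int) -> float:
    """Hours a full cache lasts: limit / (tasks completed per hour)."""
    tasks_per_hour = concurrent * 60.0 / task_minutes
    return limit / tasks_per_hour

# A 4-core CPU at ~90 min/task shares the flat 100-task limit: ~37.5 hours.
print(cache_hours(limit=100, task_minutes=90, concurrent=4))
# A fast GPU at ~5 min/task burns its own 100-task limit in ~8.3 hours,
# in line with Grant's "about 6 hours" for well-optimised hosts.
print(cache_hours(limit=100, task_minutes=5, concurrent=1))
```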
Wiggo · Joined: 24 Jan 00 · Posts: 36829 · Credit: 261,360,520 · RAC: 489

Now let's have a look at what having my caches set at 10 days results in. My i5 3570K turns its 100-CPU-task limit over every 1.95 days, and my i5 2500K takes 2.13 days, so that's as multi-day as my caches get (both only using 2 cores). My GTX 660s go through theirs in just under a day, while my GTX 1060s go through theirs in just half a day (only just barely getting through the last 3 outages). 10 days? Oh, how I wish, and they're both only mid-range setups.

Cheers.
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Now lets have a look to see what having my caches set at 10 days results in. Hey there Wiggo, . . My CPUs are similar to yours but my i5 is crunching on three cores so it gets through 100 tasks in just over a day (about 1.3), my C2D on the other hand, crunching on only one core would take about 8 days to get through its 100 tasks. But the interesting thing is that my humble GT730 will get through 100 tasks in less than 2 days (Arecibo work), guppies choke it up badly and it would only process about a dozen or so per day of them. So averaged out 100 mixed tasks would probably take it about 3 to 4 days. But that is a rather low end card. Stephen . |
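[Editor's note: a quick check of the "3 to 4 days for 100 mixed tasks" estimate above. The rates come from the post (~50 Arecibo tasks/day, since 100 take under 2 days, and ~12 guppi tasks/day); the one-third guppi share is an assumption for illustration.]

```python
# Rates taken from the post above; the guppi share is an assumed value.
def days_for_mix(total: int, guppi_share: float,
                 arecibo_per_day: float, guppi_per_day: float) -> float:
    """Days to clear a mixed batch: time spent on each task type, summed."""
    guppi = total * guppi_share
    arecibo = total - guppi
    return arecibo / arecibo_per_day + guppi / guppi_per_day

print(days_for_mix(100, 1 / 3, 50, 12))  # ~4.1 days, matching the 3-4 day ballpark
```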
Wiggo · Joined: 24 Jan 00 · Posts: 36829 · Credit: 261,360,520 · RAC: 489

> Now let's have a look at what having my caches set at 10 days results in.

Even my old Athlon X4 630 with dual GTX 550 Tis turned in 3.4 days and 1.5 days respectively.

Cheers.
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13

Back when APs flowed pretty steadily, I had no problem holding a 10-day cache. If I bumped up to 100% CPU usage instead of 50%, I could hit the server limit of 100 APs, and that would be about 17 days of cache. But there haven't been many of them floating around lately. I'm still on the fence about whether to add MB to my app_info or not.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768

I'm back to receiving mostly BLCs again. Very few Arecibo tasks. Looking at the SSP, it looks as though the numbers are stuck. I dunno... I'll post the numbers to see if they change:

04fe09ac 50.20 GB (3)
05ap09aa 50.20 GB (1)
20jl16ae 50.20 GB (14)
21ja16ae 50.20 GB (5)
21ja16af 50.20 GB (2)
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004

> I'm back to receiving mostly BLCs again. Very few Arecibo tasks. Looking at the SSP, it looks as though the numbers are stuck. I dunno...

They appear to be moving along.............

"Time is simply the mechanism that keeps everything from happening all at once."
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004

Well, as it stands right now, Seti is the most successful distributed computing project on the planet, amassing a power, as far as I know, greater than some of the biggest, baddest supercomputers out there. And at no cost to the project other than supporting the infrastructure, which is, of course, a far lesser cost than obtaining and operating an actual supercomputer.

Even IF we do not succeed in finding what we are looking for, the search itself and the computing power harnessed are an amazing achievement. And done on an amazingly tiny budget, as science projects go. I am very proud to be a part of it, and shall continue to be as long as the project exists and I am able. I believe that each and every contributing user, past and present, has the right to be equally proud.

Some of us just went a tad beyond 'unused computing cycles'......LOL, indeed we did.

Meow.

"Time is simply the mechanism that keeps everything from happening all at once."
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Well, as it stands right now, Seti is the most successful distributed computing project on the planet. . . But at those numbers it puts us in the ballpark :), only 50 times greater is a good achievement for us :). Stephen . |