Message boards : Number crunching : The Server Issues / Outages Thread - Panic Mode On! (117)
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
> I'm starting to see a small amount in the Ready to Send Queue... 40K. I take this as a good sign. Are some of the faster machines now getting some WUs to fill the cache??
I think that is the effect of all the spoofed clients reducing their GPU count to reasonable levels, owing to the 400-per-GPU limit now. I certainly backed off considerably on all my hosts. Still working through the overabundance of GPU tasks, trying to find the new reduced cache floor. Haven't asked for GPU work since discovering the new limits this morning.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association) |
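(For anyone doing the same arithmetic: a minimal sketch of the in-progress ceiling implied by a per-GPU limit, assuming the 400-tasks-per-GPU figure quoted above; the GPU counts below are made-up examples, not any particular host.)

```python
# Back-of-the-envelope ceiling on GPU tasks a host can hold in progress under a
# per-GPU server limit. The 400-per-GPU figure comes from the post above; the
# GPU counts are purely illustrative.

def max_gpu_tasks(reported_gpus: int, per_gpu_limit: int = 400) -> int:
    """Upper bound on GPU tasks the scheduler will leave with one host."""
    return reported_gpus * per_gpu_limit

print(max_gpu_tasks(64))  # a client spoofing 64 GPUs -> 25,600 tasks
print(max_gpu_tasks(2))   # the same box reporting its real 2 GPUs -> 800 tasks
```

Lowering the reported GPU count is what shrinks the "cache floor" being hunted for above.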
Jimbocous Joined: 1 Apr 13 Posts: 1856 Credit: 268,616,081 RAC: 1,349 |
+1 !! |
Grant (SSSF) Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
> I'm starting to see a small amount in the Ready to Send Queue... 40K. I take this as a good sign. Are some of the faster machines now getting some WUs to fill the cache??
> I think that is the effect of all the spoofed clients reducing their GPU count to reasonable levels, owing to the 400-per-GPU limit now.
And hosts such as my Linux one, which kept getting mostly "Project has no tasks available" responses when trying for work, are now regularly getting work; mine has finally managed to fill its cache. The Results-in-progress line is now more horizontal than vertical; it's still going to take a while for things to settle down, but the end is in sight. Looks like there will be an extra 1.8 million or so WUs out with hosts now (around 6.8 million in total). Grant Darwin NT |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
> +1 !!
. . OK, I'll go get my big red nose ... :)
Stephen |
Wiggo Joined: 24 Jan 00 Posts: 36764 Credit: 261,360,520 RAC: 489 |
> > +1 !!
> . . OK, I'll go get my big red nose ... :)
At least you got your laughs, Stephen (just not the way you thought). :-D
Cheers. |
Kiska Joined: 31 Mar 12 Posts: 302 Credit: 3,067,762 RAC: 0 |
> > I'll put that in once I remember how I setup munin :D
> Excellent!
Ummm.... sure, I'll add it when I have time. So I guess soon™ is apt. |
Grant (SSSF) Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
Just had another bunch of stuck downloads in extended back-off; they ended up going through on the first manual retry, although several of them were rather reluctant. Grant Darwin NT |
Grant (SSSF) Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
Looks like Results-out-in-the-field has found its new level, but the splitters are still struggling to refill the Ready-to-send buffer. Grant Darwin NT |
Kiska Joined: 31 Mar 12 Posts: 302 Credit: 3,067,762 RAC: 0 |
> Looks like Results-out-in-the-field has found its new level, but the splitters are still struggling to refill the Ready-to-send buffer.
I think I've dealt with the errant plugin causing some gaps in the monitoring. But if Yafu doesn't sort out their response times... I'll have to drop them from the graphing. Also, we should move this to another thread. |
Grant (SSSF) Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
Splitters still unable to refill Ready-to-send buffer. Grant Darwin NT |
Unixchick Joined: 5 Mar 12 Posts: 815 Credit: 2,361,516 RAC: 22 |
> Splitters still unable to refill Ready-to-send buffer.
The hourly return rate is high (148K) and the results out in the field have crossed the 7 million mark. The system is still filling a hole. I'm glad we have any RTS. On further thought, I really like the extra WUs, and I think they will allow me to not connect on Tuesdays and skip the whole maintenance "hunger games" grab for WUs.
edit: I wonder if they changed the size of the RTS queue. There isn't a need for such a large RTS queue if we all have larger caches. |
Richard Haselgrove Joined: 4 Jul 99 Posts: 14679 Credit: 200,643,578 RAC: 874 |
Looking at Kiska's RTS graph, it's very striking how quickly RTS dropped on Friday night. That says a lot of hosts were set to cache more than they were allowed, and might have been hammering on the server doors all this time. That might, paradoxically, make the server load lighter in the future, once we've filled everyone up. |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
> Splitters still unable to refill Ready-to-send buffer.
. . The RTS is a buffer between splitter output and WU demand, and that demand is roughly proportional to the rate at which results are returned. Regardless of the size of caches 'in the field', the RTS needs to be what it is so that work requests can be met, and the higher the rate of returns, the bigger it needs to be, unless the splitter output itself stays significantly higher than the rate of returns fairly constantly.
Stephen . . |
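To put rough numbers on that balance (a sketch only: the return rate and RTS level come from earlier posts, while the splitter output rate is an assumed figure for illustration):

```python
# Sketch of the splitter-output vs. demand balance described above. The return
# rate and RTS level are taken from earlier posts; the splitter creation rate
# is an assumption, not a measured server figure.

returned_per_hour = 148_000                 # results returned per hour (quoted above)
demand_per_sec = returned_per_hour / 3600   # ~41 results/sec being sent back out

rts_level = 40_000        # Ready-to-send buffer level (quoted above)
splitter_per_sec = 30     # assumed splitter output, results/sec

deficit = demand_per_sec - splitter_per_sec
if deficit > 0:
    hours = rts_level / deficit / 3600
    print(f"Splitters lag demand; buffer empties in ~{hours:.1f} h")
else:
    print("Splitters match or exceed demand; buffer holds or refills")
```

Unless the splitters can sustain something north of ~40 results/sec, the buffer can only drain, which is consistent with what the graphs have been showing.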
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
> Looking at Kiska's RTS graph, it's very striking how quickly RTS dropped on Friday night. That says a lot of hosts were set to cache more than they were allowed, and might have been hammering on the server doors all this time.
. . Undoubtedly! Most of the hosts in the field request far more work than the previous limits allowed, which is why it was very 'brave' to suddenly increase those limits many-fold without some prior notice and recommendations, such as "reduce the size of your work requests". A host with a pair of older GPUs that can only process a couple of dozen WUs per day does not need a cache of 800 GPU WUs, and some old clunker out there with a Core 2 Duo and no GPU, crunching through a dozen or so WUs a day, does not need 200 CPU WUs. But that is a long-standing bugbear of mine.
Stephen :( |
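To put numbers on those examples (a rough sketch using the throughput figures from the post above; the deadline is only an assumed ballpark, not an exact project setting):

```python
# How many days of work a fixed cache represents for the slow hosts described
# above. Throughput figures come from the post; the deadline comment is an
# assumed ballpark for illustration.

def days_of_work(cached_tasks: int, tasks_per_day: float) -> float:
    return cached_tasks / tasks_per_day

print(f"{days_of_work(800, 24):.0f} days")   # older dual-GPU host, 800 GPU WUs cached -> ~33 days
print(f"{days_of_work(200, 12):.0f} days")   # CPU-only Core 2 Duo, 200 CPU WUs cached -> ~17 days

# Assuming deadlines on the order of a few weeks, much of a cache that deep
# would simply time out and have to be reissued to other hosts.
```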
Richard Haselgrove Joined: 4 Jul 99 Posts: 14679 Credit: 200,643,578 RAC: 874 |
Some of us have been advising people to moderate their cache requests for decades - but the message never seems to get through :-( It's hard to get people to think of themselves as part of a larger collective: every part of that collective has to work together, or we all suffer. |
Siran d'Vel'nahr Joined: 23 May 99 Posts: 7379 Credit: 44,181,323 RAC: 238 |
> Some of us have been advising people to moderate their cache requests for decades - but the message never seems to get through :-(
Hi Richard,
Resistance is futile, you will be assimilated. We will add your biological and technological distinctiveness to our own. lol ;)
Have a great day! :)
Siran
CAPT Siran d'Vel'nahr - L L & P _\\//
Winders 11 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath |
rob smith Joined: 7 Mar 03 Posts: 22526 Credit: 416,307,556 RAC: 380 |
There are a couple of server side "tricks" that could totally thwart attempts at having excessively large caches - I can well imagine the gnashing of teeth that would ensue if one of them was triggered.... Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
> Current result creation rate ** 0/sec 0/sec 0/sec 5m
Panic mode ON? <edit> Never mind, it's back now. |
Jimbocous Joined: 1 Apr 13 Posts: 1856 Credit: 268,616,081 RAC: 1,349 |
> Some of us have been advising people to moderate their cache requests for decades - but the message never seems to get through :-(
People react to what they see, not what they're told, I believe. In that regard, I think the previous low hard limits were in fact counterproductive. Personally, I like being able to set a realistic cache time and actually see an effect. If nothing else, return times should improve a bit thanks to improved CPU cache turnaround. |
betreger Joined: 29 Jun 99 Posts: 11415 Credit: 29,581,041 RAC: 66 |
Methinks Tuesday will be very interesting with the low RTS cache. |