Message boards :
Number crunching :
The Server Issues / Outages Thread - Panic Mode On! (118)
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
Yep. Borked again. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13766 Credit: 208,696,464 RAC: 304 |
Linux system out of GPU work. Grant Darwin NT |
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
Looks like we are back up again. Getting decent downloads. Progressing forward. All in all, I'm not disappointed with what they're doing behind the scenes. I know they must be frustrated with the outages. Still, the changes they're making are making a huge difference in my machines' overall performance. RAC has gone up about 5-10K/day, with each machine tracking higher. Machines are not taking huge dives. Downtimes have been fairly minimal. Edit: Even though RTS is empty :( |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13766 Credit: 208,696,464 RAC: 304 |
"Looks like we are back up again. Getting decent downloads." Not here. Just the odd WU. "Project has no tasks available" is still the current response. Edit: until I posted this, then I got 18. Another 400 or so and I'll hit the limits again. Edit: and then got more work. The bad news: it's 98% shorties, and all new WUs. So the database backlog is going to get even worse than it is now. Grant Darwin NT |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
My RAC is way up from what it was before the problems started in December, but not because I'm crunching more. Quite the opposite. My machines have idled a lot in the long outages, but the credit I get per crunched cpu or gpu hour has gone up a lot. I guess Setiathome staff turned the "credit screw" several turns looser to compensate for the problems. This change happened suddenly after the very long Tuesday outage that started on Jan 14. Before that outage my RAC was climbing slowly back towards its normal value from the depression the Christmas problems had caused. After the outage it shot through the roof, so that just a few days after the outage my RAC had broken my all-time high, despite that same outage having dropped it below the lowest I had ever had with the same hardware. |
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
"Not here." Yeah. I got a few decent downloads on the 10Sep file. Lots of shorties. Still, they're keeping the system kicking. I'm almost at the point of thinking they should shut it down for a few days to clear that backlog problem. Yeah, it'd suck, but if they planned it out, at least we would know about it, and those of us who work on other projects would be forewarned. This limping along from one outage to the next, even if they are fairly short in duration, just can't keep happening and get to any real resolution. |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
"I'd almost be at the point that they should think about shutting it down for a few days to clear that backlog problem." The problem would then immediately reappear when all the starved hosts are reporting the millions of tasks they crunched during the long outage and asking for millions of new tasks to fill their caches. |
Eric B Send message Joined: 9 Mar 00 Posts: 88 Credit: 168,875,085 RAC: 762 |
I'm afraid this is going to recur over and over until they reach out for help to someone (Google, for example) who deals with very large and very busy databases. No amount of shutting down and letting things catch up is going to solve this problem. As far as I know they haven't even been able to root-cause the issue. |
Siran d'Vel'nahr Send message Joined: 23 May 99 Posts: 7379 Credit: 44,181,323 RAC: 238 |
"I'd almost be at the point that they should think about shutting it down for a few days to clear that backlog problem." "The problem would then immediately reappear when all the starved hosts are reporting the millions of tasks they crunched during the long outage and asking millions of new tasks to fill their caches." Hi Ville, Something else that could help is to end the spoofing. If a host has 4 GPUs, they should only get WUs for those 4 GPUs. No more editing software to spoof that a host has 15, 20, 30 or more GPUs when they have no more than 8. My main is OUT of WUs now and probably stands little chance of getting any in the foreseeable future. My other hosts are getting dangerously low as well. :( Have a great day! :) Siran CAPT Siran d'Vel'nahr - L L & P _\\// Winders 11 OS? "What a piece of junk!" - L. Skywalker "Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
"Something else that could help is to end the spoofing. If a host has 4 GPUs they should only get WUs for those 4 GPUs. No more editing software to spoof that a host has 15, 20, 30 or more GPUs when they have no more than 8." Spoofing has actually allowed me to be nice to the servers and other users. I reduce my fake gpu count once the Tuesday outage has started, so that when the outage ends I'm still above my new cap and am only reporting results, not competing with the other hosts for new tasks. When my host finally starts asking for new tasks, it only asks for a few at a time, matching the number it reported. And by the time this happens, the post-outage congestion is already over. Also, I have configured my computers to report at most 100 results per scheduler request, so that they aren't flooding the server with a ridiculous bomb after the outage. |
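For anyone who wants to do the same, capping reports per scheduler request is a standard BOINC client option set in cc_config.xml. This is a sketch of the relevant fragment; the option name is from the BOINC client configuration documentation, but check that your client version supports it:

```xml
<cc_config>
  <options>
    <!-- Report at most 100 completed tasks per scheduler request,
         spreading the post-outage reporting load over several RPCs -->
    <max_tasks_reported>100</max_tasks_reported>
  </options>
</cc_config>
```

The client reads cc_config.xml from its data directory at startup (or on "Read config files" from the manager), so no rebuild or spoofed client is needed for this particular tweak.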
Ian&Steve C. Send message Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640 |
The issue seems to be with the assimilators and validators. Their backlogs have been steadily increasing; they're not able to keep up with the rate that work is being returned, causing the backlog. In the past they had issues with the deleters and db purgers backing up like this, but they seem to have gotten a handle on those; they haven't been backing up. They really just need better hardware. Oh well, as is usual now, when my systems run out of work they will shift over to Einstein as a backup, no worries. Seti@Home classic workunits: 29,492 CPU time: 134,419 hours |
pututu Send message Joined: 21 Jul 16 Posts: 12 Credit: 10,108,801 RAC: 6 |
"....they really just need better hardware..." These are really powerful systems that they are currently running ¯\_(ツ)_/¯
Hosts:
bruno: Intel Server (2 x 2.66GHz Xeon, 8 GB RAM)
carolyn: Intel Server (2 x quad-core 2.4GHz Xeon, 96 GB RAM)
centurion: Intel Server (2 x hexa-core 3.4GHz Xeon, 512 GB RAM)
georgem: Intel Server (2 x hexa-core 3.07GHz Xeon, 96 GB RAM)
khan: Intel Server (2 x 3.0GHz Xeon, 32 GB RAM)
lando: Intel Server (2 x quad-core 2.4GHz Xeon, 12 GB RAM)
marvin: Intel Server (2 x 2.66GHz Xeon, 16 GB RAM)
oscar: Intel Server (2 x quad-core 2.4GHz Xeon, 96 GB RAM)
paddym: Intel Server (2 x hexa-core 3.07GHz Xeon, 132 GB RAM)
synergy: Intel Server (2 x hexa-core 2.53GHz Xeon, 96 GB RAM)
muarae1: Intel Server (2 x hexa-core 3.07GHz Xeon, 76 GB RAM)
muarae4: Intel Server (2 x hexa-core 3.07GHz Xeon, 76 GB RAM)
thumper: Sun Fire X4500 (2 x dual-core 2.6GHz Opteron, 16 GB RAM)
vader: Intel Server (2 x dual-core 3GHz Xeon, 32 GB RAM) |
Freewill Send message Joined: 19 May 99 Posts: 766 Credit: 354,398,348 RAC: 11,693 |
Hi Siran, The project has far more work than can be processed. Even with current computing power, the database and servers cannot handle the load during this "steady state" period between outages. The spoofing just helps fast PCs keep processing during an outage. At times like today, I don't think it's a problem, other than results out in the field, and that number is only about 1/3 of results returned and awaiting validation. I get no more priority for task downloads than you do on a Pi system. Every 5 min, we each get a shot. You may recall when they increased the tasks per CPU and GPU from 100 to 300(?), the system jammed up. With moderate GPUs even that is not a lot. They need to find and address the root cause. Various solutions have been suggested and I'm sure they've considered all of them. I'm ready to contribute some $ if they'll just tell us what they need. Roger |
Ian&Steve C. Send message Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640 |
"....they really just need better hardware..." In terms of modern server hardware, this stuff is ancient, probably ~10 years old. I mean, several users here are running more capable systems. Dual quad and dual hex systems are not impressive anymore, and many of them are maxed out, with no way to upgrade short of a full platform swap. They need more cores, and in general more modern platforms to handle more I/O. Theoretically they could replace all of these systems with just a couple of AMD Epyc based servers. A full overhaul is the "best" solution, but also the most costly and time consuming, and time and money are in short supply over there from what it seems. Seti@Home classic workunits: 29,492 CPU time: 134,419 hours |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
Spoofing has no impact on database size whatsoever, as long as the hosts using it are fast enough to process their spoofed cache faster than their wingmen. The result wouldn't go any further in the pipe before the wingman has returned his result anyway. I have observed the effect of varying my fake gpu count on the number of tasks the web site lists for my host, and it has no effect on the total unless I go way further than I usually do. Changing the gpu count just moves tasks between the 'in progress' and 'validation pending' states. |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
"Theoretically they could replace all of these systems with just a couple AMD Epyc based servers." A single modern dual socket Epyc server can have more cores than all those listed servers combined! There are even many single core chips - those must be really ancient. |
Ian&Steve C. Send message Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640 |
"Theoretically they could replace all of these systems with just a couple AMD Epyc based servers." "A single modern dual socket Epyc server can have more cores than all those listed servers combined! There are even many single core chips - those must be really ancient." Indeed. By my count it's <60 cores and ~1TB RAM total. You can do that in a SINGLE socket Epyc board! 64 cores, 128 threads, MUCH better IPC, 1-2TB of faster DDR4 memory. But it's probably best to at least spread it out over a couple of systems, to decrease sources of bottlenecks (network connectivity, disk I/O, etc) and to not have all your eggs in one basket, so to speak, in case hardware issues take down the whole project lol. This stuff isn't cheap, but we can dream. The point is, even if they upgrade to more modern setups, not necessarily bleeding edge, they will be a lot better off. Intel Xeon E5-2600v2 chips can be had cheaply and are available up to 12c/24t parts, and Registered ECC DDR3 RAM is cheap and plentiful. Even a meager upgrade like that on some key systems would go a LONG way. Seti@Home classic workunits: 29,492 CPU time: 134,419 hours |
rob smith Send message Joined: 7 Mar 03 Posts: 22262 Credit: 416,307,556 RAC: 380 |
While it MAY have more work than can be processed (a claim for which there is NO evidence), if there is a problem delivering that work to the users then it makes no sense to attempt to grab all one can, turning the average user away because they can't get work due to the greed of a very vocal minority. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
"indeed. by my count its <60Cores and ~1TB RAM total. you can do that in a SINGLE socket Epyc board!" By my math those are 110 cores. Note that all the listed servers are dual socket ones. |
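As a sanity check, the 110 figure can be tallied straight from the host list posted above. This is just a sketch: it assumes the Xeons and Opterons with no stated core count are single-core parts, and that every box really is dual-socket, as Ville notes:

```python
# Per-chip core counts read off the posted host list
# (chips with no stated core count are assumed single-core).
cores_per_chip = {
    "bruno": 1, "carolyn": 4, "centurion": 6, "georgem": 6, "khan": 1,
    "lando": 4, "marvin": 1, "oscar": 4, "paddym": 6, "synergy": 6,
    "muarae1": 6, "muarae4": 6, "thumper": 2, "vader": 2,
}
SOCKETS = 2  # every listed server is a dual-socket machine

total_cores = sum(n * SOCKETS for n in cores_per_chip.values())
print(total_cores)  # 110
```

Counting per socket rather than per host is what separates the two estimates: ignoring the second socket on each box gives 55, which is where an "<60 cores" eyeball count plausibly comes from.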
kittyman Send message Joined: 9 Jul 00 Posts: 51469 Credit: 1,018,363,574 RAC: 1,004 |
"While it MAY have more work than can be processed (a claim for which there is NO evidence) then, if there is a problem delivering that work to the users then it make no sense to attempt to grab all one can, and so turn the average user away because they can't get work due to the greed of a very vocal minority." I think everybody has about the same odds of hitting the servers when there is work in the RTS queue to hand out. I am far short of having a full cache, and most work requests are getting the 'project has no tasks available' response. But, about 20 minutes ago I got a 36-task hit to keep my cruncher going. This does not help those who have mega-crunchers very much. So, work is going out and being returned. Wish things were better, but it is what it is. Meow. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.