Message boards :
Number crunching :
Panic Mode On (92) Server Problems?
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
The task limits were implemented quite some time ago. It was necessary to prevent the db server from crashing on a regular basis once there were too many results out in the field. IIRC it was the table that holds the results out in the field taking too long to be parsed every time there was a request in the db for it. Which happens often. I'm not sure any amount of software can work around that. It would most likely take a DB redesign or more robust hardware to handle such a workload. With a single table growing to 11,000,000+ rows it is more logical to reduce the number of entries in it rather than spend time redoing the works or throwing hardware at it. That's not to say that if they put a 4 socket server with 2TB of RAM & enterprise SSD storage on their wishlist we wouldn't try to make it happen. They also have access to DB services from the Berkeley IST, but I would imagine they have already looked into what is offered vs what they require. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
qbit Send message Joined: 19 Sep 04 Posts: 630 Credit: 6,868,528 RAC: 0 |
Looks like there's a lot of problems lately. I'm almost out of work and have 127 tasks that are not validated yet, mostly APs. My RAC dropped by more than 50% already. Hope they can fix everything soon; until then it's just vLHC for me. |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
It's really quick now, not like before. Edit: Looks like they have added more tapes to be split for MB. |
David S Send message Joined: 4 Oct 99 Posts: 18352 Credit: 27,761,924 RAC: 12 |
SSP shows all PFB splitters running, but creation rate is only 3.2843/sec. David Sitting on my butt while others boldly go, Waiting for a message from a small furry creature from Alpha Centauri. |
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340 |
IIRC it was the table that holds the results out in the field taking too long to be parsed every time there was a request in the db for it. Which happens often. Well, I DO have one suggestion: split the tasks summary from the listing of individual tasks. When I want my tasks, it's usually to see (by machine or total) what my Pendings are, and the fact that the servers also list all the individual tasks out there MUST give them a hernia from what is a simple request. Just (if they don't exist already) define summary fields for apps, pendings by app, validated by app, and so on to report the summary that I want, and let me have another way to get the list. That seems like it ought to help a LOT. |
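The summary-field idea above can be sketched with a toy schema. This is only an illustration: the table and column names (`result`, `host`, `app`, `state`) are made up, and sqlite3 stands in for the project's real database engine. The point is that one aggregate query answers the common "what are my pendings?" question without shipping every row.

```python
import sqlite3

# Toy stand-in for the results table (hypothetical columns).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE result (host INTEGER, app TEXT, state TEXT)")
con.executemany(
    "INSERT INTO result VALUES (?, ?, ?)",
    [(1, "MB", "pending"), (1, "MB", "valid"),
     (1, "AP", "pending"), (2, "MB", "pending")],
)

# Answer "what are host 1's pendings, by app?" with one aggregate
# query instead of listing every individual task row.
summary = con.execute(
    "SELECT app, COUNT(*) FROM result "
    "WHERE host = 1 AND state = 'pending' GROUP BY app"
).fetchall()
print(dict(summary))  # e.g. {'AP': 1, 'MB': 1}
```

In practice such counts would be precomputed per host and app (the "summary fields" the post asks for), so the expensive per-row listing only runs when the user explicitly asks for the full list.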
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340 |
Hey - just got my first APv7 to crunch - GPU only, on my GTX 780. Initial time estimate: 34+ hours (!); so far, in < 10 min. elapsed time, near 60% done (???). Addendum: It finished in just over 17 minutes! Is this right? Are the new v7 apps that much better than v6? Or was it a freak? 11/18/2014 8:35:14 PM | SETI@home | Computation for task ap_03se14ac_B4_P1_00015_20141024_03356.wu_2 finished |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Hey - just got my first APv7 to crunch - GPU only, on my GTX 780. There was also AP v7 task 3835367078 which came and went last Saturday which took ~22 minutes. All your AP v6 GPU tasks still showing were run on GTX 680. Task 3795336181 is a reasonable one to compare, and took about 42 minutes. In addition to the 780 being inherently faster than a 680, the factor of about 2 improvement may partly be because Raistmer added code to scale the default settings of some of the most important tuning options. The AP v6 defaults were set so even low end cards would not be overstressed, so were too conservative for high end cards. The automatic scaling up for more capable cards is probably helping that 780 considerably, even though it doesn't go as far as Mike's recommendation in ReadMe_AstroPulse_OpenCL_NV.txt. Joe |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
IIRC it was the table that holds the results out in the field taking too long to be parsed every time there was a request in the db for it. Which happens often. Ideally, the solution to that has been around for a few years. The master database handles all the new WUs and tasks and who has what and so forth; the replica is a live back-up of the master. Years ago, the task pages (everything you can see on the website) were handled by the replica. At some point, that division of workload got undone and everything (as far as I know, presently) comes from the master. That being said, I believe the current I/O limitations we are dealing with are software-related. I know the disk I/O isn't as great as we were hoping when all the parts got assembled for that server, but I know we're dealing with software (or even CPU/RAM I/O), because: The reason there are task limits is to keep the table of "active tasks" small enough to fit in a RAM cache (which is over 100GB). After rebooting the server, there is a lot of disk access and cache thrashing, but after an hour or so, the active table has been read into RAM, with changes being written immediately to disk (and then immediately mirrored to the replica). That means that instead of being stuck with ~5000 IOPS from the disk access, and 8-20ms of response time for each access, the active table resides in RAM, which has close to 100,000 IOPS, nanoseconds of response time, and theoretically, many GB/sec throughput. Even with that, the DB access and operations are still slow. That means that either the CPU is the bottleneck (not likely), or the software is (very likely). Especially since we know that A) Informix has already reached a few limits over the years and needed fixes and workarounds, and B) the DB is so huge, there aren't many free or low-cost solutions out there that can handle a DB of this magnitude... efficiently.
If we had $100k/year to spend on a license for some really high-end DB software, that would likely fix everything, but we don't have that kind of money, so we have to do the best we can with what we have (and can afford). Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
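The RAM-vs-disk argument above can be sanity-checked with quick arithmetic using the figures quoted in the post (~5,000 IOPS from disk vs ~100,000 IOPS from the RAM cache). One I/O per row is a deliberately crude model; real access patterns batch and index, but the ratio is the point.

```python
# Back-of-envelope: time to touch every row of the in-the-field
# results table (~11 million rows, quoted earlier in the thread)
# at disk speed vs at RAM-cache speed.
rows = 11_000_000

disk_seconds = rows / 5_000       # every lookup hits the disks
ram_seconds = rows / 100_000      # same workload served from RAM

print(f"disk: {disk_seconds / 3600:.1f} h, ram: {ram_seconds / 60:.1f} min")
# prints: disk: 0.6 h, ram: 1.8 min
```

A factor of twenty from caching alone, which is why the still-slow behaviour after warm-up points at software rather than storage.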
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
I think Cosmic sums it up quite nicely within my limits of comprehension. Matt said a couple of years ago that he couldn't foresee any Informix limitation that Seti might hit for the foreseeable future. That may have to be re-visited. Actually, I think we've got confused over the two different types of database. Firstly, we have the 'BOINC' database - master and replica - which handles all the transactional stuff for daily processing. That's the one which typically has ~3 million rows for tasks in progress, which means ~1.6 million for WUs in progress - and judging by the message number on Chris's post, 1.6 million (and growing) rows for the forums. More to the point, it has a huge rate of churn, with a turnover of ~1.5 million rows per day in normal operation. That fragments the database and index structure: as I understand it, compacting and re-indexing the BOINC database is the main reason for the duration of the weekly maintenance (and by implication, if the 'tasks in progress' limits were removed, the weekly outage would take much longer). This is the database which is re-loaded from disk into RAM by the initial queries after each outage: it's run by a MySQL (free, open-source) database engine, and I don't see any prospect of (or need for) changing that: all the BOINC server daemons (splitter, validator, etc.) have to interact directly with this database, and even the slightest change in query syntax would require a lot of work - and render our version of the code incompatible with all the other BOINC projects. Informix is used for the other databases - the SETI@home and Astropulse science databases. We don't see any data from those databases in our day-to-day interactions with BOINC. They hold data on all the signals found since the beginning of SETI@Home, 15 years ago: about 14 billion rows, according to the science status page. That's three orders of magnitude greater than the BOINC transactional stuff. 
If we are using 100 GB of RAM to cache the BOINC database, we might need 100 Terabytes of RAM to cache the science DB - which probably accounts for the difficulties they're having getting Ntpckr up to speed. I think the last suggestion I read was to leave it on disk, but to use SSD disks for speed: I don't know how far they've got with that. The numbers are still eye-watering. |
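Richard's scaling estimate works out directly in numbers: if ~100 GB of RAM caches the BOINC transactional database, and the science database is three orders of magnitude bigger, the same caching strategy needs roughly a thousand times the RAM.

```python
# Scale the quoted figures: 100 GB of BOINC-DB cache, science DB
# "three orders of magnitude greater" (14 billion vs millions of rows).
boinc_cache_gb = 100          # RAM cache for the BOINC DB, per the post
size_factor = 1_000           # three orders of magnitude

science_cache_tb = boinc_cache_gb * size_factor / 1_000
print(f"~{science_cache_tb:.0f} TB of RAM to cache the science DB")
# prints: ~100 TB of RAM to cache the science DB
```

Hence the suggestion to leave the science tables on disk and buy speed with SSDs instead of RAM.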
PKnowles Send message Joined: 24 Apr 10 Posts: 49 Credit: 8,347,432 RAC: 0 |
Last task just finished... :-( |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30649 Credit: 53,134,872 RAC: 32 |
I think Cosmic sums it up quite nicely within my limits of comprehension. Matt said a couple of years ago that he couldn't foresee any Informix limitation that Seti might hit for the foreseeable future. That may have to be re-visited. More likely Informix doesn't have a limitation but the hardware does impose a limit. An obvious one would be a row so large it will not fit in RAM, even passed by reference. This is imposed by the machine and the O/S, not by the design of Informix. |
TBar Send message Joined: 22 May 99 Posts: 5204 Credit: 840,779,836 RAC: 2,768 |
I think Cosmic sums it up quite nicely within my limits of comprehension. Matt said a couple of years ago that he couldn't foresee any Informix limitation that Seti might hit for the foreseeable future. That may have to be re-visited. I've been waiting for someone to inadvertently answer my question...because it seems so obvious I'm afraid the answer is something I've simply overlooked. I'm almost afraid to ask. So...I thought the SETI@Home MB database was much larger than the AstroPulse database. If so, why does the MB database seem fine where the AP database is not so fine? |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
I've been waiting for someone to inadvertently answer my question...because it seems so obvious I'm afraid the answer is something I've simply overlooked. I'm almost afraid to ask it. So...I thought the SETI@Home MB database was much larger than the AstroPulse database. If so, why does the MB database seem fine where the AP database is not so fine? Fair question. Just to get it out of the way - in BOINC terms (the bits we interact with - either through this forum or via our client processing), both MB and AP share the same MySQL database. It's only at the science (Informix) database level that they're kept separate. I'd agree that in theory, the AP database should be smaller: the search has been running for a shorter time, and each 'tape' generates fewer WUs, with similar signal limits to the SETI search. Reading between the lines of Matt's Technical News (Feb 16 2012), I'd suggest that certain limits have to be set by administrators when a database is first set up (the number of extents in a table, for example). Perhaps that has to be done to fit the amount of available disk space, or something. Perhaps the AP database was set up and configured - when it was young and small - to fit on a server with limited disk space. Since AP has been gaining a lot of attention recently, and the table sizes have been growing unexpectedly quickly, maybe they hit one of those in-house configuration limits sooner than expected? |
KWSN Ekky Ekky Ekky Send message Joined: 25 May 99 Posts: 944 Credit: 52,956,491 RAC: 67 |
What we now need is a "Not number crunching" thread....... |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
I've been waiting for someone to inadvertently answer my question...because it seems so obvious I'm afraid the answer is something I've simply overlooked. I'm almost afraid to ask it. So...I thought the SETI@Home MB database was much larger than the AstroPulse database. If so, why does the MB database seem fine where the AP database is not so fine? I wonder if in newer versions of Informix some of these issues are easier to deal with. I would imagine the db software isn't changed often, but perhaps it could reduce some of their headaches. Looks like the most recent version was released this July. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
TBar Send message Joined: 22 May 99 Posts: 5204 Credit: 840,779,836 RAC: 2,768 |
I've been waiting for someone to inadvertently answer my question...because it seems so obvious I'm afraid the answer is something I've simply overlooked. I'm almost afraid to ask it. So...I thought the SETI@Home MB database was much larger than the AstroPulse database. If so, why does the MB database seem fine where the AP database is not so fine? Yes, Matt's old explanation would allow for the seemingly much larger MB database to still be functioning while the AP database is down. Not a big deal, and we hit this limit with other tables several times before. But the fix is a bit of a hassle. Basically you have to recreate a whole new table from scratch with more extents and repopulate it with all the data from the "full" table. We have a billion workunits in that table... Hopefully the AP fix will not run into new problems and things will return to normal soon. |
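The "recreate and repopulate" fix described above follows a generic swap-table pattern. Here is a minimal sketch with Python's sqlite3 standing in for Informix; the table and column names are invented, and sqlite has no equivalent of Informix's extent sizing, so step 1 only notes where larger extents would be declared.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE workunit (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO workunit(name) VALUES (?)",
                [("wu_a",), ("wu_b",), ("wu_c",)])

# 1. Create a fresh table from scratch. In Informix, this is where
#    larger EXTENT SIZE / NEXT SIZE values would be specified so the
#    new table doesn't hit the same extent limit.
con.execute("CREATE TABLE workunit_new (id INTEGER PRIMARY KEY, name TEXT)")

# 2. Repopulate it from the "full" table -- the slow step when the
#    source holds a billion rows.
con.execute("INSERT INTO workunit_new SELECT * FROM workunit")

# 3. Swap the new table into place under the old name.
con.execute("DROP TABLE workunit")
con.execute("ALTER TABLE workunit_new RENAME TO workunit")

count = con.execute("SELECT COUNT(*) FROM workunit").fetchone()[0]
print(count)  # 3
```

Step 2 is why the outage lasts days rather than minutes: the copy has to move every row while the table is offline.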
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340 |
Regardless, this is a great opportunity to clean up all those pendings, isn't it? |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
Well, if we hit Thanksgiving and it's still not up, then we are probably looking at the beginning of next year. Now, I'm not saying that to undermine all the things that the guys over there are doing. It sounds like they are busting their behinds trying to get things to run. It's just, from my experience dealing with others, that when you are trying to fix something around the holidays, it's almost impossible to get help (i.e. vendors, suppliers, help desk, etc.). People have planned vacations, extended weekends, etc. And this is pretty much the biggest holiday season of the year. Not to mention, they DO have families. So at this point, if it comes back before then, great. If it doesn't, my world isn't going to end. And on that note, I hope EVERYONE has a Great Holiday season!! |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Well, Eric's post mentioned it has taken 5 days to get to 25% & that was about 6 days ago. So if it's still going it should be somewhere around 50% complete. In theory that would put the completion time around the 28/29th. With the holiday weekend in the US, maybe AP will return after maintenance on the 2nd. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
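The estimate above is a straight linear extrapolation of the figures quoted (5 days for the first 25%, about 6 more days elapsed since). Spelled out, with the caveat the next post makes in detail: rebuilds rarely progress linearly, so this is a best case.

```python
# Linear extrapolation of the rebuild progress quoted above.
days_for_quarter = 5                   # "5 days to get to 25%"
total_days = days_for_quarter * 4      # 20 days end to end, if linear
elapsed = days_for_quarter + 6         # "that was about 6 days ago"
progress = elapsed / total_days

print(f"~{progress:.0%} done, ~{total_days - elapsed} days left")
# prints: ~55% done, ~9 days left
```

Which lands close to the "around the 28/29th" guess; rob smith's ~30-day floor below comes from assuming the later stages run slower than the first quarter.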
rob smith Send message Joined: 7 Mar 03 Posts: 22200 Credit: 416,307,556 RAC: 380 |
As I implied in my comment in Eric's report thread, Informix can do some really strange things during big rebuilds. Depending on how the particular deployment works, 25% might be 25% of the tables created and validated, or 25% of the total data rebuilt. Then there's the matter of the temporary tables Informix uses during a rebuild: they act as buffers so a degree of defragmentation can take place, and they can actually increase performance in later stages of the process, but more often than not they are "speed neutral" to "speed negative". If pushed, I would say 5 days for 25% equates to nearer 30 days for 100% as a minimum. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.