Panic Mode On (112) Server Problems?
Author | Message |
---|---|
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . Others have said the same about the splitter priority, but with each of the receiver sets we have had lately they have had the same date and time period (all done in the one night), yet the newer tape set still overrides the existing ... though this time some of the Blc04 tapes are still splitting (seems maybe 6 or so) ... . . And like yourself it upsets my OCD as well :) Stephen :) |
Unixchick Send message Joined: 5 Mar 12 Posts: 815 Credit: 2,361,516 RAC: 22 |
Probably not a panic moment as splitting is happening and the ready to send queue is staying full. More like a curiosity or a possible problem. But the split files don't seem to be finishing and going away... blc04_2bit_guppi_58227_04470_HIP53229_0010 52.42 GB (128) blc04_2bit_guppi_58227_04819_HIP52575_0011 52.42 GB (128) blc04_2bit_guppi_58227_05169_HIP53229_0012 52.42 GB (128) blc04_2bit_guppi_58227_05505_HIP52675_0013 52.42 GB (128) |
Filipe Send message Joined: 12 Aug 00 Posts: 218 Credit: 21,281,677 RAC: 20 |
We are currently processing around 110,000 results an hour. What would our processing rate need to be to match the rate at which the data sets are created? |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I think we get in trouble when the return rate gets up over 140K. Then the splitters can't keep up. So at 110K we should be in good shape. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Probably not a panic moment as splitting is happening and the ready to send queue is staying full. . . The lowest numbered Blc04 file (blc04...0010) is stuck and nothing after that is "going away" as you put it. That 'tape' needs a bit of a kick ... Stephen :) |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
We are currently processing around 110,000 results an hour. . . Well, we have spent the last few weeks processing output that Greenbank recorded in a single night, so I think it would need to be considerably higher than it is. But since the SETI server room could not cope with that level of processing, it is a moot point. Stephen :) |
Filipe Send message Joined: 12 Aug 00 Posts: 218 Credit: 21,281,677 RAC: 20 |
. . Well, we have spent the last few weeks processing output that Greenbank recorded in a single night, so I think it would need to be considerably higher than it is. But since the SETI server room could not cope with that level of processing, it is a moot point. So we would need 20-30x our current processing power to process everything in real time... |
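As a rough back-of-envelope check of that 20-30x figure (the assumption that "a few weeks" means roughly three to four weeks of crunching per night of recording is mine, not stated in the thread):

```python
# Back-of-envelope check of the "20-30x" real-time figure.
# Assumed inputs: 21-28 days of crunching per 1 night of recording.
def realtime_speedup(days_to_process: float, days_recorded: float = 1.0) -> float:
    """Factor by which throughput must grow to keep up in real time."""
    return days_to_process / days_recorded

print(realtime_speedup(21))  # 3 weeks per night -> 21.0
print(realtime_speedup(28))  # 4 weeks per night -> 28.0
```

Both values land inside the 20-30x range quoted above.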
Unixchick Send message Joined: 5 Mar 12 Posts: 815 Credit: 2,361,516 RAC: 22 |
. . Well, we have spent the last few weeks processing output that Greenbank recorded in a single night, so I think it would need to be considerably higher than it is. But since the SETI server room could not cope with that level of processing, it is a moot point. For some reason we got an amazing load of data for that one day. Was it a special day? Was something interesting in space where the data was collected that day? Or were there just funds and opportunity? I don't think we get data every day from each source. There is a post showing the dates we have analyzed for Arecibo, but I don't know if there is something similar for Greenbank. We did have a "hiccup" with the connection and data download from Greenbank not too long ago, so I can only guess we have a backlog to get through, but since they don't seem to be giving us Greenbank data in any date order that I can see, it is hard to tell how much we have left to work through. |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
There is a post showing the dates we have analyzed for Arecibo, https://setiathome.berkeley.edu/forum_thread.php?id=74238 but I don't know if there is something similar for Greenbank.
That was me. There are a couple more months to add since then, but not a huge amount. The Green Bank tapes are in the same database, but the names are a horrible mess - very difficult to parse automatically. I had a look through manually to see if they were worth charting, but I could only find 25 distinct numbers in the 'modified julian date' field - which feels wrong. Maybe I should spot-check a few of the actual recording dates in the workunit xml header and see if they match the MJD.
Edit - on a sample of one (task 6858969764), it works:
<name>blc13_2bit_guppi_58227_07595_HIP52911_0019.30670.818.22.45.188.vlar</name>
<group_info>
<tape_info>
<name>blc13_2bit_guppi_58227_07595_HIP52911_0019</name>
<start_time>2458227.5896015</start_time>
<time_recorded>Thu Apr 19 02:09:01 2018</time_recorded>
<time_recorded_jd>2458227.5896004</time_recorded_jd>
MJD 58227 is indeed April 19 this year. |
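Richard's MJD spot-check can be reproduced with a few lines of Python. This is just a sketch of the standard conversions (the MJD epoch is 1858-11-17, and MJD = JD - 2400000.5), not project code:

```python
from datetime import date, timedelta

MJD_EPOCH = date(1858, 11, 17)  # day zero of the Modified Julian Date scale

def mjd_to_date(mjd: int) -> date:
    """Convert an integer MJD (as embedded in guppi file names) to a calendar date."""
    return MJD_EPOCH + timedelta(days=mjd)

def jd_to_mjd(jd: float) -> float:
    """Convert a full Julian Date (like the <start_time> field) to MJD."""
    return jd - 2400000.5

print(mjd_to_date(58227))          # 2018-04-19, matching <time_recorded>
print(jd_to_mjd(2458227.5896015))  # ~58227.09, i.e. early on Apr 19 UTC
```

So the 58227 in the blc13 tape name does line up with the recorded date in the workunit header.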
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
There is a post showing the dates we have analyzed for Arecibo, https://setiathome.berkeley.edu/forum_thread.php?id=74238 but I don't know if there is something similar for Greenbank. That was me. There are a couple more months to add since then, but not a huge amount. . . Don't forget that to truly be "in real time" we would have to process all the data from Arecibo and ALL recorders at Greenbank (and soon Parkes as well) within a 24 hour period. Currently we are running at about 1/100th of that rate (or less). Stephen :) |
Unixchick Send message Joined: 5 Mar 12 Posts: 815 Credit: 2,361,516 RAC: 22 |
Let's just say "close" to real time. It takes time to transmit the data, prepare the files (is anything done to make the data ready to be split?), send it out to Boinc/Seti machines, get the results returned (a 6 week deadline, more if a workunit needs more than 2 results to validate) and analyzed. The point I was trying to make is that if you look at Richard's graphs (Thanks!) of Arecibo data, we don't get data every day from Arecibo, and when we do get data, it is of varying amounts. We don't have to meet some goal of being able to process ALL, when we don't (and probably won't) get ALL from Arecibo. I'm guessing we don't get ALL the Green Bank data either. Some of the data collected may belong to the project paying for the telescope time, and not be accessible to Seti until a later date. It is hard to know how much Green Bank data they have in hand to give to us, so it is hard to guess how we are doing, or how close we are getting to analyzing all the data they have to give us. I'm looking forward to getting the Parkes datasets, but it might still be a while. Unixchick, who still wonders about the 'Oumuamua dataset that was all errors. |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Just like with Arecibo, we get our data regardless of what the telescope is doing for the researchers paying for the telescope time. We piggyback our own hardware on the scope. We just have no control over where the telescope is pointing unless we have input to choose what the target is going to be. We never had control at Arecibo. We might have some control over GBT because it is aligned with the Breakthrough Listen program and they are the ones that have the funds to choose targets. And yes, we can't use the raw data coming from the recorders. There is always going to be some pre-processing of the data, mainly radar blanking for any data from Arecibo, but I think there is also processing on data from GBT even though it is ostensibly cleaner. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
Up to a point. For obvious reasons, we can't record our incredibly delicate data while Arecibo's high-power radar transmitter is in use. And our data recorder is currently tied to the ALFA antenna array: when the directing astronomers want a different piece of kit at the prime focus, we're not likely to record much of any use. The Arecibo Observatory Telescope Schedule can be viewed online: not many people seem to be using ALFA at the moment, although the PALFA Galactic Plane Survey (P2030) is getting more time towards the end of the month. |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Let's just say in "close" to real time. It takes time to transmit, prepare files (is anything done to make the data ready to be split?), and sent out to Boinc/Seti machines and then returned (6 week time period given, more, if data needs more than 2 results to confirm) and analyzed. . . Yes, the "tapes" (as they used to be, and it is still a useful term even though they have not been on tape for quite a while) go through a preparation process before they can go to the splitters. Then to the download servers, into the "hopper", until they get sent to a host, then back to the upload server, which forwards them to the validators, then into the science database where they sit until they get dealt with by the back end app (Nebula?), and when all that is done they are ... The point I was trying to make is that if you look at Richard's graphs (Thanks!) of Arecibo data, that we don't get data every day from Arecibo, and that when we do get data, it is of varying amounts. We don't have to meet some goal of being able to process ALL, when we don't (and probably won't) get ALL from Arecibo. I'm guessing we don't get ALL the Green Bank data either. Some of the data collected may belong to the project paying for the telescope time, and not accessible until a later date to Seti. It is hard to know how much Green Bank data they have in hand to give to us, so it is hard to guess how we are doing, or how close to analyzing all the data they have to give us we are getting. . . I am pretty sure we do not get all the data from any telescope. I don't know why. I thought the SETI recorders just piggybacked on the normal architecture of the scope, so we would get a copy of everything they observe. But I guess if the recorders are full we don't get any more data until those recordings are transferred back to Berkeley and fresh (empty) drives are mounted to record more. I'm looking forward to getting the Parkes datasets, but it might still be a while. . .
. . Oh me too, I am very curious to see if it is just like the GBT data or something very different again. Not to mention the "local" input part of it. :) who still wonders about the 'Oumuamua dataset that was all errors. . . That could have been almost anything: problems in the transfer process, a glitch in the prep for splitting causing them to split improperly, or any of many other reasons. Maybe that will be sorted and we will see them again, or maybe they are in the data trash can. Stephen :) |
Tom M Send message Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
For some reason my Xeon box has stopped downloading CPU tasks. It is only requesting GPU tasks. Is there a server issue, or is it that I am overrun with Rosetta tasks so the scheduler doesn't want to ship any more CPU tasks my way? Tom A proud member of the OFA (Old Farts Association). |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Set sched_op_debug in Event Log Diagnostic flags and it will tell you whether you are overcommitted on cpu tasks. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
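For reference, here is a minimal sketch of how that flag is typically enabled via the BOINC client's cc_config.xml (it can also be toggled from the Event Log options dialog in BOINC Manager; exact menu names may vary by client version):

```xml
<!-- cc_config.xml, placed in the BOINC data directory.
     sched_op_debug logs each scheduler request and reply, including
     how many seconds of CPU and GPU work the client asked for. -->
<cc_config>
  <log_flags>
    <sched_op_debug>1</sched_op_debug>
  </log_flags>
</cc_config>
```

After editing the file, "Read config files" in the Manager (or a client restart) picks up the change.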
Tom M Send message Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
Set sched_op_debug in Event Log Diagnostic flags and it will tell you whether you are overcommitted on cpu tasks. Thank you. A proud member of the OFA (Old Farts Association). |
Speedy Send message Joined: 26 Jun 04 Posts: 1643 Credit: 12,921,799 RAC: 89 |
Probably not a panic moment as splitting is happening and the ready to send queue is staying full. There seems to be blc04_2bit_guppi_58227_04470_HIP53229_0010 52.42 GB (128) |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . Hi Speedy, . . Those are not channels, they are tapes (disks actually) and each one holds, I believe, 128 channels ... . . I have no idea how many tasks are spawned from each channel though. Stephen ? ? |
Unixchick Send message Joined: 5 Mar 12 Posts: 815 Credit: 2,361,516 RAC: 22 |
[quote]Probably not a panic moment as splitting is happening and the ready to send queue is staying full. There seems to be blc04_2bit_guppi_58227_04470_HIP53229_0010 52.42 GB (128)[/quote] My weird theory is that we have 28 channels running instead of the usual 14. They gave us extra channels to help speed up recovery after last Tuesday's outage. At some point 14 of the processes stayed with the last file they did, trapped and not moving on to new files. Which seems fine to me, as they aren't needed. Come Tuesday's (tomorrow's) recovery after the outage, all 28 channels will be running again to shorten recovery time. After recovery happens there might again be files in this state, as at some point only 14 channels are needed. Hopefully they have come up with a more graceful way for the channels to exit when no longer needed, though. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.