Message boards :
Number crunching :
Suggestion: Apply the emergency brake!
Ulrich Metzner Send message Joined: 3 Jul 02 Posts: 1256 Credit: 13,565,513 RAC: 13 |
Since the validators can't keep up with the current load (the validator queue is rising rapidly), and Matt Lebofsky mentioned that the problem is "large directories" and the time needed to scan them, what's the point in letting the system struggle against a load it can't manage at the moment? The large directories, and the problems with them, are only getting larger. Apply the emergency brake and let the system recover to smaller directories. Then take a break and brainstorm how to restructure the data into more, smaller directories to avoid such problems in the future. Just my 0.02 € from a developer's point of view... [edit] For example, at the moment all WU data, regardless of its record time, is held in a single directory. How about a separate directory of WU data for each month or week of record time? That would dramatically reduce directory size and the time spent scanning for a particular WU. And you would still have the unique key to find the matching returned WUs for validation. Aloha, Uli |
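The per-month layout Uli suggests could be sketched as follows. This is a hypothetical illustration, not actual SETI/BOINC code; the `/wu` root and the `wu_dir` helper are made up for the example.

```python
# Hypothetical sketch of the per-month layout suggested above:
# bucket WU files by their record date instead of one flat directory.
from datetime import date

def wu_dir(root, record_date):
    """Return a year/month subdirectory for a WU's record date,
    e.g. /wu/2005/12 for a WU recorded in December 2005."""
    return f"{root}/{record_date.year:04d}/{record_date.month:02d}"

print(wu_dir("/wu", date(2005, 12, 14)))  # -> /wu/2005/12
```

Each monthly directory then holds only that month's files, so a scan for a particular WU touches a far smaller directory.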
Tigher Send message Joined: 18 Mar 04 Posts: 1547 Credit: 760,577 RAC: 0 |
Well yes. I would stop all downloads and uploads to let the validators clear. Let the deleters do their bit. End up with reduced directory sizes, as you say Ulrich. Then start uploads only, to clear what's out here. Then start downloads. Otherwise... this may never end. That's a developer's view and a sys management view. |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
Even better than basing it on date (which may involve a search in several directories) would be to split the file name so that it is a path rather than a single file name. E.g. 01no03ab621.28433.1003414.241_3 might be usefully split into: 01no03\ab621\28433\1003414\241_3, which would only require string manipulation to get to the correct file instantly, and would reduce the size of the largest directories. BOINC WIKI |
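The name-to-path idea above is pure string manipulation, as the post says. Here is a minimal sketch; the 6/5 split of the first field into tape name and channel is an assumption based on the example name, and `wu_name_to_path` is an invented helper, not a BOINC function.

```python
# Hypothetical sketch of the path-splitting idea: the WU name
# "01no03ab621.28433.1003414.241_3" already has dot-separated fields,
# so turning it into a nested path needs no database lookup at all.

def wu_name_to_path(root, name):
    """Split a work-unit name into nested directories."""
    head, *rest = name.split(".")
    # Split the first field into tape name ("01no03") and channel
    # ("ab621"); this 6/5 split is an assumption from the example.
    parts = [head[:6], head[6:], *rest]
    return "/".join([root, *parts])

print(wu_name_to_path("/wu", "01no03ab621.28433.1003414.241_3"))
# -> /wu/01no03/ab621/28433/1003414/241_3
```

Because the path is derived from the name alone, the server can open the file directly instead of scanning a huge directory.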
Tigher Send message Joined: 18 Mar 04 Posts: 1547 Credit: 760,577 RAC: 0 |
Even better than basing it on date (which may involve a search in several directories) would be to split the file name so that it is a path rather than a single file name. E.g. Indeed. You can binary chop through that a lot faster. Good idea. I presume they are in one directory then... nice and flat? Or not so nice with these quantities! |
Ulrich Metzner Send message Joined: 3 Jul 02 Posts: 1256 Credit: 13,565,513 RAC: 13 |
... 01no03ab621.28433.1003414.241_3 might be usefully split into: That's really a much better idea :) Aloha, Uli |
Toby Send message Joined: 26 Oct 00 Posts: 1005 Credit: 6,366,949 RAC: 0 |
BOINC supports splitting the download directory into a hierarchical structure. This was done on seti@home last fall. I believe it splits it into 1024 subdirectories and puts each file in one of them based on a hash of the file name. But even with 1024 subdirectories, it seems a couple million files still clogs things up pretty good... A member of The Knights Who Say NI! For rankings, history graphs and more, check out: My BOINC stats site |
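The hash-based fanout Toby describes can be illustrated like this. BOINC's real directory-hierarchy code uses its own hash function; the MD5 here is just a stand-in to show the principle, and the names and the 1024 fanout are taken from the post, not from the actual server source.

```python
# Simplified illustration of hash-based fanout: map each file name to
# one of FANOUT subdirectories. Not BOINC's actual hash function.
import hashlib

FANOUT = 1024  # number of subdirectories, as described in the post

def fanout_dir(filename):
    """Pick a subdirectory (as a hex string) from a hash of the name."""
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return f"{h % FANOUT:x}"

# With ~2 million files spread over 1024 buckets, each directory still
# ends up holding ~2000 entries -- which is Toby's point about clogging.
print(fanout_dir("01no03ab621.28433.1003414.241_3"))
```

The fix the thread converges on is simply a larger or deeper fanout, so each bucket holds fewer files.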
uba36 Send message Joined: 17 Jul 02 Posts: 74 Credit: 1,159,280 RAC: 0 |
I'm surprised they store the data in subdirectories. They have always stressed the fact that they are using MySQL for a database system, so I thought they would store the data in that database. Seems to have been the wrong conclusion. But what is MySQL used for then? |
Pooh Bear 27 Send message Joined: 14 Jul 03 Posts: 3224 Credit: 4,603,826 RAC: 0 |
I'm surprised they store the data in subdirectories. They have always stressed the fact that they are using MySQL for a database system, so I thought they would store the data in that database. Seems to have been the wrong conclusion. But what is MySQL used for then? The files that are sent to us and sent back are the files being talked about. The splitters split the tapes into files for delivery to the users. The users send the files back, then the data from the files gets imported into the database, and then the files are deleted. So, as you can see, there are lots of files to work with. My movie https://vimeo.com/manage/videos/502242 |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
I'm surprised they store the data in subdirectories. They have always stressed the fact that they are using MySQL for a database system, so I thought they would store the data in that database. Seems to have been the wrong conclusion. But what is MySQL used for then? The returned science is stored in files in a directory. All of the meta information about any result that has been sent to a client is stored in the database (which client, how long it took, ...), just not the actual results. The canonical result after verification is stored in a different database than the one that holds the user information. The copy of the WU that is sent to you is also stored in a file, but the assignment information and status are stored in the database. BOINC WIKI |
jshenry1963 Send message Joined: 17 Nov 04 Posts: 182 Credit: 68,878 RAC: 0 |
I can see it now: if you put the emergency brake on, it will mean uploads and downloads won't work. No matter how much heads-up you gave the SETI crunchers, you would get 2 billion (sorry classic, couldn't resist, no direct pun intended) posts on this site asking "why aren't my uploads working?" or "is it down AGAIN?"... Remember, this group has some of the most childish panicking idiots I have ever seen in my life. If we could only do this for science, and get rid of the credits, then maybe the idiots would leave and the real scientists would stay. Oops, sorry, I have just hijacked this thread and started a new argument... scientists vs. credit whores. Thanks, and keep on crunchin' John Henry KI4JPL Sevierville TN I started with nothing, and I still have some of it left. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Even better than basing it on date (which may involve a search in several directories) would be to split the file name so that it is a path rather than a single file name. E.g. I like this idea, but (you knew that was coming, right?) Observation: the file name has a bit of a hierarchy, and the "fanout" might be better if they reshuffled the order when making the directory and file name: 1003414\28433\ab621\01no03\241_3 You'd still be able to convert from one to the other pretty quickly. |
ML1 Send message Joined: 25 Nov 01 Posts: 20331 Credit: 7,508,002 RAC: 20 |
...I like this idea, but (you knew that was coming, right?) Berkeley already do better than this. They use a hashing algorithm to evenly spread the file names across the hierarchy of directories that they use. It sounds like they use one level of hierarchy with 1024 directories, so that you get about 1,000 files in each directory per million results or WUs. Looks like they need to recode so that they have another one or two levels (or more) to break that down a lot further. Regards, Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
Keck_Komputers Send message Joined: 4 Jul 99 Posts: 1575 Credit: 4,152,111 RAC: 1 |
The basic idea in the first post of this thread is not too bad. But think of a governor instead of a brake. As the 'bad queues' grow, slow down the splitters. This could be implemented as a permanent and automatic solution, because when everything is running fast it would allow the splitters to run at full speed. When there is a big backlog like there is now, we might be cut down to only one splitter. Another possible addition to the governor would be to limit the number of workunits supplied in one request and include a delay before reconnecting. Both the limit and the amount of delay would scale depending on how big the backlog is. This would help to spread out the available work instead of it all landing in someone's mega queue. This would supply some work at all times, even if it is not enough for all the requests, so maybe the flames would not be too hard to bear. It would also reduce the overall load on the servers so that more work can be processed on the back end, and start getting people used to the idea that there will not always be work available for everyone, even when everything is running at top speed. Assuming that things stay the way they are now, the current rate of data acquisition cannot keep up with the demand for data to process. Once Astropulse and the enhanced SETI app are released that may change, but odds are computers will get faster and we will be right back in that situation. PS. I think I copied this idea from another post, but it was a good idea. BOINC WIKI BOINCing since 2002/12/8 |
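The governor idea could look roughly like this. Everything here is invented for illustration: the thresholds, the `governor` function, and the specific settings are made-up numbers, not anything from the BOINC scheduler.

```python
# Illustrative governor, not actual BOINC code: scale back the splitters,
# the per-request work limit, and the reconnect delay as the validator
# backlog grows. All thresholds are made-up numbers.

def governor(backlog, max_splitters=6):
    """Return throttle settings for a given validator backlog."""
    if backlog < 100_000:       # healthy: run flat out
        level = 1.0
    elif backlog < 400_000:     # getting behind: ease off
        level = 0.5
    else:                       # emergency: crawl
        level = 1 / max_splitters
    return {
        "splitters": max(1, round(max_splitters * level)),
        "max_wus_per_request": max(1, round(10 * level)),
        "reconnect_delay_s": round(600 * (1 - level)),
    }

print(governor(700_000))  # backlog like the one in this thread
```

Because the settings derive continuously from the backlog, the system throttles itself down during trouble and automatically returns to full speed once the queues drain, with no manual brake to pull or release.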
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
...I like this idea, but (you knew that was coming, right?) I haven't read the code. Using a hash would work as well. Using a bigger hash would certainly work better given the load -- and as far as I know the only really good way to do this is experimentally. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
This basic idea in the first post of this thread is not too bad. But think of a governor instead of a brake. As the 'bad queues' grow slow down the splitters. This could be impelemented as a permanent and automatic solution, because when everything is running fast it would allow the splitters to run at full speed. When there is a big backlog like there is now we might be cut down to only one splitter. Another possible idea: when the governor kicks in, have some mechanism to slow down the connect rate in the clients -- I know this is essentially pushing the directory problem out to our machines, but if the uploads and reports are at near optimal speed, they won't have to "govern" the clients for very long. That setting could be in the master file for the project. |
Dorsai Send message Joined: 7 Sep 04 Posts: 474 Credit: 4,504,838 RAC: 0 |
It certainly seems that SETI/Berkeley are currently able to supply WUs at a rate far faster than they can process the returned results. Something clearly needs to be done. I agree with the suggestions about putting some form of "governor" into place. When things start to stack up, sooner or later you have to say, "stop giving me more things, I have too many already"... Foamy is "Lord and Master". (Oh, + some Classic WUs too.) |
Don Erway Send message Joined: 18 May 99 Posts: 305 Credit: 471,946 RAC: 0 |
Yeah, what choice is there? The waiting-for-validation queue is about to hit 700,000, and has never moved down since everything came back up. Turn off 2 or 3 splitters, and turn them into extra validator processes, until the queue is drained. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Yeah, what choice is there? The waiting for validation queue is about to hit 700,000, and has never moved down, since everything came back up. According to Matt, who is sitting a lot closer to the servers than I am, the problem is too many files in one directory. As the size of a directory grows, it takes longer and longer and longer for the operating system to search the directory for files. Adding processing power won't speed that up. Others have commented that the server code already uses a hash to "fan out" the files into multiple directories. A reasonable fix is to increase the fan out. Of course, you also have to redistribute the files to make everything work, so while that's going on, the project will keep falling behind. I think we'll hear more about that when Matt posts what he's going to post, but I'm playing armchair quarterback, and Matt's actually in the game. |
ML1 Send message Joined: 25 Nov 01 Posts: 20331 Credit: 7,508,002 RAC: 20 |
... Others have commented that the server code already uses a hash to "fan out" the files into multiple directories. OK, so from my armchair driving here... ;) To increase the fan-out, they would need to code up the new hashing scheme, create a new set of corresponding directories, and then use a one-off script to move all the files from the existing scheme into the new scheme... That would require the uploads and downloads to be offline for a good few hours... A cleverer scheme would be to use the new hashing scheme for all new files, and let the old scheme gradually empty as the existing files get finished with. That would only need a little extra coding, plus updating the deleters, and then the system would steadily speed up as the old file tree gets cleared... All good big-system fun! (I wonder what is actually being done for the fix...) Regards, Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
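The gradual-migration scheme Martin describes can be sketched as a two-layout lookup. This is purely illustrative: `old_path`, `new_path`, `store`, and `lookup` are invented names, and the deeper layout's fanout is a stand-in for whatever hashing scheme would actually be chosen.

```python
# Sketch of the gradual-migration idea, not actual BOINC code: new files
# go into a deeper (hypothetical) layout, while lookups fall back to the
# old flat layout so existing files stay findable until the deleters
# clear them out.
import os

def old_path(root, name):
    return os.path.join(root, name)            # old flat layout

def new_path(root, name):
    sub = f"{hash(name) % 1024:03x}"           # hypothetical deeper fanout
    return os.path.join(root, sub, name)

def store(root, name):
    """All new files are created under the new layout only."""
    path = new_path(root, name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path

def lookup(root, name):
    """Try the new layout first, then fall back to the old one."""
    for scheme in (new_path, old_path):
        candidate = scheme(root, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```

Once the deleters have drained the old flat directory, the fallback branch never fires and the migration is complete, with no downtime for a bulk file move.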
Ulrich Metzner Send message Joined: 3 Jul 02 Posts: 1256 Credit: 13,565,513 RAC: 13 |
I just have to re-emphasize my suggestion to take a break and pull the emergency brake. The validator queue has nearly doubled in size since I started this thread, and I don't see light at the end of the tunnel. Instead I see a big sturdy concrete block, and we are accelerating at weird speed... Aloha, Uli |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.