Message boards :
Technical News :
Tom (Dec 23 2008)
Joined: 1 Mar 99 Posts: 1444 Credit: 957,058 RAC: 0
Today we had our weekly outage for mysql database backup, maintenance, etc. This week we are recreating the replica database from scratch using the dump from the master, to ensure that last week's crash didn't leave any secret lingering corruption. That's all happening now as I type this, and the project is revving back up to speed.

Had a conference call with our Overland Storage connections to clean up a couple of cosmetic issues with their new beta server. That's been working well and is already half full of raw data. Once the splitters start acting on those files, the other raw data storage server will breathe a major sigh of relief. I was also set to (finally) bump up the workunit storage space yesterday using their new expansion unit, but waited for their procedure confirmation today lest I do anything silly and blow away millions of workunit files by accident. The good news is that I increased this storage by almost a terabyte today, with more to come. We have officially broken that dam.

I also noticed this morning that the high load on bruno (the upload server) may be partially due to an old, old cronjob that checks the "last upload" time and alerts us accordingly. This process was mounting the upload directories over NFS and doing long directory listings, etc., which might have been slowing down that filesystem in general from time to time. I cleaned all that up - we'll see if it has any positive effect.

Jeff's been hard at work on the NTPCker. It's actually chewing on the beta database now in test mode. We did find that an "order by" clause in the code was causing the informix database engine to lock out all other queries. This may have been the problem we've been experiencing at random over the past months. Maybe informix needs more scratch space to do these sorts, and it locks the database in some kind of internal management panic if it can't find enough. Something to add to the list of "things to address in the new year."
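The "last upload" check described above could be done without the heavy NFS directory listings. Here is a minimal sketch in Python of that idea - scan the newest file mtime in an upload directory and alert if it is stale. All names here are hypothetical, not the actual SETI@home cronjob:

```python
import os
import time

def last_upload_age(upload_dir):
    """Return seconds since the most recent file modification in upload_dir,
    or None if the directory holds no files.

    A single scandir pass reuses the stat results the OS already fetched,
    which is far cheaper than a long recursive `ls -l` over NFS.
    """
    newest = 0.0
    with os.scandir(upload_dir) as entries:
        for entry in entries:
            if entry.is_file(follow_symlinks=False):
                mtime = entry.stat(follow_symlinks=False).st_mtime
                if mtime > newest:
                    newest = mtime
    if newest == 0.0:
        return None
    return time.time() - newest

def check_uploads(upload_dir, alert_after=900):
    """Alert if nothing has been uploaded for `alert_after` seconds."""
    age = last_upload_age(upload_dir)
    if age is None or age > alert_after:
        return "ALERT: no recent uploads"
    return "OK"
```

Running this locally on the upload server (rather than over an NFS mount) would avoid touching the loaded filesystem from another machine.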
- Matt -- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude |
Joined: 26 May 99 Posts: 9952 Credit: 103,452,613 RAC: 328
Thanks Matt. One of the first things I do each day is to check the Technical News thread. Thank you for keeping us all up to date on the inner workings of SETI@Home. May I wish you a Good Christmas and New Year. Bernie
Joined: 2 Sep 06 Posts: 8961 Credit: 12,678,685 RAC: 0
Thx Matt - enjoy your holiday!
Joined: 29 Feb 00 Posts: 16019 Credit: 794,685 RAC: 0
. . . Thanks Matt - Good to hear something regarding NTPCker too. All @ Berkeley have a Wonderful Christmas Eve - Enjoy the Holidays. Science Status Page . . .
Joined: 27 Aug 06 Posts: 835 Credit: 2,129,006 RAC: 0
Happy holidays guys. Thanks for some very welcome good news Matt.
Speedy Joined: 26 Jun 04 Posts: 1639 Credit: 12,921,799 RAC: 89
Thanks for the updates Matt. Hope you have a nice work-free break. How long are the hard-working crew at the lab taking off? Season's greetings to all.
Joined: 5 Feb 03 Posts: 285 Credit: 29,750,804 RAC: 15
If any of the problems - uploads, running out of WUs, bandwidth limits - has anything to do with CUDA being released, why not limit the number of CUDA WUs you can request per day to a lower number until things sort themselves out? Only allow 20, 40, or 60 WUs a day per computer. I don't know if this is possible, but you already only allow a certain number of WUs per CPU on a computer. Why not only allow a certain number of WUs per NVIDIA card? Something LOW for now, then turn it up slowly over time. Happy Holidays
George Joined: 14 Oct 08 Posts: 100 Credit: 435,680 RAC: 0
They are limited to 100 WUs per CPU core, the same as all others; it's just that GPUs can actually do that much work, whereas CPUs can't.
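The per-device daily cap being discussed can be modelled in a few lines. This is a toy sketch only - BOINC's real scheduler logic is more involved, and the class name, field names, and limits here are all made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DailyQuota:
    """Toy model of a per-resource daily workunit quota."""
    per_cpu: int = 100   # WUs per CPU core per day (matches the figure above)
    per_gpu: int = 20    # hypothetical lower GPU cap while things settle
    sent_today: dict = field(default_factory=dict)

    def can_send(self, host_id, cpus, gpus):
        # A host's daily limit scales with how many devices it has.
        limit = cpus * self.per_cpu + gpus * self.per_gpu
        return self.sent_today.get(host_id, 0) < limit

    def record_send(self, host_id):
        self.sent_today[host_id] = self.sent_today.get(host_id, 0) + 1
```

Turning the cap "up slowly over time", as suggested above, would just mean raising `per_gpu` as the servers prove they can keep up.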
PhilG57 Joined: 18 Nov 03 Posts: 17 Credit: 17,538,280 RAC: 11
Matt - I read and appreciate the technical updates. I can only imagine the hundreds of problems, large and small, you and your team handle every day. But I'm confused: I read the project is coming back up, and see there are 55,818 workunits to be downloaded, but when I request more workunits, the status returned is 'no work available'. What's up? THX.
PhilG57 Joined: 18 Nov 03 Posts: 17 Credit: 17,538,280 RAC: 11
Hmmm. Now, a couple of minutes later, it's found and downloaded some work for me. I'm good for a while. THX.
Joined: 1 Mar 99 Posts: 1444 Credit: 957,058 RAC: 0
I read the project is coming back up, see there are 55,818 workunits to be downloaded, but when I request more workunits, the status returned is 'no work available'. What's up? THX.

What's probably happening here is that, yes, there is work "available." However, when your client requests work from our scheduling server, the scheduler process looks at the "feeder," which at any given time holds the names of 100 available workunits to send out. So the feeder process has to constantly refill its tiny cache, and to do so it queries the database every two seconds to see if there's more work available. For a long while after the project comes back up, the database is quite overloaded, so it may not respond very fast. In fact, it sometimes takes many minutes for it to cough up results to the feeder, during which clients get "no work available."

In other words, the feeder is like a single cashier in a large department store. Sometimes the cashier needs to make change, which holds up the entire line, even though there's plenty of money kept elsewhere behind the counter.

- Matt -- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
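Matt's cashier analogy can be sketched as a toy model: a tiny fixed-size cache that hands out workunits one at a time and is periodically refilled from a (possibly slow or empty) database query. This is illustrative only, not the real BOINC feeder; the class and method names are invented:

```python
import collections

class Feeder:
    """Toy model of the feeder behaviour described above."""

    def __init__(self, db_query, slots=100):
        # db_query(n) asks the database for up to n workunit names;
        # after an outage it may return nothing for a long time.
        self.db_query = db_query
        self.slots = slots
        self.cache = collections.deque(maxlen=slots)

    def refill(self):
        """Called periodically (every ~2 s in Matt's description)."""
        need = self.slots - len(self.cache)
        if need > 0:
            self.cache.extend(self.db_query(need))

    def handle_request(self):
        """One scheduler request: a cached workunit, or 'no work'."""
        if self.cache:
            return self.cache.popleft()
        return "no work available"
```

If `db_query` stalls, every client request between refills sees "no work available" even though plenty of work exists in the database - exactly the situation PhilG57 hit, which cleared up a couple of minutes later.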
Joined: 29 Feb 00 Posts: 16019 Credit: 794,685 RAC: 0
I read the project is coming back up, see there are 55,818 workunits to be downloaded, but when I request more workunits, the status returned is 'no work available'. What's up? THX.

. . . now that's clear and to the Point - Thanks for the Update and have a Good Holiday Sir! Science Status Page . . .
Eddy Joined: 6 Jun 00 Posts: 3 Credit: 33,079 RAC: 0
quick question, can you have more than two projects running at the same time? |
Joined: 28 Jan 06 Posts: 1410 Credit: 934,158 RAC: 0
Quick answer: YES
Joined: 28 Apr 07 Posts: 21 Credit: 1,168,873 RAC: 0
Hi; it seems I am not experiencing any faster computation speed from the latest BOINC client 6.4.5 using the CUDA add-on. My project is Seti@Home.

OS: Windows XP Media Center (Pro) 32-bit w/ SP3
Video card: NVidia GeForce 8800 GT
CPU: Intel Core2 Quad Q6600 @ 2.4 GHz
RAM: 2.0 GB
Driver version: 180.48

I downloaded the new BOINC client and the latest NVidia driver (ver. 180.48) and restarted. The client stated that it did have CUDA-compatible components, and the option for the GPU is enabled in my settings. But I am not seeing any benefit, and I am not getting any error messages whatsoever. Any thoughts? Am I doing something wrong?

Jason Tobin jasontobin48@hotmail.com
Jason Tobin Alien Hunting Specialist
Speedy Joined: 26 Jun 04 Posts: 1639 Credit: 12,921,799 RAC: 89
It seems I am not experiencing any faster computation speed from the latest Boinc Client 6.4.5 using the

Have you read this thread? Hopefully this will help
Eddy Joined: 6 Jun 00 Posts: 3 Credit: 33,079 RAC: 0
Quick answer: YES so how do you do it? long answer please |
OzzFan Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28
Quick answer: YES

First, it depends on what you mean by "projects". If by "projects" you mean tasks or workunits, then you need a CPU (logical, virtual, or real) for each task you want to run. For instance, if you have a dual core CPU, a dual CPU machine, or a single core CPU with Intel Hyperthreading, you can run at most two tasks at once. If you have a quad core CPU, a quad CPU system, or a dual core with Intel Hyperthreading, you can run at most four tasks at once. And so on and so forth.

If by "projects" you mean different BOINC projects, you must first attach your computer to the additional projects to download their work, but note that the same CPU limits apply as stated above (i.e. you cannot run more tasks than you have CPUs in your machine). Also, you cannot explicitly choose which task or project runs on which CPU, as this is handled by BOINC.
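The "one task per logical CPU" rule described above can be checked on any machine; a minimal sketch (the function name is made up, and real BOINC also honours user preferences that can lower this number):

```python
import os

def max_concurrent_tasks():
    """Upper bound on simultaneous BOINC CPU tasks: one per logical CPU.

    Cores, extra sockets, and Hyperthreading siblings all count as
    logical CPUs, matching the dual/quad examples above.
    """
    return os.cpu_count() or 1  # cpu_count() can return None on odd platforms
```

On Eddy's dual core machine this would report 2, hence "two tasks it is".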
Joined: 28 Jan 06 Posts: 1410 Credit: 934,158 RAC: 0
Thanks for stepping in there, Ozz... I had come back in, but missed Eddy's reply. I should add, though, that since installing CUDA, I am presently running three work units simultaneously. (2 x SETI and 1 x SETI Beta)
Eddy Joined: 6 Jun 00 Posts: 3 Credit: 33,079 RAC: 0
Thanks. Just like to help as much as possible; only a dual core PC, so two tasks it is. Cheers
©2023 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.