No Work/Little Work

Profile Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 944868 - Posted: 3 Nov 2009, 17:25:18 UTC

Getting work will be spotty at best. Backup Projects are a good thing.

As was noted in Matt's post in the Tech News.
Splitsville (Nov 02 2009)

It takes 4 hours to haul a single 50 GB archive from offline storage, and then, once the software blanking is run, the archive might not be suitable.

The good news is "ALFA is back up and we're collecting new data again." It will take a bit of time before we see that data.


The side news is that Pending should be starting to decrease as they get bumped into the queue quicker.

Regards

Please consider a Donation to the Seti Project.

ID: 944868 · Report as offensive
Terost

Joined: 3 Apr 99
Posts: 8
Credit: 465,514
RAC: 0
United States
Message 944870 - Posted: 3 Nov 2009, 21:14:49 UTC - in response to Message 944868.  

Suppose many of us make a donation to Seti@home: would you use it to set up a backup system so you can do maintenance on the primary systems AND/OR use the backup systems to get the data from the archives? Use the backup systems while the primary is down for maintenance so that we won't have to wait for WU's.
ID: 944870 · Report as offensive
Profile 52 Aces
Joined: 7 Jan 02
Posts: 497
Credit: 14,261,068
RAC: 67
United States
Message 944873 - Posted: 3 Nov 2009, 21:36:18 UTC - in response to Message 944868.  
Last modified: 3 Nov 2009, 21:41:43 UTC

The side news is that Pending should be starting to decrease as they get bumped into the queue quicker.


That might whiteboard well, but I had the very opposite occur during the Halloween 2009 outage. My RAC quickly jumped up 20% (from ~31k to ~39k).

Game theory at play I suppose... I had run down my task queue on purpose last week, and then stockpiled just ahead of the weekend. I guess enough of my wingmen had been stockpiling throughout the week, thus their reciprocal WU's were at the end of a very, very long artificially DCF'd line, whereas mine were consistently aged ;-) Thus they outstripped any of my wingmen who ran bone dry. I know, my RAC will average back down over the next 3 weeks ;-)
ID: 944873 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14644
Credit: 200,643,578
RAC: 874
United Kingdom
Message 944879 - Posted: 3 Nov 2009, 21:51:22 UTC

The big advantage of the work shortage, and the inevitable consequence that people have been living off their caches and reporting in old work, is that the weekly maintenance has finished at least an hour earlier than usual and we can post here again.

If we could get the size of the database consistently smaller (through smaller caches), I suspect the whole project will run more smoothly.
ID: 944879 · Report as offensive
DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 944882 - Posted: 3 Nov 2009, 22:10:13 UTC - in response to Message 944879.  

If we could get the size of the database consistently smaller (through smaller caches), I suspect the whole project will run more smoothly.


True, but pending will go back up and RAC down unless a lot of people decrease their cache days in their preferences.
ID: 944882 · Report as offensive
Profile 52 Aces
Joined: 7 Jan 02
Posts: 497
Credit: 14,261,068
RAC: 67
United States
Message 944911 - Posted: 4 Nov 2009, 0:47:18 UTC - in response to Message 944879.  

weekly maintenance has finished at least an hour earlier than usual


Like 3 hours earlier by my clock (i.e. the original 3-4 hour estimate for maintenance is somewhat accurate again). But I thought a significant part of that credit goes to the solid state drives' ability to juggle & compress faster.

PS: I meant to say PENDINGS in my earlier post, not RAC -- doh!


ID: 944911 · Report as offensive
Profile Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 944922 - Posted: 4 Nov 2009, 1:26:58 UTC - in response to Message 944870.  

Suppose many of us make a donation to Seti@home: would you use it to set up a backup system so you can do maintenance on the primary systems AND/OR use the backup systems to get the data from the archives? Use the backup systems while the primary is down for maintenance so that we won't have to wait for WU's.


While I do not oppose donations to Seti, that kind of donation drive would not solve this particular problem.

To compress the database in MySQL (or most larger databases), it has to be taken offline. "Compress" here has an added meaning: physically removing the records that are only marked as "deleted", which also requires the database to be offline. Until that is complete, everything is still in the database, just flagged for deletion. In some cases the primary key is reindexed during the compression as well.

There is also a problem with running off the duplicate database server(s) during maintenance. The database servers use the gigabit link to stay in sync. So if the master is offline having its "no longer there" records removed while the backup is adding new records... recovery after the outage becomes infinitely more complex.

The master would then need all of the new records added to it, and the dead records would then have to be removed from the backup database as well.
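A toy sketch of why that gets messy, written in Python rather than MySQL (the rows and states below are invented purely for illustration): once the master is compacted offline while the replica keeps taking new work, both sets of changes have to be replayed onto both servers before they agree again.

    # Hypothetical rows: id -> state. Not real SETI tables.
    master = {1: "done", 2: "deleted", 3: "done"}
    replica = dict(master)                      # in sync before the outage

    # Master goes offline and is compacted: rows marked "deleted" are physically removed.
    master = {k: v for k, v in master.items() if v != "deleted"}

    # Meanwhile the replica keeps serving work and adds new rows.
    replica[4] = "in progress"
    replica[5] = "in progress"

    # Recovery now needs BOTH histories applied to BOTH servers:
    for k, v in replica.items():                # new rows from the replica go to the master
        if k not in master and v != "deleted":
            master[k] = v
    replica = {k: v for k, v in replica.items() if k in master}   # purge dead rows from the replica

    print(master == replica)                    # True only after both replays succeed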

Regards

Please consider a Donation to the Seti Project.

ID: 944922 · Report as offensive
Profile Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 944937 - Posted: 4 Nov 2009, 2:13:42 UTC - in response to Message 944873.  

The side news is that Pending should be starting to decrease as they get bumped into the queue quicker.


That might whiteboard well, but I had the very opposite occur during the Halloween 2009 outage. My RAC quickly jumped up 20% (from ~31k to ~39k).

Game theory at play I suppose... I had run down my task queue on purpose last week, and then stockpiled just ahead of the weekend. I guess enough of my wingmen had been stockpiling throughout the week, thus their reciprocal WU's were at the end of a very, very long artificially DCF'd line, whereas mine were consistently aged ;-) Thus they outstripped any of my wingmen who ran bone dry. I know, my RAC will average back down over the next 3 weeks ;-)


Pending credit is a fickle thing with the addition of CUDA and the monster caches. One of the key indicators of what is happening is on the server stats page:

Result turnaround time (last hour average) 135.36 hours 159.85 hours 5m

For Enhanced Multibeam, the 135 hour turnaround means that the average time from when a WU is sent to when the result comes back is 135 hours, or about 5.6 days. What that basically says is that a normal "user" could expect to see his workunit sit in Pending for 5.6 days. That is an indication that there are some really monster caches among the heavy hitters... It also means that they are probably doing 80%+ of the work.

Some time back Matt stated that a million WU's a day are sent. Then we see this:
Results returned and awaiting validation 3,450,994 67,630 9m

One could surmise that during the course of the week the database grows from ~4,000,000 records to ~53,000,000 records. The scary part is that there is some unknown number of additional records which, because of all the ten-day cache sizes, cannot be calculated.

If I were to presume that "everyone" had a 7 day cache and applied that 80% figure to the week's records above, then there are another 42,400,000 database records that are not visible. That would mean the true database size is closer to 95,400,000 records. Then people wonder why the Tech News says they are trying to make the database faster.
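As a back-of-the-envelope check of that arithmetic, using only the figures quoted in this post (the 135.36 hour turnaround, the ~53,000,000 visible records, and the assumed 80% hidden share), a quick Python sketch:

    turnaround_hours = 135.36
    print(round(turnaround_hours / 24, 1), "days average turnaround")     # ~5.6 days

    visible_records = 53_000_000        # rough weekly peak quoted above
    hidden_share = 0.80                 # assumed fraction sitting unseen in the big caches
    hidden_records = int(visible_records * hidden_share)
    total_records = visible_records + hidden_records
    print(f"hidden: {hidden_records:,}   estimated total: {total_records:,}")
    # hidden: 42,400,000   estimated total: 95,400,000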

Or, every time you do a scheduler request... it is like taking your favorite 8x10 picture and chopping it into 95 million pieces. Parts of the picture are sent to everyone, and you have one or two pieces to send back to the server for reassembly. The server has to sort those 95 million pieces for each computer. How fast could YOU do a 95-million-piece jigsaw puzzle? A database crash on a 95-million-record database is not a pretty sight. We have seen one in the last few weeks; the recovery caused people to bump their caches, which is setting up for the next crash.

Reducing the cache size on a few machines to 4 days could make a large impact on the reliability of the Seti servers, reducing downtime during maintenance and ensuring you get work when you need it.


Regards

Please consider a Donation to the Seti Project.

ID: 944937 · Report as offensive
DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 944982 - Posted: 4 Nov 2009, 4:58:17 UTC - in response to Message 944937.  

Reducing the cache size on a few machines to 4 days could make a large impact on the reliability of the Seti servers, reducing downtime during maintenance and ensuring you get work when you need it.


Plausible, but the BOINC developers will likely ignore this unless it can be proven mathematically or via simulation. If convinced, they could enforce a more strict upper limit on cache sizes. Basically, I believe you but I want to see the proof.
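For what it's worth, even a very crude model shows the direction of the effect. A minimal sketch in Python, assuming (purely for illustration) a fixed send rate of about a million tasks a day and that a task waits out roughly its host's cache setting plus some crunch time before it is reported — essentially Little's law, not a real BOINC simulation:

    def results_in_flight(tasks_per_day, avg_cache_days, avg_crunch_days=0.5):
        """Little's law: work in the system = arrival rate * time in the system."""
        return tasks_per_day * (avg_cache_days + avg_crunch_days)

    for cache_days in (1, 4, 7, 10):
        print(f"{cache_days:>2} day cache -> ~{results_in_flight(1_000_000, cache_days):,.0f} results in the database")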
ID: 944982 · Report as offensive
Ingleside
Volunteer developer

Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 945012 - Posted: 4 Nov 2009, 9:05:01 UTC - in response to Message 944982.  

Plausible, but the BOINC developers will likely ignore this unless it can be proven mathematically or via simulation. If convinced, they could enforce a more strict upper limit on cache sizes. Basically, I believe you but I want to see the proof.

There are already two options, <max_wus_in_progress> (which is really per core) and <max_wus_in_progress_gpu>, to limit how much work any computer can cache at any given time.

That a project chooses not to use the available options isn't really BOINC's fault.

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
ID: 945012 · Report as offensive
Profile Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 945037 - Posted: 4 Nov 2009, 17:08:34 UTC - in response to Message 945012.  

Thank You Ingleside

Plausible, but the BOINC developers will likely ignore this unless it can be proven mathematically or via simulation. If convinced, they could enforce a more strict upper limit on cache sizes. Basically, I believe you but I want to see the proof.

There are already two options, <max_wus_in_progress> (which is really per core) and <max_wus_in_progress_gpu>, to limit how much work any computer can cache at any given time.

That a project chooses not to use the available options isn't really BOINC's fault.


This outage was convenient in that it allowed me to grab some SIMAP during the first-of-the-month run. With my preferences set to a connect time of .1 days and enough work for 1.25 days, I have seen that <max_wus_in_progress> is enforced on at least one project:

11/3/2009 4:57:03 PM boincsimap Message from server: (reached limit of 60 tasks in progress)

Regards


Please consider a Donation to the Seti Project.

ID: 945037 · Report as offensive
DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 945044 - Posted: 4 Nov 2009, 17:25:57 UTC - in response to Message 945012.  

There are already two options, <max_wus_in_progress> (which is really per core) and <max_wus_in_progress_gpu>, to limit how much work any computer can cache at any given time.


That feature would certainly prevent the extreme cases of computers with thousands of unfinished tasks. Would you happen to know which version (or tag) of the BOINC server code introduced this feature?

Pappa, I assume you're interested in looking into this, and perhaps proposing the idea to Matt/Eric for their next meeting?
ID: 945044 · Report as offensive
Profile 52 Aces
Joined: 7 Jan 02
Posts: 497
Credit: 14,261,068
RAC: 67
United States
Message 945056 - Posted: 4 Nov 2009, 18:14:45 UTC - in response to Message 945044.  
Last modified: 4 Nov 2009, 18:32:54 UTC

The hoarders will find a way around it ;-)

Just give all new Seti WU's a deadline of 12 days for the next 3 months. The problem isn't that someone can grab 10,000 WU's if the results are returned 5 days later; the problem is that a heavy curve of those same WU's will "kite" in the database for up to 8 weeks as subsequent downloads bump their priority (be it SETI or other projects).

I assume that part of the Tuesday cleanup is a query for zombie lost records (i.e. would the present 4 million out ever get down to zero if all results were returned?).
ID: 945056 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 945147 - Posted: 5 Nov 2009, 0:03:42 UTC - in response to Message 945044.  

There are already two options, <max_wus_in_progress> (which is really per core) and <max_wus_in_progress_gpu>, to limit how much work any computer can cache at any given time.

That feature would certainly prevent the extreme cases of computers with thousands of unfinished tasks. Would you happen to know which version (or tag) of the BOINC server code introduced this feature?
...

It's been around for an extended period, but the present form is fairly recent:
David  14 May 2007
    - scheduler: add max_wus_in_progress option.
        Limits total # of in-progress results per host
        (independently of #CPUs)

David  Jan 6 2008
    - scheduler: change <max_wus_in_progress> to be per CPU, not per host

David  1 June 2009
    - scheduler: add new config option <max_wus_in_progress_gpus>.
        The limit on jobs in progress is now
            max_wus_in_progress * NCPUS
            + max_wus_in_progress_gpu * NGPUS
        where NCPUS and NGPUS reflect prefs and are capped.
        Furthermore: if the client reports plan class for in-progress jobs
        (see checkin of 31 May 2009)
        then these limits are enforced separately;
        i.e. the # of in-progress CPU jobs is <= max_wus_in_progress*NCPUS,
        and the # of in-progress GPU jobs is <= max_wus_in_progress_gpu*NGPUS
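For concreteness, a small Python sketch of the per-class caps described in that last checkin; the option values and host counts below are made-up examples, not SETI@home's actual settings:

    def in_progress_caps(max_wus_in_progress, max_wus_in_progress_gpu, ncpus, ngpus):
        """CPU and GPU jobs are limited separately, per the 1 June 2009 checkin."""
        return {"cpu_jobs": max_wus_in_progress * ncpus,
                "gpu_jobs": max_wus_in_progress_gpu * ngpus}

    # e.g. a quad-core host with one GPU, on a project that sets the options to 2 and 8:
    print(in_progress_caps(2, 8, ncpus=4, ngpus=1))    # {'cpu_jobs': 8, 'gpu_jobs': 8}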
---------------
                                                                 Joe
ID: 945147 · Report as offensive
Profile Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 945152 - Posted: 5 Nov 2009, 0:37:42 UTC

Thank You Joe

My intent was never to start a debate, only to provide information about the situation at Seti.

Everyone is "welcome to ask questions" and gets answers from anyone who might have the information.

As Matt has posted in the Tech News, the major hurdle is past, so I will release the sticky holding this at the top of the forum.

Thank You Everyone


Please consider a Donation to the Seti Project.

ID: 945152 · Report as offensive
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 945181 - Posted: 5 Nov 2009, 2:56:30 UTC - in response to Message 944870.  

Suppose many of us make a donation to Seti@home: would you use it to set up a backup system so you can do maintenance on the primary systems AND/OR use the backup systems to get the data from the archives? Use the backup systems while the primary is down for maintenance so that we won't have to wait for WU's.

Actually, there are so many other things that could be done with donations, like funding the Gigabit link, that a "backup system" doesn't seem that attractive.

When the weekly maintenance outage goes smoothly, it's not that big a deal.
ID: 945181 · Report as offensive
PhonAcq

Joined: 14 Apr 01
Posts: 1656
Credit: 30,658,217
RAC: 1
United States
Message 945184 - Posted: 5 Nov 2009, 3:11:12 UTC

Regarding the science database size (and excuse me if this is déjà vu): what is the need to keep all the processed results in one place? Why not break the database into smaller segments and store them away? If referencing is required, such as to know whether you have been at a particular sky location before, then have a smaller index table for that. I most certainly can't engineer this concept from afar, but there is always so much apparent overhead with the database that I really wonder if it needs to be online. Update it once a week or month or whatever, as needed, with new stuff.
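A rough sketch of the "smaller index table" idea in Python — everything here (the segment file names, the one-degree sky bins) is hypothetical, just to show that answering "have we been at this sky location before?" only needs a small lookup structure, not the full archive online:

    from collections import defaultdict

    sky_index = defaultdict(list)       # (ra_bin, dec_bin) -> archive segments touching that bin

    def sky_bin(ra_deg, dec_deg, bin_size_deg=1.0):
        return (int(ra_deg // bin_size_deg), int(dec_deg // bin_size_deg))

    def register_segment(segment_name, ra_deg, dec_deg):
        sky_index[sky_bin(ra_deg, dec_deg)].append(segment_name)

    def segments_covering(ra_deg, dec_deg):
        # answers the "been here before?" question without opening any archived segment
        return sky_index.get(sky_bin(ra_deg, dec_deg), [])

    register_segment("archive_2007_03.dat", ra_deg=201.4, dec_deg=18.2)
    print(segments_covering(201.9, 18.7))       # ['archive_2007_03.dat']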

I don't know if the following is related or not, but with the CA furloughs, it would make sense (cents?) for SETI to do what it takes to be more people-productive and reduce the time Matt & co. need to spend man-handling the project.
ID: 945184 · Report as offensive
Profile Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 945192 - Posted: 5 Nov 2009, 4:56:12 UTC

Once Again

So many things are intertwined. Each has its own merit.

If we want to pull issues out for discussion, that is fine; we can do that. I can point Matt and Eric at the thread(s), but I am realistic enough to know that unless something provides some magic insight, it will go unanswered. Yes, I know there are many sharp individuals here...

So while I was dismissive of a statement about donations (I was not here to promote donations, I only tried to let everyone know that the problem was acknowledged), it is because that type of donation drive would not solve the issue at hand. As mentioned, donations are needed for the Gigabit link to get up the hill from the point of presence at UCB; that is being held up by the study of how to make it climb the 1.5 mile hill from the main UCB campus to the Space Sciences Lab. As for the roughly 100K+ to create some kind of super backup capability... well, Seti is just trying to keep the doors open. That is what donations are for. So when you are discussing the video card purchase to make the most out of your credits, think about sending a few bucks to Seti to keep the doors open so that you can use it. Then we can discuss as many things as we like.

The discussion was originally about the BOINC database, which holds links to the WUs to be sent, the WUs in progress, and the returned results. It is MySQL, and it gets extremely bloated; the new SSD drives are now taking care of the spooling transaction logs, which is a big plus. For non-techies, those logs are used to help rebuild the database during a crash, and more.
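For the non-techies, a toy illustration in Python (not MySQL/InnoDB, and the operations are invented) of what those transaction logs buy you: every change is written to the log before it is applied, so after a crash the database can be rebuilt by replaying the log:

    log = []                            # the write-ahead log (the part that now lives on the SSDs)

    def apply(db, op, key, value=None):
        log.append((op, key, value))    # log first...
        if op == "put":
            db[key] = value             # ...then apply
        elif op == "delete":
            db.pop(key, None)

    db = {}
    apply(db, "put", "wu_1", "sent")
    apply(db, "put", "wu_2", "sent")
    apply(db, "delete", "wu_1")

    # Crash: the live copy is lost, but the log survives on disk.
    recovered = {}
    for op, key, value in log:
        if op == "put":
            recovered[key] = value
        elif op == "delete":
            recovered.pop(key, None)

    print(recovered == db)              # True: replaying the log reproduces the pre-crash state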

The science database is in Informix, which is a different issue (it was a big-ticket item when it was put in place). It has been discussed before how they could migrate away from Informix; part of the problem is that it holds the ten-plus years of Seti result history that NITPCKR needs.

Pending credits and cache size are still a different issue.

Changes to the server code to allow ATI and CUDA 2.3: yep, another issue.

Some of the more recent outages were caused by a machine or two that have been intertwined into the core Seti services over the years. That was because resources were needed and, at the time, it was the easy fix (no money/hardware to do it correctly). That once again falls to donations.

Then a RAIDed hard drive fails... Hmmmm, I won't belabor the donations thing again. Or the offline hours to reconfigure the RAID (can you say outage?).

Another staff member is needed to help Matt and crew. Tough times for all of "us."

I have visited the Seti Lab. I have seen Seti Classic (retired, and I got to touch what I put 47+ computer-years into). I have worked on the type of NAS that held the storage for many years. I helped chase the drives needed to keep it working. I have stood in the hall when the doors to the server closet were opened (yes, it is loud). I have seen the small space(s) that are shared (for many purposes). I have seen most of the working and defunct computers which filled every place they could be placed.

So after 30+ years in computers, it is easy to ask "why did they do that?" Having met many of the Seti staff and seen what is happening, I know that users' needs often cause workarounds that later have to be repaid. The Seti staff do care, even if sometimes it is not so obvious. Sometimes users see a server reboot in the middle of the night; I am happy when that happens...

Regards




Please consider a Donation to the Seti Project.

ID: 945192 · Report as offensive
ps2os2

Joined: 12 May 00
Posts: 11
Credit: 351,945
RAC: 0
United States
Message 949519 - Posted: 24 Nov 2009, 17:34:06 UTC

11/24/09

Still getting no work (see below).
Sat Nov 21 22:57:22 2009 Starting BOINC client version 6.6.36 for powerpc-apple-darwin
Sat Nov 21 22:57:22 2009 log flags: task, file_xfer, sched_ops
Sat Nov 21 22:57:22 2009 Libraries: libcurl/7.19.4 OpenSSL/0.9.7l zlib/1.2.3 c-ares/1.6.0
Sat Nov 21 22:57:22 2009 Data directory: /Library/Application Support/BOINC Data
Sat Nov 21 22:57:23 2009 Processor: 2 Power Macintosh Power Macintosh [Power Macintosh Model PowerMac7,3] [AltiVec]
Sat Nov 21 22:57:23 2009 Processor features:
Sat Nov 21 22:57:23 2009 OS: Darwin: 8.11.0
Sat Nov 21 22:57:23 2009 Memory: 6.00 GB physical, 125.98 GB virtual
Sat Nov 21 22:57:23 2009 Disk: 172.24 GB total, 125.73 GB free
Sat Nov 21 22:57:23 2009 Local time is UTC -6 hours
Sat Nov 21 22:57:23 2009 Can't load library libcudart
Sat Nov 21 22:57:23 2009 No coprocessors
Sat Nov 21 22:57:25 2009 Not using a proxy
Sat Nov 21 22:57:25 2009 rosetta@home URL: http://boinc.bakerlab.org/rosetta/; Computer ID: 1090433; location: (none); project prefs: default
Sat Nov 21 22:57:25 2009 SETI@home URL: http://setiathome.berkeley.edu/; Computer ID: 1102572; location: home; project prefs: default
Sat Nov 21 22:57:25 2009 SETI@home General prefs: from SETI@home (last modified 21-Jun-2005 12:08:23)
Sat Nov 21 22:57:25 2009 SETI@home Computer location: home
Sat Nov 21 22:57:25 2009 SETI@home General prefs: no separate prefs for home; using your defaults
Sat Nov 21 22:57:25 2009 Reading preferences override file
Sat Nov 21 22:57:25 2009 Preferences limit memory usage when active to 3072.00MB
Sat Nov 21 22:57:25 2009 Preferences limit memory usage when idle to 5529.60MB
Sat Nov 21 22:57:25 2009 Preferences limit disk usage to 86.12GB
Sat Nov 21 22:57:26 2009 SETI@home Restarting task 03mr07ab.26247.23232.8.10.144_1 using setiathome_enhanced version 603
Sat Nov 21 22:57:26 2009 SETI@home Restarting task 03mr07ab.2677.13978.15.10.222_0 using setiathome_enhanced version 603
Sun Nov 22 00:51:25 2009 SETI@home Computation for task 03mr07ab.26247.23232.8.10.144_1 finished
Sun Nov 22 00:51:25 2009 SETI@home Starting 03mr07ac.16750.20931.9.10.169_1
Sun Nov 22 00:51:26 2009 SETI@home Starting task 03mr07ac.16750.20931.9.10.169_1 using setiathome_enhanced version 603
Sun Nov 22 00:51:27 2009 SETI@home Started upload of 03mr07ab.26247.23232.8.10.144_1_0
Sun Nov 22 00:51:31 2009 SETI@home Finished upload of 03mr07ab.26247.23232.8.10.144_1_0
Sun Nov 22 03:44:53 2009 rosetta@home Sending scheduler request: To fetch work.
Sun Nov 22 03:44:53 2009 rosetta@home Requesting new tasks
Sun Nov 22 03:44:58 2009 rosetta@home Scheduler request completed: got 0 new tasks
Sun Nov 22 03:44:58 2009 rosetta@home Message from server: No work sent
Sun Nov 22 03:44:58 2009 rosetta@home Message from server: (there was work for other platforms)
Sun Nov 22 05:54:05 2009 SETI@home Sending scheduler request: To fetch work.
Sun Nov 22 05:54:05 2009 SETI@home Reporting 2 completed tasks, requesting new tasks
Sun Nov 22 05:54:10 2009 SETI@home Scheduler request completed: got 1 new tasks
Sun Nov 22 05:54:12 2009 SETI@home Started download of 03mr07ad.8917.1708.16.10.183
Sun Nov 22 05:54:15 2009 SETI@home Finished download of 03mr07ad.8917.1708.16.10.183
Sun Nov 22 08:22:36 2009 SETI@home Sending scheduler request: To fetch work.
Sun Nov 22 08:22:36 2009 SETI@home Requesting new tasks
Sun Nov 22 08:22:41 2009 SETI@home Scheduler request completed: got 1 new tasks
Sun Nov 22 08:22:43 2009 SETI@home Started download of 03mr07ad.8667.16432.14.10.253
Sun Nov 22 08:22:46 2009 SETI@home Finished download of 03mr07ad.8667.16432.14.10.253
Sun Nov 22 12:44:23 2009 SETI@home Computation for task 03mr07ab.2677.13978.15.10.222_0 finished
Sun Nov 22 12:44:23 2009 SETI@home Starting 03mr07ad.8917.1708.16.10.183_0
Sun Nov 22 12:44:23 2009 SETI@home Starting task 03mr07ad.8917.1708.16.10.183_0 using setiathome_enhanced version 603
Sun Nov 22 12:44:25 2009 SETI@home Started upload of 03mr07ab.2677.13978.15.10.222_0_0
Sun Nov 22 12:44:29 2009 SETI@home Finished upload of 03mr07ab.2677.13978.15.10.222_0_0
Sun Nov 22 15:27:31 2009 SETI@home Computation for task 03mr07ac.16750.20931.9.10.169_1 finished
Sun Nov 22 15:27:31 2009 SETI@home Starting 03mr07ad.8667.16432.14.10.253_1
Sun Nov 22 15:27:31 2009 SETI@home Starting task 03mr07ad.8667.16432.14.10.253_1 using setiathome_enhanced version 603
Sun Nov 22 15:27:33 2009 SETI@home Started upload of 03mr07ac.16750.20931.9.10.169_1_0
Sun Nov 22 15:27:37 2009 SETI@home Finished upload of 03mr07ac.16750.20931.9.10.169_1_0
Sun Nov 22 18:47:03 2009 SETI@home Sending scheduler request: To fetch work.
Sun Nov 22 18:47:03 2009 SETI@home Reporting 2 completed tasks, requesting new tasks
Sun Nov 22 18:47:04 2009 A new version of BOINC (6.10.17) is available for your computer
Sun Nov 22 18:47:04 2009 Visit http://boinc.berkeley.edu/download.php to get it.
Sun Nov 22 18:47:09 2009 SETI@home Scheduler request completed: got 1 new tasks
Sun Nov 22 18:47:11 2009 SETI@home Started download of 03mr07af.5866.15614.10.10.60
Sun Nov 22 18:47:13 2009 SETI@home Finished download of 03mr07af.5866.15614.10.10.60
Sun Nov 22 19:55:54 2009 SETI@home Sending scheduler request: To fetch work.
Sun Nov 22 19:55:54 2009 SETI@home Requesting new tasks
Sun Nov 22 19:56:00 2009 SETI@home Scheduler request completed: got 1 new tasks
Sun Nov 22 19:56:02 2009 SETI@home Started download of 03mr07af.5866.162881.10.10.202
Sun Nov 22 19:56:05 2009 SETI@home Finished download of 03mr07af.5866.162881.10.10.202
Mon Nov 23 01:23:11 2009 SETI@home Computation for task 03mr07ad.8667.16432.14.10.253_1 finished
Mon Nov 23 01:23:11 2009 SETI@home Starting 03mr07af.5866.15614.10.10.60_0
Mon Nov 23 01:23:11 2009 SETI@home Starting task 03mr07af.5866.15614.10.10.60_0 using setiathome_enhanced version 603
Mon Nov 23 01:23:13 2009 SETI@home Started upload of 03mr07ad.8667.16432.14.10.253_1_0
Mon Nov 23 01:23:17 2009 SETI@home Finished upload of 03mr07ad.8667.16432.14.10.253_1_0
Mon Nov 23 03:44:59 2009 rosetta@home Sending scheduler request: To fetch work.
Mon Nov 23 03:44:59 2009 rosetta@home Requesting new tasks
Mon Nov 23 03:45:04 2009 rosetta@home Scheduler request completed: got 0 new tasks
Mon Nov 23 03:45:04 2009 rosetta@home Message from server: No work sent
Mon Nov 23 03:45:04 2009 rosetta@home Message from server: (there was work for other platforms)
Mon Nov 23 04:06:51 2009 SETI@home Computation for task 03mr07ad.8917.1708.16.10.183_0 finished
Mon Nov 23 04:06:52 2009 SETI@home Starting 03mr07af.5866.162881.10.10.202_1
Mon Nov 23 04:06:52 2009 SETI@home Starting task 03mr07af.5866.162881.10.10.202_1 using setiathome_enhanced version 603
Mon Nov 23 04:06:54 2009 SETI@home Started upload of 03mr07ad.8917.1708.16.10.183_0_0
Mon Nov 23 04:06:58 2009 SETI@home Finished upload of 03mr07ad.8917.1708.16.10.183_0_0
Mon Nov 23 06:13:54 2009 SETI@home Sending scheduler request: To fetch work.
Mon Nov 23 06:13:54 2009 SETI@home Reporting 2 completed tasks, requesting new tasks
Mon Nov 23 06:14:00 2009 SETI@home Scheduler request completed: got 1 new tasks
Mon Nov 23 06:14:02 2009 SETI@home Started download of 11oc06aa.18337.4162.5.10.51
Mon Nov 23 06:14:05 2009 SETI@home Finished download of 11oc06aa.18337.4162.5.10.51
Mon Nov 23 07:49:59 2009 SETI@home Sending scheduler request: To fetch work.
Mon Nov 23 07:49:59 2009 SETI@home Requesting new tasks
Mon Nov 23 07:50:04 2009 SETI@home Scheduler request completed: got 1 new tasks
Mon Nov 23 07:50:06 2009 SETI@home Started download of 03mr07ag.4537.13978.10.10.185
Mon Nov 23 07:50:09 2009 SETI@home Finished download of 03mr07ag.4537.13978.10.10.185
Mon Nov 23 10:19:13 2009 SETI@home Computation for task 03mr07af.5866.15614.10.10.60_0 finished
Mon Nov 23 10:19:13 2009 SETI@home Starting 11oc06aa.18337.4162.5.10.51_1
Mon Nov 23 10:19:14 2009 SETI@home Starting task 11oc06aa.18337.4162.5.10.51_1 using setiathome_enhanced version 603
Mon Nov 23 10:19:16 2009 SETI@home Started upload of 03mr07af.5866.15614.10.10.60_0_0
Mon Nov 23 10:19:21 2009 SETI@home Finished upload of 03mr07af.5866.15614.10.10.60_0_0
Mon Nov 23 12:21:15 2009 SETI@home Sending scheduler request: To fetch work.
Mon Nov 23 12:21:15 2009 SETI@home Reporting 1 completed tasks, requesting new tasks
Mon Nov 23 12:21:20 2009 SETI@home Scheduler request completed: got 1 new tasks
Mon Nov 23 12:21:22 2009 SETI@home Started download of 01se09ad.14741.17245.10.10.214
Mon Nov 23 12:21:25 2009 SETI@home Finished download of 01se09ad.14741.17245.10.10.214
Mon Nov 23 15:05:54 2009 SETI@home Sending scheduler request: To fetch work.
Mon Nov 23 15:05:54 2009 SETI@home Requesting new tasks
Mon Nov 23 15:05:59 2009 SETI@home Scheduler request completed: got 1 new tasks
Mon Nov 23 15:06:01 2009 SETI@home Started download of 11oc06aa.20720.43926.7.10.167
Mon Nov 23 15:06:04 2009 SETI@home Finished download of 11oc06aa.20720.43926.7.10.167
Mon Nov 23 17:02:23 2009 SETI@home Computation for task 03mr07af.5866.162881.10.10.202_1 finished
Mon Nov 23 17:02:23 2009 SETI@home Starting 03mr07ag.4537.13978.10.10.185_1
Mon Nov 23 17:02:23 2009 SETI@home Starting task 03mr07ag.4537.13978.10.10.185_1 using setiathome_enhanced version 603
Mon Nov 23 17:02:25 2009 SETI@home Started upload of 03mr07af.5866.162881.10.10.202_1_0
Mon Nov 23 17:02:29 2009 SETI@home Finished upload of 03mr07af.5866.162881.10.10.202_1_0
Mon Nov 23 19:19:29 2009 SETI@home Computation for task 11oc06aa.18337.4162.5.10.51_1 finished
Mon Nov 23 19:19:29 2009 SETI@home Starting 01se09ad.14741.17245.10.10.214_2
Mon Nov 23 19:19:29 2009 SETI@home Starting task 01se09ad.14741.17245.10.10.214_2 using setiathome_enhanced version 603
Mon Nov 23 19:19:31 2009 SETI@home Started upload of 11oc06aa.18337.4162.5.10.51_1_0
Mon Nov 23 19:19:35 2009 SETI@home Finished upload of 11oc06aa.18337.4162.5.10.51_1_0
Mon Nov 23 20:08:37 2009 SETI@home Computation for task 03mr07ag.4537.13978.10.10.185_1 finished
Mon Nov 23 20:08:37 2009 SETI@home Starting 11oc06aa.20720.43926.7.10.167_1
Mon Nov 23 20:08:38 2009 SETI@home Starting task 11oc06aa.20720.43926.7.10.167_1 using setiathome_enhanced version 603
Mon Nov 23 20:08:40 2009 SETI@home Started upload of 03mr07ag.4537.13978.10.10.185_1_0
Mon Nov 23 20:08:45 2009 SETI@home Finished upload of 03mr07ag.4537.13978.10.10.185_1_0
Tue Nov 24 02:49:26 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 02:49:26 2009 SETI@home Reporting 3 completed tasks, requesting new tasks
Tue Nov 24 02:49:31 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 02:49:31 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 02:50:47 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 02:50:47 2009 SETI@home Requesting new tasks
Tue Nov 24 02:50:52 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 02:50:52 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 02:53:08 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 02:53:08 2009 SETI@home Requesting new tasks
Tue Nov 24 02:53:13 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 02:53:13 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 02:55:29 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 02:55:29 2009 SETI@home Requesting new tasks
Tue Nov 24 02:55:34 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 02:55:34 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 02:58:51 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 02:58:51 2009 SETI@home Requesting new tasks
Tue Nov 24 02:58:56 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 02:58:56 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 03:06:15 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 03:06:15 2009 SETI@home Requesting new tasks
Tue Nov 24 03:06:20 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 03:06:20 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 03:35:50 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 03:35:50 2009 SETI@home Requesting new tasks
Tue Nov 24 03:35:55 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 03:35:55 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 03:45:05 2009 rosetta@home Sending scheduler request: To fetch work.
Tue Nov 24 03:45:05 2009 rosetta@home Requesting new tasks
Tue Nov 24 03:45:10 2009 rosetta@home Scheduler request completed: got 0 new tasks
Tue Nov 24 04:05:25 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 04:05:25 2009 SETI@home Requesting new tasks
Tue Nov 24 04:05:30 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 04:05:30 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 05:19:21 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 05:19:21 2009 SETI@home Requesting new tasks
Tue Nov 24 05:19:26 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 05:19:26 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 05:46:54 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 05:46:54 2009 SETI@home Requesting new tasks
Tue Nov 24 05:47:00 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 05:47:00 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 06:02:22 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 06:02:22 2009 SETI@home Requesting new tasks
Tue Nov 24 06:02:27 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 06:02:27 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:19:02 2009 SETI@home Computation for task 01se09ad.14741.17245.10.10.214_2 finished
Tue Nov 24 09:19:04 2009 SETI@home Started upload of 01se09ad.14741.17245.10.10.214_2_0
Tue Nov 24 09:19:06 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:19:06 2009 SETI@home Requesting new tasks
Tue Nov 24 09:19:09 2009 SETI@home Finished upload of 01se09ad.14741.17245.10.10.214_2_0
Tue Nov 24 09:19:11 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:19:11 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:20:27 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:20:27 2009 SETI@home Reporting 1 completed tasks, requesting new tasks
Tue Nov 24 09:20:32 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:20:32 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:21:48 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:21:48 2009 SETI@home Requesting new tasks
Tue Nov 24 09:21:53 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:21:53 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:25:09 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:25:09 2009 SETI@home Requesting new tasks
Tue Nov 24 09:25:14 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:25:14 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:25:30 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:25:30 2009 SETI@home Requesting new tasks
Tue Nov 24 09:25:35 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:25:35 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:35:55 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:35:55 2009 SETI@home Requesting new tasks
Tue Nov 24 09:36:00 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:36:00 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:58:15 2009 SETI@home Computation for task 11oc06aa.20720.43926.7.10.167_1 finished
Tue Nov 24 09:58:15 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:58:15 2009 SETI@home Requesting new tasks
Tue Nov 24 09:58:17 2009 SETI@home Started upload of 11oc06aa.20720.43926.7.10.167_1_0
Tue Nov 24 09:58:20 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:58:20 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 09:58:22 2009 SETI@home Finished upload of 11oc06aa.20720.43926.7.10.167_1_0
Tue Nov 24 09:59:36 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 09:59:36 2009 SETI@home Reporting 1 completed tasks, requesting new tasks
Tue Nov 24 09:59:41 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 09:59:41 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 10:00:57 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 10:00:57 2009 SETI@home Requesting new tasks
Tue Nov 24 10:01:02 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 10:01:02 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 10:03:18 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 10:03:18 2009 SETI@home Requesting new tasks
Tue Nov 24 10:03:23 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 10:03:23 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 10:04:39 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 10:04:39 2009 SETI@home Requesting new tasks
Tue Nov 24 10:04:44 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 10:04:44 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 10:10:01 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 10:10:01 2009 SETI@home Requesting new tasks
Tue Nov 24 10:10:07 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 10:10:07 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 10:31:31 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 10:31:31 2009 SETI@home Requesting new tasks
Tue Nov 24 10:31:36 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 10:31:36 2009 SETI@home Message from server: (Project has no jobs available)
Tue Nov 24 10:36:54 2009 SETI@home Sending scheduler request: To fetch work.
Tue Nov 24 10:36:54 2009 SETI@home Requesting new tasks
Tue Nov 24 10:36:59 2009 SETI@home Scheduler request completed: got 0 new tasks
Tue Nov 24 10:36:59 2009 SETI@home Message from server: (Project has no jobs available)

ID: 949519 · Report as offensive
Profile Gundolf Jahn

Joined: 19 Sep 00
Posts: 3184
Credit: 446,358
RAC: 0
Germany
Message 949536 - Posted: 24 Nov 2009, 23:33:34 UTC - in response to Message 949520.  

We believe you, even without the logs. And if you read the technical news, you'll know the reason, too.

Regards,
Gundolf
Computers aren't everything in life. (Just a little joke.)

SETI@home classic workunits 3,758
SETI@home classic CPU time 66,520 hours
ID: 949536 · Report as offensive