Help!! I can't get any work

Hanford WA4LZC
Joined: 15 May 99
Posts: 38
Credit: 10,129,207
RAC: 0
United States
Message 959306 - Posted: 29 Dec 2009, 17:22:02 UTC

I can't get any work....

12/29/2009 12:13:17 Starting BOINC client version 6.10.18 for windows_intelx86
12/29/2009 12:13:17 log flags: file_xfer, sched_ops, task
12/29/2009 12:13:17 Libraries: libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
12/29/2009 12:13:17 Data directory: C:\Documents and Settings\All Users\Application Data\BOINC
12/29/2009 12:13:17 Running under account Hanford R Wright
12/29/2009 12:13:17 Processor: 4 AuthenticAMD AMD Phenom(tm) II X4 940 Processor [x86 Family 16 Model 4 Stepping 2]
12/29/2009 12:13:17 Processor: 512.00 KB cache
12/29/2009 12:13:17 Processor features: fpu tsc pae nx sse sse2 3dnow mmx
12/29/2009 12:13:17 OS: Microsoft Windows XP: Professional x86 Edition, Service Pack 3, (05.01.2600.00)
12/29/2009 12:13:17 Memory: 3.25 GB physical, 5.09 GB virtual
12/29/2009 12:13:17 Disk: 465.76 GB total, 186.97 GB free
12/29/2009 12:13:17 Local time is UTC -5 hours
12/29/2009 12:13:18 NVIDIA GPU 0: GeForce 9500 GT (driver version 19562, CUDA version 3000, compute capability 1.1, 1024MB, 90 GFLOPS peak)
12/29/2009 12:13:18 SETI@home Found app_info.xml; using anonymous platform
12/29/2009 12:13:18 Not using a proxy
12/29/2009 12:13:18 SETI@home URL http://setiathome.berkeley.edu/; Computer ID 4295038; resource share 100
12/29/2009 12:13:18 SETI@home General prefs: from SETI@home (last modified 27-Dec-2009 03:39:46)
12/29/2009 12:13:18 SETI@home Host location: none
12/29/2009 12:13:18 SETI@home General prefs: using your defaults
12/29/2009 12:13:18 Reading preferences override file
12/29/2009 12:13:18 Preferences limit memory usage when active to 2494.81MB
12/29/2009 12:13:18 Preferences limit memory usage when idle to 3259.89MB
12/29/2009 12:13:18 Preferences limit disk usage to 100.00GB
12/29/2009 12:13:51 SETI@home update requested by user
12/29/2009 12:13:53 SETI@home Sending scheduler request: Requested by user.
12/29/2009 12:13:53 SETI@home Requesting new tasks for CPU and GPU
12/29/2009 12:13:58 SETI@home Scheduler request completed: got 0 new tasks
12/29/2009 12:13:58 SETI@home Message from server: (Project has no jobs available)
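
For anyone who would rather retry from a console than click Update in the BOINC Manager, the same scheduler request can be sent with the bundled boinccmd tool. A minimal sketch, assuming a default installation with boinccmd on the PATH:

# ask the scheduler for work again (same as the manager's Update button)
boinccmd --project http://setiathome.berkeley.edu/ update

# print the most recent event-log lines to see the server's reply
boinccmd --get_messages

Either way, the reply will stay "Project has no jobs available" until the server-side problem described below is sorted out.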

Ianab
Volunteer tester

Joined: 11 Jun 08
Posts: 732
Credit: 20,635,586
RAC: 5
New Zealand
Message 959309 - Posted: 29 Dec 2009, 17:26:41 UTC

From the Main home page...

We are now recovering from a planned power outage, during which all services were offline. The workunit storage machine is undergoing a RAID resync and no new workunits will be generated until this has finished. The outage was for power upgrades on campus, and a similar outage will happen again this upcoming Sunday (01/03/2010) at 12 noon.


Nothing that we can do about it.

Waiting is....

Ian
Gundolf Jahn
Joined: 19 Sep 00
Posts: 3184
Credit: 446,358
RAC: 0
Germany
Message 959310 - Posted: 29 Dec 2009, 17:29:29 UTC - in response to Message 959306.  
Last modified: 29 Dec 2009, 17:31:11 UTC

I can't get any work...

So it's the same for everyone...

Message from server: (Project has no jobs available)

How could someone on the forum help with that?

That said, there's new info on the SETI homepage [edit]as Ianab mentioned while I typed :-)[/edit]

Regards,
Gundolf
Computers aren't everything in life. (Just a little joke.)

SETI@home classic workunits 3,758
SETI@home classic CPU time 66,520 hours
Hanford WA4LZC
Joined: 15 May 99
Posts: 38
Credit: 10,129,207
RAC: 0
United States
Message 959320 - Posted: 29 Dec 2009, 17:57:37 UTC - in response to Message 959309.  

Thanks Ianab for the info.....
zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65735
Credit: 55,293,173
RAC: 49
United States
Message 959322 - Posted: 29 Dec 2009, 18:02:01 UTC

[yoda]All that can be done, Waiting is.[/yoda] :D
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 959334 - Posted: 29 Dec 2009, 18:39:44 UTC

As everyone is waiting for the workunit storage server to get healthy again, I have made this thread sticky so that everyone can see it.

Quote from the Main Page

Scheduled Power Outage - update
We are now recovering from a planned power outage, during which all services were offline. The workunit storage machine is undergoing a RAID resync and no new workunits will be generated until this has finished. The outage was for power upgrades on campus, and a similar outage will happen again this upcoming Sunday (01/03/2010) at 12 noon (Pacific time). 29 Dec 2009 4:26:16 UTC


We Wait

Regards

Please consider a donation to the SETI project.

zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65735
Credit: 55,293,173
RAC: 49
United States
Message 959362 - Posted: 29 Dec 2009, 20:50:36 UTC - in response to Message 959334.  

