Panic Mode On (100) Server Problems?

Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1853
Credit: 268,616,081
RAC: 1,349
United States
Message 1730936 - Posted: 2 Oct 2015, 0:49:56 UTC - in response to Message 1730905.  

According to David the routing problem is fixed.
Or maybe that's just the routing to the BOINC Domain, as I still can't report my (now) 38 results.

The BOINC route is OK. Still no go on reporting or downloading tasks. setiboinc.ssl.berkeley.edu is still lost in space ...
ID: 1730936 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1730944 - Posted: 2 Oct 2015, 1:56:37 UTC
Last modified: 2 Oct 2015, 1:58:31 UTC

I'm still having to hit the manual Update button more often than not, but it is connecting to the scheduler. The Win 8.1 host got down to the last couple of APs, so I rebooted into a new Ubuntu system that only had 3 tasks remaining. It downloaded new tasks and continues to report completed tasks and download new ones. The traceroute looks the same as it did with OSX and Win 8.1; no problem finding the scheduler at setiboinc.ssl.berkeley.edu. The addresses in client_state.xml trace fine; even http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi opens an XML file.
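For anyone who would rather script that check than click the link, a minimal Python sketch along these lines should do the same thing; it only assumes the scheduler URL quoted above, and any timeout or HTTP error means the CGI isn't reachable from your end:

import urllib.request

SCHEDULER_CGI = "http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi"  # URL from the post above

try:
    with urllib.request.urlopen(SCHEDULER_CGI, timeout=15) as resp:
        status = resp.status
        body = resp.read(2048).decode("utf-8", errors="replace")
    print("HTTP", status)
    print(body[:200])  # the scheduler answers with a short XML document
except Exception as err:
    print("Scheduler CGI not reachable:", err)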

I'm not sure what will happen when I go to bed and stop hitting the Update button though...
ID: 1730944 · Report as offensive
Profile Zombu2
Volunteer tester

Joined: 24 Feb 01
Posts: 1615
Credit: 49,315,423
RAC: 0
United States
Message 1730949 - Posted: 2 Oct 2015, 2:08:49 UTC

Still not working here, even pushing the Update button like there's no tomorrow.
I came down with a bad case of i don't give a crap
ID: 1730949 · Report as offensive
OTS
Volunteer tester

Joined: 6 Jan 08
Posts: 369
Credit: 20,533,537
RAC: 0
United States
Message 1730965 - Posted: 2 Oct 2015, 2:36:25 UTC - in response to Message 1730949.  

Still not working here, even pushing the Update button like there's no tomorrow.


Same here. I would cobble together a bash script to update every 10 seconds if I thought it would help, but doing it many times manually has convinced me that it won't. Even killing and restarting BOINC, which worked for les-helen-day, didn't help. 119 APs uploaded and not one acknowledgement, so no new WUs.
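For what it's worth, the script itself is trivial. A rough sketch in Python rather than bash, assuming boinccmd is on the PATH and that the usual SETI@home project URL applies (add --passwd with your GUI RPC password if your client requires it); as noted, it probably won't help while the scheduler itself is unreachable:

import subprocess
import time

PROJECT_URL = "http://setiathome.berkeley.edu/"  # assumed SETI@home project URL

# Ask the local BOINC client to contact the scheduler every 10 seconds.
while True:
    result = subprocess.run(
        ["boinccmd", "--project", PROJECT_URL, "update"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip() or result.stderr.strip() or "update requested")
    time.sleep(10)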

The good news is that at least no new APs are being offered for download. That would really break my heart :).
ID: 1730965 · Report as offensive
Profile Oz
Joined: 6 Jun 99
Posts: 233
Credit: 200,655,462
RAC: 212
United States
Message 1730966 - Posted: 2 Oct 2015, 2:40:09 UTC

If you think it will help:

itcsshelp@berkeley.edu
510-664-9000, ext. 1
Member of the 20 Year Club



ID: 1730966 · Report as offensive
Profile Jeff Buck Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1730969 - Posted: 2 Oct 2015, 3:03:14 UTC

It seems like the folks who can't connect at all are mostly those who have built up large numbers of tasks waiting to report. I suspect that means that the scheduler requests are large and perhaps end up getting fragmented on the way to Berkeley, increasing the likelihood of failure. Most of my requests have been to report fewer than 5 tasks at a time and, although about half of those fail, the next attempt usually succeeds.

Looking at some old threads regarding connection problems, I noticed that there's an option available in cc_config.xml for <max_tasks_reported>xx</max_tasks_reported> which essentially cuts the scheduler requests into smaller chunks. Perhaps that's something that would help here. Or perhaps not (but I think it might be worth a try). ;^)
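For anyone who wants to try it, the option goes in cc_config.xml in the BOINC data directory, inside the usual <options> section; something like the sketch below (the value 5 is just an example), after which the client needs to re-read its config files or be restarted:

<cc_config>
   <options>
      <max_tasks_reported>5</max_tasks_reported>
   </options>
</cc_config>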
ID: 1730969 · Report as offensive
Profile Oz
Joined: 6 Jun 99
Posts: 233
Credit: 200,655,462
RAC: 212
United States
Message 1730970 - Posted: 2 Oct 2015, 3:10:10 UTC - in response to Message 1730969.  
Last modified: 2 Oct 2015, 3:22:28 UTC

It seems like the folks who can't connect at all are mostly those who have built up large numbers of tasks waiting to report. I suspect that means that the scheduler requests are large and perhaps end up getting fragmented on the way to Berkeley, increasing the likelihood of failure. Most of my requests have been to report fewer than 5 tasks at a time and, although about half of those fail, the next attempt usually succeeds.

Looking at some old threads regarding connection problems, I noticed that there's an option available in cc_config.xml for <max_tasks_reported>xx</max_tasks_reported> which essentially cuts the scheduler requests into smaller chunks. Perhaps that's something that would help here. Or perhaps not (but I think it might be worth a try). ;^)


It may help some folks, but I am sitting on a laptop with ONE task to report - it has not managed to connect since 30/9/15 at 14:05 UTC...
I don't think Berkeley IT is aware of the problem, as there is no mention of it on their Service Status page (http://systemstatus.berkeley.edu/), which begins with:

The page will be updated whenever there is a change in system status that will affect users for more than 30 minutes. If you need assistance with a system or network problem, call Campus Shared Services IT at 510-664-9000 Option 1, 1, 1 - All Other Technology Requests.
Member of the 20 Year Club



ID: 1730970 · Report as offensive
Profile Zombu2
Volunteer tester

Joined: 24 Feb 01
Posts: 1615
Credit: 49,315,423
RAC: 0
United States
Message 1730971 - Posted: 2 Oct 2015, 3:12:59 UTC - in response to Message 1730969.  

It seems like the folks who can't connect at all are mostly those who have built up large numbers of tasks waiting to report. I suspect that means that the scheduler requests are large and perhaps end up getting fragmented on the way to Berkeley, increasing the likelihood of failure. Most of my requests have been to report fewer than 5 tasks at a time and, although about half of those fail, the next attempt usually succeeds.

Looking at some old threads regarding connection problems, I noticed that there's an option available in cc_config.xml for <max_tasks_reported>xx</max_tasks_reported> which essentially cuts the scheduler requests into smaller chunks. Perhaps that's something that would help here. Or perhaps not (but I think it might be worth a try). ;^)



Yep, I've got about 600 WUs waiting to upload from all the machines.

Funny enough, one of my machines has no issue at all; it's happily crunching away and reporting ... same WAN IP.
I came down with a bad case of i don't give a crap
ID: 1730971 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1730974 - Posted: 2 Oct 2015, 3:19:40 UTC

One of the old tricks for reporting a large number was to set BOINC Manager to No new tasks in the Projects tab and then hit the Update button around a dozen times. If that doesn't work, I suppose it's hopeless.
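The same trick can also be driven from the command line. A small Python sketch using boinccmd (again assuming it is on the PATH and that the usual SETI@home project URL applies):

import subprocess
import time

PROJECT_URL = "http://setiathome.berkeley.edu/"  # assumed project URL

def boinccmd(*args):
    subprocess.run(["boinccmd", *args], check=False)

boinccmd("--project", PROJECT_URL, "nomorework")     # the "No new tasks" setting
for _ in range(12):                                  # roughly "a dozen" updates
    boinccmd("--project", PROJECT_URL, "update")
    time.sleep(10)                                   # give each scheduler contact a moment
boinccmd("--project", PROJECT_URL, "allowmorework")  # turn new work back on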
ID: 1730974 · Report as offensive
OTS
Volunteer tester

Joined: 6 Jan 08
Posts: 369
Credit: 20,533,537
RAC: 0
United States
Message 1730975 - Posted: 2 Oct 2015, 3:20:36 UTC - in response to Message 1730969.  

It seems like the folks who can't connect at all are mostly those who have built up large numbers of tasks waiting to report. I suspect that means that the scheduler requests are large and perhaps end up getting fragmented on the way to Berkeley, increasing the likelihood of failure. Most of my requests have been to report fewer than 5 tasks at a time and, although about half of those fail, the next attempt usually succeeds.

Looking at some old threads regarding connection problems, I noticed that there's an option available in cc_config.xml for <max_tasks_reported>xx</max_tasks_reported> which essentially cuts the scheduler requests into smaller chunks. Perhaps that's something that would help here. Or perhaps not (but I think it might be worth a try). ;^)


That was a very good thought and well worth trying, but it doesn't seem to work for me, even when set to report 1 task and updating many times.

The results are all similar to this:

01-Oct-2015 23:15:27 [SETI@home] work fetch resumed by user
01-Oct-2015 23:15:29 [SETI@home] update requested by user
01-Oct-2015 23:15:31 [SETI@home] [sched_op] Starting scheduler request
01-Oct-2015 23:15:31 [SETI@home] Sending scheduler request: Requested by user.
01-Oct-2015 23:15:31 [SETI@home] Reporting 1 completed tasks
01-Oct-2015 23:15:31 [SETI@home] Requesting new tasks for CPU and NVIDIA
01-Oct-2015 23:15:31 [SETI@home] [sched_op] CPU work request: 3257848.37 seconds; 0.00 devices
01-Oct-2015 23:15:31 [SETI@home] [sched_op] NVIDIA work request: 500067.24 seconds; 0.00 devices
01-Oct-2015 23:15:35 [---] Project communication failed: attempting access to reference site
01-Oct-2015 23:15:35 [SETI@home] Scheduler request failed: Couldn't connect to server
01-Oct-2015 23:15:35 [SETI@home] [sched_op] Deferring communication for 2 hr 25 min 18 sec
01-Oct-2015 23:15:35 [SETI@home] [sched_op] Reason: Scheduler request failed
ID: 1730975 · Report as offensive
OTS
Volunteer tester

Joined: 6 Jan 08
Posts: 369
Credit: 20,533,537
RAC: 0
United States
Message 1730977 - Posted: 2 Oct 2015, 3:28:28 UTC - in response to Message 1730971.  
Last modified: 2 Oct 2015, 3:41:10 UTC




Yep, I've got about 600 WUs waiting to upload from all the machines.

Funny enough, one of my machines has no issue at all; it's happily crunching away and reporting ... same WAN IP.


You have two machines on a LAN behind the same WAN IP address, and one works and one doesn't. Is that correct? That really would be strange.

Edit: If that is the case, I would be looking at the configs and anything else I could think of to determine why one works and one doesn't.
ID: 1730977 · Report as offensive
OTS
Volunteer tester

Joined: 6 Jan 08
Posts: 369
Credit: 20,533,537
RAC: 0
United States
Message 1730978 - Posted: 2 Oct 2015, 3:34:08 UTC - in response to Message 1730974.  

One of the old tricks for reporting a large number was to set BOINC Manager to No new tasks in the Projects tab and then hit the Update button around a dozen times. If that doesn't work, I suppose it's hopeless.



Another good thought, but alas.
01-Oct-2015 23:31:27 [SETI@home] work fetch suspended by user
01-Oct-2015 23:31:29 [SETI@home] update requested by user
01-Oct-2015 23:31:31 [SETI@home] [sched_op] Starting scheduler request
01-Oct-2015 23:31:31 [SETI@home] Sending scheduler request: Requested by user.
01-Oct-2015 23:31:31 [SETI@home] Reporting 1 completed tasks
01-Oct-2015 23:31:31 [SETI@home] Not requesting tasks: scheduler RPC backoff
01-Oct-2015 23:31:31 [SETI@home] [sched_op] CPU work request: 0.00 seconds; 0.00 devices
01-Oct-2015 23:31:31 [SETI@home] [sched_op] NVIDIA work request: 0.00 seconds; 0.00 devices
01-Oct-2015 23:31:34 [---] Project communication failed: attempting access to reference site
01-Oct-2015 23:31:34 [SETI@home] Scheduler request failed: Couldn't connect to server
01-Oct-2015 23:31:34 [SETI@home] [sched_op] Deferring communication for 27 min 8 sec
01-Oct-2015 23:31:34 [SETI@home] [sched_op] Reason: Scheduler request failed
01-Oct-2015 23:31:36 [---] Internet access OK - project servers may be temporarily down.
ID: 1730978 · Report as offensive
Profile Jeff Buck Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1730979 - Posted: 2 Oct 2015, 3:34:34 UTC - in response to Message 1730975.  
Last modified: 2 Oct 2015, 3:39:02 UTC

It seems like the folks who can't connect at all are mostly those who have built up large numbers of tasks waiting to report. I suspect that means that the scheduler requests are large and perhaps end up getting fragmented on the way to Berkeley, increasing the likelihood of failure. Most of my requests have been to report fewer than 5 tasks at a time and, although about half of those fail, the next attempt usually succeeds.

Looking at some old threads regarding connection problems, I noticed that there's an option available in cc_config.xml for <max_tasks_reported>xx</max_tasks_reported> which essentially cuts the scheduler requests into smaller chunks. Perhaps that's something that would help here. Or perhaps not (but I think it might be worth a try). ;^)


That was a very good thought and well worth trying, but it doesn't seem to work for me, even when set to report 1 task and updating many times.


Darn! I was trying to puzzle out what the commonality might be that separates those who can't connect at all from those who are having at least some modestly consistent success. Richard kind of shot down my thoughts about using TCP Optimizer earlier, and perhaps it's not the size of the scheduler request, either, based on your results and Oz's post. Oh, well, my machines are still plugging away. A scheduler request on my daily driver failed about an hour and a half ago, then the next one succeeded less than 2 minutes later.

EDIT: Just had another successful request a couple minutes ago, reporting 2 completed tasks and downloading 2 new ones.
ID: 1730979 · Report as offensive
Profile Jeff Buck Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1730982 - Posted: 2 Oct 2015, 3:37:17 UTC - in response to Message 1730971.  

Yep, I've got about 600 WUs waiting to upload from all the machines.

Funny enough, one of my machines has no issue at all; it's happily crunching away and reporting ... same WAN IP.

For the one that's successfully reporting, about how many tasks is it reporting in each scheduler request? Also, do you happen to know if the machines have different MTU values?
ID: 1730982 · Report as offensive
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1853
Credit: 268,616,081
RAC: 1,349
United States
Message 1730985 - Posted: 2 Oct 2015, 3:42:50 UTC - in response to Message 1730936.  

According to David the routing problem is fixed.
Or maybe that's just the routing to the BOINC Domain, as I still can't report my (now) 38 results.

The BOINC route is OK. Still no go on reporting or downloading tasks. setiboinc.ssl.berkeley.edu is still lost in space ...

boinc.berkeley.edu is again unreachable, at least to me:

et3-48.inr-311-ewdc.Berkeley.EDU [128.32.0.101] reports: Destination host unreachable.
ID: 1730985 · Report as offensive
OTS
Volunteer tester

Joined: 6 Jan 08
Posts: 369
Credit: 20,533,537
RAC: 0
United States
Message 1730987 - Posted: 2 Oct 2015, 3:53:30 UTC - in response to Message 1730982.  

Yep, I've got about 600 WUs waiting to upload from all the machines.

Funny enough, one of my machines has no issue at all; it's happily crunching away and reporting ... same WAN IP.

For the one that's successfully reporting, about how many tasks is it reporting in each scheduler request? Also, do you happen to know if the machines have different MTU values?


If changing the MTU is a possible cure, I can tell you 1500 is one value that is not working for me - and now all my APs are gone and I have only a few MBs left :(.
ID: 1730987 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1730991 - Posted: 2 Oct 2015, 3:59:43 UTC - in response to Message 1730985.  
Last modified: 2 Oct 2015, 4:17:30 UTC

et3-47.inr-311-ewdc.Berkeley.EDU 128.32.0.103 Works for me.
setiboinc.ssl.berkeley.edu
Hop	          Hostname	                      IP         Time
6	lag-10.ear2.Miami2.Level3.net	          4.68.71.169	21.090
13	ae-1-60.ear1.LosAngeles1.Level3.net	  4.69.144.18	72.656
14	CENIC.ear1.LosAngeles1.Level3.net	  4.35.156.66	72.503
15	dc-svl-agg4--lax-agg6-100ge.cenic.net	137.164.11.1	79.911
16	dc-oak-agg4--svl-agg4-100ge.cenic.net	137.164.46.144	83.179
17	ucb--oak-agg4-10g.cenic.net	        137.164.50.31	82.773
18	t2-3.inr-201-sut.Berkeley.EDU	        128.32.0.37	82.058
19	et3-47.inr-311-ewdc.Berkeley.EDU	128.32.0.103	82.115
20	et3-47.inr-311-ewdc.Berkeley.EDU	128.32.0.103  1278.088


et3-48.inr-311-ewdc.berkeley.edu (128.32.0.101) is 'download server 2 - vader'
Vader is Not the scheduler.
ID: 1730991 · Report as offensive
Profile Jeff Buck Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1730993 - Posted: 2 Oct 2015, 4:08:17 UTC - in response to Message 1730987.  

If changing the MTU is a possible cure, I can tell you 1500 is one value that is not working for me - and now all my APs are gone and I have only a few MBs left :(.

That's really just a guess on my part, possibly one of the things that might differentiate the machines that are getting through from those that aren't. I know that before the move to the co-lo there were a lot of connection issues, and running TCP Optimizer, which, among other things, adjusted the MTU size, seemed to help a lot of people. For those that haven't been able to get through at all, it might be worth a shot.
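One way to tell whether fragmentation is even in play, before reaching for TCP Optimizer, is to probe the path MTU with ping and the don't-fragment flag. A rough Python sketch, assuming the Linux ping options -M do and -s (on Windows the equivalents are -f and -l, with -n instead of -c):

import subprocess

HOST = "setiboinc.ssl.berkeley.edu"  # host from the posts above

def pings_ok(payload):
    # Linux ping: -M do sets the don't-fragment bit, -s sets the ICMP payload size.
    cmd = ["ping", "-c", "1", "-M", "do", "-s", str(payload), HOST]
    return subprocess.run(cmd, stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode == 0

# Binary search for the largest payload that gets through without fragmenting.
lo, hi = 0, 1472   # 1472-byte payload + 28 bytes of IP/ICMP headers = 1500-byte MTU
while lo < hi:
    mid = (lo + hi + 1) // 2
    if pings_ok(mid):
        lo = mid
    else:
        hi = mid - 1

# If even tiny pings fail, the host simply isn't reachable, which is the other symptom in this thread.
print("Largest unfragmented payload:", lo, "bytes -> path MTU roughly", lo + 28)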
ID: 1730993 · Report as offensive
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1853
Credit: 268,616,081
RAC: 1,349
United States
Message 1730994 - Posted: 2 Oct 2015, 4:09:14 UTC - in response to Message 1730991.  

et3-47.inr-311-ewdc.Berkeley.EDU 128.32.0.103 Works for me.
setiboinc.ssl.berkeley.edu
Hop	          Hostname	                      IP         Time
6	lag-10.ear2.Miami2.Level3.net	          4.68.71.169	21.090
13	ae-1-60.ear1.LosAngeles1.Level3.net	  4.69.144.18	72.656
14	CENIC.ear1.LosAngeles1.Level3.net	  4.35.156.66	72.503
15	dc-svl-agg4--lax-agg6-100ge.cenic.net	137.164.11.1	79.911
16	dc-oak-agg4--svl-agg4-100ge.cenic.net	137.164.46.144	83.179
17	ucb--oak-agg4-10g.cenic.net	        137.164.50.31	82.773
18	t2-3.inr-201-sut.Berkeley.EDU	        128.32.0.37	82.058
19	et3-47.inr-311-ewdc.Berkeley.EDU	128.32.0.103	82.115
20	et3-47.inr-311-ewdc.Berkeley.EDU	128.32.0.103  1278.088

Consistently?
Reason I ask is that I'm consistently:

et3-47.inr-311-ewdc.Berkeley.EDU [128.32.0.103] reports: Destination host unreachable.
ID: 1730994 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1730997 - Posted: 2 Oct 2015, 4:23:24 UTC - in response to Message 1730994.  
Last modified: 2 Oct 2015, 4:59:48 UTC

et3-47.inr-311-ewdc.Berkeley.EDU 128.32.0.103 Works for me.
setiboinc.ssl.berkeley.edu
Hop	          Hostname	                      IP         Time
6	lag-10.ear2.Miami2.Level3.net	          4.68.71.169	21.090
13	ae-1-60.ear1.LosAngeles1.Level3.net	  4.69.144.18	72.656
14	CENIC.ear1.LosAngeles1.Level3.net	  4.35.156.66	72.503
15	dc-svl-agg4--lax-agg6-100ge.cenic.net	137.164.11.1	79.911
16	dc-oak-agg4--svl-agg4-100ge.cenic.net	137.164.46.144	83.179
17	ucb--oak-agg4-10g.cenic.net	        137.164.50.31	82.773
18	t2-3.inr-201-sut.Berkeley.EDU	        128.32.0.37	82.058
19	et3-47.inr-311-ewdc.Berkeley.EDU	128.32.0.103	82.115
20	et3-47.inr-311-ewdc.Berkeley.EDU	128.32.0.103  1278.088

Consistently?
Reason I ask is that I'm consistently:

et3-47.inr-311-ewdc.Berkeley.EDU [128.32.0.103] reports: Destination host unreachable.

The only time I see et3-48.inr-311-ewdc.Berkeley.EDU [128.32.0.101] is when I trace Vader: http://setiathome.berkeley.edu/forum_thread.php?id=77990&postid=1730852#1730852
et3-47.inr-311-ewdc.Berkeley.EDU 128.32.0.103 has been Synergy all day, and clicking on http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi gets you the scheduler.
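A quick way to check which machine a name is pointing at from your side, without walking the whole traceroute, is a forward-and-reverse lookup. A small Python sketch, purely illustrative and assuming the names still resolve:

import socket

# Hostnames taken from the posts above.
for name in ("setiboinc.ssl.berkeley.edu", "boinc.berkeley.edu"):
    try:
        addr = socket.gethostbyname(name)           # forward lookup
        try:
            rev = socket.gethostbyaddr(addr)[0]     # reverse lookup, if a PTR record exists
        except OSError:
            rev = "(no reverse record)"
        print(name, "->", addr, "->", rev)
    except socket.gaierror as err:
        print(name, ": lookup failed:", err)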

The machines that see setiboinc.ssl.berkeley.edu at et3-47.inr-311-ewdc.Berkeley.EDU (128.32.0.103) aren't having that much trouble, just an occasional manual update.
Fri 02 Oct 2015 12:18:00 AM EDT | SETI@home | [sched_op] Starting scheduler request
Fri 02 Oct 2015 12:18:00 AM EDT | SETI@home | Sending scheduler request: To report completed tasks.
Fri 02 Oct 2015 12:18:00 AM EDT | SETI@home | Reporting 3 completed tasks
Fri 02 Oct 2015 12:18:00 AM EDT | SETI@home | Requesting new tasks for ATI
Fri 02 Oct 2015 12:18:00 AM EDT | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Fri 02 Oct 2015 12:18:00 AM EDT | SETI@home | [sched_op] ATI work request: 3624.09 seconds; 0.00 devices
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | Scheduler request completed: got 4 new tasks
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] Server version 707
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | Project requested delay of 303 seconds
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] estimated total CPU task duration: 0 seconds
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] estimated total ATI task duration: 3735 seconds
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 31my11ae.30050.4566.438086664206.12.188_0
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 04jn11ab.7733.370597.438086664196.12.199_1
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 03jn11ae.26137.143636.438086664200.12.5_1
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] Deferring communication for 00:05:03
Fri 02 Oct 2015 12:18:02 AM EDT | SETI@home | [sched_op] Reason: requested by project
Fri 02 Oct 2015 12:18:04 AM EDT | SETI@home | Started download of 02my11ae.16682.213558.438086664204.12.13
Fri 02 Oct 2015 12:18:04 AM EDT | SETI@home | Started download of 02ja11ac.30966.7838.438086664206.12.10
Fri 02 Oct 2015 12:18:04 AM EDT | SETI@home | Started download of 03jn11ae.8366.137808.438086664201.12.166
Fri 02 Oct 2015 12:18:04 AM EDT | SETI@home | Started download of 04jn11ab.31934.362215.438086664197.12.212
Fri 02 Oct 2015 12:18:06 AM EDT | SETI@home | Finished download of 02ja11ac.30966.7838.438086664206.12.10
Fri 02 Oct 2015 12:18:06 AM EDT | SETI@home | Finished download of 04jn11ab.31934.362215.438086664197.12.212
Fri 02 Oct 2015 12:18:09 AM EDT | SETI@home | Finished download of 02my11ae.16682.213558.438086664204.12.13
Fri 02 Oct 2015 12:18:09 AM EDT | SETI@home | Finished download of 03jn11ae.8366.137808.438086664201.12.166
Fri 02 Oct 2015 12:22:35 AM EDT | SETI@home | Computation for task 04jn11ab.7733.377344.438086664196.12.137_1 finished
Fri 02 Oct 2015 12:22:35 AM EDT | SETI@home | Starting task 03jn11ae.26137.146499.438086664200.12.128_0 using setiathome_v7 version 708 (opencl_ati5_sah) in slot 0
Fri 02 Oct 2015 12:22:38 AM EDT | SETI@home | Started upload of 04jn11ab.7733.377344.438086664196.12.137_1_0
Fri 02 Oct 2015 12:22:41 AM EDT | SETI@home | Finished upload of 04jn11ab.7733.377344.438086664196.12.137_1_0
Fri 02 Oct 2015 12:23:08 AM EDT | SETI@home | [sched_op] Starting scheduler request
Fri 02 Oct 2015 12:23:08 AM EDT | SETI@home | Sending scheduler request: To report completed tasks.
Fri 02 Oct 2015 12:23:08 AM EDT | SETI@home | Reporting 1 completed tasks
Fri 02 Oct 2015 12:23:08 AM EDT | SETI@home | Requesting new tasks for ATI
Fri 02 Oct 2015 12:23:08 AM EDT | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Fri 02 Oct 2015 12:23:08 AM EDT | SETI@home | [sched_op] ATI work request: 477.17 seconds; 0.00 devices
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | Scheduler request completed: got 1 new tasks
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | [sched_op] Server version 707
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | Project requested delay of 303 seconds
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | [sched_op] estimated total CPU task duration: 0 seconds
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | [sched_op] estimated total ATI task duration: 1919 seconds
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 04jn11ab.7733.377344.438086664196.12.137_1
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | [sched_op] Deferring communication for 00:05:03
Fri 02 Oct 2015 12:23:10 AM EDT | SETI@home | [sched_op] Reason: requested by project
Fri 02 Oct 2015 12:23:12 AM EDT | SETI@home | Started download of 03jn11ae.8366.138626.438086664201.12.18
Fri 02 Oct 2015 12:23:14 AM EDT | SETI@home | Finished download of 03jn11ae.8366.138626.438086664201.12.18
Fri 02 Oct 2015 12:39:04 AM EDT | SETI@home | Computation for task 03jn11ae.26137.146499.438086664200.12.128_0 finished
Fri 02 Oct 2015 12:39:04 AM EDT | SETI@home | Starting task 03oc11af.25448.17590.438086664195.12.136_1 using setiathome_v7 version 708 (opencl_ati5_sah) in slot 0
Fri 02 Oct 2015 12:39:06 AM EDT | SETI@home | Started upload of 03jn11ae.26137.146499.438086664200.12.128_0_0
Fri 02 Oct 2015 12:39:09 AM EDT | SETI@home | Finished upload of 03jn11ae.26137.146499.438086664200.12.128_0_0
Fri 02 Oct 2015 12:39:11 AM EDT | SETI@home | [sched_op] Starting scheduler request
Fri 02 Oct 2015 12:39:11 AM EDT | SETI@home | Sending scheduler request: To report completed tasks.
Fri 02 Oct 2015 12:39:11 AM EDT | SETI@home | Reporting 1 completed tasks
Fri 02 Oct 2015 12:39:11 AM EDT | SETI@home | Requesting new tasks for ATI
Fri 02 Oct 2015 12:39:11 AM EDT | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Fri 02 Oct 2015 12:39:11 AM EDT | SETI@home | [sched_op] ATI work request: 699.88 seconds; 0.00 devices
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | Scheduler request completed: got 1 new tasks
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | [sched_op] Server version 707
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | Project requested delay of 303 seconds
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | [sched_op] estimated total CPU task duration: 0 seconds
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | [sched_op] estimated total ATI task duration: 732 seconds
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 03jn11ae.26137.146499.438086664200.12.128_0
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | [sched_op] Deferring communication for 00:05:03
Fri 02 Oct 2015 12:39:13 AM EDT | SETI@home | [sched_op] Reason: requested by project
Fri 02 Oct 2015 12:39:15 AM EDT | SETI@home | Started download of 02my11ae.16682.217239.438086664204.12.236
Fri 02 Oct 2015 12:39:17 AM EDT | SETI@home | Finished download of 02my11ae.16682.217239.438086664204.12.236
ID: 1730997 · Report as offensive