Out of the fire and into the pit of sulfuric acid. (Feb 19, 2010)

Kaylie

Joined: 26 Jul 08
Posts: 39
Credit: 333,106
RAC: 0
United States
Message 972224 - Posted: 20 Feb 2010, 13:49:52 UTC - in response to Message 972218.  

BOINC won't ask for work from a project if the number of uploads for that project exceeds twice the number of CPU cores,
so until you're down to a handful of uploads you'll get no downloads from SETI,
but you can still get work from other projects.

Claggy
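
Roughly, the rule Claggy describes works like this (just an illustrative sketch with made-up names, not BOINC's actual source):

# Sketch of the work-fetch rule described above; not BOINC's real code.
def should_request_work(pending_uploads, cpu_cores):
    """Don't ask a project for new tasks while its upload backlog
    exceeds twice the number of CPU cores."""
    return pending_uploads <= 2 * cpu_cores

# Example: a quad-core host with 30 stuck uploads gets no new SETI work,
# but once the backlog drains to 8 or fewer, work fetch resumes.
print(should_request_work(30, 4))  # False
print(should_request_work(5, 4))   # True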


If you're looking for an interesting task to sic your GPUs on, check out Collatz Conjecture at http://boinc.thesonntags.com/collatz/ - they're called Wondrous Numbers (really).

For the CPU, there are projects to help improve the human condition at World Community Grid. http://www.worldcommunitygrid.org/

ID: 972224
madasczik
Joined: 13 May 09
Posts: 12
Credit: 1,693,704
RAC: 0
United States
Message 972226 - Posted: 20 Feb 2010, 14:09:41 UTC

It seems like a provider peering issue. Here's my traceroute from Palisades Park, NJ on Time Warner Road Runner. The latency jumps midway on the rr.com network and gets worse on the Hurricane Electric (he.net) network.

C:\>tracert 208.68.240.16

Tracing route to setiboincdata.ssl.berkeley.edu [208.68.240.16]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms 192.168.101.1
2 6 ms 7 ms 7 ms 10.50.96.1
3 6 ms 7 ms 7 ms ge3-2-njmnyhubp-rtr1.nj.rr.com [24.168.128.146]
4 8 ms 9 ms 8 ms cpe-24-29-150-94.nyc.res.rr.com [24.29.150.94]
5 9 ms 10 ms 9 ms tenge-0-1-0-nwrknjmd-rtr.nyc.rr.com [24.29.119.150]
6 38 ms 9 ms 11 ms ae-4-0.cr0.nyc30.tbone.rr.com [66.109.6.78]
7 10 ms 10 ms 10 ms ae-0-0.cr0.nyc20.tbone.rr.com [66.109.6.27]
8 89 ms 89 ms 91 ms 66.109.6.10
9 159 ms 91 ms 91 ms ae-1-0.pr0.sjc10.tbone.rr.com [66.109.6.137]
10 110 ms 100 ms 101 ms gige-g5-6.core1.sjc2.he.net [216.218.135.225]
11 100 ms 101 ms 100 ms 10gigabitethernet3-2.core1.pao1.he.net [72.52.92.69]
12 100 ms 100 ms 100 ms 64.71.140.42
13 * 113 ms 105 ms 208.68.243.254
14 106 ms 105 ms 105 ms setiboincdata.ssl.berkeley.edu [208.68.240.16]

Trace complete.
ID: 972226
madasczik
Joined: 13 May 09
Posts: 12
Credit: 1,693,704
RAC: 0
United States
Message 972227 - Posted: 20 Feb 2010, 14:15:05 UTC

Also, is ping allowed on 208.68.243.254? Is that your router?
ID: 972227
madasczik
Joined: 13 May 09
Posts: 12
Credit: 1,693,704
RAC: 0
United States
Message 972228 - Posted: 20 Feb 2010, 14:21:15 UTC

Doing an extended ping to 208.68.243.254 for about a minute, I get multiple "Request timed out" replies, so there is some packet loss:

C:\>ping 208.68.243.254 -t

Pinging 208.68.243.254 with 32 bytes of data:
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=105ms TTL=241
Reply from 208.68.243.254: bytes=32 time=110ms TTL=242
Reply from 208.68.243.254: bytes=32 time=104ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=108ms TTL=241
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=241
Request timed out.
Reply from 208.68.243.254: bytes=32 time=104ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=104ms TTL=242
Reply from 208.68.243.254: bytes=32 time=107ms TTL=241
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=107ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=104ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=241
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=104ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=110ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=107ms TTL=241
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=241
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Request timed out.
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=241
Reply from 208.68.243.254: bytes=32 time=104ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=241
Request timed out.
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Reply from 208.68.243.254: bytes=32 time=104ms TTL=242
Reply from 208.68.243.254: bytes=32 time=107ms TTL=242
Reply from 208.68.243.254: bytes=32 time=108ms TTL=242
Reply from 208.68.243.254: bytes=32 time=106ms TTL=241
Request timed out.
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=105ms TTL=241
Reply from 208.68.243.254: bytes=32 time=105ms TTL=242
Reply from 208.68.243.254: bytes=32 time=107ms TTL=242

Ping statistics for 208.68.243.254:
Packets: Sent = 71, Received = 67, Lost = 4 (5% loss),
Approximate round trip times in milli-seconds:
Minimum = 104ms, Maximum = 110ms, Average = 105ms
Control-C
^C
C:\>

ID: 972228
madasczik
Joined: 13 May 09
Posts: 12
Credit: 1,693,704
RAC: 0
United States
Message 972230 - Posted: 20 Feb 2010, 14:32:53 UTC

Sorry for the multiple posts; I didn't see the Edit button. I'm a noob on this forum :P
ID: 972230
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66348
Credit: 55,293,173
RAC: 49
United States
Message 972231 - Posted: 20 Feb 2010, 14:33:35 UTC

Ping from here can't resolve the host name. Sounds like the connection has been severed.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 972231
Profile Lint trap

Joined: 30 May 03
Posts: 871
Credit: 28,092,319
RAC: 0
United States
Message 972239 - Posted: 20 Feb 2010, 14:52:28 UTC - in response to Message 972231.  
Last modified: 20 Feb 2010, 14:58:25 UTC

SJ, it's working from here (Maryland), with 7% loss.

Martin

edited//

ping -t to the previous hop, 64.71.140.42, is perfect. No loss at all.
ID: 972239
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14679
Credit: 200,643,578
RAC: 874
United Kingdom
Message 972247 - Posted: 20 Feb 2010, 15:16:42 UTC - in response to Message 972242.  

SJ, it's working from here (Maryland), with 7% loss.

Martin

edited//

ping -t to the previous hop, 64.71.140.42, is perfect. No loss at all.

100% packet loss from me in Sweden.

Looking at the cricket graphs, the bandwidth utilization is much lower than normal. I would say it's a router or switch issue, either at the SETI location or somewhere between the users and Berkeley.

Sten-Arne

Did you see my observations with Wireshark and a fortuitous download in Panic mode...? I have no prior experience with Wireshark (willing to learn if anyone here can guide me), and it's fearsomely powerful (i.e. complicated), but what I saw led me to the opposite conclusion.
ID: 972247
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66348
Credit: 55,293,173
RAC: 49
United States
Message 972263 - Posted: 20 Feb 2010, 15:39:01 UTC - in response to Message 972252.  

SJ, it's working from here (Maryland), with 7% loss.

Martin

edited//

ping -t to the previous hop, 64.71.140.42, is perfect. No loss at all.

100% packet loss from me in Sweden.

Looking at the cricket graphs, the bandwidth utilization is much lower than normal. I would say it's a router or switch issue, either at the SETI location or somewhere between the users and Berkeley.

Sten-Arne

Did you see my observations with Wireshark and a fortuitous download in Panic mode...? I have no prior experience with Wireshark (willing to learn if anyone here can guide me), and it's fearsomely powerful (i.e. complicated), but what I saw led me to the opposite conclusion.


Yes, I saw that, but still, the bandwidth utilization doesn't lie. The packet loss from Sweden doesn't lie.

IMO most of the traffic doesn't even reach the SETI servers. This started at least a day before the normal weekly outage and the AC breakdown in the server closet. After a normal weekly outage, the bandwidth is always maxed out until everyone has uploaded their work and received new work. That didn't happen this time.

The normal return rate of MB WUs to SETI is about 50,000/hour. We're now only at 11,289. Things just don't get through, and it's not because of maxed-out bandwidth.

We'll see, though; I'm in no hurry. If my PCs run out of WUs to crunch, I'll just shut them down and go out and play in the snow :-)

Sten-Arne

Snow? You lucky dog, you. We've only had rain this year at my place's elevation. If you run out of work, go out and have some fun - the snow does sound like it would be fun indeed.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 972263
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14679
Credit: 200,643,578
RAC: 874
United Kingdom
Message 972269 - Posted: 20 Feb 2010, 15:49:31 UTC - in response to Message 972252.  

Things just don't get through, and it's not because of maxed-out bandwidth...

Fully agree with that. But the packets I was wiresharking (from the UK) seemed to have a reasonable chance of reaching Berkeley, getting through all the routers and switches, and starting to communicate with Bruno. It's the RST+ACK from Bruno that seems to be causing the problem (for uploads: I haven't tried probing Anakin - scheduler - yet).
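
For anyone who wants to poke at the same thing without setting up Wireshark, here is a rough sketch (my own improvisation, nothing official) of a handshake-then-talk test against the upload server in Python. The "/" path is arbitrary; the only question is whether the TCP conversation survives or gets reset:

# Rough probe sketch: does a plain TCP/HTTP exchange with the upload server
# complete, or does the peer reset it (the RST behaviour seen in Wireshark)?
import socket

HOST = "setiboincdata.ssl.berkeley.edu"   # 208.68.240.16
PORT = 80

try:
    with socket.create_connection((HOST, PORT), timeout=15) as s:
        # The TCP handshake succeeded if we get here; now try to talk.
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        reply = s.recv(1024)
        print("Handshake OK, received %d bytes" % len(reply))
except ConnectionResetError:
    print("Connection reset by peer - the same symptom described above")
except (socket.timeout, OSError) as exc:
    print("No useful reply:", exc)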
ID: 972269
Profile Lint trap

Joined: 30 May 03
Posts: 871
Credit: 28,092,319
RAC: 0
United States
Message 972271 - Posted: 20 Feb 2010, 15:53:57 UTC - in response to Message 972252.  

I'm in no hurry. If my PCs run out of WUs to crunch, I'll just shut them down and go out and play in the snow :-)


I agree. I still have a couple of days' worth of VLARs for the CPU to chew on.

BTW: Are you losing 100% just in California, or before that?

@Richard: Yes, we could use some expertise, I think.

Martin
ID: 972271
rudolfus
Volunteer tester

Joined: 9 Aug 04
Posts: 13
Credit: 96,158,183
RAC: 27
Russia
Message 972277 - Posted: 20 Feb 2010, 16:04:50 UTC
Last modified: 20 Feb 2010, 16:17:53 UTC

This is the fourth day with no connection to the project servers. More than 1,000 finished WUs won't go through. The scheduler does not work. Yet the project status page says everything is fine. Can somebody explain what the problem is and what to expect?

20-Feb-2010 18:47:59 [---] Project communication failed: attempting access to reference site
20-Feb-2010 18:47:59 [SETI@home] Temporarily failed upload of 09oc06aa.16872.20411.5.10.112_0_0: HTTP error
20-Feb-2010 18:47:59 [SETI@home] Backing off 1 hr 33 min 50 sec on upload of 09oc06aa.16872.20411.5.10.112_0_0
20-Feb-2010 18:48:10 [SETI@home] Temporarily failed upload of 09oc06aa.16872.20411.5.10.118_0_0: HTTP error
20-Feb-2010 18:48:10 [SETI@home] Backing off 42 min 25 sec on upload of 09oc06aa.16872.20411.5.10.118_0_0
20-Feb-2010 18:48:20 [---] Internet access OK - project servers may be temporarily down.

20-Feb-2010 19:08:49 [SETI@home] update requested by user
20-Feb-2010 19:09:01 [SETI@home] Sending scheduler request: Requested by user.
20-Feb-2010 19:09:01 [SETI@home] Reporting 54 completed tasks, not requesting new tasks
20-Feb-2010 19:09:23 [SETI@home] Started upload of 09oc06aa.16872.20411.5.10.136_1_0
20-Feb-2010 19:09:23 [SETI@home] Started upload of 09oc06aa.16872.20411.5.10.143_1_0
20-Feb-2010 19:09:38 [SETI@home] Scheduler request failed: Failure when receiving data from the peer
20-Feb-2010 19:09:49 [SETI@home] Temporarily failed upload of 09oc06aa.16872.20411.5.10.143_1_0: HTTP error
20-Feb-2010 19:09:49 [SETI@home] Backing off 1 min 0 sec on upload of 09oc06aa.16872.20411.5.10.143_1_0
20-Feb-2010 19:10:30 [---] Project communication failed: attempting access to reference site
20-Feb-2010 19:10:32 [---] Internet access OK - project servers may be temporarily down.

And it has been like this for all four days. Even so, 4 WUs did get sent. When a WU does go (rarely), the upload reaches 100% but never finishes.
ID: 972277
The Jedi Alliance - Ranger
Joined: 27 Dec 00
Posts: 72
Credit: 60,982,863
RAC: 0
United States
Message 972281 - Posted: 20 Feb 2010, 16:09:39 UTC

Pathping has been around since Windows NT and is still there in Windows 7 for those wondering about it.

Look at everyone's posted trace results. Notice anything in common? Look at the next-to-last line, 208.68.243.254: they all show loss at that node. Explanations? Failing hardware? A system overloaded because of the recent a/c-caused outage?

If we had pathping data from before the a/c outage showing the same thing, we might suggest hardware, but right now we all know there's a ton of work out there trying to report in. In theory the built-in backoff will bring gradual relief. Anyone up for tracking between now and Tuesday's scheduled maintenance? If it's not hardware, we should see close to zero packet loss at this node right around the time they take it down for the scheduled maintenance. If we continue tracking after they bring it back up, we should see zero packet loss by Friday UTC.

If there's a problem on campus let's give them some proof.
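
If anyone wants to automate that tracking, here is a rough sketch (mine, nothing official) that runs pathping once an hour on a Windows box and appends timestamped output to a log; the file name, query count, and run count are arbitrary choices:

# Rough tracking helper: log periodic pathping results so we have before/after
# data around Tuesday's outage. Windows only, since pathping is a Windows tool.
import subprocess
import time

TARGET = "208.68.240.16"        # setiboincdata.ssl.berkeley.edu
LOGFILE = "pathping_log.txt"    # arbitrary name, pick your own
RUNS = 24                       # roughly a day's worth of hourly samples

for _ in range(RUNS):
    stamp = time.strftime("%Y-%m-%d %H:%M UTC", time.gmtime())
    result = subprocess.run(["pathping", "-n", "-q", "50", TARGET],
                            capture_output=True, text=True)
    with open(LOGFILE, "a") as log:
        log.write("===== " + stamp + " =====\n" + result.stdout + "\n")
    time.sleep(3600)            # one sample per hour; adjust to taste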

ID: 972281
Galadriel

Joined: 24 Jan 09
Posts: 42
Credit: 8,422,996
RAC: 0
Romania
Message 972285 - Posted: 20 Feb 2010, 16:17:10 UTC

C:\Documents and Settings\Administrator>tracert 208.68.240.16

Tracing route to setiboincdata.ssl.berkeley.edu [208.68.240.16]
over a maximum of 30 hops:

1 2 ms 2 ms 2 ms 1.98.79.82.static.cluj.rdsnet.ro [82.79.98.1]
2 2 ms 2 ms 2 ms qr01.cluj.rdsnet.ro [213.154.140.81]
3 4 ms 3 ms 3 ms cr01.cluj.rdsnet.ro [213.154.140.16]
4 2 ms 2 ms 1 ms 213-154-130-76.rdsnet.ro [213.154.130.76]
5 26 ms 26 ms 26 ms 213.154.128.5
6 38 ms 31 ms 31 ms de-cix.he.net [80.81.192.172]
7 39 ms 38 ms 50 ms 10gigabitethernet1-2.core1.par1.he.net [72.52.92
.89]
8 43 ms 51 ms 43 ms 10gigabitethernet1-3.core1.lon1.he.net [72.52.92
.33]
9 117 ms 126 ms 123 ms 10gigabitethernet2-3.core1.nyc4.he.net [72.52.92
.77]
10 201 ms 195 ms 196 ms 10gigabitethernet3-1.core1.sjc2.he.net [72.52.92
.25]
11 205 ms 202 ms 200 ms 10gigabitethernet3-2.core1.pao1.he.net [72.52.92
.69]
12 200 ms 199 ms 201 ms 64.71.140.42
13 201 ms 201 ms 201 ms 208.68.243.254
14 202 ms 200 ms 201 ms setiboincdata.ssl.berkeley.edu [208.68.240.16]

Trace complete.

---------------------------------------------------------------------



C:\Documents and Settings\Administrator>tracert 208.68.240.18

Tracing route to boinc2.ssl.berkeley.edu [208.68.240.18]
over a maximum of 30 hops:

1 2 ms 2 ms 2 ms 1.98.79.82.static.cluj.rdsnet.ro [82.79.98.1]
2 2 ms 2 ms 2 ms qr01.cluj.rdsnet.ro [213.154.140.81]
3 3 ms 3 ms 5 ms cr01.cluj.rdsnet.ro [213.154.140.16]
4 2 ms 2 ms 2 ms 213-154-130-76.rdsnet.ro [213.154.130.76]
5 22 ms 22 ms 22 ms br01.frankfurt.rdsnet.ro [213.154.126.241]
6 28 ms 31 ms 28 ms de-cix.he.net [80.81.192.172]
7 33 ms 32 ms 32 ms 10gigabitethernet1-2.core1.par1.he.net [72.52.92
.89]
8 47 ms 37 ms 49 ms 10gigabitethernet1-3.core1.lon1.he.net [72.52.92
.33]
9 111 ms 111 ms 110 ms 10gigabitethernet4-4.core1.nyc4.he.net [72.52.92
.241]
10 136 ms 137 ms 137 ms 10gigabitethernet1-2.core1.chi1.he.net [72.52.92
.102]
11 193 ms 194 ms 194 ms 10gigabitethernet3-2.core1.sjc2.he.net [72.52.92
.73]
12 197 ms 196 ms 196 ms 10gigabitethernet3-2.core1.pao1.he.net [72.52.92
.69]
13 196 ms 195 ms 195 ms 64.71.140.42
14 197 ms 198 ms 197 ms 208.68.243.254
15 199 ms 201 ms 200 ms boinc2.ssl.berkeley.edu [208.68.240.18]

Trace complete.


Will return later on with results from other machines/ISPs.

ID: 972285
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66348
Credit: 55,293,173
RAC: 49
United States
Message 972286 - Posted: 20 Feb 2010, 16:17:56 UTC - in response to Message 972281.  

Pathping has been around since Windows NT and is still there in Windows 7 for those wondering about it.

Look at everyone's posted trace results. Notice anything in common? Look at the next-to-last line, 208.68.243.254: they all show loss at that node. Explanations? Failing hardware? A system overloaded because of the recent a/c-caused outage?

If we had pathping data from before the a/c outage showing the same thing, we might suggest hardware, but right now we all know there's a ton of work out there trying to report in. In theory the built-in backoff will bring gradual relief. Anyone up for tracking between now and Tuesday's scheduled maintenance? If it's not hardware, we should see close to zero packet loss at this node right around the time they take it down for the scheduled maintenance. If we continue tracking after they bring it back up, we should see zero packet loss by Friday UTC.

If there's a problem on campus let's give them some proof.

OK folks, you heard Ranger. If you can, run the following at the command line (pathping takes one target per run):

pathping 208.68.240.16
pathping boinc2.ssl.berkeley.edu

Then we'll have more data on this. Please do it if you can; if you have already, there's no need to run it again right now.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 972286
The Jedi Alliance - Ranger
Joined: 27 Dec 00
Posts: 72
Credit: 60,982,863
RAC: 0
United States
Message 972295 - Posted: 20 Feb 2010, 16:27:34 UTC - in response to Message 972281.  

One more thing, and this is just as important: we don't want EVERYONE doing this or we will be the problem. Maybe 5 - 10 people in different geographic regions running the test at different times. Maybe the first person runs on the hour, second person at 10 minutes after and so on.

ID: 972295
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66348
Credit: 55,293,173
RAC: 49
United States
Message 972298 - Posted: 20 Feb 2010, 16:32:15 UTC - in response to Message 972295.  

One more thing, and this is just as important: we don't want EVERYONE doing this or we will be the problem. Maybe 5 - 10 people in different geographic regions running the test at different times. Maybe the first person runs on the hour, second person at 10 minutes after and so on.

Agreed. Otherwise it could result in too much information, and no one wants that, I'd think.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 972298
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 972301 - Posted: 20 Feb 2010, 16:37:53 UTC - in response to Message 972295.  
Last modified: 20 Feb 2010, 16:50:52 UTC

One more thing, and this is just as important: we don't want EVERYONE doing this or we will be the problem. Maybe 5 - 10 people in different geographic regions running the test at different times. Maybe the first person runs on the hour, second person at 10 minutes after and so on.

Here are just the last three lines from Albuquerque, New Mexico:
 11   54ms     0/ 100 =  0%     0/ 100 =  0%  64.71.140.42
                                7/ 100 =  7%   |
 12   52ms     7/ 100 =  7%     0/ 100 =  0%  208.68.243.254
                                1/ 100 =  1%   |
 13   51ms     8/ 100 =  8%     0/ 100 =  0%  setiboincdata.ssl.berkeley.edu [208.68.240.16]

All the earlier lines had 0% loss. My ISP is Comcast. As others have reported, I see loss at the last two lines.

[edited to add this] As a quick sanity check to see whether this is just normal, I tried "pathping 74.125.19.147" (which is news.google.com). It came up with 0% at all levels. This shows that the 208.68.240.16 behavior is different, and probably that it is a problem. It does not show that it is the problem, and it does not show that it differs from the situation two weeks ago.
ID: 972301
rudolfus
Volunteer tester

Joined: 9 Aug 04
Posts: 13
Credit: 96,158,183
RAC: 27
Russia
Message 972304 - Posted: 20 Feb 2010, 16:41:11 UTC - in response to Message 972286.  


Tracing route to boinc2.ssl.berkeley.edu [208.68.240.13]
over a maximum of 30 hops:
0 computer-24743d [192.168.1.2]
1 192.168.1.1
2 lo100.asr1006-1.a73.vsi.ru [80.82.57.58]
3 te2-0-0.818.ne40e-2.a53.hw.vsi.ru [80.82.53.18]
4 crs1-ne40e-2.vsi.ru [80.82.56.161]
5 77.51.255.97
6 ae5-222.RT.V10.MSK.RU.retn.net [87.245.253.237]
7 * ae2-6.RT.TC1.STO.SE.retn.net [87.245.233.134]
8 netnod-ix-ge-a-sth-1500.he.net [194.68.123.187]
9 10gigabitethernet3-3.core1.fra1.he.net [72.52.92.233]
10 10gigabitethernet1-2.core1.par1.he.net [72.52.92.89]
11 10gigabitethernet1-3.core1.lon1.he.net [72.52.92.33]
12 10gigabitethernet2-3.core1.nyc4.he.net [72.52.92.77]
13 * 10gigabitethernet3-1.core1.sjc2.he.net [72.52.92.25]
14 10gigabitethernet3-2.core1.pao1.he.net [72.52.92.69]
15 64.71.140.42
16 208.68.243.254
17 boinc2.ssl.berkeley.edu [208.68.240.13]

Computing statistics for 425 seconds ...


ID: 972304
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 972312 - Posted: 20 Feb 2010, 16:48:54 UTC - in response to Message 972231.  

Ping from here can't resolve the host name. Sounds like the connection has been severed.

That is a completely different problem.

The DNS servers are not on SETI@Home infrastructure.
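
A quick way to tell the two failure modes apart (just a sketch of my own) is to check name resolution separately from reachability:

# Name resolution check only: if this fails, it's a DNS problem; if it
# succeeds but pings/uploads still fail, the trouble is further along.
import socket

try:
    ip = socket.gethostbyname("setiboincdata.ssl.berkeley.edu")
    print("DNS OK, resolves to", ip)
except socket.gaierror as exc:
    print("DNS lookup failed:", exc)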
ID: 972312