Message boards : Number crunching : Panic Mode On (97) Server Problems?
Gary Charpentier · Joined: 25 Dec 00 · Posts: 30648 · Credit: 53,134,872 · RAC: 32
Before we all forget, this website, setiathome.berkeley.edu, is not where the upload/download servers are located. Open your server response file in the BOINC directory and you will see those are at setiboinc.ssl.berkeley.edu. The website traffic is carried on the normal campus data paths; the data is carried on the Hurricane Electric path that SETI pays for.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874
[quote]Before we all forget, this website setiathome.berkeley.edu is not where the upload/download servers are located. Open your server response file in the BOINC directory and you will see those are at setiboinc.ssl.berkeley.edu[/quote]
Actually, I believe SETI's data traffic is carried over campus links within UCB, but doesn't use their external peering. Instead, it's encapsulated within a Virtual Private Network (VPN) between Palo Alto and the Space Sciences Laboratory - thanks to a matched pair of heavy-duty Cisco routers, which were donated some years ago. So, the bits pass through campus routers (we can see them on cricket, under normal circumstances), but the packets are not routed by campus, so we don't see campus routers on a data link tracert.
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
[quote]Before we all forget, this website setiathome.berkeley.edu is not where the upload/download servers are located. Open your server response file in the BOINC directory and you will see those are at setiboinc.ssl.berkeley.edu[/quote]
Yes, the data graph we monitored was the traffic over the Hurricane line. However, it is routed through the same hardware as the web pages. Until recently setiathome.berkeley.edu traced to:
t1-3.inr-201-sut.Berkeley.edu [128.32.0.65]
t5-4.inr-211-srb.Berkeley.edu [128.32.255.41]
Now it traces to:
t2-3.inr-202-reccev.Berkeley.EDU [128.32.0.39]
et3-47.inr-311-ewdc.Berkeley.EDU [128.32.0.103]
It does seem very likely that the Hurricane line will also be routed through the new hardware.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
arkayn · Joined: 14 May 99 · Posts: 4438 · Credit: 55,006,323 · RAC: 0
[quote]Before we all forget, this website setiathome.berkeley.edu is not where the upload/download servers are located. Open your server response file in the BOINC directory and you will see those are at setiboinc.ssl.berkeley.edu[/quote]
Trace to the upload server is this:
Tracing route to setiboincdata.ssl.berkeley.edu [208.68.240.16]
over a maximum of 30 hops:
  1    <1 ms    <1 ms    <1 ms  xxxxxxxxxxxx
  2     5 ms     5 ms     8 ms  XXXXXXXXXXXX
  3     9 ms     7 ms     6 ms  67.136.4.193
  4     *       10 ms    11 ms  static-65-73-41-1.bras01.blu.wv.integra.net [70.102.100.185]
  5    48 ms    37 ms    33 ms  10gigabitethernet7-3.core1.lax1.he.net [198.32.146.50]
  6    26 ms    38 ms    27 ms  100ge15-1.core1.sjc2.he.net [184.105.223.249]
  7    27 ms    39 ms    27 ms  10ge5-2.core1.pao1.he.net [72.52.92.69]
  8   219 ms   121 ms    27 ms  64.71.140.42
  9    30 ms    29 ms    32 ms  208.68.243.254
 10    31 ms    30 ms    30 ms  setiboincdata.ssl.berkeley.edu [208.68.240.16]
Trace complete.
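For anyone comparing routes the way these posts do, the hop addresses can be pulled out of a Windows tracert dump with a short script. This is only an illustrative sketch; the regex assumes the standard tracert layout shown above (hop number, three timing columns, then either a bare IPv4 address or `hostname [IPv4]`):

```python
import re

# A tracert hop line: hop number, timing columns, then either a bare
# IPv4 address or "hostname [IPv4]" at the end of the line.
HOP_RE = re.compile(r"^\s*\d+\s+.*?(\d{1,3}(?:\.\d{1,3}){3})\]?\s*$")

def hop_addresses(trace_text):
    """Extract the IPv4 address of each hop from Windows tracert output."""
    addrs = []
    for line in trace_text.splitlines():
        m = HOP_RE.match(line)
        if m:
            addrs.append(m.group(1))
    return addrs
```

Hops that never resolve to an address (like the masked `xxxxxxxxxxxx` lines in the trace above) simply don't match and are skipped.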
Jimbocous · Joined: 1 Apr 13 · Posts: 1853 · Credit: 268,616,081 · RAC: 1,349
All I know is it's been a long time since I've seen page loads and database queries as fast as they've been today. Even if the SETI servers weren't directly worked on, it looks like some infrastructure issues got resolved that we'll benefit from. Good stuff!
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
At the moment, I'm finding forum & web page response times about the same as usual. Neither great, nor bad. They're both OK.
Grant Darwin NT
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
My observation regarding page load/response times 12-16 hours ago was that they were about the same, whilst I was reading about people in Europe having a really slow experience. Then it started to crawl for me too, but I then determined that everything was crawling for me. For whatever reason, I suddenly had about 90% packet loss, and the few packets that got through had a 1500 ms ping. Power cycled the cable modem and router... twice, and that problem went away.
Now that someone mentioned load/response times are very snappy, I am noticing that myself. (This has happened once or twice before with some upgrades, though.) I'm not even done clicking a link and the page has already completely loaded. We'll see how long this lasts...
Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
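The sort of diagnosis described above (counting lost pings and the latency of the survivors) is easy to script. A minimal sketch; the input format here, a list of round-trip times with `None` for each lost packet, is my own convention rather than any particular ping tool's output:

```python
def loss_stats(rtts):
    """Summarize a ping run.

    rtts: one entry per echo request; a round-trip time in ms,
    or None if the reply never came back.
    Returns (loss_percent, average_rtt_ms); the average is None
    if every packet was lost.
    """
    received = [r for r in rtts if r is not None]
    loss = 100.0 * (len(rtts) - len(received)) / len(rtts)
    avg = sum(received) / len(received) if received else None
    return loss, avg

# Ten pings, one reply, and that one took 1500 ms -- roughly the
# situation described in the post above.
print(loss_stats([None] * 9 + [1500]))  # -> (90.0, 1500.0)
```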
betreger · Joined: 29 Jun 99 · Posts: 11361 · Credit: 29,581,041 · RAC: 66
Since we are now back from the outage I checked the SSP to see how many APs were being split; I'm disappointed.
Wiggo · Joined: 24 Jan 00 · Posts: 34744 · Credit: 261,360,520 · RAC: 489
[quote]Since we are now back from the outage I checked to SSP to see how many APs were being split, I'm disappointed.[/quote]
Yeah, I've just had to let my main rig's CPU have another feed of MB's to keep it happy.
Cheers.
Brent Norman · Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835
[quote]Since we are now back from the outage I checked to SSP to see how many APs were being split, I'm disappointed.[/quote]
Yea, I REALLY wish there was a setting for "Store X days of MB, and Y days of AP" without having to do it manually. I love AP's but am happy to crunch MB's when they're not available. But since it is a combined project, we can't do that. :(
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
[quote]Since we are now back from the outage I checked to SSP to see how many APs were being split, I'm disappointed.[/quote]
Just run 2 clients. Set one for a venue with MB only & the other with AP only. Easy as cake. Piece of pie.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
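HAL9000's two-client approach can be scripted. A hedged sketch: `--dir`, `--gui_rpc_port`, and `--allow_multiple_clients` are real BOINC client options, but the data directories and port numbers below are example values, and you would still attach each instance to the project and assign its venue (MB-only vs AP-only preferences) separately:

```python
def boinc_client_cmd(data_dir, rpc_port):
    """Build the launch command for one independent BOINC client instance."""
    return [
        "boinc",
        "--dir", data_dir,                 # separate data directory per instance
        "--gui_rpc_port", str(rpc_port),   # separate port for manager/boinccmd
        "--allow_multiple_clients",        # permit two clients on one machine
    ]

# One instance per venue: MB-only work on one, AP-only on the other.
mb_client = boinc_client_cmd("/var/lib/boinc-mb", 31416)
ap_client = boinc_client_cmd("/var/lib/boinc-ap", 31417)
```

Each command list could then be handed to `subprocess.Popen` to start the pair.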
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
Why can't you just do "give me work for APv7, allow for other applications"? I know that'll basically fill your cache up with MB, but when APs are available and you do a work request, you should start getting APs basically right away. Of course, it'll be however long until your cache churns through the stockpile of MBs before you start actually crunching the APs.. but that's about as close to "set and forget" as you're going to get whilst preferring AP over MB, but still allowing both.
Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
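The cache behaviour described above ("allow other applications" fills the cache with MB, and APs slot in only as space opens up) can be illustrated with a toy work-fetch loop. This is a deliberately simplified model of my own, not BOINC's actual scheduler logic:

```python
def fetch_work(cache, capacity, ap_available, mb_available):
    """Toy work fetch: take AP tasks first, then top up with MB tasks."""
    space = capacity - len(cache)
    ap_taken = min(space, ap_available)
    mb_taken = min(space - ap_taken, mb_available)
    return cache + ["AP"] * ap_taken + ["MB"] * mb_taken

# No APs being split: a 100-task cache fills entirely with MB.
cache = fetch_work([], 100, 0, 500)
# A batch of APs appears; five finished tasks have opened space, and
# the next request grabs APs right away -- but only five of them fit,
# and they queue behind the MB stockpile already in the cache.
cache = fetch_work(cache[:-5], 100, 30, 500)
```

After the second fetch, `cache.count("AP")` is only 5: however many APs are on offer, you pick up just what the freed space allows, which is exactly the churn-through-the-stockpile effect described above.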
JaundicedEye · Joined: 14 Mar 12 · Posts: 5375 · Credit: 30,870,693 · RAC: 1
[quote]Why can't you just do "give me work for APv7, allow for other applications"? I know that'll basically fill your cache up with MB, but when APs are available and you do a work request, you should start getting APs basically right away. Of course, it'll be however-long until your cache churns through the stock-pile of MBs before you start actually crunching the APs.. but that's about as close to "set and forget" as you're going to get whilst preferring AP over MB, but still allowing both.[/quote]
That works fine when abundant AP's are being split, but in the 'lean times' we have seen in the last 6 months, if your cache is filled with MB's when a few AP's are generated, you miss out. I also have separate settings for 'home' versus 'work', where one calls for both and the other for AP's only. I fill my cache with the 'both' setting, then switch back to AP's only and let my cache run down to 70 or so, then refill. That method increases the odds of catching a few AP's now and then. Nonetheless, AP pickins' is slim.......
"Sour Grapes make a bitter Whine." <(0)>
Brent Norman · Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835
Exactly, dg. It's a guessing game of how many MB's to load while estimating when you think AP's might be coming out. I actually use 3 profiles, and keep flipping them. Well, one is for my other computer, so 2 for each computer.
JaundicedEye · Joined: 14 Mar 12 · Posts: 5375 · Credit: 30,870,693 · RAC: 1
Can anyone tell if validation is working? My pendings keep increasing.....
"Sour Grapes make a bitter Whine." <(0)>
Speedy · Joined: 26 Jun 04 · Posts: 1643 · Credit: 12,921,799 · RAC: 89
[quote]Can anyone tell if validation is working, my pendings keep increasing.....[/quote]
I believe the validators are working, as I have had over 4000 credits overnight.
Wiggo · Joined: 24 Jan 00 · Posts: 34744 · Credit: 261,360,520 · RAC: 489
[quote]Can anyone tell if validation is working, my pendings keep increasing.....[/quote]
Validation is most certainly working here. You may just have a lot of slow wing people atm, DG. ;-)
Cheers.
Wiggo · Joined: 24 Jan 00 · Posts: 34744 · Credit: 261,360,520 · RAC: 489
[quote]Since we are now back from the outage I checked to SSP to see how many APs were being split, I'm disappointed.[/quote]
I just had to let my 3570K have another feed of MB's. 100 of them just isn't enough, whereas 100 AP's will keep it occupied for 5 days. :-(
Cheers.
Wiggo · Joined: 24 Jan 00 · Posts: 34744 · Credit: 261,360,520 · RAC: 489
I run AP's only nowadays. When there are no AP's (which is often these days), it's idle time. I just like to keep my CPU's happy with AP's while letting my GPU's feast on MB's. Damn, most of those MB's I got for the 3570K were shorties. :-(
Cheers.
betreger · Joined: 29 Jun 99 · Posts: 11361 · Credit: 29,581,041 · RAC: 66
I'll run MBs when I have to, and Einstein if SETI has nothing; no cold iron here. Also, the cricket graphs are still missing. I shall crunch on.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.