Panic Mode On (95) Server Problems?

Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1637520 - Posted: 4 Feb 2015, 22:52:09 UTC - in response to Message 1637423.  
Last modified: 4 Feb 2015, 22:56:56 UTC

Leaving inr-304:
gigabitethernet8_34: : sslringsut1fes g0/4, SSL_P2P_169.229.0.216/29
vlan591: 169.229.0.217: SSL Firewall Transit net

Those last links indicate slightly under 800 Mbps, but there are a lot more hours now. I haven't recalculated the total size.

Taking that into consideration.. recalculating..

I'm going to call it ~775mbit for the past 28 hours.

775000000/8*3600*28/2^40= 8.881 TiB
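For anyone checking the arithmetic, here is a minimal Python sketch of the same conversion; the only assumption is a flat average rate over the whole window:

    # Back-of-the-envelope: average rate (Mbit/s) and duration (hours) -> TiB.
    def tib_transferred(rate_mbits, hours):
        bytes_total = rate_mbits * 1_000_000 / 8 * 3600 * hours
        return bytes_total / 2**40  # TiB is the binary unit, 2**40 bytes

    print(round(tib_transferred(775, 28), 3))  # -> 8.881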

edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1637520
Dena Wiltsie
Volunteer tester

Joined: 19 Apr 01
Posts: 1628
Credit: 24,230,968
RAC: 26
United States
Message 1637559 - Posted: 5 Feb 2015, 1:04:00 UTC - in response to Message 1637495.  
Last modified: 5 Feb 2015, 1:21:40 UTC

The reason is more basic than that. The selling number is the total storage ability of the drive. The number your system reports to you is what remains after sector headers and inter-record gaps have been removed. The reason for this is that soft-sectored drives allow you to change the number of sectors per track, which changes the usable area. Most of the time, smaller sectors are desired because sectors must be transferred intact: large sectors require more RAM to hold, so small sectors tie up less RAM. Now you probably know more about hard drives than you ever wanted to know.
By the way, a shift right by 10 is a divide by 1024.


I believe you are incorrect on that. That basic formatting is built into the size calculation. The drive capacity was always #sectors * 512. If you look at some drive labels, they give the number of sectors ("LBA: nnnnnnn"), so you can multiply it out yourself. NO manufacturer gives drive capacity any other way, to my knowledge. (I assume this applies for the newer 4K sector drives as well, but I don't know that).
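To make the quoted arithmetic concrete, here is a minimal Python sketch; the LBA count is a hypothetical label value, not a number from this thread:

    # Marketed capacity = LBA sector count x 512 bytes, quoted in decimal
    # units; the OS reports the same bytes in binary units.
    lba_sectors = 976_773_168              # hypothetical "500 GB" drive label
    capacity = lba_sectors * 512           # bytes
    print(capacity / 10**9)                # ~500.1, the GB figure on the box
    print(capacity / 2**30)                # ~465.8, the GiB figure the OS shows
    print((capacity >> 10) == capacity // 1024)  # shift right 10 = divide by 1024 -> True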

Jarvin, I wrote diagnostic software for hard drives and assisted in debugging hard drive controllers. I have spent hours looking at the data coming from the drives on a scope. I have also worked with hard-sectored and soft-sectored drives. Both require the overhead of inter-record gaps, headers to ensure the head is positioned correctly, and room for the Fire-code error correction (or, in older drives, a simpler error check). This overhead isn't data and isn't counted in the data count, so the drive is always larger than the size the software returns. The difference may be a small percentage, as in the drives I worked with that had a 640-byte sector size, but the waste goes up when the sector size is smaller.
Edit: Spare cylinders are counted in the manufacturer's numbers but not in usable storage.
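A small sketch of the overhead argument, under an assumed fixed per-sector overhead (the 40-byte figure is illustrative, not a number from this thread):

    # With a fixed amount of non-data overhead per sector (gaps, header, ECC),
    # smaller sectors leave a smaller fraction of the track for data.
    OVERHEAD = 40  # hypothetical bytes of gap + header + ECC per sector
    for sector in (256, 512, 640, 4096):
        usable = sector / (sector + OVERHEAD)
        print(f"{sector:4d}-byte sectors: {usable:.1%} of the track holds data")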
ID: 1637559
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1637764 - Posted: 5 Feb 2015, 15:29:53 UTC - in response to Message 1637520.  
Last modified: 5 Feb 2015, 15:30:00 UTC

Taking that into consideration.. recalculating..

I'm going to call it ~775mbit for the past 28 hours.

775000000/8*3600*28/2^40= 8.881 TiB

edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

And it just keeps going! It is an impressive amount of data. Looks like it might be around 17TB transferred thus far.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1637764
David S
Volunteer tester
Joined: 4 Oct 99
Posts: 18352
Credit: 27,761,924
RAC: 12
United States
Message 1637772 - Posted: 5 Feb 2015, 16:07:32 UTC - in response to Message 1637520.  
Last modified: 5 Feb 2015, 16:08:31 UTC

edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

Yup, the graph flat-topped at about 952Mbit/s for about 12 hours continuously yesterday. And it's been running not much below that for over 46 hours now.

In the words of Zaphod Beeblebrox, that's a lot. That's a lot a lot.

[edit]
Just how big is the AP database, anyway?
David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.

ID: 1637772
Aurora Borealis
Volunteer tester
Joined: 14 Jan 01
Posts: 3075
Credit: 5,631,463
RAC: 0
Canada
Message 1637779 - Posted: 5 Feb 2015, 16:28:29 UTC

I'm starting to doubt that all this data is coming from the SETI servers!!!
ID: 1637779
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1637792 - Posted: 5 Feb 2015, 16:52:14 UTC - in response to Message 1637772.  

edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

Yup, the graph flat-topped at about 952Mbit/s for about 12 hours continuously yesterday. And it's been running not much below that for over 46 hours now.

In the words of Zaphod Beeblebrox, that's a lot. That's a lot a lot.

[edit]
Just how big is the AP database, anyway?

It's big. Like really REALLY big!
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1637792
Julie
Volunteer moderator
Volunteer tester
Joined: 28 Oct 09
Posts: 34060
Credit: 18,883,157
RAC: 18
Belgium
Message 1637799 - Posted: 5 Feb 2015, 17:18:49 UTC - in response to Message 1637792.  

edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

Yup, the graph flat-topped at about 952Mbit/s for about 12 hours continuously yesterday. And it's been running not much below that for over 46 hours now.

In the words of Zaphod Beeblebrox, that's a lot. That's a lot a lot.

[edit]
Just how big is the AP database, anyway?

It's big. Like really REALLY big!


LOL:)) I have an AP task on Lisa's computer that's been running over 419 hours atm.
rOZZ
Music
Pictures
ID: 1637799
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1637905 - Posted: 5 Feb 2015, 20:55:44 UTC

Just how big is the AP database, anyway?

Not sure. Not too long ago, I noted that the AP DB is ~4.5 TB..at least.

If that was the case, then the data transfer should have been done in roughly 15 hours at ~725Mbit. So... I don't know what's going on now. Could be that the MB DB is being copied, too, and that one is quite likely to be well over 10 TB.
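Inverting the same arithmetic gives that estimate; a minimal sketch, assuming decimal TB and a flat average rate:

    # How long a transfer of a given size should take at a given average rate.
    def hours_to_transfer(size_tb, rate_mbits):
        bits = size_tb * 10**12 * 8
        return bits / (rate_mbits * 1_000_000) / 3600

    print(round(hours_to_transfer(4.5, 725), 1))  # -> 13.8, i.e. "roughly 15 hours"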

Seeing as our normal inr-211/6_17 link is also carrying the outbound traffic to us, we can't really use that to get a decent estimate of the extra payload.

Thanks to Joe's digging, inr-304/8_34 shows what is actually going up to the lab.

The massive transfer started at ~1100 PST on Tuesday. Just as an eyeballed estimate, I'm going to say the average for the past 50 hours appears to be ~700Mbit.

700000000/8*3600*50/2^40 = 14.32 TiB.
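The same conversion, checked in one line:

    print(round(700e6 / 8 * 3600 * 50 / 2**40, 2))  # 700 Mbit/s for 50 h -> 14.32 TiB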

If they are transferring backups of both DBs, then I would imagine it won't last too much longer (4.5 for AP, >10 for MB, and we're nearly at 14.5 now, so..we'll see).
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1637905
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1637922 - Posted: 5 Feb 2015, 21:42:32 UTC - in response to Message 1637905.  

Just how big is the AP database, anyway?

Not sure. Not too long ago, I noted that the AP DB is ~4.5 TB..at least.

If that was the case, then the data transfer should have been done in roughly 15 hours at ~725Mbit. So... I don't know what's going on now. Could be that the MB DB is being copied, too, and that one is quite likely to be well over 10 TB.

Seeing as our normal inr-211/6_17 link is also carrying the outbound traffic to us, we can't really use that to get a decent estimate of the extra payload.

Thanks to Joe's digging, inr-304/8_34 shows what is actually going up to the lab.

The massive transfer started at ~1100 PST on Tuesday. Just as an eyeballed estimate, I'm going to say the average for the past 50 hours appears to be ~700Mbit.

700000000/8*3600*50/2^40 = 14.32 TiB.

If they are transferring backups of both DBs, then I would imagine it won't last too much longer (4.5 for AP, >10 for MB, and we're nearly at 14.5 now, so..we'll see).

I just use the provided "Avg: 889.98 Mbits/sec" from the summary when doing my calculations.
Perhaps they are planning to repartition the storage array & are copying everything from it first.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1637922
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1637930 - Posted: 5 Feb 2015, 22:19:38 UTC - in response to Message 1637922.  

I just use the provided "Avg: 889.98 Mbits/sec" from the summary when doing my calculations.
Perhaps they are planning to repartition the storage array & are copying everything from it first.

The average that is listed is only for the most recent 24 hours. The graph itself is 38 hours wide. The transfer has been going for about 51 hours now. So yes, the most recent 24 hours is ~890Mbit, but if you look at the 38 hours that you can see, and remember what the hours before that looked like, the visual average appears to be more like 825 (for inr-211/6_17).
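That window arithmetic also pins down the earlier part of the graph; a one-step sketch using the figures above:

    # If the visible 38-hour window averages ~825 Mbit/s and the most recent
    # 24 hours average ~890 Mbit/s, the first 14 hours of the window averaged:
    avg_38h, avg_24h = 825, 890
    print(round((avg_38h * 38 - avg_24h * 24) / 14))  # -> 714 Mbit/s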

But remember, inr-211/6_17 includes all the MB WUs being sent out and all the scheduler contacts and so forth. The data transfer itself that shows on inr-304/8_34 is a bit less than what's on inr-211/6_17--about 100mbit less, actually, which seems to fit right, because before the transfer started, for 3-4 days, we were hovering right around 110mbit on 6_17.

Point is.. so far, roughly 15 TiB has been transferred up to the lab.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1637930
Brent Norman
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1637938 - Posted: 5 Feb 2015, 22:39:20 UTC

They are planning on beaming it all back up into space with a note attached ...

"Can you please sort this mess out for us?"

LMAO
ID: 1637938
Julie
Volunteer moderator
Volunteer tester
Joined: 28 Oct 09
Posts: 34060
Credit: 18,883,157
RAC: 18
Belgium
Message 1637950 - Posted: 5 Feb 2015, 22:59:24 UTC

AP servers are doing a great job imo, thx Cosmic_Ocean!
rOZZ
Music
Pictures
ID: 1637950
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1637967 - Posted: 6 Feb 2015, 0:01:54 UTC - in response to Message 1637930.  

I just use the provided "Avg: 889.98 Mbits/sec" from the summary when doing my calculations.
Perhaps they are planning to repartition the storage array & are copying everything from it first

The average that is listed is only for the most recent 24 hours. The graph itself is 38 hours wide. The transfer has been going for about 51 hours now. So yes, the most recent 24 hours is ~890Mbit, but if you look at the 38 hours that you can see, and remember what the hours before that looked like, the visual average appears to be more like 825 (for inr-211/6_17).

But remember, inr-211/6_17 includes all the MB WUs being sent out and all the scheduler contacts and so forth. The data transfer itself that shows on inr-304/8_34 is a bit less than what's on inr-211/6_17--about 100mbit less, actually, which seems to fit right, because before the transfer started, for 3-4 days, we were hovering right around 110mbit on 6_17.

Point is.. so far, roughly 15 TiB has been transferred up to the lab.

I thought the default average was calculated over the entire time shown, like MRTG & RRDtool do. Perhaps Cricket does it differently; I am unsure.
Yes, it is quite a lot of data, as you stated. Making sure to look at the correct graph gives much more accurate calculations.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1637967
betreger
Joined: 29 Jun 99
Posts: 11408
Credit: 29,581,041
RAC: 66
United States
Message 1637968 - Posted: 6 Feb 2015, 0:02:23 UTC

In addition to the AP problem, the pages from this forum are loading slowly; the panic continues.
ID: 1637968
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1638110 - Posted: 6 Feb 2015, 9:40:29 UTC

And the transfer is done, whatever it was.

Interestingly, it finished at 2300 (11 pm) after starting at 11 am a few days ago, making it a 60-hour transfer. It ramped down a bit near the end.

Using these rough figures (according to the inr-304/8_34 link):

~700mbit for 36 hours
~600mbit for 18 hours
~400mbit for 6 hours

The total transferred comes out to:

(700000000/8*3600*36/2^40)+
(600000000/8*3600*18/2^40)+
(400000000/8*3600*6/2^40) = 15.716 TiB
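The same piecewise total as a short sketch, using the eyeballed (Mbit/s, hours) segments above:

    # Sum each segment's bytes, then convert to TiB.
    segments = [(700, 36), (600, 18), (400, 6)]
    total_bytes = sum(rate * 1_000_000 / 8 * 3600 * hours for rate, hours in segments)
    print(round(total_bytes / 2**40, 3))  # -> 15.716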

So: the AP database was ~4.5 TB not too long ago, and since new work has finally been assimilated into it, it has to have grown a little; the MB database is presumed (with no actual evidence beyond what was known about it five-plus years ago) to be at least 10 TB. Taken together, that suggests a copy of both was transferred in the past 60 hours.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1638110
David S
Volunteer tester
Joined: 4 Oct 99
Posts: 18352
Credit: 27,761,924
RAC: 12
United States
Message 1638209 - Posted: 6 Feb 2015, 15:41:45 UTC - in response to Message 1638110.  
Last modified: 6 Feb 2015, 15:42:42 UTC

And the transfer is done, whatever it was.

Interestingly, it finished at 2300 (11 pm) after starting at 11 am a few days ago, making it a 60-hour transfer. It ramped down a bit near the end.

Using these rough figures (according to the inr-304/8_34 link):

~700mbit for 36 hours
~600mbit for 18 hours
~400mbit for 6 hours

And yet the current rate is still ~275 Mbit, which is at least ~2.5 times normal operations with AP not running.
David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.

ID: 1638209
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1638237 - Posted: 6 Feb 2015, 17:10:32 UTC

I currently have rigs experiencing difficulty getting through to the servers with reporting/work requests.
Either HTTP service unavailable or simply unable to connect to server.
Hopefully it is a transient problem, and will right itself as it has of late.
Otherwise, it IS that time of day when often the boyz in da lab have had their first cuppa coffee and are starting to twiddle with things....
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1638237
JohnDK
Volunteer tester
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1638246 - Posted: 6 Feb 2015, 17:45:49 UTC - in response to Message 1638237.  

I currently have rigs experiencing difficulty getting through to the servers with reporting/work requests.
Either HTTP service unavailable or simply unable to connect to server.
Hopefully it is a transient problem, and will right itself as it has of late.
Otherwise, it IS that time of day when often the boyz in da lab have had their first cuppa coffee and are starting to twiddle with things....

I also had the problem yesterday.
ID: 1638246
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1638247 - Posted: 6 Feb 2015, 17:47:49 UTC - in response to Message 1638246.  

I currently have rigs experiencing difficulty getting through to the servers with reporting/work requests.
Either HTTP service unavailable or simply unable to connect to server.
Hopefully it is a transient problem, and will right itself as it has of late.
Otherwise, it IS that time of day when often the boyz in da lab have had their first cuppa coffee and are starting to twiddle with things....

I also had the problem yesterday.

Still going on here...
Currently have 4 out of 8 rigs that are in backoff due to non-connects.
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1638247
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1638260 - Posted: 6 Feb 2015, 18:11:52 UTC - in response to Message 1638209.  


And yet the current rate is still ~275 Mbit, which is at least ~2.5 times normal operations with AP not running.

It would appear, according to the crickets, that bandwidth may be falling back to 'normal' levels for MB-only operation....
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1638260