Panic Mode On (95) Server Problems?
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
Leaving inr-304: Taking that into consideration.. recalculating.. I'm going to call it ~775 Mbit/s for the past 28 hours. 775000000/8*3600*28/2^40 = 8.881 TiB

edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
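For anyone who wants to sanity-check that arithmetic, here's a minimal sketch in plain Python (nothing SETI-specific; the rate and duration are just the estimates from the post above):

[code]
# Convert a sustained link rate (Mbit/s) over a duration (hours) into TiB.
def tib_transferred(rate_mbit_s, hours):
    bits = rate_mbit_s * 1_000_000 * 3600 * hours  # total bits on the wire
    return bits / 8 / 2**40                        # bytes, then TiB (2^40 bytes)

print(round(tib_transferred(775, 28), 3))  # 8.881, matching the figure above
[/code]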
Dena Wiltsie · Joined: 19 Apr 01 · Posts: 1628 · Credit: 24,230,968 · RAC: 26
The reason is more basic than that. The selling number is the total storage capacity of the drive. The number your system reports to you is what remains after sector headers and inter-record gaps have been removed. This is because soft-sectored drives let you change the number of sectors per track, which changes the usable area. Most of the time smaller sectors are preferred because sectors must be transferred intact: large sectors take more RAM to hold, so small sectors tie up less RAM. Now you probably know more about hard drives than you ever wanted to know.

Jarvin, I wrote diagnostic software for hard drives and assisted in debugging hard drive controllers. I have spent hours looking at the data coming from the drives on a scope, and I have worked with both hard-sectored and soft-sectored drives. Both require the overhead of inter-record gaps, headers to ensure the head is positioned correctly, and room for the fire-code error correction (or, in older drives, an error check). This overhead isn't data and isn't counted in the data count, so the drive is always larger than the size the software reports. The difference may be a small percentage, as in the drives I worked with that had a 640-byte sector size, but the waste goes up as the sector size gets smaller.

Edit: Spare cylinders are counted in the manufacturer's numbers but not in usable storage.
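To put a rough number on that overhead argument, here's an illustrative sketch; the per-sector overhead value is an assumption chosen for demonstration, not a spec from any real drive:

[code]
# Fraction of raw capacity left for user data when every sector pays a fixed
# overhead (header + inter-record gap + ECC/check bytes). The 60-byte figure
# is hypothetical, chosen only to show the trend.
OVERHEAD_PER_SECTOR = 60

def usable_fraction(sector_size):
    return sector_size / (sector_size + OVERHEAD_PER_SECTOR)

for size in (256, 512, 640, 4096):
    print(f"{size}-byte sectors: {usable_fraction(size):.1%} usable")
# Smaller sectors mean more sectors per track, so more total overhead,
# which is why the waste goes up as the sector size shrinks.
[/code]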
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
Taking that into consideration.. recalculating..

And it just keeps going! It is an impressive amount of data. Looks like it might be around 17 TB transferred thus far.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
David S · Joined: 4 Oct 99 · Posts: 18352 · Credit: 27,761,924 · RAC: 12
edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

Yup, the graph flat-topped at about 952 Mbit/s for about 12 hours continuously yesterday. And it's been running not much below that for over 46 hours now. In the words of Zaphod Beeblebrox, that's a lot. That's a lot a lot.

[edit] Just how big is the AP database, anyway?

David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.
Aurora Borealis · Joined: 14 Jan 01 · Posts: 3075 · Credit: 5,631,463 · RAC: 0
I'm starting to doubt that all this data is coming from the SETI servers!!! |
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

It's big. Like really REALLY big!

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Julie · Joined: 28 Oct 09 · Posts: 34060 · Credit: 18,883,157 · RAC: 18
edit: also, looking at the normal 6_17 port, it looks like the ceiling has been hit for the past few hours on gigabit.

LOL:)) I have an AP task on Lisa's computer that's been running over 419 hours atm.

rOZZ Music Pictures
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
Just how big is the AP database, anyway?

Not sure. Not too long ago, I noted that the AP DB is ~4.5 TB..at least. If that was the case, then the data transfer should have been done in roughly 15 hours at ~725 Mbit/s. So... I don't know what's going on now. Could be that the MB DB is being copied too, and that one is quite likely to be well over 10 TB.

Seeing as our normal inr-211/6_17 link is also carrying the outbound traffic to us, we can't really use that to get a decent estimate of the extra payload. Thanks to Joe's digging, inr-304/8_34 shows what is actually going up to the lab. The massive transfer started at ~1100 PST on Tuesday. Just as an eyeballed estimate, I'm going to say the average for the past 50 hours appears to be ~700 Mbit/s. 700000000/8*3600*50/2^40 = 14.32 TiB

If they are transferring backups of both DBs, then I would imagine it won't last too much longer (4.5 for AP, >10 for MB, and we're nearly at 14.5 now, so..we'll see).

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
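As a rough cross-check of the "15 hours at ~725 Mbit/s" figure (assuming the ~4.5 TB is decimal terabytes, which is how the size was quoted):

[code]
# Hours needed to move a given size (decimal TB) at a sustained rate (Mbit/s).
def transfer_hours(size_tb, rate_mbit_s):
    bits = size_tb * 1e12 * 8                # TB -> bytes -> bits
    return bits / (rate_mbit_s * 1e6) / 3600

print(round(transfer_hours(4.5, 725), 1))         # 13.8 -> "roughly 15 hours"
print(round(transfer_hours(4.5 + 10.0, 725), 1))  # 44.4 -> both DBs together
[/code]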
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
Just how big is the AP database, anyway?

I just use the provided "Avg: 889.98 Mbits/sec" from the summary when doing my calculations. Perhaps they are planning to repartition the storage array & are copying everything from it first.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
I just use the provided "Avg: 889.98 Mbits/sec" from the summary when doing my calculations.

The average that is listed is only for the most recent 24 hours. The graph itself is 38 hours wide. The transfer has been going for about 51 hours now. So yes, the most recent 24 hours is ~890 Mbit/s, but if you look at the 38 hours that you can see, and remember what the hours before that looked like, the visual average appears to be more like 825 (for inr-211/6_17).

But remember, inr-211/6_17 includes all the MB WUs being sent out and all the scheduler contacts and so forth. The data transfer itself that shows on inr-304/8_34 is a bit less than what's on inr-211/6_17--about 100 Mbit/s less, actually, which seems to fit right, because before the transfer started, for 3-4 days, we were hovering right around 110 Mbit/s on 6_17.

Point is.. so far, roughly 15 TiB has been transferred up to the lab.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
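The window-size point is easy to demonstrate. The hourly rates below are invented, not read off the real graphs; they just show how a 24-hour average can run well above the whole-transfer average:

[code]
# Hypothetical hourly rates (Mbit/s) for a 51-hour transfer: slower start,
# faster tail. Purely illustrative.
rates = [700] * 27 + [890] * 24

avg_24h  = sum(rates[-24:]) / 24    # what the summary line would report
avg_full = sum(rates) / len(rates)  # what matters for the running total

print(avg_24h, round(avg_full, 1))  # 890.0 vs 789.4 -- same transfer
[/code]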
Brent Norman · Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835
They are planning on beaming it all back up into space with a note attached ... "Can you please sort this mess out for us?" LMAO |
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
I just use the provided "Avg: 889.98 Mbits/sec" from the summary when doing my calculations.

I thought the default average was being calculated for the entire time given, like MRTG & RRDtool do. Perhaps Cricket does it differently. I am unsure. Yes, it is quite a lot of data, as you stated. Making sure to look at the correct graph will give much more accurate calculations.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
betreger · Joined: 29 Jun 99 · Posts: 11416 · Credit: 29,581,041 · RAC: 66
In addition to the AP problem, the pages from this forum are loading slowly; the panic continues.
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
And the transfer is done, whatever it was. Interestingly, it finished at 2300 (11pm, after starting at 11am a few days ago), making it a 60-hour transfer. It ramped down a bit near the end. Using these rough figures (according to the inr-304/8_34 link):

~700 Mbit/s for 36 hours
~600 Mbit/s for 18 hours
~400 Mbit/s for 6 hours

The total transferred comes out to:

(700000000/8*3600*36/2^40) + (600000000/8*3600*18/2^40) + (400000000/8*3600*6/2^40) = 15.716 TiB

So: the AP database was ~4.5 TB not too long ago, and new work has finally been assimilated into it since then, so it has to have grown a little. The MB database is presumed (with no actual evidence, beyond what was known about it five+ years ago) to be at least 10 TB. That seems to suggest that a copy of both was transferred in the past 60 hours.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
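The same piecewise sum as a quick script, with the rates and durations copied from the eyeballed estimate above:

[code]
# Total TiB for a transfer whose rate changed over time: sum the
# rate * duration segments, then convert bits -> bytes -> TiB.
segments = [(700, 36), (600, 18), (400, 6)]  # (Mbit/s, hours)

total_bits = sum(rate * 1e6 * 3600 * hours for rate, hours in segments)
print(round(total_bits / 8 / 2**40, 3))  # 15.716 TiB over the 60 hours
[/code]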
David S · Joined: 4 Oct 99 · Posts: 18352 · Credit: 27,761,924 · RAC: 12
And the transfer is done, whatever it was.

And yet the current rate is still ~275 Mbit/s, which is at least ~2.5 times normal operations with AP not running.

David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
I currently have rigs experiencing difficulty getting through to the servers with reporting/work requests. Either HTTP service unavailable or simply unable to connect to server. Hopefully it is a transient problem and will right itself, as it has of late. Otherwise, it IS that time of day when the boyz in da lab have often had their first cuppa coffee and are starting to twiddle with things....

"Time is simply the mechanism that keeps everything from happening all at once."
JohnDK · Joined: 28 May 00 · Posts: 1222 · Credit: 451,243,443 · RAC: 1,127
I currently have rigs experiencing difficulty getting through to the servers with reporting/work requests.

I also had the problem yesterday.
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
I currently have rigs experiencing difficulty getting through to the servers with reporting/work requests.

Still going on here... Currently have 4 out of 8 rigs that are in backoff due to non-connects.

"Time is simply the mechanism that keeps everything from happening all at once."
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
It would appear, according to the crickets, that bandwidth may be falling back to 'normal' levels for MB-only operation....

"Time is simply the mechanism that keeps everything from happening all at once."