Slow network connection to Green Bank

Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10097
Credit: 133,380,209
RAC: 84,964
Australia
Message 1917609 - Posted: 8 Feb 2018, 4:30:03 UTC - in response to Message 1917538.  

Related to Green Bank ... What did you find with the slow splitting and deletion of GB data?
It appears the system has returned to normal speeds.

The returned-per-hour numbers are way down, by about 30,000.
When they get back up to 130k+, we'll see how well things hold up.
Grant
Darwin NT
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 3514
Credit: 75,050,268
RAC: 109,556
Australia
Message 1917651 - Posted: 8 Feb 2018, 12:52:41 UTC - in response to Message 1917544.  

Qantas flies to Los Angeles with big planes. Maybe some space can be found aboard for our disks; they are not too heavy.
Tullio


. . If there is a problem with the cost of data transfers, I wonder how the cost of air freight would be received ...

. . I would expect it would be way less than $100 per terabyte but expensive nonetheless.

Stephen

. .
Mikenstein
Joined: 6 Sep 15
Posts: 4
Credit: 5,391,635
RAC: 7,378
United States
Message 1917657 - Posted: 8 Feb 2018, 13:20:40 UTC - in response to Message 1915973.  

Hey now, be nice to the network people! I was Network+ Certified before arthritis relegated me to early retirement! It's not the easiest job, and it's tough to get into! Besides, it's probably "THE DARK WEB" at work! We just need a Jedi to combat it! LOL!
Eric Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 3 Apr 99
Posts: 1300
Credit: 25,022,908
RAC: 29,664
United States
Message 1917689 - Posted: 8 Feb 2018, 18:44:44 UTC - in response to Message 1917538.  

Related to Green Bank ... What did you find with the slow splitting and deletion of GB data?
It appears the system has returned to normal speeds.


We found no obvious reason why it occurred, nor why it stopped. It could have been related to slow inserts or queries due to the size of the result and workunit tables in the BOINC (MySQL) database. I changed some db_purge settings to increase delete speed, but I don't think that solved the problem.
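(For context: db_purge is the BOINC server daemon that archives completed results and workunits to flat files and then deletes them from the database. Below is a minimal, purely illustrative sketch of invoking it with more aggressive settings; the flag values and project path are assumptions, since the actual settings changed aren't stated here.)

```python
# Illustrative only: invoking BOINC's db_purge daemon by hand with more
# aggressive settings. Flag values and the project path are assumptions,
# not the actual change described above.
import subprocess

subprocess.run(
    [
        "bin/db_purge",
        "--min_age_days", "1",  # assumed: archive/delete finished results sooner
        "--gzip",               # assumed: compress the archive files it writes
        "--one_pass",           # do a single pass and exit, rather than loop
    ],
    cwd="/home/boincadm/projects/sah",  # hypothetical project directory
    check=True,
)
```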

Fingers crossed it doesn't happen again.
@SETIEric

Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 3514
Credit: 75,050,268
RAC: 109,556
Australia
Message 1917733 - Posted: 8 Feb 2018, 22:41:27 UTC - in response to Message 1916037.  



. . one of those days :)

Stephen

:)
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1045
Credit: 9,038,037
RAC: 361
New Zealand
Message 1917795 - Posted: 9 Feb 2018, 2:15:22 UTC - in response to Message 1917689.  

I hope everything works out as well, Eric. Thanks for the update.

Out of curiosity: when everything is working as it should between Green Bank and Berkeley, are all of the tapes shown on the server status page transferred as one file, or are they transferred individually (but all downloaded at the same time)?
Thanks in advance. I appreciate you taking the time to answer.
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 10097
Credit: 133,380,209
RAC: 84,964
Australia
Message 1917814 - Posted: 9 Feb 2018, 4:29:45 UTC - in response to Message 1917689.  

We found no obvious reason why it occurred, nor why it stopped. It could have been related to slow inserts or queries due to the size of the result and workunit tables in the BOINC (MySQL) database. I changed some db_purge settings to increase delete speed, but I don't think that solved the problem.

Fingers crossed it doesn't happen again.

AP & MB WU-awaiting-deletion are both climbing sharply, but at this stage they haven't impacted the splitter output: Received-last-hour is still only around 115k, not the 130k+ it was with the much shorter-running GBT WUs.
Grant
Darwin NT
Wildkats66
Joined: 25 Mar 12
Posts: 15
Credit: 5,377,788
RAC: 3,993
United States
Message 1917826 - Posted: 9 Feb 2018, 5:39:50 UTC

So, based on the problem with Green Bank, my Dell will not download data but my iMac will. Is this normal?

Thanks for advising all of us.
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 4915
Credit: 315,747,780
RAC: 708,809
United States
Message 1917827 - Posted: 9 Feb 2018, 5:43:06 UTC - in response to Message 1917826.  

Not enough information provided. Post the first 30 lines in the Event Log right after startup to see what BOINC reports.
Seti@Home classic workunits: 20,676, CPU time: 74,226 hours
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 10097
Credit: 133,380,209
RAC: 84,964
Australia
Message 1917836 - Posted: 9 Feb 2018, 6:28:01 UTC - in response to Message 1917827.  

Not enough information provided. Post the first 30 lines in the Event Log right after startup to see what BOINC reports.

In Number Crunching, not here...
Grant
Darwin NT
John Robert Mallernee
Volunteer tester
Joined: 4 Jul 06
Posts: 27
Credit: 1,566,058
RAC: 1,523
United States
Message 1917876 - Posted: 9 Feb 2018, 14:51:45 UTC

I don't see any problems with S.E.T.I. or B.O.I.N.C. on my computer.

Everything appears to be running perfectly.

But, I'm just a volunteer, and not a scientist, so I don't even know what it is that I'm looking at.
John Robert Mallernee
Ashley Valley Shadows
Vernal, Utah 84078
Eric Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 3 Apr 99
Posts: 1300
Credit: 25,022,908
RAC: 29,664
United States
Message 1917895 - Posted: 9 Feb 2018, 17:49:40 UTC - in response to Message 1917795.  

I hope everything works out as well, Eric. Thanks for the update.

Out of curiosity: when everything is working as it should between Green Bank and Berkeley, are all of the tapes shown on the server status page transferred as one file, or are they transferred individually (but all downloaded at the same time)?
Thanks in advance. I appreciate you taking the time to answer.


I believe Matt uses multiple parallel rsync connections, one from each of the compute nodes on the Green Bank side. So each file goes over a single connection, but separate files may go over separate connections. Each of the names listed on the status page actually consists of multiple files, each no larger than 4GB.
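(To make that pattern concrete, here is a rough sketch, not the actual transfer scripts: one rsync stream per compute node, launched over ssh. The node names, paths, destination, and bandwidth cap are all invented for illustration.)

```python
# Sketch of the parallel-transfer pattern described above: each compute
# node pushes its own files over its own rsync connection, so a single
# file rides one connection while several files move in parallel.
# Node names, paths, and the destination are hypothetical.
import subprocess

NODES = ["blc00", "blc01", "blc02"]                 # invented compute-node names
DEST = "user@transfer.berkeley.example:/incoming/"  # invented destination

procs = [
    subprocess.Popen(
        ["ssh", node,
         f"rsync --partial --bwlimit=10000 /datax/*.raw {DEST}"]
    )
    for node in NODES
]
for proc in procs:
    proc.wait()  # wait for every node's transfer to finish
```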
@SETIEric

Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1045
Credit: 9,038,037
RAC: 361
New Zealand
Message 1917911 - Posted: 9 Feb 2018, 20:28:55 UTC - in response to Message 1917895.  

I believe Matt uses multiple parallel rsync connections, one from each of the compute nodes on the Green Bank side. So each file goes over a single connection, but separate files may go over separate connections. Each of the names listed on the status page actually consists of multiple files, each no larger than 4GB.

Thank you for the answer. There is lots of compression, which would certainly save bandwidth when transferring files.
Gary Charpentier
Volunteer tester
Joined: 25 Dec 00
Posts: 23120
Credit: 38,650,125
RAC: 22,085
United States
Message 1917939 - Posted: 9 Feb 2018, 22:49:21 UTC - in response to Message 1917911.  

Thank you for the answer. There is lots of compression, which would certainly save bandwidth when transferring files.

Maybe, maybe not. Random cosmic noise won't compress much.
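(A quick way to see this for yourself, as a sketch using only Python's standard library: random bytes, standing in for raw telescope noise, gain nothing from a lossless compressor, while repetitive data shrinks dramatically.)

```python
# Demonstration of why noise-like data barely compresses: zlib finds no
# redundancy to exploit in random bytes, but squeezes repetitive data.
import os
import zlib

noise = os.urandom(1_000_000)      # random bytes, a stand-in for raw noise
repetitive = b"\x00" * 1_000_000   # maximally redundant data, for contrast

print(len(zlib.compress(noise)) / len(noise))            # ~1.0: no savings
print(len(zlib.compress(repetitive)) / len(repetitive))  # ~0.001: huge savings
```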
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1045
Credit: 9,038,037
RAC: 361
New Zealand
Message 1917955 - Posted: 9 Feb 2018, 23:45:03 UTC - in response to Message 1917939.  

Maybe, maybe not. Random cosmic noise won't compress much.

Good point, Gary. I originally misinterpreted what Eric said: each name listed on the server status page consists of multiple files. I had read it as each tape being compressed down to no more than 4 GB from its original size of roughly 51-52 GB.
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 10097
Credit: 133,380,209
RAC: 84,964
Australia
Message 1918180 - Posted: 11 Feb 2018, 1:17:11 UTC - in response to Message 1917689.  

We found no obvious reason why it occurred, nor why it stopped. It could have been related to slow inserts or queries due to the size of the result and workunit tables in the BOINC (MySQL) database. I changed some db_purge settings to increase delete speed, but I don't think that solved the problem.

Fingers crossed it doesn't happen again.

Looking at the Haveland graphs, the I/O battle between Results returned-per-hour, WU deletion & Splitter output continues.
Grant
Darwin NT
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 6946
Credit: 394,831,406
RAC: 143,496
Panama
Message 1918377 - Posted: 11 Feb 2018, 23:01:17 UTC

We see a lot of new tapes; was the problem finally fixed?
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 10097
Credit: 133,380,209
RAC: 84,964
Australia
Message 1918379 - Posted: 11 Feb 2018, 23:05:52 UTC - in response to Message 1918377.  

We see a lot of new tapes; was the problem finally fixed?

I suspect it's Matt & Eric burning the weekend midnight oil to keep the data flowing.
Grant
Darwin NT
Wiggo "Socialist"
Joined: 24 Jan 00
Posts: 14665
Credit: 189,559,578
RAC: 70,578
Australia
Message 1918380 - Posted: 11 Feb 2018, 23:07:53 UTC

We see a lot of new tapes; was the problem finally fixed?

Message 1917689. ;-)

Cheers.
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 10097
Credit: 133,380,209
RAC: 84,964
Australia
Message 1918383 - Posted: 11 Feb 2018, 23:17:18 UTC - in response to Message 1918380.  

We see a lot of new tapes; was the problem finally fixed?

Message 1917689. ;-)

Cheers.

That's about the slow splitter/file deletion issue, which is still occurring.
Grant
Darwin NT