Tenaya (Feb 24 2009)

OzzFan, Volunteer tester
Joined: 9 Apr 02 · Posts: 15691 · Credit: 84,761,841 · RAC: 28 · United States
Message 870537 - Posted: 28 Feb 2009, 18:50:21 UTC - in response to Message 870485.  

> Aloha!
> *shrugs* mo better than spending 2-3 days a week with idle machines reading:
> "Scheduler request completed: got 0 new tasks" ;)


Better for whom? For those not in the top 25 who are trying to get work (the server doesn't prioritize by position anyway, so those in the top 25 are just as likely to have trouble getting work as you are), or for those in the top 25 who would have to load their workunits from DVD and pay to mail the results back (since we know SETI@home doesn't have the money for pre-paid envelopes for these participants)?

I would kindly suggest that if you don't like reading those messages, simply don't view your Messages tab. It's not an error at all; it's simply letting you know that the client didn't get any work and will try again later. That's the way BOINC was designed to work.
ID: 870537
PhonAcq
Joined: 14 Apr 01 · Posts: 1656 · Credit: 30,658,217 · RAC: 1 · United States
Message 870543 - Posted: 28 Feb 2009, 18:57:58 UTC

Isn't the real problem here that BOINC is not scalable? As far as I understand it, the design does not permit the creation of secondary BOINC nodes that would redistribute WUs received from command central. (I could be wrong, because I don't keep up with the secondary applications out there, but I know SETI classic could do that.) So the logjam is ultimately the bandwidth to the mothership (Berkeley), and no one should be surprised: more users means less available bandwidth per user, for a fixed WU size. Of course, one could increase the WU size (the length of required processing time), but that just shifts the logjam out a ways.
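As a back-of-the-envelope illustration of that scaling argument, a short Python sketch; the link capacity and WU size below are assumed round numbers, not SETI@home's measured figures:

    # Per-host download budget when a fixed pipe is shared evenly.
    # All constants are illustrative assumptions.
    LINK_MBIT = 100          # assumed lab uplink, megabits/second
    WU_MB = 0.375            # assumed workunit size, megabytes
    SECONDS_PER_DAY = 86_400

    def wus_per_host_per_day(active_hosts: int) -> float:
        """Workunits per day each host can fetch from the shared link."""
        link_mb_per_day = LINK_MBIT / 8 * SECONDS_PER_DAY   # megabytes/day
        return link_mb_per_day / WU_MB / active_hosts

    for hosts in (300_000, 3_000_000):
        print(f"{hosts:>9,} hosts -> {wus_per_host_per_day(hosts):5.2f} WU/host/day")

Ten times the hosts means a tenth of the work per host, however the WUs are packaged.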
ID: 870543
RandyC
Joined: 20 Oct 99 · Posts: 714 · Credit: 1,704,345 · RAC: 0 · United States
Message 870642 - Posted: 28 Feb 2009, 23:01:00 UTC - in response to Message 870543.  

Einstein has at least one mirror site for downloads, so it CAN be done.
ID: 870642
PhonAcq
Joined: 14 Apr 01 · Posts: 1656 · Credit: 30,658,217 · RAC: 1 · United States
Message 870692 - Posted: 1 Mar 2009, 0:56:26 UTC - in response to Message 870642.  

I forgot about mirroring in my question. However, does mirroring actually reduce bandwidth at the originating hub? All the mirrors have to be filled and synced, which consumes bandwidth, and since these WUs are use-once-and-forget, I wonder whether mirroring helps. (I mean: if SETI were to scale up to, say, 3M active hosts from our present 300K, and if splitting and sourcing the data weren't an issue, would mirroring enable the scale-up or not?)
ID: 870692
RandyC
Joined: 20 Oct 99 · Posts: 714 · Credit: 1,704,345 · RAC: 0 · United States
Message 870779 - Posted: 1 Mar 2009, 4:32:34 UTC - in response to Message 870692.  

> I forgot about mirroring in my question. However, does mirroring actually reduce bandwidth at the originating hub? All the mirrors have to be filled and synced, which consumes bandwidth, and since these WUs are use-once-and-forget, I wonder whether mirroring helps. (I mean: if SETI were to scale up to, say, 3M active hosts from our present 300K, and if splitting and sourcing the data weren't an issue, would mirroring enable the scale-up or not?)


Bandwidth would be reduced by a minimum of 50% at the source: the WU is downloaded once to the mirror, and all copies are transmitted from there. The assumption is that the mirror has greater bandwidth than the source site. Two copies are downloaded from the mirror to the required hosts, for an initial 50% bandwidth reduction at the source; any reissues are additional savings.

This is basically guido.man's idea below.
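A minimal sketch of that arithmetic, assuming the mirror's own bandwidth is ample and counting only copies that leave the source site (illustrative, not measured):

    # Copies of one WU file that must leave the source site, with and
    # without a mirror in front of it. Assumes the mirror fans out all
    # host downloads and only needs the file from the source once.
    def source_copies(replication: int, use_mirror: bool) -> int:
        return 1 if use_mirror else replication

    for r in (2, 3):   # initial 2 copies, or 2 plus one reissue
        direct = source_copies(r, use_mirror=False)
        mirrored = source_copies(r, use_mirror=True)
        print(f"replication {r}: {direct} -> {mirrored} copies at source, "
              f"{1 - mirrored / direct:.0%} saved")

Output: 50% saved at replication 2, 67% at 3, matching the "minimum of 50%, reissues are extra" figure above.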
ID: 870779
OzzFan, Volunteer tester
Joined: 9 Apr 02 · Posts: 15691 · Credit: 84,761,841 · RAC: 28 · United States
Message 870783 - Posted: 1 Mar 2009, 4:40:20 UTC - in response to Message 870779.  

> Bandwidth would be reduced by a minimum of 50% at the source: the WU is downloaded once to the mirror, and all copies are transmitted from there. The assumption is that the mirror has greater bandwidth than the source site. Two copies are downloaded from the mirror to the required hosts, for an initial 50% bandwidth reduction at the source; any reissues are additional savings.
>
> This is basically guido.man's idea below.


Wouldn't the mirrors have to eventually send/receive data back to the main host (not necessary for static files, but workunits would be a different beast)?

If you mirror the workunits to other servers, they still have to get data from SETI@Home and send results back to turn them in. That moves the problem around, but I don't think it actually solves it.
ID: 870783
John McLeod VII, Volunteer developer, Volunteer tester
Joined: 15 Jul 99 · Posts: 24806 · Credit: 790,712 · RAC: 0 · United States
Message 870788 - Posted: 1 Mar 2009, 4:51:11 UTC - in response to Message 870783.  

> Wouldn't the mirrors have to eventually send/receive data back to the main host (not necessary for static files, but workunits would be a different beast)?
>
> If you mirror the workunits to other servers, they still have to get data from SETI@Home and send results back to turn them in. That moves the problem around, but I don't think it actually solves it.

It does shave some of the bandwidth off. If the WU were sent out and the canonical result sent back, the bandwidth would be reduced by about half for those WUs.

Mirroring the executables to be sent out can shave an enormous amount of bandwidth as the mirror sites only need to download one copy in order to send out a hundred thousand.
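The executable case is where the ratio becomes dramatic. A tiny sketch with assumed numbers (the app size and host count are illustrative, not SETI@home's actual figures):

    # A new science app fetched by many hosts: served directly, the lab
    # sends every copy; mirrored, the lab sends exactly one copy.
    APP_MB = 5                 # assumed application download size, MB
    HOSTS = 100_000            # hosts fetching the new app

    direct_gb = APP_MB * HOSTS / 1024
    mirrored_gb = APP_MB / 1024
    print(f"direct from lab: {direct_gb:,.0f} GB; via mirror: {mirrored_gb:.3f} GB")

Roughly 488 GB versus 5 MB leaving the lab, under these assumptions.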


BOINC WIKI
ID: 870788
[AF>France>Bourgogne]Patouchon
Joined: 25 Aug 01 · Posts: 7 · Credit: 3,461,672 · RAC: 4 · France
Message 870865 - Posted: 1 Mar 2009, 8:53:36 UTC - in response to Message 870428.  



> Hello,
>
> Good that your problem is solved.
>
> For next time, if you have a problem, have a look in the 'Number Crunching' forum:
> http://setiathome.berkeley.edu/forum_forum.php?id=10
>
> There are more people around there who could help you. :-)
>
> Nice greetings from Germany! :-)
Danke schön (thank you), I will have a look next time ;)

seti1 was pretty good; will seti2 be better?
ID: 870865
RandyC
Joined: 20 Oct 99 · Posts: 714 · Credit: 1,704,345 · RAC: 0 · United States
Message 870919 - Posted: 1 Mar 2009, 13:07:01 UTC - in response to Message 870783.  

> Wouldn't the mirrors have to eventually send/receive data back to the main host (not necessary for static files, but workunits would be a different beast)?
>
> If you mirror the workunits to other servers, they still have to get data from SETI@Home and send results back to turn them in. That moves the problem around, but I don't think it actually solves it.


If the ONLY thing done on the mirror is sending out WUs and science apps, then the traffic between the mirror and the lab would be minimal. Results would be sent directly from the client to the lab, not back to the mirror. Only the splitters and the deleters would need to talk to the mirror. If there were more than one mirror, the first mirror could transmit to the second and the others.
ID: 870919
PhonAcq
Joined: 14 Apr 01 · Posts: 1656 · Credit: 30,658,217 · RAC: 1 · United States
Message 870952 - Posted: 1 Mar 2009, 15:07:18 UTC - in response to Message 870919.  

Then mirroring provides an element of enhanced reliability but doesn't reduce the bandwidth problem, as long as every WU is created at Berkeley. With the mirror, Berkeley uses bandwidth to send the WUs to the mirror, and clients can download from the mirror, but Berkeley has used the same amount of bandwidth. Plus, if the mirror is truly a mirror, the same WUs are also stored at Berkeley, so twice the storage is needed. And then one needs to tell both sites when each WU is no longer needed, and so on. I see overhead and reliability, but no reduction in bandwidth. Perhaps I'm wrong.

ID: 870952
Pooh Bear 27, Volunteer tester
Joined: 14 Jul 03 · Posts: 3224 · Credit: 4,603,826 · RAC: 0 · United States
Message 870963 - Posted: 1 Mar 2009, 15:27:41 UTC

When a new application is released, mirroring would help, because the science application would come from multiple sources. That's the only benefit I can see.

My movie https://vimeo.com/manage/videos/502242
ID: 870963
1mp0£173, Volunteer tester
Joined: 3 Apr 99 · Posts: 8423 · Credit: 356,897 · RAC: 0 · United States
Message 871028 - Posted: 1 Mar 2009, 17:26:01 UTC - in response to Message 870919.  

> If the ONLY thing done on the mirror is sending out WUs and science apps, then the traffic between the mirror and the lab would be minimal. Results would be sent directly from the client to the lab, not back to the mirror. Only the splitters and the deleters would need to talk to the mirror. If there were more than one mirror, the first mirror could transmit to the second and the others.

Currently:

Telescope -> splitter -> download server -> client

Proposed:

Telescope -> splitter -> download server -> mirror(s) -> client

The only possible advantage is that you could throttle the mirror process for optimal transfer -- but the same number of work units go from the download server to the mirrors.

... and that's only if downloads come exclusively from the mirror.

You could get nearly the same gain by adding the ability to throttle the standard BOINC client, without the extra complexity.
ID: 871028
1mp0£173, Volunteer tester
Joined: 3 Apr 99 · Posts: 8423 · Credit: 356,897 · RAC: 0 · United States
Message 871030 - Posted: 1 Mar 2009, 17:26:57 UTC - in response to Message 870952.  

> Then mirroring provides an element of enhanced reliability but doesn't reduce the bandwidth problem, as long as every WU is created at Berkeley. With the mirror, Berkeley uses bandwidth to send the WUs to the mirror, and clients can download from the mirror, but Berkeley has used the same amount of bandwidth. Plus, if the mirror is truly a mirror, the same WUs are also stored at Berkeley, so twice the storage is needed. And then one needs to tell both sites when each WU is no longer needed, and so on. I see overhead and reliability, but no reduction in bandwidth. Perhaps I'm wrong.


You are exactly right. It is easier to move a problem around than it is to solve the problem.
ID: 871030
RandyC
Joined: 20 Oct 99 · Posts: 714 · Credit: 1,704,345 · RAC: 0 · United States
Message 871093 - Posted: 1 Mar 2009, 19:33:53 UTC - in response to Message 871028.  


> Currently:
>
> Telescope -> splitter -> download server -> client
>
> Proposed:
>
> Telescope -> splitter -> download server -> mirror(s) -> client

No: Telescope -> splitter -> mirror(s) -> client
As far as bandwidth is concerned it's a wash, but local storage is reduced.

> The only possible advantage is that you could throttle the mirror process for optimal transfer -- but the same number of work units go from the download server to the mirrors.

??? Why throttle the mirror if it runs at 1 Gbit?

> ... and that's only if downloads come exclusively from the mirror.

That's what's proposed.

> You could get nearly the same gain by adding the ability to throttle the standard BOINC client, without the extra complexity.

Agreed that there is less complexity, but why throttle anything if the mirror is located 'down the hill' from the lab, i.e. at the 1 Gbit end of the link?
ID: 871093
OzzFan, Volunteer tester
Joined: 9 Apr 02 · Posts: 15691 · Credit: 84,761,841 · RAC: 28 · United States
Message 871099 - Posted: 1 Mar 2009, 19:47:05 UTC - in response to Message 871093.  
Last modified: 1 Mar 2009, 19:55:53 UTC


> No: Telescope -> splitter -> mirror(s) -> client
> As far as bandwidth is concerned it's a wash, but local storage is reduced.
>
> ??? Why throttle the mirror if it runs at 1 Gbit?
>
> That's what's proposed.
>
> Agreed that there is less complexity, but why throttle anything if the mirror is located 'down the hill' from the lab, i.e. at the 1 Gbit end of the link?


So essentially what you're proposing is not really a mirror, but a separate connection at the lab that runs at the full 1 Gbit speed, with the workunits loaded onto it directly.

A mirror is exactly that: a mirror. It reflects what's located on another server, and any change to the original is automatically propagated to the mirror. It would need to get its data directly from the original to be considered a mirror.
ID: 871099
Josef W. Segur, Volunteer developer, Volunteer tester
Joined: 30 Oct 99 · Posts: 4504 · Credit: 1,414,761 · RAC: 0 · United States
Message 871123 - Posted: 1 Mar 2009, 21:32:43 UTC

If it's a mirror, there has to be something which it mirrors; the term implies at least two copies.

Simply locating the download server downhill, with 1 Gbit access to the world, is what would improve the situation. Each WU or application would only go from the SSL lab to that server once; multiple downloads by hosts would not be seen on the 100 Mbit link. The issue is maintenance and remote management, not any technical difficulty in having that server at a distance.

On the Server Status page, the counts of 'Results' indicate a completed or expected host download of a WU; the number of WU files is half that or less. For S@H Enhanced, there are fewer than 2.1 downloads of each WU. For AP the ratio had gotten down to about 2.7 prior to last weekend's difficulty.

Each 50.2 GB 'tape' produces around 130 GB of WUs after being split for both Enhanced and Astropulse...
                                                                  Joe
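Joe's figures imply a per-tape download volume. A quick sketch using only the numbers from his post (the exact Enhanced/AP mix per tape isn't given, so both endpoints are shown):

    # Download volume implied per 50.2 GB 'tape': ~130 GB of WU files,
    # each file downloaded ~2.1x (Enhanced) to ~2.7x (Astropulse).
    TAPE_GB, WU_GB = 50.2, 130.0

    for label, ratio in (("Enhanced, ~2.1x", 2.1), ("Astropulse, ~2.7x", 2.7)):
        out_gb = WU_GB * ratio
        print(f"{label}: ~{out_gb:.0f} GB downloaded "
              f"({out_gb / TAPE_GB:.1f}x the raw tape)")

So every 50.2 GB tape turns into roughly 273-351 GB of downstream traffic, under these assumptions.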
ID: 871123
RandyC
Joined: 20 Oct 99 · Posts: 714 · Credit: 1,704,345 · RAC: 0 · United States
Message 871130 - Posted: 1 Mar 2009, 21:58:19 UTC - in response to Message 871123.  

> If it's a mirror, there has to be something which it mirrors; the term implies at least two copies.
>
> Simply locating the download server downhill, with 1 Gbit access to the world, is what would improve the situation. Each WU or application would only go from the SSL lab to that server once; multiple downloads by hosts would not be seen on the 100 Mbit link. The issue is maintenance and remote management, not any technical difficulty in having that server at a distance.


You're right. I guess we should call it a remote download server instead of a mirror.
ID: 871130
1mp0£173, Volunteer tester
Joined: 3 Apr 99 · Posts: 8423 · Credit: 356,897 · RAC: 0 · United States
Message 871226 - Posted: 2 Mar 2009, 4:41:19 UTC - in response to Message 871093.  


>> The only possible advantage is that you could throttle the mirror process for optimal transfer -- but the same number of work units go from the download server to the mirrors.
>
> ??? Why throttle the mirror if it runs at 1 Gbit?

There is a huge difference between throttling the mirror and throttling the process that moves work to the mirror.

You have to see this to believe it, and most here haven't realized what they're seeing: a connection running at 95% of capacity works very well; a connection pushed to 105% of capacity works incredibly poorly. When offered load goes much past 95%, actual throughput drops to a fraction of the wire capacity.
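That cliff is a classic queueing effect. As an illustration only (a real congested TCP link behaves even worse, with retransmissions and timeouts), an M/M/1 queue shows delay exploding as offered load approaches capacity:

    # M/M/1 queue: mean time in system T = 1 / (mu - lambda).
    # At or above 100% offered load the queue grows without bound.
    MU = 100.0                         # service rate, packets/second (assumed)

    for load in (0.50, 0.80, 0.95, 0.99):
        lam = load * MU                # arrival rate
        t_ms = 1000.0 / (MU - lam)     # mean sojourn time, milliseconds
        print(f"offered load {load:.0%}: mean delay {t_ms:7.1f} ms")

Delay quadruples between 80% and 95% load, then quintuples again by 99%.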


>> ... and that's only if downloads come exclusively from the mirror.
>
> That's what's proposed.
>
>> You could get nearly the same gain by adding the ability to throttle the standard BOINC client, without the extra complexity.
>
> Agreed that there is less complexity, but why throttle anything if the mirror is located 'down the hill' from the lab, i.e. at the 1 Gbit end of the link?

This is really important:

It is easier to move a problem around than it is to solve it.

That gigabit connection does not matter if you can't load up the server as fast as we can download from it.

If the best connection from the lab to the download server is 100 megabits, and clients download at a gigabit, then we can pull ten hours' worth of fill off the server in one hour.
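That 10:1 arithmetic in one line (link speeds as stated in this thread; sustained rates, ignoring protocol overhead):

    # Fill vs. drain for a remote download server.
    FILL_MBIT, DRAIN_MBIT = 100, 1000   # lab->server link vs. server->clients

    print(f"1 hour of gigabit demand drains {DRAIN_MBIT / FILL_MBIT:.0f} hours "
          f"of fill; sustained output can never exceed {FILL_MBIT} Mbit/s.")

Whatever the mirror's outbound pipe, its long-run throughput is capped by the 100 Mbit feed.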

So we relieve the client-to-download-server limit, and trade it for a splitter-to-download-server limit.

If copying work to the "mirror" uses up all the bandwidth, then uploads and reporting will suffer.

There is no free lunch.
ID: 871226
RandyC
Joined: 20 Oct 99 · Posts: 714 · Credit: 1,704,345 · RAC: 0 · United States
Message 871277 - Posted: 2 Mar 2009, 10:43:32 UTC - in response to Message 871226.  

> This is really important:
>
> It is easier to move a problem around than it is to solve it.
>
> There is no free lunch.


You have yet to propose a solution.
ID: 871277
Grant (SSSF), Volunteer tester
Joined: 19 Aug 99 · Posts: 13715 · Credit: 208,696,464 · RAC: 304 · Australia
Message 871291 - Posted: 2 Mar 2009, 12:21:33 UTC - in response to Message 871277.  

> You have yet to propose a solution.

In previous discussion of the subject, he's made a couple of proposals.
Grant
Darwin NT
ID: 871291