Blitzed Again (Jul 02 2009)



Message boards : Technical News : Blitzed Again (Jul 02 2009)

Profile Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Avatar
Send message
Joined: 1 Mar 99
Posts: 1389
Credit: 74,079
RAC: 0
United States
Message 913347 - Posted: 2 Jul 2009, 18:24:11 UTC

Looks like we're back in another noisy period, or at least the bandwidth is maxed out enough that it's constraining both downloads and uploads. Let's just try to ride this storm out - it should hopefully clear up on its own.

Regarding the videos I linked to yesterday: there were plans to get the PowerPoint slides spliced into the actual camera footage, but I guess that never panned out. That's fine. Or maybe that only happened on the live feed... Anyway, you get the basic gist of what we're trying to say from this footage. I was kind of rushing through my talk - how do you condense 10 years of effort into 20 minutes?

We were hoping to get the NTPCkr pages up this week, but I'm finding that I really need to update the FAQ and other informational pages before making this live, lest we get flooded with common questions. Plus we have a little bit of feature creep, which is okay - better to do these things now, even if rushed, or they'll probably never get done.

- Matt

____________
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

Larry Coolidge
Avatar
Send message
Joined: 16 May 99
Posts: 19
Credit: 13,871,973
RAC: 12,684
United States
Message 913358 - Posted: 2 Jul 2009, 18:38:58 UTC - in response to Message 913347.

Thanks for the update. Things just keep plugging along until the dam breaks.
____________

Profile Geek@Play
Project donor
Volunteer tester
Avatar
Send message
Joined: 31 Jul 01
Posts: 2467
Credit: 86,130,222
RAC: 14,589
United States
Message 913365 - Posted: 2 Jul 2009, 18:52:37 UTC
Last modified: 2 Jul 2009, 19:03:47 UTC

But..........I thought changes had been made at Berkeley so that noisy work being split out could not overwhelm the system.
____________
Boinc....Boinc....Boinc....Boinc....

Profile Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Avatar
Send message
Joined: 1 Mar 99
Posts: 1389
Credit: 74,079
RAC: 0
United States
Message 913389 - Posted: 2 Jul 2009, 20:37:25 UTC - in response to Message 913365.

But..........I thought changes had been made at Berkeley so that noisy work being split out could not overwhelm the system.


By noisy I mostly meant "chaotic and busy." Yes, there are mechanisms in place to work on multiple raw data files simultaneously to avoid getting deluged with one particularly noisy data file. It's not perfect, though. I'm not exactly sure what's going on currently but I've been too busy with other things to do much sleuthing. We're currently pegged on our network - that means workunits are being downloaded as fast as we possibly can send them - so there's not much I can do.
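The mechanism Matt describes - splitting several raw data files at once so one noisy file can't flood the queue - is essentially round-robin scheduling. A toy sketch of the idea (not the actual splitter code; all names here are invented):

```python
from collections import deque

def round_robin_split(tapes):
    """tapes: dict mapping tape name -> iterable of workunit chunks.
    Yields (tape, chunk) pairs one tape at a time, so a single noisy
    tape never monopolizes the outgoing workunit queue."""
    queue = deque((name, iter(chunks)) for name, chunks in tapes.items())
    while queue:
        name, chunks = queue.popleft()
        try:
            chunk = next(chunks)
        except StopIteration:
            continue  # tape exhausted; drop it from the rotation
        yield name, chunk
        queue.append((name, chunks))

# Two toy "tapes" with different amounts of work interleave fairly:
order = [t for t, _ in round_robin_split({"tape_a": range(3), "tape_b": range(2)})]
# order == ["tape_a", "tape_b", "tape_a", "tape_b", "tape_a"]
```

Even with perfect interleaving, of course, the total outgoing byte count is unchanged - which is why the network stays pegged.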

- Matt
____________
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

Jörg
Send message
Joined: 10 Dec 02
Posts: 51
Credit: 1,547,286
RAC: 0
Germany
Message 913393 - Posted: 2 Jul 2009, 20:59:37 UTC - in response to Message 913389.
Last modified: 2 Jul 2009, 21:03:23 UTC

We're currently pegged on our network - that means workunits are being downloaded as fast as we possibly can send them - so there's not much I can do.

- Matt


Good evening,

Why not send out "bigger" workunits for all applications - let's say 4 times the size you sent in the past - to relieve the pressure on SETI's network capacity?

If you don't want to send such bigger WUs to all participants, make it an option in the account settings. As long as you give fair credit, most participants with good hardware will crunch these bigger WUs, and the folks with older hardware can still crunch the "normal" WUs in time.
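How much a bigger workunit would actually save depends on how much of the traffic is fixed per-workunit overhead versus payload. A back-of-the-envelope estimate - every number below is an assumption for illustration, not a measured value:

```python
def overhead_fraction(payload_bytes, per_wu_overhead_bytes=4096):
    """Fraction of bytes on the wire spent on per-workunit overhead
    (HTTP headers, scheduler RPC traffic - assumed ~4 KB per WU)."""
    return per_wu_overhead_bytes / (payload_bytes + per_wu_overhead_bytes)

standard = overhead_fraction(360 * 1024)       # assuming a ~360 KB workunit
quadruple = overhead_fraction(4 * 360 * 1024)  # the suggested 4x workunit
# Overhead drops roughly 4x, but it was only ~1% of traffic to begin
# with - so larger WUs mostly cut the request *rate*, not payload volume.
```

In other words, on a link saturated by payload, bigger WUs ease the scheduler load more than the bandwidth problem.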
____________
In the end there is only confusion

Profile Geek@Play
Project donor
Volunteer tester
Avatar
Send message
Joined: 31 Jul 01
Posts: 2467
Credit: 86,130,222
RAC: 14,589
United States
Message 913395 - Posted: 2 Jul 2009, 21:15:04 UTC - in response to Message 913389.

But..........I thought changes had been made at Berkeley so that noisy work being split out could not overwhelm the system.


By noisy I mostly meant "chaotic and busy." Yes, there are mechanisms in place to work on multiple raw data files simultaneously to avoid getting deluged with one particularly noisy data file. It's not perfect, though. I'm not exactly sure what's going on currently but I've been too busy with other things to do much sleuthing. We're currently pegged on our network - that means workunits are being downloaded as fast as we possibly can send them - so there's not much I can do.

- Matt


Thanks for taking the time to answer me and thanks for your hard work there at Berkeley.

____________
Boinc....Boinc....Boinc....Boinc....

Profile Dr. C.E.T.I.
Avatar
Send message
Joined: 29 Feb 00
Posts: 15993
Credit: 690,597
RAC: 0
United States
Message 913417 - Posted: 2 Jul 2009, 22:53:50 UTC


Thanks for All the Efforts Berkeley - it is Sincerely appreciated

[and Thanks for the Updates Matt . . .]

____________
BOINC Wiki . . .

Science Status Page . . .

Profile Jon Golding
Avatar
Send message
Joined: 20 Apr 00
Posts: 56
Credit: 365,254
RAC: 4
United Kingdom
Message 913711 - Posted: 3 Jul 2009, 21:00:51 UTC

It seems to me that the log jams and bottlenecks in the system are ever-increasing and are not going to go away.
This isn't a criticism: it indicates the overwhelming success, progress, and participant interest in the project.
I come back to a suggestion that I and several others have raised before: that it may be high time to "franchise" SETI and have several download/upload servers distributed across various universities, perhaps under the auspices of the SETI Institute. This would lighten the overall load and increase SETI's capacity, especially considering the wish-list of analysing a larger frequency range at higher throughput, as described in Dan Werthimer's closing speech at the recent 10-year celebration meeting.
Are the obstacles to this solution something that the volunteer base of SETI can help to address (e.g. funding, testing, "political" pressure)?
Just trying to help.
____________

1mp0£173
Volunteer tester
Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 913743 - Posted: 3 Jul 2009, 21:56:03 UTC - in response to Message 913711.

It seems to me that the log jams and bottlenecks in the system are ever-increasing and are not going to go away.
This isn't a criticism: it indicates the overwhelming success, progress, and participant interest in the project.
I come back to a suggestion that I and several others have raised before: that it may be high time to "franchise" SETI and have several download/upload servers distributed across various universities, perhaps under the auspices of the SETI Institute. This would lighten the overall load and increase SETI's capacity, especially considering the wish-list of analysing a larger frequency range at higher throughput, as described in Dan Werthimer's closing speech at the recent 10-year celebration meeting.
Are the obstacles to this solution something that the volunteer base of SETI can help to address (e.g. funding, testing, "political" pressure)?
Just trying to help.

Keep in mind that the SETI Institute may not want to help, for political reasons.

SETI@Home likely could move equipment out of their closet at SSL, and put it at the other end of the wire. The problem is that the splitters need to write to the download server(s), and the validators need to pull from the upload server(s) -- all it really does is move the bottleneck from one end of the current wire to the other.

There might be an advantage in doing that, but it might not be the advantage we all really want.
____________

CryptokiD
Avatar
Send message
Joined: 2 Dec 00
Posts: 134
Credit: 2,814,936
RAC: 0
United States
Message 913782 - Posted: 3 Jul 2009, 23:17:00 UTC - in response to Message 913743.

I believe they are paying for a gigabit line, yet because the proper cables haven't been installed, they are only able to use 100 Mbit. That's what, 12 megabytes a second? Most hard drives today easily top 50 megabytes/second, so I can see why the Cricket graph stays pegged for days.

Maybe it's time to trench the lawn and install that gigabit cable. I know it's expensive and labor-intensive. Too bad Berkeley is 4 time zones away. I have a 300 hp trenching machine. I wish I could help more.

I do try to suspend my machines from contacting SETI when the Cricket graph is pegged. My handful of computers probably won't matter much, though.
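The "12 megabytes a second" figure checks out roughly: 100 Mbit/s is 12.5 MB/s before protocol overhead. Worked out in Python (the 5% framing-overhead figure is an assumption):

```python
link_mbit = 100                        # current usable link speed
raw_mb_per_s = link_mbit / 8           # 12.5 MB/s of raw bits
usable_mb_per_s = raw_mb_per_s * 0.95  # assuming ~5% lost to TCP/IP framing
gigabit_mb_per_s = 1000 / 8            # ~125 MB/s if the gigabit link were lit
# Lighting the gigabit link would be roughly a 10x jump in deliverable workunits.
```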

Profile alphax
Volunteer tester
Send message
Joined: 17 May 99
Posts: 74
Credit: 1,266,810
RAC: 0
United States
Message 913784 - Posted: 3 Jul 2009, 23:22:51 UTC

I think there is merit to the idea of mirroring the SETI@Home data on servers across the Internet and leveraging the distributed servers' bandwidth. P2P networks are very good at demonstrating the power of collective bandwidth for spreading information from a single source :)

But it will require substantial redesign of the way S@H works and will require trustworthy servers to ensure that the data coming back is verifiable.
____________

Grant (SSSF)
Send message
Joined: 19 Aug 99
Posts: 5823
Credit: 59,052,982
RAC: 47,875
Australia
Message 913787 - Posted: 3 Jul 2009, 23:28:45 UTC - in response to Message 913784.
Last modified: 3 Jul 2009, 23:29:05 UTC

I think there is merit to the idea of mirroring the SETI@Home data on servers across the Internet and leveraging the distributed servers' bandwidth. P2P networks are very good at demonstrating the power of collective bandwidth for spreading information from a single source :)

The problem is you still have to get that data to & from the SETI servers, and that is where the present network bottleneck is.
____________
Grant
Darwin NT.

1mp0£173
Volunteer tester
Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 913788 - Posted: 3 Jul 2009, 23:29:03 UTC - in response to Message 913784.

I think there is merit to the idea of mirroring the SETI@Home data on servers across the Internet and leveraging the distributed servers' bandwidth. P2P networks are very good at demonstrating the power of collective bandwidth for spreading information from a single source :)

But it will require substantial redesign of the way S@H works and will require trustworthy servers to ensure that the data coming back is verifiable.

Actually it doesn't require trustworthy machines, because the work still has to validate.

The problem is that the work is split at the Space Sciences Lab, and to distribute SETI@Home, you'd still have to move work in and out of the Lab.

Sooner or later, the existing bandwidth runs out.
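The reason untrusted mirrors would be tolerable is BOINC's redundancy check: each workunit goes to several independent hosts, and a result only becomes canonical once enough of them agree. A minimal toy version of that idea (not BOINC's actual validator code):

```python
from collections import Counter

def validate_by_quorum(results, quorum=2):
    """Return the canonical result if at least `quorum` independent
    hosts agree on it; tampered or erroneous results never reach
    quorum, and the workunit is simply re-issued instead."""
    if not results:
        return None
    winner, votes = Counter(results).most_common(1)[0]
    return winner if votes >= quorum else None

assert validate_by_quorum(["sig:42", "sig:42", "sig:99"]) == "sig:42"
assert validate_by_quorum(["sig:42", "sig:99"]) is None  # no agreement yet
```

This is why validation doesn't require trusting the distribution path, only the agreement of unrelated hosts.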
____________

1mp0£173
Volunteer tester
Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 913789 - Posted: 3 Jul 2009, 23:30:56 UTC - in response to Message 913782.

Maybe it's time to trench the lawn and install that gigabit cable. I know it's expensive and labor-intensive. Too bad Berkeley is 4 time zones away. I have a 300 hp trenching machine.

It's more than a short trench, it's a good distance from the lab to main campus.

The other problem: gigabit-capable routers and interfaces.
____________

Cosmic_Ocean
Avatar
Send message
Joined: 23 Dec 00
Posts: 2266
Credit: 8,686,024
RAC: 4,192
United States
Message 913829 - Posted: 4 Jul 2009, 1:11:20 UTC - in response to Message 913789.

Maybe it's time to trench the lawn and install that gigabit cable. I know it's expensive and labor-intensive. Too bad Berkeley is 4 time zones away. I have a 300 hp trenching machine.

It's more than a short trench, it's a good distance from the lab to main campus.

The other problem: gigabit-capable routers and interfaces.

If my memory serves me properly, I thought it was somewhere near a mile? I know it's quite a way uphill.
____________

Linux laptop uptime: 1484d 22h 42m
Ended due to UPS failure, found 14 hours after the fact

Grant (SSSF)
Send message
Joined: 19 Aug 99
Posts: 5823
Credit: 59,052,982
RAC: 47,875
Australia
Message 913831 - Posted: 4 Jul 2009, 1:19:31 UTC - in response to Message 913829.


No need for a trencher- go wireless.
____________
Grant
Darwin NT.

Profile Gary Charpentier
Project donor
Volunteer tester
Avatar
Send message
Joined: 25 Dec 00
Posts: 12525
Credit: 6,823,872
RAC: 5,191
United States
Message 913847 - Posted: 4 Jul 2009, 2:08:32 UTC - in response to Message 913789.

Maybe it's time to trench the lawn and install that gigabit cable. I know it's expensive and labor-intensive. Too bad Berkeley is 4 time zones away. I have a 300 hp trenching machine.

It's more than a short trench, it's a good distance from the lab to main campus.

The other problem: gigabit-capable routers and interfaces.

IIRC there is dark fiber already in place for the run, but it belongs to the campus, not SETI@Home, and there is considerable politics involved - so it isn't just hardware and cash.

As to the bandwidth, I don't know how much is being chewed up by VLAR killers, but I think the project needs to be tuned to direct those machines to their backup projects, or get the owners to remove the VLAR killer and instead use a re-scheduler to turn the CUDA tasks into CPU tasks. Bandwidth is too important to waste.

For the future, BOINC may need a re-write so it simply sends the task and the client side decides whether it runs on the CPU or the GPU. It may need to send a hint along with the data. Failing that, the splitters would need to be re-written to split CPU/CUDA work better based on expected task length. But that is a per-project fix rather than a BOINC-wide fix.
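A "send the task plus a hint, let the client decide" scheme might look like the sketch below. Every field name here is invented for illustration - none of it reflects the real BOINC scheduler protocol:

```python
def choose_device(task, has_gpu):
    """Route very-low-angle-range (VLAR) tasks to the CPU - they run
    poorly on CUDA - and everything else to the GPU if one exists.
    `task` is a dict with a hypothetical 'vlar_hint' flag set by the
    server-side splitter."""
    if task.get("vlar_hint") or not has_gpu:
        return "cpu"
    return "gpu"

assert choose_device({"vlar_hint": True}, has_gpu=True) == "cpu"
assert choose_device({}, has_gpu=True) == "gpu"
assert choose_device({}, has_gpu=False) == "cpu"
```

The appeal of deciding client-side is that the server no longer needs separate CPU and CUDA queues, so no bandwidth is wasted downloading work a host will only abort.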

Looks like it's going to be a crunch 'em if you've got 'em weekend.

And Happy Birthday USA.

____________

Profile Xen
Avatar
Send message
Joined: 22 Jul 00
Posts: 86
Credit: 2,087,751
RAC: 1,040
United States
Message 913849 - Posted: 4 Jul 2009, 2:12:41 UTC - in response to Message 913417.


Thanks for All the Efforts Berkeley - it is Sincerely appreciated

[and Thanks for the Updates Matt . . .]

I second this =)
____________
Nobody is nobody. Everyone has something to offer

1mp0£173
Volunteer tester
Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 913854 - Posted: 4 Jul 2009, 2:25:35 UTC - in response to Message 913831.


No need for a trencher- go wireless.

Since connectivity on campus is provided by the IST department (the folks who brought you Cricket) they have to maintain it.

... and if there isn't a good clean line-of-sight from the lab to the right building(s) on Campus, then RF isn't a good choice.
____________

1mp0£173
Volunteer tester
Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 913856 - Posted: 4 Jul 2009, 2:26:55 UTC - in response to Message 913847.
Last modified: 4 Jul 2009, 2:27:18 UTC

maybe it's time to trench the lawn and install that gigabit cable. i know its expensive and labour consuming. too bad berkley is 4 time zones away. i have a 300hp trenching machine.

It's more than a short trench, it's a good distance from the lab to main campus.

The other problem: gigabit-capable routers and interfaces.

IIRC there is dark fiber already in place for the run, but it belongs to the campus, not SETI@Home, and there is considerable politics involved - so it isn't just hardware and cash.

I think it is just hardware and cash. IST is willing to do it, but they don't have the money in their budget -- they want SETI (or SSL) to fund it.

... at least that's what has been said here in the past.
____________



Copyright © 2014 University of California