Message boards :
Technical News :
Blitzed Again (Jul 02 2009)
Author | Message |
---|---|
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13727 Credit: 208,696,464 RAC: 304 |
... and if there isn't a good clean line-of-sight from the lab to the right building(s) on Campus, then RF isn't a good choice. That would be the biggest impediment IMHO. Grant Darwin NT |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30637 Credit: 53,134,872 RAC: 32 |
Screw LOS between the lab and a campus building; is there LOS between the lab and their ISP's node? (CalREN?) Remove campus entirely. Get them an OC-768 link! |
Cameron Send message Joined: 27 Nov 02 Posts: 110 Credit: 5,082,471 RAC: 17 |
All the talks were great, Matt. There was a bit of bad pickup from the microphone on the camera during your pipeline talk [popping and hissing] that wasn't noticeable, or at least not as bad, during the other talks. A good effort by everyone though. I assume you meant there isn't the money for another set of hands for technical assistance to help you and Jeff. Or perhaps you meant someone to field the frequently asked setup questions. I say this because you and Jeff seem to be the tech support currently administering and kicking (when needed) the servers. And you do a very good job at it. |
insomnia Send message Joined: 27 Nov 04 Posts: 1 Credit: 226,439 RAC: 0 |
Storm? What storm? The last few days I get (on Windows and Linux): "Message from server: (Project has no jobs available)". I've got BOINC 6.6.36 under Windows 2000 and, if I remember correctly, 6.4.xyz under Ubuntu 9.04 x86_64. I guessed there's always something to compute until LGM have been found ;-) |
zpm Send message Joined: 25 Apr 08 Posts: 284 Credit: 1,659,024 RAC: 0 |
ugh... my computers are dry. I recommend Secunia PSI: http://secunia.com/vulnerability_scanning/personal/ Go Georgia Tech. |
zpm Send message Joined: 25 Apr 08 Posts: 284 Credit: 1,659,024 RAC: 0 |
it's not the fact that the cpu is dry, it's the fact that the gpu's are dry..... 2 9400 GTs.... 32 SPs each, and they can run, but probably won't finish, a GPUGRID wu.... the cpu's are running WCG, MW, Rosetta, and a few others to keep them going.... i had those cpus do vtu@home, but the project leaders and servers have been unresponsive as to why they were down for 2 weeks, and i'm not crunching for them as i fear they will go down again.... |
zpm Send message Joined: 25 Apr 08 Posts: 284 Credit: 1,659,024 RAC: 0 |
i'll just go to the seti beta as there are 217,000 wu's waiting to be sent..... |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Range (and background noise) becomes an issue. Not saying it isn't possible, just suggesting that it might cost more than the fiber. ... and being on campus means that they have to use what IST will support. |
Jon Golding Send message Joined: 20 Apr 00 Posts: 105 Credit: 841,861 RAC: 0 |
I see what you mean, but how about a system in which download servers are distributed around various universities (just split data and download to clients), whilst the ONLY functions the Berkeley servers perform are to receive all uploaded completed tasks, do validation, and handle result storage/archiving? Wouldn't that ease the bandwidth/server pain? Of course, there may be logistical problems in that the raw data disks from Arecibo would need to be sent out to the different participating universities, but maybe this happens anyway for some observing projects. |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
I see what you mean, but how about a system in which download servers are distributed around various universities (just split data and download to clients), whilst the ONLY functions the Berkeley servers perform are to receive all uploaded completed tasks, do validation, and handle result storage/archiving? Wouldn't that ease the bandwidth/server pain? Of course, there may be logistical problems in that the raw data disks from Arecibo would need to be sent out to the different participating universities, but maybe this happens anyway for some observing projects. It's a good idea in theory, but there are still some problems with it. There's still a cost issue. SETI@Home leases space from the University and has to work within the constraints the University sets, which include power requirements for both the servers and the air conditioning that cools them. Then there's the issue of staff wages (they do try to get paid for their work). I mention this because if the plan is to get other universities involved, you are effectively doubling the financial strain on the project. Scientists at other universities would have to purchase servers (or look for donations), lease space, pay for their power usage, pay for their internet connection, etc. And of course those scientists will want to get paid as well. It's like a company with a poor business model that's extra-busy due to its popularity: suggesting that the company branch out when there are no finances to do so will only make things worse. Unfortunately for SETI, they do not have a "business model" because they are not a business - they are a science project. |
zpm Send message Joined: 25 Apr 08 Posts: 284 Credit: 1,659,024 RAC: 0 |
Unfortunately for SETI, they do not have a "business model" because they are not a business - they are a science project. actually that makes me happy that it's a science project b/c we have seen what happens to companies that run on a business model and have a spot on wall street....... Bankruptcy |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
I think there is merit to the idea of sharing the SETI@Home database to data servers across the Internet and leveraging the distributed servers' bandwidth. P2P networks are very good at demonstrating the power of collective bandwidth for spreading information from a single source :) The thing that makes P2P really work is that each file (each bootleg copy of the latest DVD) is uploaded once, and basically spreads: each downloader also becomes a new upload site. That doesn't work for SETI because each result goes to ONE user. Sure, you could come up with a "nano-splitter" that took one result and turned it into two (one for each wingman) but even that wouldn't help that much. ... and not at all for uploads. |
zpm Send message Joined: 25 Apr 08 Posts: 284 Credit: 1,659,024 RAC: 0 |
maybe we should explore what drugdiscovery(A) and hydrogen(A) have done...... use 7zip to send files.... it makes 10 MB into 4-5 MB or less. 7zip works ok with boinc.... it will reduce the bandwidth a little and may help with the congestion.... gamers also use it.... a lot better at compressing than windows zip. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13727 Credit: 208,696,464 RAC: 304 |
maybe, we should explore what drugdiscovery(A) and hydrogen(A) have done...... use 7zip to send files.... it makes 10 mb into 4-5 mb or less. Compression has been suggested before. But given the nature of the data, it's not very compressible. Grant Darwin NT |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
maybe, we should explore what drugdiscovery(A) and hydrogen(A) have done...... use 7zip to send files.... it makes 10 mb into 4-5 mb or less. It ought to compress fairly nicely, but there is a trade-off between the CPU time the server spends doing the compression and the transfer time for the uncompressed bytes. BOINC WIKI |
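That trade-off is easy to measure directly. A minimal Python sketch, using the standard-library gzip module as a stand-in for whatever compressor the project might pick (the payload sizes are the ones quoted elsewhere in the thread; the random bytes are a synthetic worst case, not real workunit data):

```python
import gzip
import os
import time

# CPU seconds spent compressing vs bytes saved on the wire,
# for a 256 KB and an 8 MB signal-free (random) payload.
for size in (256 * 1024, 8 * 1024 * 1024):
    data = os.urandom(size)                     # worst case: pure noise
    t0 = time.perf_counter()
    packed = gzip.compress(data, compresslevel=6)
    dt = time.perf_counter() - t0
    saved = size - len(packed)                  # negative means expansion
    print(f"{size} B: {dt:.3f} s CPU, {saved} B saved")
```

On noise-like data the "B saved" figure comes out at or below zero, so the server would burn CPU time for nothing; the interesting question is how far real workunits deviate from that worst case.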
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30637 Credit: 53,134,872 RAC: 32 |
maybe, we should explore what drugdiscovery(A) and hydrogen(A) have done...... use 7zip to send files.... it makes 10 mb into 4-5 mb or less. I'm not so sure about compression. My understanding was that random noise wouldn't compress very much at all, as there are no patterns in it to exploit. Now, a WU with a signal might compress because of the pattern of the signal. |
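That intuition is easy to check: gzip gains nothing on noise but collapses patterned data. A small Python sketch, where `noise` and `tone` are synthetic stand-ins for a signal-free and a strongly patterned workunit (not real SETI data):

```python
import gzip
import os

noise = os.urandom(256 * 1024)      # 256 KB of random bytes: no structure
tone = bytes(range(256)) * 1024     # 256 KB with a repeating pattern

for name, data in (("noise", noise), ("tone", tone)):
    packed = gzip.compress(data)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```

The noise case actually comes out slightly larger than the input (gzip adds framing overhead it cannot win back), while the patterned case shrinks to a tiny fraction of its size.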
zpm Send message Joined: 25 Apr 08 Posts: 284 Credit: 1,659,024 RAC: 0 |
perhaps it should be beta tested on seti beta...... just a 1-2 week trial... it couldn't hurt.... we may lose a little time if it doesn't pan out, but it would be an option explored... |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
maybe, we should explore what drugdiscovery(A) and hydrogen(A) have done...... use 7zip to send files.... it makes 10 mb into 4-5 mb or less. Correct me if I am wrong, but isn't it a text file that is sent each way? BOINC WIKI |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13727 Credit: 208,696,464 RAC: 304 |
Lossless_data_compression By operation of the pigeonhole principle, no lossless compression algorithm can efficiently compress all possible data, and completely random data streams cannot be compressed. I would expect that most of the data downloaded would be mostly random data, hence unlikely to compress much at all. The result data would probably be more compressible, but how much more? And given how small it already is, any savings in bandwidth could well be offset by the processing required at the other end to expand it all again. Grant Darwin NT |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Rather than speculating in a vacuum, I suggest any of you could simply try compressing a WU or two. You'd find that setiathome_enhanced WUs are moderately compressible, astropulse_v505 only slightly. If compressed downloads were implemented, it would be akin to adding another 25 MBits/second to the download bandwidth; enough to help short term, but hardly a permanent fix. BOINC 5.x and later have libCurl which is prepared to decompress files sent with gzip compression. The download servers run Apache, which can be configured to gzip data being sent. All that would be needed is to set the warning that all users must upgrade to BOINC 5.x or later, give it long enough to be seen and adopted, then reconfigure the download servers. The real issue is whether the download servers can handle the extra load of doing the compression; an earlier suggestion for a trial at SETI Beta wouldn't test that effectively. And although the 7zip format does compress better than gzip, it takes more memory and time to do the compression. It would also take project-specific coding to implement, rather than using the features BOINC already has. Edit: Correct me if I am wrong, but isn't it a text file that is sent each way? The payload in Enhanced work is 256KB of nearly random data encoded in uue fashion, so can be sent as text. The payload in AP work is 8MB of pure 8 bit nearly random data. In both cases, it's the task of the science applications to ferret out the few cases where the data deviates from random noise. Joe |
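The point about the uue-encoded payload can be illustrated with base64 as a rough stand-in for uuencoding (both expand 8-bit data into printable text at a similar ratio): gzip claws back most of the text-encoding overhead even though the underlying bytes are random. A hedged Python sketch:

```python
import base64
import gzip
import os

payload = os.urandom(256 * 1024)        # stand-in for the ~256 KB enhanced payload
encoded = base64.encodebytes(payload)   # printable-text encoding, akin to uue

raw_gz = gzip.compress(payload)         # random bytes: gzip cannot shrink these
enc_gz = gzip.compress(encoded)         # but it does undo most of the text expansion

print(len(payload), len(raw_gz))
print(len(encoded), len(enc_gz))
```

So a compressed download of an enhanced WU approaches the raw payload size rather than the larger text-encoded size, which is consistent with "moderately compressible", while a pure 8-bit Astropulse payload leaves gzip almost nothing to remove.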
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.