Message boards :
Number crunching :
@admins: overall project progress
B-Roy | Joined: 4 May 03 | Posts: 220 | Credit: 260,955 | RAC: 1
1) Do we crunch faster than WUs are created, or are we not able to work as fast as the data is generated?
2) Are we working on the same WUs as the classic project? And if yes, does it make sense?
3) When will the old classic project have to switch to BOINC, and what does it depend on (only stability issues?)
4) How far have we worked through, in terms of percentage?

I am grateful for every answer.
B-Roy | Joined: 4 May 03 | Posts: 220 | Credit: 260,955 | RAC: 1
up.
B-Roy | Joined: 4 May 03 | Posts: 220 | Credit: 260,955 | RAC: 1
Last attempt to get an answer. Is none of you interested or informed?
Toby | Joined: 26 Oct 00 | Posts: 1005 | Credit: 6,366,949 | RAC: 0
1) Looks like they are currently keeping up with demand pretty well. As of now there are almost 400,000 work units ready to send out.
2) I don't think so. The splitter is what takes data from the tapes and turns it into work units. There is currently one running on BOINC.
3) Once BOINC can handle 500,000 users. They are somewhere in the process of getting a new database server that should be able to handle this.
4) 0% - the data is a constant stream. As long as it is being recorded, there will be more work for us.

A member of The Knights Who Say NI! For rankings, history graphs and more, check out: My BOINC stats site
Benher | Joined: 25 Jul 99 | Posts: 517 | Credit: 465,152 | RAC: 0
> 4) 0% - the data is a constant stream. As long as it is being recorded, there
> will be more work for us.

To an extent only. There is a constant stream of data going into the Arecibo receiver, and there is a specific set of tests to perform on each block of data from the receiver. This will require [time] = [# tests] * [blocks of data] / [# of CPUs crunching]. At some point, the number of CPUs will exceed the number needed.

This was partially why BOINC was created. Other projects can get people to volunteer their CPU time there as well, so if/when SETI is saturated, the users still get worthwhile use from their idle CPU time.

Note: A new antenna and recording equipment are being prepared to be sent to Arecibo: broader data-gathering ability, larger tapes to store data, and new tests formulated to leverage the new data. So in the future SETI will be able to keep additional CPUs busy as well.
B-Roy | Joined: 4 May 03 | Posts: 220 | Credit: 260,955 | RAC: 1
Needless to note that PCs are getting faster and faster. When I compare my first SETI computer (an AMD K6 at 300 MHz) to my current one (a 3400+), and if I assume that the stream is constant, there should be a point in time where we crunch faster than data is produced. That's why I've asked whether this is already the case or not. Adding a new antenna and the southern-hemisphere project may be a remedy.
1202 Program Alarm | Joined: 16 Jun 99 | Posts: 239 | Credit: 19,164,944 | RAC: 38
> Note: A new antenna and recording equipment is being prepared to be sent to
> Arecibo. Broader range of data gathering ability, larger tapes to store data,
> new tests will be formulated to leverage the new data.

That's great news. Have there been any developments on using Parkes (or other observatories) for data collection?

SetiUK - The Official UK Seti Site - Team Lookers | The Space Directory | Visit Seti.org.uk | SETI News Mailing List | S@h Berkeley's Staff Friends Club
SwissNic | Joined: 27 Nov 99 | Posts: 78 | Credit: 633,713 | RAC: 0
> useless to note that pcs are getting faster and faster. when i compare my
> first seti computer (amd k6 300 MHz) to my actual one (3400+) and if I assume
> that the stream is constant, there should be a point in time, were we crunch
> faster than data is produced. that's why i've asked if this is already the
> case or not. adding a new antenna and the southern hemisphere project may be a
> remedy.

This is correct if you assume we are doing the same calculations on the same amount of data as in the past. In reality, SETI have massively increased the number of calculations we do on the data, and are now in the process of increasing the band range of the data for us to process. I read somewhere that they want to keep WU calculation times around 2-3 hours for fast machines, so as machines get faster, we will do more work on a WU and SETI will get more accurate results...
John McLeod VII | Joined: 15 Jul 99 | Posts: 24806 | Credit: 790,712 | RAC: 0
> 1) do we crunch faster than wus are created, or are we not able to work as
> fast as the data is generated?

Currently the splitters are keeping up. There are three splitters, and only one is usually running. This means that there is some spare capacity in that process.

> 2) are we working on the same wus as the classic project? and if yes, does it
> make sense?

Yes, at the moment we are running the same WUs as classic. The splitters are producing WUs for both. The dual crunching is to verify that the BOINC S@H code is not broken somehow.

> 3) when will the old classic project have to switch to boinc, and on what does
> it depend (only on stability issues?)

Supposedly when all of the M3 bugs have been fixed and the servers have enough capacity to deal with 500K users.

> 4) how far have we worked through in terms of percentage?

Not certain I understand this question. Percentage of what? If you are talking about Arecibo data, we are more than half done with the currently recorded data, but more is generated on many nights of observation.

BOINC WIKI
Ingleside | Joined: 4 Feb 03 | Posts: 1546 | Credit: 15,832,022 | RAC: 13
> > 3) when will the old classic project have to switch to boinc, and on what
> > does it depend (only on stability issues?)
>
> Supposedly when all of the M3 bugs have been fixed, and the servers have
> enough capacity to deal with 500K users.

The M3 tasks are planned to be done after "classic" is killed off; M2 = "Decommission old SETI@home". Also, not all bugs need to be fixed, only all high/critical ones. ;)
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.