Wish List - Reduce Expiration Period
Author | Message |
---|---|
Send message Joined: 24 Dec 99 Posts: 19 Credit: 5,056,116 RAC: 1 |
My wish list item would be to reduce the expiration period on Seti work units. Why does it have to be 2 months? If a result becomes a "ghost" it takes two months for it to get regenerated. I understand that some users have some very slow machines, but could this period be between 2 to 4 weeks? Regardless of this issue, I am a devoted cruncher! I can't wait for the system get become fully functional again! |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
SETI is not a time-critical application. It really does not matter whether a task is verified today or next year, and slower machines that are not on full time can take more than a month to complete a task. BOINC WIKI |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers but does little for the science, since the data we're processing is from light-years away. From the project's perspective, the only thing it would accomplish is getting through more data more quickly. However, with nVidia's CUDA and ATi's OpenCL, I'm not sure they need to sift faster; it isn't a race against time. It would also discourage anyone with a slower computer from contributing, and the project is trying to be as inclusive as possible. Excluding anyone who can't afford a newer computer wouldn't do much good for the project's PR. |
Send message Joined: 24 Dec 99 Posts: 19 Credit: 5,056,116 RAC: 1 |
Understood, as I am not a credit-seeker. Having millions of results waiting to be verified is not a problem for the servers? |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
OzzFan wrote: "The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers but does little in the way of the science since the data we're processing is from light-years away." Actually, long-term throughput is increased somewhat if you do not discourage the owners of slower computers from participating. Reducing the deadlines would decrease turnaround slightly, but would not increase overall productivity. BOINC WIKI |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
"The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers but does little in the way of the science since the data we're processing is from light-years away." That's kinda what I said. Maybe I didn't make it very clear. |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
"Having millions of results waiting to be verified is not a problem for the servers?" Not being a qualified DBA (Database Administrator; I'm simply a server admin), I can only make an educated guess that holding all those results could become an issue. However, as I understand it, increasing the throughput of the crunchers by reducing the deadlines (and thus having people report in more often) would probably put more load on the DB (database, for readers who don't know the abbreviations), because the servers would have to handle more transactions per second. Simply having data occupy a row in the database is only a space concern, and space is dirt cheap. Having all those computers pound the servers harder, with more results more often, would require more CPU and I/O power. Given the project's funding, I don't think even the new servers could handle that, though that last statement is just a guess. |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
"Having millions of results waiting to be verified is not a problem for the servers?" The current bottleneck is often the 100 Mbit connection to the lab. Yes, having extra work units does cost space, and ghost tasks cost that same space for longer. Maxing out the bandwidth causes extra ghost WUs to be created, so hitting the servers harder can actually increase the space used: more ghost WUs are created, the DB is hit harder, and more DB space is taken by ghost WU rows. Note that the work unit data is stored outside of the DB, in a directory. The WU file is sent to the client directly from that directory via HTTP GET after the project update (which is what hits the DB). The reverse is true for completed results: the result file is uploaded via HTTP POST, and then the client contacts the DB via the update to report that the upload was already done. BOINC WIKI |
Send message Joined: 24 Dec 99 Posts: 19 Credit: 5,056,116 RAC: 1 |
Thanks for your insights! |
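The transfer flow John McLeod VII describes above — the scheduler RPC is what touches the database, while WU payloads move as plain files over HTTP GET/POST — can be sketched with in-memory stand-ins. All names and data structures here are hypothetical illustrations, not the real BOINC server API:

```python
# Sketch of the described flow: DB rows track state; file payloads
# live in directories outside the DB and move over plain HTTP.

# "Database": rows describing workunits and reported results.
db = {"workunits": {"wu_001": {"state": "unsent"}},
      "results": {}}

# "Download directory": WU payloads on disk, outside the DB.
download_dir = {"wu_001": b"raw telescope data ..."}
# "Upload directory": where result files land before being reported.
upload_dir = {}

def scheduler_request(client_id):
    """Scheduler RPC: the step that hits the DB to assign work."""
    for wu_id, row in db["workunits"].items():
        if row["state"] == "unsent":
            row["state"] = "in_progress"
            row["assigned_to"] = client_id
            return wu_id  # the client fetches the file separately
    return None

def http_get_workunit(wu_id):
    """File download: served straight from the directory, no DB access."""
    return download_dir[wu_id]

def http_post_result(wu_id, payload):
    """File upload: writes the result file; the DB is not touched yet."""
    upload_dir[wu_id] = payload

def scheduler_report(client_id, wu_id):
    """Second scheduler RPC: tells the DB the upload already happened."""
    db["workunits"][wu_id]["state"] = "reported"
    db["results"][wu_id] = {"by": client_id,
                            "size": len(upload_dir[wu_id])}

# One client's round trip:
wu = scheduler_request("host_42")
data = http_get_workunit(wu)
http_post_result(wu, b"analysis of: " + data)
scheduler_report("host_42", wu)
```

The point of the split is that the heavy bytes (the WU and result files) never pass through the database; the DB only sees the two lightweight scheduler contacts, which is why a ghost WU costs both directory space and a lingering DB row.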
©2025 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.