Wish List - Reduce Expiration Period

Questions and Answers : Wish list : Wish List - Reduce Expiration Period
Jeff Bakle
Volunteer tester

Joined: 24 Dec 99
Posts: 19
Credit: 5,056,116
RAC: 1
United States
Message 1051683 - Posted: 28 Nov 2010, 19:16:02 UTC

My wish list item would be to reduce the expiration period on SETI work units. Why does it have to be two months? If a result becomes a "ghost", it takes two months for it to be regenerated.

I understand that some users have very slow machines, but could this period be between two and four weeks?

Regardless of this issue, I am a devoted cruncher! I can't wait for the system to become fully functional again!
ID: 1051683
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1051701 - Posted: 28 Nov 2010, 21:04:44 UTC

SETI is not a time-critical application. It really does not matter whether a task is verified today or next year. Slower machines that are not on full time can take more than a month to complete a task.


BOINC WIKI
ID: 1051701
OzzFan (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1051722 - Posted: 28 Nov 2010, 22:14:35 UTC - in response to Message 1051683.  

The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers and does little for the science, since the data we're processing comes from light-years away.

From the project's perspective, the only thing it would accomplish is getting through more data quicker. However, with nVidia's CUDA and ATi's OpenCL, I'm not sure they need to sift quicker; it's not even a race against time. It would also discourage anyone with a slower computer from contributing, and the project is trying to be as inclusive as possible. Excluding anyone who can't afford a newer computer wouldn't do much good for the project's PR.
ID: 1051722
Jeff Bakle
Volunteer tester

Joined: 24 Dec 99
Posts: 19
Credit: 5,056,116
RAC: 1
United States
Message 1051729 - Posted: 28 Nov 2010, 22:44:55 UTC - in response to Message 1051722.  

Understood, as I am not a credit-seeker.

Having millions of results waiting to be verified is not a problem for the servers?
ID: 1051729
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1051735 - Posted: 28 Nov 2010, 23:39:22 UTC - in response to Message 1051722.  

The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers and does little for the science, since the data we're processing comes from light-years away.

From the project's perspective, the only thing it would accomplish is getting through more data quicker. However, with nVidia's CUDA and ATi's OpenCL, I'm not sure they need to sift quicker; it's not even a race against time. It would also discourage anyone with a slower computer from contributing, and the project is trying to be as inclusive as possible. Excluding anyone who can't afford a newer computer wouldn't do much good for the project's PR.

Actually, the long term throughput is increased somewhat if you do not discourage the owners of the slower computers from participating. Reducing the deadlines will decrease turnaround slightly, but will not increase overall productivity.


BOINC WIKI
ID: 1051735
OzzFan (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1051766 - Posted: 29 Nov 2010, 1:33:13 UTC - in response to Message 1051735.  

The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers and does little for the science, since the data we're processing comes from light-years away.

From the project's perspective, the only thing it would accomplish is getting through more data quicker. However, with nVidia's CUDA and ATi's OpenCL, I'm not sure they need to sift quicker; it's not even a race against time. It would also discourage anyone with a slower computer from contributing, and the project is trying to be as inclusive as possible. Excluding anyone who can't afford a newer computer wouldn't do much good for the project's PR.

Actually, the long term throughput is increased somewhat if you do not discourage the owners of the slower computers from participating. Reducing the deadlines will decrease turnaround slightly, but will not increase overall productivity.


That's kinda what I said. Maybe I didn't make it very clear.
ID: 1051766
OzzFan (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1051768 - Posted: 29 Nov 2010, 1:37:48 UTC - in response to Message 1051729.  

Having millions of results waiting to be verified is not a problem for the servers?


Not being a qualified DBA (Database Administrator; I'm simply a Server Admin), I can only make an educated guess that having all those results could become an issue.

However, as I understand it, increasing the throughput of the crunchers by reducing the deadlines (and thus having people report in more often) would probably create more of a load on the DB (database, for those who don't know the abbreviations), because the servers would have to handle more transactions per second. Simply having data occupying a row in the database is only a space concern, and space is dirt cheap. Having all those computers pound the servers harder with more results more often would require more CPU and I/O power. Given the project's funding, I don't think even the new servers would be able to handle that - though that last statement is just a guess.
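The "more transactions per second" point can be sketched as a back-of-envelope estimate. All numbers below, and the deadline/2 cap on the work buffer, are illustrative assumptions, not actual SETI@home or BOINC figures; the idea is simply that a client cannot buffer more work than it can finish before the deadline, so a shorter deadline forces more frequent scheduler contacts, and each contact is a database transaction.

```python
def scheduler_rpcs_per_second(active_hosts, preferred_buffer_days, deadline_days):
    """Rough scheduler-contact rate if every host reconnects once per buffer refill."""
    # Assumed rule of thumb: a client's usable work buffer is capped at
    # half the deadline, so it can still finish everything in time.
    effective_buffer_days = min(preferred_buffer_days, deadline_days / 2)
    return active_hosts / (effective_buffer_days * 86400)

# Hypothetical fleet of 100,000 hosts that prefer a 10-day work buffer:
print(f"{scheduler_rpcs_per_second(100_000, 10, 56):.2f} RPC/s at an 8-week deadline")
print(f"{scheduler_rpcs_per_second(100_000, 10, 14):.2f} RPC/s at a 2-week deadline")
```

Under these assumed numbers the contact rate rises noticeably when the deadline shrinks below twice the preferred buffer, which is the load increase the post is guessing at.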
ID: 1051768
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1051791 - Posted: 29 Nov 2010, 3:34:09 UTC - in response to Message 1051768.  

Having millions of results waiting to be verified is not a problem for the servers?


Not being a qualified DBA (Database Administrator; I'm simply a Server Admin), I can only make an educated guess that having all those results could become an issue.

However, as I understand it, increasing the throughput of the crunchers by reducing the deadlines (and thus having people report in more often) would probably create more of a load on the DB (database, for those who don't know the abbreviations), because the servers would have to handle more transactions per second. Simply having data occupying a row in the database is only a space concern, and space is dirt cheap. Having all those computers pound the servers harder with more results more often would require more CPU and I/O power. Given the project's funding, I don't think even the new servers would be able to handle that - though that last statement is just a guess.

The current bottleneck is often the 100 Mbit connection to the lab. Yes, having extra work units does cost space, and ghost tasks cost that same space for longer. Maxing out the bandwidth causes extra ghost WUs to be created, so hitting the servers harder can actually increase the space used, because of the extra ghost WU rows sitting in the DB on top of the heavier DB load.
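That feedback loop can be put in a toy steady-state model. The numbers and the model itself are purely illustrative assumptions: when download demand exceeds the link capacity, some assigned tasks never arrive and become ghosts, and each ghost occupies a DB row until its deadline expires and the task is reissued.

```python
def steady_state_ghost_rows(assigned_per_day, downloadable_per_day, deadline_days):
    """Rough count of ghost rows lingering in the DB at any moment."""
    # Assignments the saturated link can't actually deliver become ghosts.
    ghosts_created_per_day = max(0, assigned_per_day - downloadable_per_day)
    # Each ghost row survives for the full deadline before being reissued.
    return ghosts_created_per_day * deadline_days

# Hypothetical: 120k assignments/day against 100k deliverable downloads/day.
print(steady_state_ghost_rows(120_000, 100_000, 56))  # ghost rows at an 8-week deadline
```

In this sketch, pushing the link past capacity is what creates the ghost rows, so hitting the servers harder grows the DB footprint even though no extra science gets done.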

Note that the Work Unit (WU) data is stored outside of the DB, in a directory. The WU file is sent to the client directly from this file via an HTTP GET after the project update (which hits the DB). The reverse is true for completed results: the result file is uploaded via an HTTP POST, and then contact is made with the DB via the update to report that the upload was done earlier.
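The transfer sequence just described can be laid out step by step. The step names below are illustrative, not the real BOINC client code; the point the sketch encodes is that only the two scheduler contacts are DB transactions, while the bulk WU and result bytes move as plain HTTP GET/POST against files on disk.

```python
# (step name, touches the DB?, what happens)
TASK_LIFECYCLE = [
    ("scheduler request", True,  "client asks for work; DB rows are created"),
    ("HTTP GET",          False, "WU file fetched straight from the WU directory"),
    ("local compute",     False, "task runs on the volunteer's machine"),
    ("HTTP POST",         False, "result file uploaded to the file server"),
    ("scheduler report",  True,  "client reports the earlier upload; DB is updated"),
]

def db_transactions(steps):
    """Count how many lifecycle steps actually touch the database."""
    return sum(1 for _name, hits_db, _note in steps if hits_db)

print(db_transactions(TASK_LIFECYCLE))  # only 2 of the 5 steps hit the DB
```

Keeping the big file transfers out of the DB path is why row storage is cheap while scheduler contacts are the expensive part.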


BOINC WIKI
ID: 1051791
Jeff Bakle
Volunteer tester

Joined: 24 Dec 99
Posts: 19
Credit: 5,056,116
RAC: 1
United States
Message 1051839 - Posted: 29 Nov 2010, 11:00:30 UTC

Thanks for your insights!
ID: 1051839

©2024 University of California

SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.