Wish List - Reduce Expiration Period



Questions and Answers : Wish list : Wish List - Reduce Expiration Period

Jeff Bakle
Volunteer tester
Joined: 24 Dec 99
Posts: 19
Credit: 4,158,888
RAC: 434
United States
Message 1051683 - Posted: 28 Nov 2010, 19:16:02 UTC

My wish list item would be to reduce the expiration period on SETI work units. Why does it have to be two months? If a result becomes a "ghost", it takes two months for it to be regenerated.
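For readers unfamiliar with the mechanics, the behavior described above can be sketched roughly like this (a simplified illustration; the function and constant names are made up for the example, not BOINC's actual code):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of why a ghost takes the full deadline to come back:
# the server cannot distinguish a ghost (a task sent but never received by
# the client) from a merely slow host, so it only reissues a task once its
# deadline has passed.
DEADLINE = timedelta(days=60)   # the roughly two-month expiration period

def should_reissue(sent_at: datetime, now: datetime) -> bool:
    return now - sent_at > DEADLINE
```

Under this model, a task sent on 1 January would not be reissued until early March, no matter how quickly another host could have crunched it.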

I understand that some users have some very slow machines, but could this period be between 2 to 4 weeks?

Regardless of this issue, I am a devoted cruncher! I can't wait for the system to become fully functional again!
____________

John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24292
Credit: 519,558
RAC: 29
United States
Message 1051701 - Posted: 28 Nov 2010, 21:04:44 UTC

SETI is not a time-critical application. It really does not matter whether a task is verified today or next year. Slower machines that are not on full time can take more than a month to complete a task.
____________


BOINC WIKI

OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 13577
Credit: 29,861,062
RAC: 16,358
United States
Message 1051722 - Posted: 28 Nov 2010, 22:14:35 UTC - in response to Message 1051683.

The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers but does little in the way of the science since the data we're processing is from light-years away.

From the project's perspective, the only thing it would accomplish is getting through more data quicker. However, with nVidia's CUDA and ATi's OpenCL, I'm not sure they need to sift quicker. It's not even a race against time. It would also discourage anyone with a slower computer from contributing, and the project is trying to be as inclusive as possible. Excluding anyone who can't afford a newer computer wouldn't do much good for the project's PR.

Jeff Bakle
Volunteer tester
Joined: 24 Dec 99
Posts: 19
Credit: 4,158,888
RAC: 434
United States
Message 1051729 - Posted: 28 Nov 2010, 22:44:55 UTC - in response to Message 1051722.

Understood, as I am not a credit-seeker.

Isn't having millions of results waiting to be verified a problem for the servers?
____________

John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24292
Credit: 519,558
RAC: 29
United States
Message 1051735 - Posted: 28 Nov 2010, 23:39:22 UTC - in response to Message 1051722.

The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers but does little in the way of the science since the data we're processing is from light-years away.

From the project's perspective, the only thing it would accomplish is getting through more data quicker. However, with nVidia's CUDA and ATi's OpenCL, I'm not sure they need to sift quicker. It's not even a race against time. It would also discourage anyone with a slower computer from contributing, and the project is trying to be as inclusive as possible. Excluding anyone who can't afford a newer computer wouldn't do much good for the project's PR.

Actually, the long term throughput is increased somewhat if you do not discourage the owners of the slower computers from participating. Reducing the deadlines will decrease turnaround slightly, but will not increase overall productivity.
____________



OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 13577
Credit: 29,861,062
RAC: 16,358
United States
Message 1051766 - Posted: 29 Nov 2010, 1:33:13 UTC - in response to Message 1051735.

The only reason to reduce the deadline is to quicken the credit-granting process. That motivation only helps the credit-seekers but does little in the way of the science since the data we're processing is from light-years away.

From the project's perspective, the only thing it would accomplish is getting through more data quicker. However, with nVidia's CUDA and ATi's OpenCL, I'm not sure they need to sift quicker. It's not even a race against time. It would also discourage anyone with a slower computer from contributing, and the project is trying to be as inclusive as possible. Excluding anyone who can't afford a newer computer wouldn't do much good for the project's PR.

Actually, the long term throughput is increased somewhat if you do not discourage the owners of the slower computers from participating. Reducing the deadlines will decrease turnaround slightly, but will not increase overall productivity.


That's kinda what I said. Maybe I didn't make it very clear.

OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 13577
Credit: 29,861,062
RAC: 16,358
United States
Message 1051768 - Posted: 29 Nov 2010, 1:37:48 UTC - in response to Message 1051729.

Isn't having millions of results waiting to be verified a problem for the servers?


Not being a qualified DBA (Database Administrator; I'm simply a server admin), I can only make an educated guess that having all those results could become an issue.

However, as I understand it, increasing the throughput of the crunchers by reducing the deadlines (and thus having people report in more often) would probably create more of a load on the DB (database, for those who don't know the abbreviation), because the servers would have to handle more transactions per second. Simply having data occupying a row in the database is only a space concern, and space is dirt cheap. Having all those computers pound the servers harder with more results more often would require more CPU and I/O power. Given the project's funding, I don't think even the new servers would be able to handle that - though that last statement is just a guess.
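As a rough back-of-envelope illustration of that load argument (the host count and cache sizes below are invented for the example, not measured SETI figures): if shorter deadlines force hosts to keep smaller work caches, they have to contact the scheduler proportionally more often, even though the total work done is unchanged.

```python
def scheduler_contacts_per_day(hosts: int, tasks_per_host_per_day: int,
                               cache_days: float) -> float:
    # Each scheduler contact fetches (and reports) roughly one cache's
    # worth of tasks, so halving the cache doubles the contact rate.
    tasks_per_contact = tasks_per_host_per_day * cache_days
    return hosts * tasks_per_host_per_day / tasks_per_contact
```

With an assumed 300,000 active hosts, shrinking the cache from 10 days to 1 day multiplies the scheduler's daily transaction count tenfold - same science, ten times the load.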

John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24292
Credit: 519,558
RAC: 29
United States
Message 1051791 - Posted: 29 Nov 2010, 3:34:09 UTC - in response to Message 1051768.

Isn't having millions of results waiting to be verified a problem for the servers?


Not being a qualified DBA (Database Administrator; I'm simply a server admin), I can only make an educated guess that having all those results could become an issue.

However, as I understand it, increasing the throughput of the crunchers by reducing the deadlines (and thus having people report in more often) would probably create more of a load on the DB (database, for those who don't know the abbreviation), because the servers would have to handle more transactions per second. Simply having data occupying a row in the database is only a space concern, and space is dirt cheap. Having all those computers pound the servers harder with more results more often would require more CPU and I/O power. Given the project's funding, I don't think even the new servers would be able to handle that - though that last statement is just a guess.

The current bottleneck is often the 100 Mbit connection to the lab. Yes, having extra work units does cost space, and ghost tasks cost that same space for longer. Maxing out the bandwidth causes extra ghost WUs to be created, so hitting the servers harder can actually increase the space used: the extra ghost WUs take disk space, the DB gets hit harder, and the ghost WU rows take more space in the DB.
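To put a rough number on that bottleneck (the work-unit size here is an approximation assumed for illustration, and protocol overhead is ignored): a 100 Mbit/s link caps how many work-unit downloads per second the project can serve.

```python
def max_downloads_per_second(link_mbit: float = 100.0,
                             wu_kb: float = 366.0) -> float:
    # 100 Mbit/s is 12.5 MB/s; dividing by the (approximate) work-unit
    # size gives a ceiling on sustained download completions per second.
    link_bytes_per_s = link_mbit * 1_000_000 / 8
    return link_bytes_per_s / (wu_kb * 1000)
```

That works out to a few dozen work units per second at best - and when demand exceeds the link's capacity, stalled transfers are exactly the situation in which ghosts get created.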

Note that the work unit data is stored outside of the DB, in a directory. The WU file is sent to the client directly from this file via an HTTP GET after the project update (which hits the DB). The reverse is true for completed results: the result file is uploaded via an HTTP POST, and then contact is made with the DB via the update, which reports that the upload was done earlier.
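A minimal sketch of that separation (the dictionaries below are in-memory stand-ins for the download directory, the upload handler's directory, and the science DB; none of this is SETI's real code): the bulk data moves as plain file transfers, and only the later scheduler "update" contact touches the database.

```python
# In-memory stand-ins for the real components (illustrative only).
download_dir = {"wu_0001.dat": b"telescope data ..."}  # served via HTTP GET
upload_dir = {}                                        # filled via HTTP POST
db = {"wu_0001": "sent"}                               # scheduler state only

def client_crunch(wu_name: str) -> None:
    # Download and upload are pure file transfers -- the DB is not touched.
    data = download_dir[wu_name + ".dat"]       # HTTP GET from the directory
    result = data[::-1]                         # stand-in for the science
    upload_dir[wu_name + ".result"] = result    # HTTP POST to upload handler

def scheduler_update(wu_name: str) -> None:
    # Only this later scheduler contact hits the DB, recording that the
    # upload already happened.
    if wu_name + ".result" in upload_dir:
        db[wu_name] = "uploaded"
```

The point of the split is that the heavy byte-pushing never competes with the database for I/O; the DB only sees the small bookkeeping transactions.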
____________



Jeff Bakle
Volunteer tester
Joined: 24 Dec 99
Posts: 19
Credit: 4,158,888
RAC: 434
United States
Message 1051839 - Posted: 29 Nov 2010, 11:00:30 UTC

Thanks for your insights!
____________


Copyright © 2014 University of California