Cannot get new tasks

Dan T
Joined: 12 Aug 11
Posts: 14
Credit: 2,294,194
RAC: 0
United Kingdom
Message 1212499 - Posted: 31 Mar 2012, 19:13:08 UTC

My main SETI computer has completely run out of tasks now, even though it's set to keep 10 days' worth of work.

I have no limits set on my network connection, yet it just keeps backing off from downloading new tasks.

I guess this must be an issue with the 100Mbit SETI BOINC servers, but is there nothing they can do to improve the main connection, or is that where the donations help?

Hopefully I can get some more tasks on the go soon :)

Thanks
ID: 1212499
Gatekeeper
Joined: 14 Jul 04
Posts: 887
Credit: 176,479,616
RAC: 0
United States
Message 1212536 - Posted: 31 Mar 2012, 20:15:43 UTC
Last modified: 31 Mar 2012, 20:16:14 UTC

Yes, everyone is experiencing the same thing. My three rigs have about 3,000 units pending download at the moment.

It IS because of the demand on the 100Mbit pipe, and there is nothing that you can do about it. It isn't even so much about donations, though those certainly help in other areas of the project. It's more about University politics and protocol.
ID: 1212536
Bear
Joined: 22 Mar 12
Posts: 1
Credit: 0
RAC: 0
United States
Message 1212546 - Posted: 31 Mar 2012, 20:45:38 UTC - in response to Message 1212536.  

Thanks for the update. I am stuck as well and was wondering what the heck was going on!
ID: 1212546
taslehoff
Joined: 28 Sep 02
Posts: 3
Credit: 2,938,934
RAC: 0
United Kingdom
Message 1212577 - Posted: 31 Mar 2012, 22:08:29 UTC

I think I will bin it then; no point having the computer on waiting for something that's not coming :(
ID: 1212577
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212584 - Posted: 31 Mar 2012, 22:24:50 UTC - in response to Message 1212577.  

Join a backup project in the meantime. There are many other worthwhile BOINC projects that could use the help.
ID: 1212584
Gary Hardester
Joined: 21 Feb 00
Posts: 3
Credit: 709,750
RAC: 0
United States
Message 1212655 - Posted: 1 Apr 2012, 4:08:08 UTC

I just re-joined the project and downloaded the program on my new machine after several years away. SETI@home was mentioned on another message board, and it reminded me that I have a much faster machine now and I wanted to play again.

I was actually surprised that the system remembered me and kept my stats. And I have been playing with the preferences thinking that a setting was preventing me from getting a new task.

So, no one is getting new tasks? Any idea how long it will take before a new task is offered to me?

Is the real message one of too many computers and not enough pipe? It sounds like I should just uninstall the program again.
ID: 1212655
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212657 - Posted: 1 Apr 2012, 4:14:58 UTC - in response to Message 1212655.  

It's not that no one is getting tasks; it's that there's a traffic jam at the 100Mbit connection on SETI's end, and it simply takes a while to download work and return it. It's recommended to simply let BOINC keep retrying to download work on its own, and it will eventually fetch what it requests.

If you want to keep your CPU busy while waiting for work from SETI, join another worthwhile project. In fact, having a backup project can keep you working even if the other project has a server crash (don't put all your eggs in one basket).

I'd hate to see anyone give up and leave out of frustration - you won't even notice if you don't micro-manage BOINC and just let it run in the background. That way there's no frustration to be had. Note that your computer doesn't care if it has to keep trying and it never gets frustrated.

I have BOINC on 12 computers and I simply let it run while I do my own stuff. I never even notice if I'm out of work.
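
That retrying behaviour is essentially a randomized exponential backoff. Here's a minimal sketch of the idea in Python (not BOINC's actual code; the download callable and the constants are purely illustrative):

import random
import time

def fetch_with_backoff(download, max_delay=4 * 3600):
    # Retry a download, waiting a randomized, growing delay after each
    # failure. A sketch of the idea only, not BOINC's real scheduler logic.
    delay = 60  # start with roughly a one-minute wait
    while True:
        try:
            return download()  # attempt the transfer
        except IOError:
            # Random jitter stops every client retrying at the same instant.
            wait = random.uniform(0.5, 1.0) * delay
            print("Backing off %.0f sec" % wait)
            time.sleep(wait)
            delay = min(delay * 2, max_delay)  # double the wait, up to a cap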
ID: 1212657
Dan T
Joined: 12 Aug 11
Posts: 14
Credit: 2,294,194
RAC: 0
United Kingdom
Message 1212676 - Posted: 1 Apr 2012, 5:07:00 UTC
Last modified: 1 Apr 2012, 5:08:43 UTC

For me, I now have a list of probably 50+ tasks all showing 'Download pending'; each transfer will start, get to about 50%, and back off:

01/04/2012 06:02:25 | SETI@home | Backing off 30 min 53 sec on download of 15jn11ab.11167.11524.3.10.8

Kind of confusing why a 366KB file can't just download in a second or two on a 10Mbit connection; I guess it just can't get the resources it needs.
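
A quick sanity check in Python (just back-of-the-envelope, my own figures) shows the file itself should only take a fraction of a second on this line, so the holdup must be at the server end:

file_size_bits = 366 * 1024 * 8   # one 366KB workunit, in bits
line_speed = 10_000_000           # a 10Mbit/s connection, in bits/s
print(file_size_bits / line_speed)  # ~0.3 seconds if the line were the bottleneck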

It's certainly got worse this last week though, because I've never had this much of a struggle to get tasks.

Are these normal issues in busy periods?


01/04/2012 00:34:02 | SETI@home | Temporarily failed download of 15jn11ab.11167.11524.3.10.7: HTTP error
01/04/2012 00:34:02 | SETI@home | Backing off 1 hr 0 min 49 sec on download of 15jn11ab.11167.11524.3.10.7
01/04/2012 00:34:05 | | Project communication failed: attempting access to reference site
01/04/2012 00:34:06 | | Internet access OK - project servers may be temporarily down.
ID: 1212676
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1212683 - Posted: 1 Apr 2012, 5:25:43 UTC - in response to Message 1212676.  

Kind of confusing why a 366KB file can't just download in a second or two on a 10Mbit connection ... Are these normal issues in busy periods?

01/04/2012 00:34:02 | SETI@home | Temporarily failed download of 15jn11ab.11167.11524.3.10.7: HTTP error
01/04/2012 00:34:02 | SETI@home | Backing off 1 hr 0 min 49 sec on download of 15jn11ab.11167.11524.3.10.7
01/04/2012 00:34:05 | | Project communication failed: attempting access to reference site
01/04/2012 00:34:06 | | Internet access OK - project servers may be temporarily down.

That looks pretty normal for a busy period.

I would recommend that you add another couple of projects to your set of BOINC projects.


BOINC WIKI
ID: 1212683
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212696 - Posted: 1 Apr 2012, 5:58:15 UTC - in response to Message 1212676.  
Last modified: 1 Apr 2012, 6:04:03 UTC

Kind of confusing why a 366KB file can't just download in a second or two on a 10Mbit connection; I guess it just can't get the resources it needs.


Imagine 250,000 cars all trying to get onto a 4-lane highway at the same time. There will be a huge traffic jam as they all slowly work their way onto the freeway.

With computer networking, if you overload a server like that, it will drop connections. If too many hosts pound a server for any reason, including trying to load a webpage, the TCP session can terminate with a "webpage not found" error. When it's done deliberately, we call this a Distributed Denial of Service (DDoS) attack. There are ways to share the load using multiple servers, but SETI@home lacks the kind of funds necessary to run multiple download servers to support 125,000 accounts (or about 500,000 total hosts).
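
To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the host count is the estimate above, and the 1% figure is an assumption for illustration):

pipe = 100_000_000          # the lab's 100Mbit/s connection, in bits/s
hosts = 500_000             # rough total host estimate from above
workunit = 366 * 1024 * 8   # one small 366KB workunit, in bits

simultaneous = hosts // 100     # suppose just 1% of hosts ask at once
per_host = pipe / simultaneous  # each host's share of the pipe
print(per_host)                 # 20000.0 bits/s each
print(workunit / per_host)      # ~150 seconds for one small file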

The important thing to remember is that SETI never promised anyone, when they signed up, that work would be available 100% of the time, nor that there wouldn't be any problems getting work. As volunteers, we should pick projects whose goals we agree with and let the BOINC framework do its job.

This is why backup projects are so important, and they're even recommended on SETI@Home's front page.

It's certainly got worse this last week though, because I've never had this much of a struggle to get tasks.

Are these normal issues in busy periods?


Problem is, with so many hosts, there's almost never a non-busy period. By the time all the hosts catch up with requests for work, we're right back at the Tuesday server maintenance outage, then we're back to several days of catching up after the outage.
ID: 1212696
Dan T
Joined: 12 Aug 11
Posts: 14
Credit: 2,294,194
RAC: 0
United Kingdom
Message 1212881 - Posted: 1 Apr 2012, 17:11:02 UTC

Is there no way they could somehow peer-to-peer share the files, so that if they exist on other users' computers they could be downloaded from there, if the user allows?

That way the faster users could be given more access to the main server if they allow sharing of the files.

The highest downloading and sharing users would then distribute most of the work?

Kind of like torrenting?
ID: 1212881
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212884 - Posted: 1 Apr 2012, 17:15:53 UTC - in response to Message 1212881.  
Last modified: 1 Apr 2012, 17:21:44 UTC

Is there no way they could somehow peer-to-peer share the files, so that if they exist on other users' computers they could be downloaded from there, if the user allows?

That way the faster users could be given more access to the main server if they allow sharing of the files.

The highest downloading and sharing users would then distribute most of the work?

Kind of like torrenting?


That only masks the issue by moving the problem around; it doesn't solve it. Eventually, all the workunits would still need to start at Berkeley before they could be cached (downloaded) by clients, and all the results would need to end up back at Berkeley (uploaded).

Essentially, all you've done is break down the freeway into multiple two-lane roads, but everyone still needs to move in and out of Berkeley.
ID: 1212884
Dan T
Joined: 12 Aug 11
Posts: 14
Credit: 2,294,194
RAC: 0
United Kingdom
Message 1212922 - Posted: 1 Apr 2012, 18:35:23 UTC

But surely you'd reduce the download bandwidth if 10% of users downloaded from the main server but the other 90% downloaded tasks from other users?

Of course you can't change the 100% upload because that always has to go back to Berkeley to give them the results.

Maybe they could just torrent the task files so you could download them faster from other locations? Enable some sort of config option to check the torrent download folder first, and otherwise fall back to the normal task download URL.

Just a thought! I'm a programmer, not a networker, so it's hard to know the best way to solve issues like this.
ID: 1212922
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212928 - Posted: 1 Apr 2012, 18:48:18 UTC - in response to Message 1212922.  

But surely you'd reduce the download bandwidth if 10% of users downloaded from the main server but the other 90% downloaded tasks from other users?


But those peer computers that the other 90% would get their work from (your 10%) still need to download the work from Berkeley first. Again, you're just moving the problem from Berkeley to another computer. The time spent downloading work to be peer-shared could just as easily be spent downloading it directly.

Imagine there's a single water spigot that everyone must share. Now imagine a few users try to peer-share their water, so they spend extra long at the spigot filling an extra-large bucket, some to be shared and some to be used for themselves.

Sure, they could then share the water, but they spent 10x as long at the spigot, when the ten users they supply could each have spent the same amount of time at the spigot themselves.

The question becomes: is it really worth all the extra complexity to program such a peer-network into BOINC when it doesn't solve the problem of only a single source of data?

The scenario has been reviewed and the answer is a resounding "no". The complexity of building in a peer-network isn't worth the minimal (if any) return on investment.

Just a thought! I'm a programmer, not a networker, so it's hard to know the best way to solve issues like this.


This is my area of expertise, as a Systems Administrator and Network Engineer.
ID: 1212928
Dan T
Joined: 12 Aug 11
Posts: 14
Credit: 2,294,194
RAC: 0
United Kingdom
Message 1212943 - Posted: 1 Apr 2012, 19:33:51 UTC
Last modified: 1 Apr 2012, 19:47:08 UTC

I just thought it would work the following way:

I don't want to argue about it, just to propose possible alternatives (where there's a will there's a way).

Example
Now

1,000,000 users download 100 files from the main server
- average 300KB file size

1,000,000 * 100 * 300KB = 27.94 TiB (about 30 TB) of data to be downloaded from the main server

With 10% download from main server

100,000 users download 100 files from the main server
- average 300KB file size

100,000 * 100 * 300KB = 2.79 TiB (about 3 TB) of data to be downloaded from the main server

Outcome

Seems to reduce the main traffic? You could look at it like the distributed SETI telescopes, for a different kind of analogy. You could also only allow users with 'fast' download speeds to download from the main server; otherwise they're redirected to source the files from another location. That would get rid of the slower users who take longer to download the files.
ID: 1212943
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212956 - Posted: 1 Apr 2012, 20:26:42 UTC - in response to Message 1212943.  
Last modified: 1 Apr 2012, 21:17:38 UTC

I just thought it would work the following way:


No, you've still ignored the fact that it's not just the amount of work that needs to be downloaded, but also the amount of time each client holds the server's attention. The longer a peer-sharing client holds the server's attention, the longer it prevents others from getting any. So you've moved the problem around, but you still create the same scenario.

Your example doesn't take time into the equation at all. You're also not taking into account that AstroPulse workunits are 8MB each, but we'll stick with the 300KB size for this analogy. Let me modify it for you:

Example
Now

1,000,000 users download 100 files from the main server
- average 300KB file size

1,000,000 [users] * 100 [files] * 300KB = 30 TB of data to be downloaded from the main server @ 95Mbit/second (over 95% saturation drops connections) = about 2,526,000 seconds of total download time, or roughly 29 days (not counting TCP / server overhead).

Outcome:
A total of 100,000,000 workunits downloaded. Each user gets 100 files, and it took about a month of saturated bandwidth to do this.


Now you wish to reduce the number of users connecting to the server to just 10%, so each of those clients needs to download 1,000 workunits so they can be shared with the other 90%:

With 10% download from main server

100,000 users download 1,000 files from the main server (10% of the users download extra files so that peer-clients can still get 100 files each)
- average 300KB file size

100,000 [users] * 1,000 [files] * 300KB = 30 TB of data downloaded @ 95Mbit = about 2,526,000 seconds, or roughly 29 days (again, not counting overhead).

Now the peer-clients aren't going to download all of the work from the peer servers, just their share of 100 workunits:

900,000 [peer users] * 100 [files] * 300KB = 27 TB of the main 30 TB of data, downloaded from peers; but it doesn't matter, because this isn't bandwidth "saved" - it was already used by the 100,000 users above getting the extra work, so this figure can be completely ignored.

Outcome:
Every client still gets their 100 workunits, and 100,000,000 workunits in total were still downloaded. Note that no time has been saved: the network is saturated for the same amount of time either way.

This brings us back to my point: you've only moved the problem around; the network is still saturated, and you're spending the same amount of time getting the work. All the programming time it would take to write the peer-sharing client, test it, and constantly fix bugs in it later isn't worth the lack of savings.
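
The same comparison in a few lines of Python, using the figures above (decimal TB and the same 95Mbit/s effective rate; purely a sketch):

users, files_each, size = 1_000_000, 100, 300_000  # 300KB per file, in bytes
rate = 95_000_000                                   # 95Mbit/s, in bits/s

def server_time_days(total_bytes):
    # seconds for the server to push this many bytes, expressed in days
    return total_bytes * 8 / rate / 86400

direct = users * files_each * size                  # everyone downloads directly
peered = (users // 10) * (files_each * 10) * size   # 10% fetch 10x to share on

print(direct == peered)          # True: the same 30 TB leaves the server
print(server_time_days(direct))  # ~29 days of saturated pipe either way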

You could look at it like the distributed SETI telescopes, for a different kind of analogy


SETI@Home doesn't use distributed telescopes. SETI@Home uses two telescopes, and only records whenever the two are already running. The data is stored on hard drives and shipped to Berkeley to be divided up into chunks to be downloaded by clients.
ID: 1212956
Dan T
Joined: 12 Aug 11
Posts: 14
Credit: 2,294,194
RAC: 0
United Kingdom
Message 1212972 - Posted: 1 Apr 2012, 20:53:46 UTC

But if only 10% of users got data from the absolute top main server, those 10% could share the tasks with the other 90%.

You could then call those 10% 'super users', because their machines help spread the tasks out and reduce the main server's workload.

Surely you would only need a couple of algorithms to work out which users are best suited to being 'super users' (network speed, computers, availability, permission), then block all other users from the main server and force them to peer the files.

90% of users would then never download from the main server but only upload to it?

I meant the Allen Telescope Array
ID: 1212972
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212975 - Posted: 1 Apr 2012, 20:55:05 UTC - in response to Message 1212972.  
Last modified: 1 Apr 2012, 20:57:12 UTC

But if only 10% of users got data from the absolute top main server, those 10% could share the tasks with the other 90%.


I updated my figures while you typed a response to reflect only 10% of the users getting files.

The bottom line is that bittorrent/peer-sharing only saves time when many people need copies of the same files, like shared documents or music. It doesn't work so well when each file is unique and still has to originate from a single main server.

I meant the Allen Telescope Array


The Allen Telescope Array is used by the SETI Institute, which has no affiliation with SETI@home at all.
ID: 1212975
Dan T
Joined: 12 Aug 11
Posts: 14
Credit: 2,294,194
RAC: 0
United Kingdom
Message 1212985 - Posted: 1 Apr 2012, 21:09:37 UTC
Last modified: 1 Apr 2012, 21:25:10 UTC

Fair enough then; it just seems kind of ironic that this project spreads the workload out to thousands of computers to speed it up, yet they only have two download/upload servers.
ID: 1212985
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1212987 - Posted: 1 Apr 2012, 21:10:21 UTC - in response to Message 1212985.  
Last modified: 1 Apr 2012, 21:13:16 UTC

Actually, they have two download servers, but they only have one connection to the internet, which is all that really matters. They need a way to manage that limited connection by forcing clients to back off for random periods of time so that each client can get a turn downloading files - which is exactly what BOINC does now. More bandwidth would be the biggest help.
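
The random part is the whole trick: if every client retried after the same fixed delay, they would all collide again at the same instant. A toy simulation in Python (the numbers are made up for illustration):

import random

clients = 10000
slots = 3600  # one-second retry slots across an hour

fixed = [1800] * clients  # everyone retries exactly 30 minutes later
jittered = [random.randrange(slots) for _ in range(clients)]  # spread randomly

def busiest_second(times):
    # count how many clients land on the single worst second
    counts = {}
    for t in times:
        counts[t] = counts.get(t, 0) + 1
    return max(counts.values())

print(busiest_second(fixed))     # 10000 -- every client hits the server at once
print(busiest_second(jittered))  # around 10 -- the load is spread across the hour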

This is why people who pound the retry button are essentially forcing themselves to the front of the queue and (in my view) rather selfishly grabbing work while making everyone else wait longer.

It's not so ironic really, when you consider that you can't distribute a single internet connection the way you can distribute the workunits themselves.
ID: 1212987