Lunatics Help

Message boards : Number crunching : Lunatics Help

Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1779854 - Posted: 17 Apr 2016, 7:44:57 UTC - in response to Message 1779849.  

P.S. Maybe they need to shorten task deadlines ??

And then older systems, or lower-powered ones, or ones that only run for a few hours a day or a few days a week, won't be able to participate.
Grant
Darwin NT
ID: 1779854 · Report as offensive
Profile Rich (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)

Joined: 27 Oct 14
Posts: 4
Credit: 49,285,910
RAC: 0
United States
Message 1779875 - Posted: 17 Apr 2016, 11:12:35 UTC - in response to Message 1779854.  

I think our comments about the unnecessary queue size don't exclude people from participating; they are aimed more at having a reasonable queue size. If it takes 50 hours to do a WU, is a filled queue appropriate? After 24 hours, or at 50% done, the next set of WUs could be downloaded. A little playing around with the preferences would fine-tune the queue size, and a reasonable number of units would be ready to run for whatever type of equipment the person may have.
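The sizing rule described here (a queue proportional to throughput, refilled once it is half done) can be sketched in a few lines. This is an illustration of the idea only; the function names and figures are hypothetical, not BOINC's actual work-fetch logic.

```python
# Illustrative sketch of the queue-sizing idea above, not BOINC's real logic.

def reasonable_queue_size(tasks_per_day: float, buffer_days: float) -> int:
    """A queue proportional to what the host actually gets through."""
    return max(1, round(tasks_per_day * buffer_days))

def should_refill(tasks_remaining: int, queue_size: int) -> bool:
    """Request the next set of WUs once the queue is 50% done."""
    return tasks_remaining <= queue_size / 2

# A host needing ~50 hours per WU completes about 0.5 tasks a day:
size = reasonable_queue_size(tasks_per_day=0.5, buffer_days=2.0)
print(size)                        # 1
print(should_refill(0, size))      # True
print(should_refill(size, size))   # False
```

Under this rule a fast host crunching 200 WUs a day would hold a few hundred tasks, while a 50-hour-per-WU host would hold only one or two.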
ID: 1779875 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1779878 - Posted: 17 Apr 2016, 12:15:09 UTC - in response to Message 1779854.  

P.S. Maybe they need to shorten task deadlines ??

And then older systems, or lower-powered ones, or ones that only run for a few hours a day or a few days a week, won't be able to participate.



. . Well, even on the old Pentium 4, before it popped a power supply, I could get through a job or two a day; even part-time it could churn out 2 or 3 results a week. (If I had known about Lunatics then, maybe 4 or 5 a day.) All it takes is to be sensible and show some forethought. Shortening deadlines wouldn't preclude anyone from participating unless maybe they were running a slow 286. Anyone using technology from this century could still take part :)

. . But the advantage would be that it would prevent those with massive caches of WUs who are not returning results from causing backlogs of incomplete jobs. The tasks would not have to hang in limbo for months until they eventually ended up on a host that would actually process them. It seems highly inefficient to me, and that is all I am saying. But from what I have observed, many of the individuals who conduct their participation in that way have systems more powerful than any I have: i7s with dual GTX 770s, things like that.
ID: 1779878 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1779879 - Posted: 17 Apr 2016, 12:16:16 UTC - in response to Message 1779875.  
Last modified: 17 Apr 2016, 12:19:28 UTC

I think our comments about the unnecessary queue size don't exclude people from participating; they are aimed more at having a reasonable queue size. If it takes 50 hours to do a WU, is a filled queue appropriate? After 24 hours, or at 50% done, the next set of WUs could be downloaded. A little playing around with the preferences would fine-tune the queue size, and a reasonable number of units would be ready to run for whatever type of equipment the person may have.



. . Exactly!

. . But I have probably said enough on the point.
ID: 1779879 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1780034 - Posted: 18 Apr 2016, 5:50:14 UTC - in response to Message 1779875.  

I think our comments about the unnecessary queue size don't exclude people from participating; they are aimed more at having a reasonable queue size. If it takes 50 hours to do a WU, is a filled queue appropriate?

And when Seti is down for several days? It has been down for weeks in the past.
The whole idea of a cache is to get through outages.

Your post mentioned a WU from Dec 25. The reason it was still there has nothing to do with the size of people's caches; as I posted previously, it has to do with the deadline of the WU.


Part of the problem may be terminology.
You say you "had one in my queue since December 25".
What you're talking about there isn't a queue, it's just a record of WUs you have done. Once they have validated (or once all 10 issues have errored or timed out) they will be cleared from the database.
When people talk about a queue, they usually mean a cache: a form of buffer against disruptions to the flow of data.
Grant
Darwin NT
ID: 1780034 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1780094 - Posted: 18 Apr 2016, 14:22:40 UTC - in response to Message 1780034.  

I think our comments about the unnecessary queue size don't exclude people from participating; they are aimed more at having a reasonable queue size. If it takes 50 hours to do a WU, is a filled queue appropriate?

And when Seti is down for several days? It has been down for weeks in the past.
The whole idea of a cache is to get through outages.

Your post mentioned a WU from Dec 25. The reason it was still there has nothing to do with the size of people's caches; as I posted previously, it has to do with the deadline of the WU.


Part of the problem may be terminology.
You say you "had one in my queue since December 25".
What you're talking about there isn't a queue, it's just a record of WUs you have done. Once they have validated (or once all 10 issues have errored or timed out) they will be cleared from the database.
When people talk about a queue, they usually mean a cache: a form of buffer against disruptions to the flow of data.



. . I think it is safe to say he is not talking about a validated result. He is talking about a job that had been open since December 25th because the first host that received the task failed to process it and it timed out, so it was sent to another host, which also failed to process it; then it went to a third host and was finally cleared a few days ago. While you are taking "queue" to mean the list of jobs that are either being processed on the local host or are waiting to be, the list of jobs that have been processed by the local host but are waiting to be validated is still a queue of incomplete tasks.

. . And I think you may be feeling defensive about the number of WUs waiting to be done on your machine, but that is not the thing that bothers me, nor, I believe, my ally. We are talking about disproportionate queue lengths on hosts that are not doing anything with them.

. . But even as a safety measure against web server outages, the numbers "cached" should be in proportion to the numbers that are being successfully crunched. A host that is processing 10 or 12 WUs a month does not need a "cache" of 400 WUs. But a host that is crunching 200 WUs a day can justify a cache of that size. And so I repeat, there is room for the deadlines to be tightened a little.
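The proportionality argument can be made concrete with the figures quoted in this post (a 400-WU cache, 10-12 WUs a month versus 200 a day). A quick sketch, using only those figures:

```python
# How many days of work a 400-WU cache represents at the throughputs
# quoted above. Pure arithmetic on the post's own figures.

def cache_in_days(cache_size: int, tasks_per_day: float) -> float:
    return cache_size / tasks_per_day

slow_host = cache_in_days(400, 12 / 30)   # ~12 WUs a month
fast_host = cache_in_days(400, 200)       # 200 WUs a day
print(round(slow_host))   # 1000 days -- nearly three years of "buffer"
print(fast_host)          # 2.0 days  -- a reasonable outage buffer
```

The same cache that is a two-day buffer on the fast host represents years of work on the slow one, which is the disproportion being objected to.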
ID: 1780094 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1780312 - Posted: 19 Apr 2016, 6:44:55 UTC - in response to Message 1780094.  

But even as a safety measure against web server outages, the numbers "cached" should be in proportion to the numbers that are being successfully crunched. A host that is processing 10 or 12 WUs a month does not need a "cache" of 400 WUs. But a host that is crunching 200 WUs a day can justify a cache of that size. And so I repeat, there is room for the deadlines to be tightened a little.

And as I keep pointing out, but it doesn't appear to be sinking in: cache size and WU deadlines are two unrelated issues. No matter how much smaller you make caches, there will still be WUs that take a couple of months, or more, to eventually validate and clear from your Validation pending or Inconclusive list.
Yes, large caches do have an impact on the number of WUs outstanding, and if people were able to cache several weeks or even a month or more of work then cache sizes would have a significant impact on the time for WUs to validate. However, that isn't the case, so the impact of large caches on slow machines is so small as to be of no consequence, and so irrelevant.

So arguing about cache size is pointless if you are concerned about how long it takes for some WUs to eventually validate; that period of time is due to deadlines, not cache size. And the deadlines are what they are for the reasons I mentioned previously.

If you were to be concerned about systems that download work but don't return any, or systems that for some reason have a full cache yet return mostly errors then you'd be one of many.
But as cache size has no impact on the number of WUs that take extended periods to validate, it doesn't make any sense to make an issue out of cache sizes.


To summarise: as things are, large caches on slow machines have no significant impact on the time it takes for WUs to eventually validate.
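The argument that deadlines, not cache size, bound validation latency can be put as a toy formula: each non-returning host can sit on a WU for a full deadline before it is reissued, so the worst case is the deadline times the number of hosts it passes through, and cache size never enters the formula. The deadline length and timeout counts below are illustrative, not SETI's actual values.

```python
# Toy model: worst-case time for a WU to validate. Cache size does not
# appear anywhere; only the deadline and the number of timeouts matter.

def worst_case_validation_days(deadline_days: float, timeouts: int) -> float:
    """Each non-returning host holds the WU for a full deadline before
    it is reissued; the final host actually returns a result."""
    return deadline_days * (timeouts + 1)

print(worst_case_validation_days(49, 0))  # 49.0  (no timeouts)
print(worst_case_validation_days(49, 2))  # 147.0 (two hosts sat on it first)
```

Shrinking everyone's cache changes neither term, which is the point being made here.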
Grant
Darwin NT
ID: 1780312 · Report as offensive
Profile Wiggo
Joined: 24 Jan 00
Posts: 34754
Credit: 261,360,520
RAC: 489
Australia
Message 1780365 - Posted: 19 Apr 2016, 10:49:16 UTC
Last modified: 19 Apr 2016, 10:54:04 UTC

But even as a safety measure against web server outages, the numbers "cached" should be in proportion to the numbers that are being successfully crunched. A host that is processing 10 or 12 WUs a month does not need a "cache" of 400 WUs. But a host that is crunching 200 WUs a day can justify a cache of that size. And so I repeat, there is room for the deadlines to be tightened a little.

And as I keep pointing out, but it doesn't appear to be sinking in: cache size and WU deadlines are two unrelated issues. No matter how much smaller you make caches, there will still be WUs that take a couple of months, or more, to eventually validate and clear from your Validation pending or Inconclusive list.
Yes, large caches do have an impact on the number of WUs outstanding, and if people were able to cache several weeks or even a month or more of work then cache sizes would have a significant impact on the time for WUs to validate. However, that isn't the case, so the impact of large caches on slow machines is so small as to be of no consequence, and so irrelevant.

So arguing about cache size is pointless if you are concerned about how long it takes for some WUs to eventually validate; that period of time is due to deadlines, not cache size. And the deadlines are what they are for the reasons I mentioned previously.

If you were to be concerned about systems that download work but don't return any, or systems that for some reason have a full cache yet return mostly errors then you'd be one of many.
But as cache size has no impact on the number of WUs that take extended periods to validate, it doesn't make any sense to make an issue out of cache sizes.


To summarise: as things are, large caches on slow machines have no significant impact on the time it takes for WUs to eventually validate.

As Grant said, the size of a person's cache isn't the problem.

The problem is those that join, get their maximum fill of work, and then have a problem. Then they uninstall without detaching from the project first, or asking about it, leaving hundreds of tasks to time out (and I've mentioned before here that if someone does do an uninstall, it should notify all the servers it's connected to at the same time as the uninstall).

This has been a problem since the beginning, and hopefully someone will finally address it, Stephen, but you'll soon learn that extreme patience is required for this project if you wish to support it. ;-)

But who am I to give advice? ROFL

Cheers.
ID: 1780365 · Report as offensive
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1780429 - Posted: 19 Apr 2016, 15:27:46 UTC - in response to Message 1780312.  

But even as a safety measure against web server outages, the numbers "cached" should be in proportion to the numbers that are being successfully crunched. A host that is processing 10 or 12 WUs a month does not need a "cache" of 400 WUs. But a host that is crunching 200 WUs a day can justify a cache of that size. And so I repeat, there is room for the deadlines to be tightened a little.

And as I keep pointing out, but it doesn't appear to be sinking in: cache size and WU deadlines are two unrelated issues. No matter how much smaller you make caches, there will still be WUs that take a couple of months, or more, to eventually validate and clear from your Validation pending or Inconclusive list.
Yes, large caches do have an impact on the number of WUs outstanding, and if people were able to cache several weeks or even a month or more of work then cache sizes would have a significant impact on the time for WUs to validate. However, that isn't the case, so the impact of large caches on slow machines is so small as to be of no consequence, and so irrelevant.

So arguing about cache size is pointless if you are concerned about how long it takes for some WUs to eventually validate; that period of time is due to deadlines, not cache size. And the deadlines are what they are for the reasons I mentioned previously.

If you were to be concerned about systems that download work but don't return any, or systems that for some reason have a full cache yet return mostly errors then you'd be one of many.
But as cache size has no impact on the number of WUs that take extended periods to validate, it doesn't make any sense to make an issue out of cache sizes.


To summarise: as things are, large caches on slow machines have no significant impact on the time it takes for WUs to eventually validate.

Very slow machines that only return 10-12 tasks a month will also not download the maximum 100 tasks. An old Pentium III 850MHz I was running took ~30 hours per task. With cache settings of 10+10 days it would cache ~15 tasks before no longer asking for work.
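The ~15-task figure follows from the client requesting work by estimated runtime rather than by task count. A simplified version of the arithmetic (BOINC's real work-fetch logic has more terms, such as on-fraction and resource shares):

```python
# Simplified work-fetch arithmetic: tasks buffered = buffer time / task time.
# Real BOINC also accounts for on-fraction, project shares, etc.

def tasks_buffered(buffer_days: float, hours_per_task: float) -> int:
    return int(buffer_days * 24 / hours_per_task)

# A 10+10 day cache on a Pentium III that needs ~30 hours per task:
print(tasks_buffered(20, 30))   # 16, close to the ~15 observed
```

So a genuinely slow host self-limits; a hoard of 400 WUs implies a host the scheduler believes is fast.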

To me it seems like there are some machines out there that ask for work, never process it, and then ask for more once the deadline passes.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group: http://tinyurl.com/8y46zvu
ID: 1780429 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22204
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1780445 - Posted: 19 Apr 2016, 21:40:56 UTC

No "seems to be" about it; there are a few, not many, who only collect units, do no processing, and just wait for them to time out:

http://setiathome.berkeley.edu/hosts_user.php?userid=363147
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1780445 · Report as offensive
Profile William
Volunteer tester
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1780449 - Posted: 19 Apr 2016, 21:46:25 UTC

we were speculating someplace else how this could happen, but without access to the hosts/users in question, it's a tad difficult to debug...
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1780449 · Report as offensive
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1780471 - Posted: 19 Apr 2016, 22:38:57 UTC - in response to Message 1780449.  

we were speculating someplace else how this could happen, but without access to the hosts/users in question, it's a tad difficult to debug...

Set BOINC activity to suspend and it will happily continue to request work. I'm not sure if tasks that time out reduce the value of Max tasks per day.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group: http://tinyurl.com/8y46zvu
ID: 1780471 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1780514 - Posted: 20 Apr 2016, 2:27:44 UTC - in response to Message 1780365.  



As Grant said, the size of a person's cache isn't the problem.

The problem is those that join, get their maximum fill of work, and then have a problem. Then they uninstall without detaching from the project first, or asking about it, leaving hundreds of tasks to time out (and I've mentioned before here that if someone does do an uninstall, it should notify all the servers it's connected to at the same time as the uninstall).

This has been a problem since the beginning, and hopefully someone will finally address it, Stephen, but you'll soon learn that extreme patience is required for this project if you wish to support it. ;-)

But who am I to give advice? ROFL




. . Firstly, I think you know you have been very helpful with your advice, and I assure you it has been greatly appreciated.

. . Secondly, despite the thorn under Grant's saddle about "cache" size, the original comment was about participants who take massive queues of work and do nothing with them; it was never about "cache" size in itself. And despite several attempts to make that clear, it doesn't seem to get noticed.

. . Grant seems to believe that the phenomenon is rare and caused mainly or only by newbies who, as you said, have problems, or maybe just get disappointed/disillusioned and drop out. While I have seen some evidence of that too, most of the participants who are part of the syndrome are hosts with large accumulations of credit and with "caches" that reflect their halcyon days of high productivity, on machines more powerful than any I have, who for whatever reason have ceased or decreased their activity, leaving large numbers of WUs to time out. Apart from limiting the number of WUs you can take in advance (is there any current limit at all?), the only thing that can help is to shorten deadlines; it won't stop WU completion from being delayed, but it will shorten the delay.

. . Your suggestion of having BOINC Manager notify the servers if a host uninstalls is sound, though I would have caused it some grief when I was trying to get these rigs set up. I uninstalled and re-installed several times, and sadly have not finished yet. If I go the SSD route I will have to do it all over again. But maybe a simple splash screen when a user starts to uninstall, reminding them to abort any uncompleted WUs and to suspend their SETI profile before carrying it out. Just a thought. Not knowing better is often the cause of some big blunders. I know from experience. :)
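The notify-on-uninstall idea could equally be a small pre-uninstall hook. As a sketch of what such a hook might run: `boinccmd` is BOINC's real command-line tool and supports detaching from a project, but treat the exact invocation below as an assumption to verify against your client version.

```python
# Hypothetical pre-uninstall hook: tell each attached project the host is
# leaving, so its tasks can be reissued immediately instead of timing out.
# The boinccmd invocation shown is an assumption to check for your client.

ATTACHED_PROJECTS = ["http://setiathome.berkeley.edu/"]

def uninstall_hook(projects: list[str]) -> list[str]:
    """Build the detach commands an uninstaller could run before removal."""
    return [f"boinccmd --project {url} detach" for url in projects]

for cmd in uninstall_hook(ATTACHED_PROJECTS):
    print(cmd)
```

An installer that ran something like this before removing files would achieve what the splash-screen reminder asks the user to do manually.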

. . Lastly, while some may see silence and/or inaction as patience, I see it more as indifference :) And you strike me as anything but indifferent. :)
ID: 1780514 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1780518 - Posted: 20 Apr 2016, 2:49:41 UTC - in response to Message 1780429.  



Very slow machines that only return 10-12 tasks a month will also not download the maximum 100 tasks. An old Pentium III 850MHz I was running took ~30 hours per task. With cache settings of 10+10 days it would cache ~15 tasks before no longer asking for work.

To me it seems like there are some machines out there that ask for work, never process it, and then ask for more once the deadline passes.



. . Exactly! As I have tried to explain to Grant, the machines involved are NOT slow machines, though he is locked on that idea; they are often more powerful rigs than any of mine, and they have large "caches" (over 400 WUs) that reflect that power. But for whatever reason they are only returning results sporadically and in very small numbers, so that many of the WUs time out.

. . Is that the workload maximum? 10 plus 10 days? I haven't tried setting mine larger than 1.5 days (1.0 plus 0.5). The first machine I used on SETI was a Pentium 4 3.0GHz which would complete a WU in about 13 to 14 hours. :) Ah, the good old days ... LOL!

. . So the thing is then, how can that time wastage on WUs be avoided and does anyone else care? :)
ID: 1780518 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1780519 - Posted: 20 Apr 2016, 2:55:46 UTC - in response to Message 1780445.  

No "seems to be" about it; there are a few, not many, who only collect units, do no processing, and just wait for them to time out:

http://setiathome.berkeley.edu/hosts_user.php?userid=363147



. . Whoa!

. . That is the worst example I have ever seen. That one seems more like deliberate sabotage :(

. . But I disagree about them being few. Though they may be only a small percentage of all users, their numbers are large enough to have an impact.
ID: 1780519 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1780520 - Posted: 20 Apr 2016, 2:57:06 UTC - in response to Message 1780449.  

we were speculating someplace else how this could happen, but without access to the hosts/users in question, it's a tad difficult to debug...



. . And therein lies the problem.
ID: 1780520 · Report as offensive
Stephen "Heretic" (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1780521 - Posted: 20 Apr 2016, 3:00:12 UTC - in response to Message 1780471.  

we were speculating someplace else how this could happen, but without access to the hosts/users in question, it's a tad difficult to debug...

Set BOINC activity to suspend and it will happily continue to request work. I'm not sure if tasks that time out reduce the value of Max tasks per day.



. . Maybe that would be a good idea though.
ID: 1780521 · Report as offensive
Profile Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1780538 - Posted: 20 Apr 2016, 4:39:11 UTC

I think one problem (though not confirmed) is that new users to SETI likely have a default cache size of 10 days, so they download a boatload of files, then disappear.

It should default to 1 or 2 days; users can then change it as they see fit.
ID: 1780538 · Report as offensive
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1780546 - Posted: 20 Apr 2016, 5:24:03 UTC - in response to Message 1780538.  

I think one problem (though not confirmed) is that new users to SETI likely have a default cache size of 10 days, so they download a boatload of files, then disappear.

It should default to 1 or 2 days; users can then change it as they see fit.

The default cache setting for BOINC is 0.25+0 days.
However, it is fairly easy for a mid-range machine to hit the limits with a setting of 1 day.
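A rough illustration of why a 1-day setting can already hit the cap on a mid-range host. The 100-task cap is the per-host maximum mentioned earlier in the thread; the per-task runtime is an assumed figure for illustration.

```python
# At the 0.25+0 day default a mid-range host buffers modestly; at 1 day it
# already wants more than the 100-task server cap. Runtime is illustrative.

SERVER_TASK_CAP = 100  # per-host limit mentioned earlier in the thread

def tasks_requested(buffer_days: float, minutes_per_task: float) -> int:
    wanted = int(buffer_days * 24 * 60 / minutes_per_task)
    return min(wanted, SERVER_TASK_CAP)

print(tasks_requested(0.25, 10))  # 36
print(tasks_requested(1.0, 10))   # 100 (capped; it wanted 144)
```

So for reasonably fast hosts the server-side cap, not the local cache setting, is what actually limits the queue.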
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group: http://tinyurl.com/8y46zvu
ID: 1780546 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1780568 - Posted: 20 Apr 2016, 6:28:24 UTC - in response to Message 1780514.  
Last modified: 20 Apr 2016, 6:30:38 UTC

. . Secondly, despite the thorn under Grant's saddle about "cache" size the original comment was about participants who take massive queues of work and do nothing with them,

Please, don't rewrite history.

These were the posts I was responding to
Rich
"I have to agree with you Stephen. I had one in my queue since December 25 and it was finally validated last night. No reason to have a queue that large, the servers are not down for more then a day or two at most."

Stephen
". . That is how I see it too. I am guessing that one WU from December had at least one host time out on the job. Maybe two in that time frame. In my limited experience the servers are not usually down for even a full day let alone much more. I must admit my cache is too small for the longer outages but I like to keep the turn around snappy :)"

Rich used the word "queue" twice, and each time it had a different meaning. The first usage wasn't correct, as what he was referring to there isn't a queue. The second usage was correct, although the technical term is cache.
That is what I responded to.
It was implied that the size of the cache was the cause of the delay in validation. That is not the case. It's the deadlines for the WUs.

You suggested shortening the deadlines; I pointed out why not.


If your response had been that you weren't concerned about people's cache size, just about the delay in validation of work, that would have been the end of it. However, you kept associating large caches with delayed validation.

Stephen
1779878
". . But the advantage would be that it would prevent those with massive caches of WU's and who are not returning results from causing backlogs of incompleted jobs.

1780094
". . And I think you may be feeling on the defensive about the number of WUs waiting to be done on your machine, but that is not the thing that bothers me nor I believe my ally. We are talking about disproportionate queue lengths on hosts that are not doing anything with them.
". . But even as a safety measure against web server outages, the numbers "cached" should be in proportion to the numbers that are being successfully crunched. A host that is processing 10 or 12 WUs a month does not need a "cache" of 400 WUs. But a host that is crunching 200 WUs a day can justify a cache of that size. And so I repeat, there is room for the deadlines to be tightened a little"

1780518
". . Exactly! As I have tried to explain to Grant the machines involved are NOT slow machines, but he is locked on that idea, they are often more powerful rigs than any of mine. And they have large "caches" (over 400 WUs) that reflect that power. But for whatever reason they are only returning results sporadically and in very small numbers so that many of the WU's time out."


I was only responding to your posts in the hope of helping you understand that cache size isn't responsible for delayed validation. It's the WU deadlines. Yet you kept going on about queues (caches).
I was responding to you.
Yet you now claim I've got a "thorn under my saddle" about cache size. I respond to statements you make, and I have a thorn in my saddle???



. . Grant seems to believe that the phenomenon is rare and caused mainly or only by newbies who, as you said, have problems, or maybe just get disappointed/disillusioned and drop out.

Where did I post that?
I've looked & can't find the post.



the only thing that can help is to shorten deadlines; it won't stop WU completion from being delayed, but it will shorten the delay.

And I've pointed out before why deadlines are what they are.
Grant
Darwin NT
ID: 1780568 · Report as offensive