5.10.20 now recommended BOINC version

archae86

Send message
Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 641665 - Posted: 15 Sep 2007, 5:24:43 UTC - in response to Message 634406.  

No... I have had 10 days set for a month or two now, and with 5.10.13, BOINC only requested a few WUs each time it needed to top up the cache...

If the new version behaves the way it appears to... by the time BOINC decides to request more WUs my cache could be empty, and that means a lot of WUs to request...

I've converted four hosts to 5.10.20.

One of the hosts is a Q6600 Quad. To my dismay, despite my General Preferences for "work" being set to connect every .002 days with "maintain enough work for an additional" 4.16 days, it has already run my SETI queue down to exactly zero before requesting more work, and seems set to do it again (one result 92% done, none unstarted, no requests)

I'm scratching my head for reasons why:
1. short term and long term debt are balanced to within three seconds
2. Result Duration Correction Factor is .1367
3. Einstein, which gets 92% resource share on this same host, is maintaining a four-day queue, posting a new request for a few tens of seconds of work and getting one new result several times a day.

I must have something different about this host. Any guesses as to what, and how I might fix it? This is not a catastrophe, but it is not what I want.
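
As a rough illustration only (not BOINC's actual code, and the names below are made up), here is what those two preferences translate to in seconds under the two readings people argue about; which one 5.10.20 really implements is exactly what this thread is trying to pin down:

```python
# Sketch: two candidate readings of the 5.10.x work-buffer preferences.
# Neither is claimed to be the client's real algorithm.

SECONDS_PER_DAY = 86_400

def buffer_bounds(connect_every_days: float, extra_days: float):
    """Candidate low/high water marks, in seconds of estimated work."""
    low_a = connect_every_days * SECONDS_PER_DAY                  # reading A: refill only when
    high_a = (connect_every_days + extra_days) * SECONDS_PER_DAY  # below "connect every"
    low_b = high_a                                                # reading B: top up whenever
    return (low_a, high_a), (low_b, high_a)                       # below connect + extra

reading_a, reading_b = buffer_bounds(0.002, 4.16)
print("Reading A: fetch when below %.0f s, fill to %.0f s" % reading_a)
print("Reading B: fetch when below %.0f s, fill to %.0f s" % reading_b)
# Reading A: fetch when below 173 s, fill to 359,597 s (~4.16 days)
# Reading B: fetch when below 359,597 s, fill to 359,597 s
```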
ID: 641665 · Report as offensive
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19129
Credit: 40,757,560
RAC: 67
United Kingdom
Message 641674 - Posted: 15 Sep 2007, 6:05:14 UTC - in response to Message 641665.  

No... I have had 10 days set for a month or two now, and with 5.10.13, BOINC only requested a few WUs each time it needed to top up the cache...

If the new version behaves the way it appears to... by the time BOINC decides to request more WUs my cache could be empty, and that means a lot of WUs to request...

I've converted four hosts to 5.10.20.

One of the hosts is a Q6600 Quad. To my dismay, despite my General Preferences for "work" being set to connect every .002 days with "maintain enough work for an additional" 4.16 days, it has already run my SETI queue down to exactly zero before requesting more work, and seems set to do it again (one result 92% done, none unstarted, no requests)

I'm scratching my head for reasons why:
1. short term and long term debt are balanced to within three seconds
2. Result Duration Correction Factor is .1367
3. Einstein, which gets 92% resource share on this same host, is maintaining a four-day queue, posting a new request for a few tens of seconds of work and getting one new result several times a day.

I must have something different about this host. Any guesses as to what, and how I might fix it? This is not a catastrophe, but it is not what I want.

What is the SETI LTD?
And where did you set the connect-to-network setting?
Does this computer have a "global_prefs_override.xml" file, so that the preferences need to be set in BOINC/Advanced/Preferences/Network Usage?

Andy
ID: 641674 · Report as offensive
Ingleside
Volunteer developer

Send message
Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 641745 - Posted: 15 Sep 2007, 11:38:27 UTC - in response to Message 641665.  

One of the hosts is a Q6600 Quad. To my dismay, despite my General Preferences for "work" being set to connect every .002 days with "maintain enough work for an additional" 4.16 days, it has already run my SETI queue down to exactly zero before requesting more work, and seems set to do it again (one result 92% done, none unstarted, no requests)

I'm scratching my head for reasons why:
1. short term and long term debt are balanced to within three seconds
2. Result Duration Correction Factor is .1367
3. Einstein, which gets 92% resource share on this same host, is maintaining a four-day queue, posting a new request for a few tens of seconds of work and getting one new result several times a day.

I must have something different about this host. Any guesses as to what, and how I might fix it? This is not a catastrophe, but it is not what I want.

Well, I don't know exactly how the math adds up on a quad-core, but at least on a single-core, with only an 8% SETI@home resource share, you're telling BOINC to run SETI@home for only 1.92 hours/day. On a quad-core this would maybe be 7.68 hours/day, but it's possible I've made a mistake...

Meaning, after running some hours of SETI@home work, your computer still "owes" Einstein@home even more work, even though 3 cores are continuously running Einstein@home. Until your computer has "paid back" most of what it "owes" Einstein@home, it won't ask SETI@home for more work, as long as Einstein@home manages to keep the cache full.

Well, my guess is SETI@home won't run a WU to the end, but will instead run for 1 hour on one core (or whatever your switch-between-projects setting is), then 1 or maybe 2 hours of Einstein@home on the same core, then back to 1 hour of SETI, switching back and forth like that... My guess is the other 3 cores will run Einstein@home continuously the whole time.

With the second method there'll likely be only a short delay between a SETI WU finishing and asking for more work, but with so low a SETI resource share, my guess is there will still be the occasional gap with an empty SETI queue before it's time to ask for more work.

If you want a steady supply of SETI@home work, increase the SETI@home resource share. With a 25% share on a quad-core, 3 cores should always run Einstein@home while the 4th always runs SETI@home. In practice there'll be some variation, but basically it will work this way.
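
A quick sanity check of that arithmetic (illustrative only; this just restates the share-times-cores reasoning above, it is not anything from the BOINC source):

```python
# Long-run CPU time a project is entitled to, if resource shares are
# honoured over time: share fraction x 24 hours x number of cores.

def cpu_hours_per_day(share_percent: float, cores: int) -> float:
    return share_percent / 100.0 * 24.0 * cores

print(cpu_hours_per_day(8, 1))   # 1.92  -> the single-core figure above
print(cpu_hours_per_day(8, 4))   # 7.68  -> the quad-core guess above
print(cpu_hours_per_day(25, 4))  # 24.0  -> one core's worth per day, i.e.
                                 #          one core on SETI full time
```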

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
ID: 641745 · Report as offensive
archae86

Send message
Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 641768 - Posted: 15 Sep 2007, 13:25:13 UTC - in response to Message 641745.  

To my dismay, despite my General Preferences for "work" being set to connect every .002 days with "maintain enough work for an additional" 4.16 days, it has already run my SETI queue down to exactly zero before requesting more work, and seems set to do it again (one result 92% done, none unstarted, no requests).

I must have something different about this host. Any guesses as to what, and how I might fix it? This is not a catastrophe, but it is not what I want.


If you want a steady supply of SETI@home-work, increase the SETI@home resource-share.

I was getting a steady supply of work at this same resource share last week, before I upgraded to 5.10.20. I'm still getting a steady supply of work on a Core 2 Duo with about the same resource share, which I switched to 5.10.20 at nearly the same time. Just last night, as its SETI queue dipped below about three days, it requested 357 seconds of new work and downloaded one new result, putting its computed SETI queue back up to 70 hours.

But my quad, as feared, did indeed wait until it had finished its very last queued SETI result at 12:34:49 and finished the upload at 12:34:55, then waited another half hour before requesting 112650 seconds of new work and reporting 3 tasks at 01:10:56.

Over the next hour the downloads went on (sometimes failing in the middle, presumably due to the current difficulties), finishing at 02:07:50 with 63 results in the queue, representing over 116 hours of work.

I'm aware that a short-term, long-term debt mismatch is dealt with by stopping work fetch until the queue goes to zero, but my debts were extremely closely matched.

I do notice that the short-term debt is one second longer than the long-term on the Duo that is working as I wish, so, as an experiment, I'll give the offending quad a slight new imbalance of that sign, to see if the behavior changes. However, in the past either sign was OK, so either something has changed in 5.10.20, or there is another setting on my host which has drifted.

Aside from debt mismatch, does anyone know of a parameter which switches the pre-fetch behavior from "steady supply on dipping below requested queue length" to "huge burst a while after going to zero"?


ID: 641768 · Report as offensive
archae86

Send message
Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 642388 - Posted: 16 Sep 2007, 15:56:58 UTC - in response to Message 641768.  

To my dismay, despite my General Preferences for "work" being set to connect every .002 days with "maintain enough work for an additional" 4.16 days, it has already run my SETI queue down to exactly zero before requesting more work, and seems set to do it again (one result 92% done, none unstarted, no requests).

I do notice that the short-term debt is one second longer than the long-term on the Duo that is working as I wish, so, as an experiment, I'll make the offending...

As I said, I tried tipping the short-term long-term mismatch the other way by a few seconds, and, whether coincidence or not, within a few hours, with the queue still having several days of work, my system asked for and received a small additional amount of SETI work.

If this continues to work for a few days, I'll try tipping the imbalance back the other way, to get a better signal of cause and effect.

ID: 642388 · Report as offensive
Mark Henderson
Volunteer tester

Send message
Joined: 9 Mar 02
Posts: 41
Credit: 3,964,939
RAC: 0
United States
Message 642405 - Posted: 16 Sep 2007, 16:34:24 UTC
Last modified: 16 Sep 2007, 16:36:55 UTC

I noticed something strange in my BOINC 5.10.20. Under the Transfers tab, the column with the download speed shows a very large number when downloading, such as 409638445948385910000000000000. It's usually about 17 digits followed by lots of 0s, and the column won't even pull out wide enough to get to the end of the 0s. Downloads work fine; it's just this odd problem on both my AMD and Intel boxes.
ID: 642405 · Report as offensive
Profile Logan
Volunteer tester
Avatar

Send message
Joined: 26 Jan 07
Posts: 743
Credit: 918,353
RAC: 0
Spain
Message 642407 - Posted: 16 Sep 2007, 16:39:42 UTC - in response to Message 642405.  
Last modified: 16 Sep 2007, 16:40:24 UTC

I noticed something strange in my BOINC 5.10.20. Under the Transfers tab, the column with the download speed shows a very large number when downloading, such as 409638445948385910000000000000. It's usually about 17 digits followed by lots of 0s, and the column won't even pull out wide enough to get to the end of the 0s. Downloads work fine; it's just this odd problem on both my AMD and Intel boxes.

Don't worry, it's just a little mistake... Let BOINC do its work...
Logan.

BOINC FAQ Service (Ahora, también disponible en Español/Now available in Spanish)
ID: 642407 · Report as offensive
John McLeod VII
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 642477 - Posted: 16 Sep 2007, 17:50:35 UTC - in response to Message 642388.  

To my dismay, despite my General Preferences for "work" being set to connect every .002 days with "maintain enough work for an additional" 4.16 days, it has already run my SETI queue down to exactly zero before requesting more work, and seems set to do it again (one result 92% done, none unstarted, no requests).

I do notice that the short-term debt is one second longer than the long-term on the Duo that is working as I wish, so, as an experiment, I'll make the offending...

As I said, I tried tipping the short-term long-term mismatch the other way by a few seconds, and, whether coincidence or not, within a few hours, with the queue still having several days of work, my system asked for and received a small additional amount of SETI work.

If this continues to work for a few days, I'll try tipping the imbalance back the other way, to get a better signal of cause and effect.

No.

Short Term Debt (STD) merely determines which project(s) will be run next if the host can complete work in Round Robin (RR) mode. Long Term Debt (LTD) determines which project will be asked for work next (highest LTD that is contactable is next) as well as whether the project will be contacted at all. If the LTD is below the negative of the project switch interval (default of 60 minutes) the project will not be contacted unless there is no contactable project with a higher LTD, and the total queue is too small.

If a project is in Earliest Deadline First (EDF) mode, it will not be contacted to supply work. This prevents a situation where a long deadline task is starved. With a queue of more than 4 days and potential deadlines as short as 4.5 days in S@H along with a low resource share, you are quite possibly in EDF for S@H which reduces your LTD and at the same time blocks download of new work while processing. The best approach is to just leave it alone.
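
Restating those rules as a small sketch (this is a paraphrase of the description above, not the actual BOINC client source; the class and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    long_term_debt: float  # LTD in seconds; higher means it is "owed" more CPU time
    contactable: bool      # not suspended, not backed off, allows new tasks
    in_edf: bool           # has tasks running in Earliest Deadline First mode

def next_project_to_ask(projects, switch_interval_s=3600.0, queue_too_small=False):
    """Pick the project to ask for work next, per the rules described above."""
    # Projects in EDF mode are never asked for more work.
    candidates = [p for p in projects if p.contactable and not p.in_edf]
    if not candidates:
        return None
    # The contactable project with the highest LTD is asked first.
    candidates.sort(key=lambda p: p.long_term_debt, reverse=True)
    best = candidates[0]
    # A project whose LTD is below -(switch interval) is skipped, unless it is
    # the only remaining option and the total queue is too small.
    if best.long_term_debt < -switch_interval_s and not queue_too_small:
        return None
    return best

# Example: a project deep in negative LTD is passed over while the other one
# keeps the cache full.
seti = Project("SETI@home", long_term_debt=-20_000, contactable=True, in_edf=False)
einstein = Project("Einstein@home", long_term_debt=20_000, contactable=True, in_edf=False)
print(next_project_to_ask([seti, einstein]).name)  # Einstein@home
```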


BOINC WIKI
ID: 642477 · Report as offensive
archae86

Send message
Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 642749 - Posted: 17 Sep 2007, 0:46:10 UTC - in response to Message 642477.  


If this continues to work for a few days, I'll try tipping the imbalance back the other way, to get a better signal of cause and effect.

No.

Short Term Debt (STD) merely determines which project(s) will be run next if the host can complete work in Round Robin (RR) mode. Long Term Debt (LTD) determines which project will be asked for work next (highest LTD that is contactable is next) as well as whether the project will be contacted at all. If the LTD is below the negative of the project switch interval (default of 60 minutes) the project will not be contacted unless there is no contactable project with a higher LTD, and the total queue is too small.

If a project is in Earliest Deadline First (EDF) mode, it will not be contacted to supply work. This prevents a situation where a long deadline task is starved. With a queue of more than 4 days and potential deadlines as short as 4.5 days in S@H along with a low resource share, you are quite possibly in EDF for S@H which reduces your LTD and at the same time blocks download of new work while processing. The best approach is to just leave it alone.

Thanks for the detail on how it is intended to work.

As it happens, I am highly confident my system never went EDF in this period. The execution order and debt effects of that are pretty obvious, and I've seen them several times before.

With my current queue length, it surely would have gone EDF given the deadline distribution before the transition to Multi-Beam, but my supply of Einstein work had uniform 3-week deadlines, and the shortest MB SETI units I got in this period had about an 8.5 day deadline.

Combining your operational description and my observations, I have a vague guess of what may be happening, but will restrain my speculation until I've seen more data.

ID: 642749 · Report as offensive
Profile zoom3+1=4
Volunteer tester
Avatar

Send message
Joined: 30 Nov 03
Posts: 65828
Credit: 55,293,173
RAC: 49
United States
Message 642818 - Posted: 17 Sep 2007, 3:10:43 UTC - in response to Message 642749.  


If this continues to work for a few days, I'll try tipping the imbalance back the other way, to get a better signal of cause and effect.

No.

Short Term Debt (STD) merely determines which project(s) will be run next if the host can complete work in Round Robin (RR) mode. Long Term Debt (LTD) determines which project will be asked for work next (highest LTD that is contactable is next) as well as whether the project will be contacted at all. If the LTD is below the negative of the project switch interval (default of 60 minutes) the project will not be contacted unless there is no contactable project with a higher LTD, and the total queue is too small.

If a project is in Earliest Deadline First (EDF) mode, it will not be contacted to supply work. This prevents a situation where a long deadline task is starved. With a queue of more than 4 days and potential deadlines as short as 4.5 days in S@H along with a low resource share, you are quite possibly in EDF for S@H which reduces your LTD and at the same time blocks download of new work while processing. The best approach is to just leave it alone.

Thanks for the detail on how it is intended to work.

As it happens, I am highly confident my system never went EDF in this period. The execution order and debt effects of that are pretty obvious, and I've seen them several times before.

With my current queue length, it surely would have gone EDF given the deadline distribution before the transition to Multi-Beam, but my supply of Einstein work had uniform 3-week deadlines, and the shortest MB SETI units I got in this period had about an 8.5 day deadline.

Combining your operational description and my observations, I have a vague guess of what may be happening, but will restrain my speculation until I've seen more data.

Me, I'm not afraid of EDF; I ignore it. If I didn't, I wouldn't have enough cache, as my quads would chew through anything less, since they're doing about 102-107 cr/hr. But I like to complete the science very fast. Still doing 5.9.0.64, or 5.9.0.32 on PC2, as that PC runs XP Pro and not XP x64. :D
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 642818 · Report as offensive
Profile KWSN - MajorKong
Volunteer tester
Avatar

Send message
Joined: 5 Jan 00
Posts: 2892
Credit: 1,499,890
RAC: 0
United States
Message 642828 - Posted: 17 Sep 2007, 4:05:24 UTC - in response to Message 639418.  

5.10.20 running good on 2 of my rigs.


For Windows only. This time, the Linux development is lagging behind with no stable 5.10.x available.

I have been running 5.10.8 on Linux (Slackware) for several months now. It's still listed as a development version but I've had no problems with it at all.



But that's the beauty of using Linux... you can build it yourself.

Just checked out the source and built 5.10.20 on my laptop (boinc_5.10.20_i686-pc-linux-gnu.sh). So far, so good.
ID: 642828 · Report as offensive
Profile Frosted
Avatar

Send message
Joined: 11 Jul 99
Posts: 83
Credit: 3,898,641
RAC: 0
Canada
Message 643355 - Posted: 18 Sep 2007, 2:44:58 UTC - in response to Message 634406.  

I've been testing the new '24 hours' feature for two days... My cache is decreasing and BOINC doesn't ask for new work... It only reports every 24 hours, after it completes the next WU following the last connect... I have the preferences set to 0 days to connect and 10 days of extra work...

I'll send you more news tomorrow after connecting, but it appears the new BOINC version doesn't request new work until the cache is empty...


Stone the crows!! That's where all the WUs went - 10 days' worth on a C2D.

No... I have had 10 days set for a month or two now, and with 5.10.13, BOINC only requested a few WUs each time it needed to top up the cache...

If the new version behaves the way it appears to... by the time BOINC decides to request more WUs my cache could be empty, and that means a lot of WUs to request...



What the...? You currently have 223 work units cached and your system is not even that fast!
ID: 643355 · Report as offensive
Profile Pilot
Avatar

Send message
Joined: 18 May 99
Posts: 534
Credit: 5,475,482
RAC: 0
Message 644420 - Posted: 19 Sep 2007, 18:53:59 UTC - in response to Message 633184.  

Do any of the recent versions affect the speed at which a WU is crunched? My assumption is no, since the SETI application does the actual crunching, but hey, I have to ask.

All I know is that 5.10.13 was making errors and 5.9.0.64 wasn't, so back I went. When Crunch3r makes a 5.10.x version of his BOINC files I'll try that and see if it works or not. Until then I'm staying put at 5.9.0.64 until further notice.

I guess You could say I like Crunchy Chicken. ;)


I have noticed that with boinc_5.10.20_windows_x86_x64, WUs completed on my Vista Core 2 Duo hang around on my system for as much as a day until I select the "Projects" tab and click on Update. Then the results are transferred up fairly quickly. This was not necessary with boinc_5.10.7_windows_x86_64, as completed work transferred up as soon as it was completed, unless there was some outage at SETI.
I guess I will switch back till the next version.

When we finally figure it all out, all the rules will change and we can start all over again.
ID: 644420 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 644432 - Posted: 19 Sep 2007, 19:22:18 UTC - in response to Message 644420.  

I have noticed that with boinc_5.10.20_windows_x86_x64, WUs completed on my Vista Core 2 Duo hang around on my system for as much as a day until I select the "Projects" tab and click on Update. Then the results are transferred up fairly quickly. This was not necessary with boinc_5.10.7_windows_x86_64, as completed work transferred up as soon as it was completed, unless there was some outage at SETI.
I guess I will switch back till the next version.

This is a feature. It reduces the load on SETI's database server.
ID: 644432 · Report as offensive
Profile Pilot
Avatar

Send message
Joined: 18 May 99
Posts: 534
Credit: 5,475,482
RAC: 0
Message 644459 - Posted: 19 Sep 2007, 20:13:33 UTC - in response to Message 644432.  

I have noticed that with boinc_5.10.20_windows_x86_x64, WUs completed on my Vista Core 2 Duo hang around on my system for as much as a day until I select the "Projects" tab and click on Update. Then the results are transferred up fairly quickly. This was not necessary with boinc_5.10.7_windows_x86_64, as completed work transferred up as soon as it was completed, unless there was some outage at SETI.
I guess I will switch back till the next version.

This is a feature. It reduces the load on SETI's database server.


Hmmm, I don't see how it reduces the load, since the server will still receive the same amount of work over a given period of time. I do see how it could help smooth out the workload by leaving the results on the client machine, but at the increased risk of failure or loss. I would think the data could be more efficiently and securely queued for processing at a system that has RAID-type storage, instead of on client drives, which are usually non-RAID IDE or SATA.

When we finally figure it all out, all the rules will change and we can start all over again.
ID: 644459 · Report as offensive
Alinator
Volunteer tester

Send message
Joined: 19 Apr 05
Posts: 4178
Credit: 4,647,982
RAC: 0
United States
Message 644462 - Posted: 19 Sep 2007, 20:19:57 UTC - in response to Message 644420.  
Last modified: 19 Sep 2007, 20:31:52 UTC

I have noticed that with boinc_5.10.20_windows_x86_x64, WUs completed on my Vista Core 2 Duo hang around on my system for as much as a day until I select the "Projects" tab and click on Update. Then the results are transferred up fairly quickly. This was not necessary with boinc_5.10.7_windows_x86_64, as completed work transferred up as soon as it was completed, unless there was some outage at SETI.
I guess I will switch back till the next version.


Well, don't expect that behaviour to change anytime soon.

I've said it before and I'll say it again: the database load on any project's backend is not my hosts' problem.

As long as the fate of a completed result relies on the integrity of the client_state file, I want results gone from my machines at the earliest possible moment, without having to 'babysit' them and punch the Update button. I learned the hard way in the early days of BOINC not to rely on it that way. As a compromise, I could go along with a 'sit on' interval of an hour or even ~2, since that was the de facto minimum CI (0.01 days) before cache decoupling came around, but 24 hours by preference override is not acceptable IMHO.

If a project has a problem with that, then reduce the daily quota to avoid 'overbooking', or tell the host to 'buzz off' for a while (rather than 11 seconds) after scheduler contact sessions. So, like you, 5.10.13 is as far as I'm going on mine at this point.

Alinator
ID: 644462 · Report as offensive
Astro
Volunteer tester
Avatar

Send message
Joined: 16 Apr 02
Posts: 8026
Credit: 600,015
RAC: 0
Message 644463 - Posted: 19 Sep 2007, 20:21:22 UTC
Last modified: 19 Sep 2007, 20:27:06 UTC

Pilot, Rom wrote an article about "the evils of return results immediately" in which he gives examples of how it increases traffic. I posted its contents in this post. Hope this helps you understand it.

tony

[edit] Here's a link to the article on his blogsite.
ID: 644463 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 644472 - Posted: 19 Sep 2007, 20:34:38 UTC - in response to Message 644459.  


This is a feature. It reduces the load on SETI's database server.


Hmmm, I don't see how it reduces the load, since the server will still receive the same amount of work over a given period of time. I do see how it could help smooth out the workload by leaving the results on the client machine, but at the increased risk of failure or loss. I would think the data could be more efficiently and securely queued for processing at a system that has RAID-type storage, instead of on client drives, which are usually non-RAID IDE or SATA.

Each time BOINC connects, the server process connects to the database server, logs in, opens the relevant tables, updates them, closes the tables and disconnects.

If you report one work unit per connection, it is:

connect, log in, open, update, close, disconnect

connect, log in, open, update, close, disconnect

connect, log in, open, update, close, disconnect

connect, log in, open, update, close, disconnect

If you report four work units, it's:

connect, log in, open, update, update, update, update, close, disconnect

There is actually a little bit more to it than that, because each transaction takes a thread on the scheduler as well, but you get the idea.
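
A back-of-envelope version of that point (illustrative step counts only, not measured SETI server costs): the per-result row update is the same either way; what batching removes is the repeated connection overhead wrapped around it.

```python
# Each scheduler contact costs a fixed overhead (connect, log in, open,
# close, disconnect) plus one update per reported result.

OVERHEAD_STEPS = 5   # connect, log in, open tables, close tables, disconnect
UPDATE_STEPS = 1     # one table update per reported result

def server_steps(results: int, results_per_connection: int) -> int:
    connections = -(-results // results_per_connection)  # ceiling division
    return connections * OVERHEAD_STEPS + results * UPDATE_STEPS

print(server_steps(4, 1))  # 24: four separate connections for four results
print(server_steps(4, 4))  # 9:  one connection reporting all four at once
```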
ID: 644472 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 644475 - Posted: 19 Sep 2007, 20:37:58 UTC - in response to Message 644462.  


As long as the fate of a completed result relies on the integrity of the client_state file, I want them gone off of my machines at the earliest possible moment, without having to 'babysit' them and punch the update button. I learned the hard way in the early days of BOINC not to rely on it that way. As a compromise, I could go along with a 'sit on' interval of an hour or even ~2 since that was the defacto minimum CI (0.01 days) before cache decoupling came around, but 24 hours by preference override is not acceptable IMHO.

If a project has a problem with that, then reduce the daily quota to avoid 'overbooking' or tell the host to 'buzz off' for a while (rather than 11 seconds) after scheduler contact sessions. So like you, 5.10.13 is far as I'm going on mine at this point.

Alinator

It works out to about the same thing.

Reduce the load and work gets reported promptly on a connect.

Leave things as-is, and the server has trouble staying ahead of the load after even a short outage.

So, you can back off voluntarily, or you can be backed off by the fact that everyone is contributing to the higher average load.
ID: 644475 · Report as offensive
Critter
Avatar

Send message
Joined: 16 Dec 02
Posts: 17
Credit: 5,950,975
RAC: 9
United States
Message 644482 - Posted: 19 Sep 2007, 21:01:19 UTC - in response to Message 633576.  

blimey, more updates than me having hot lunches


You do know that you don't have to update to every new version (it's good practice in general, but it isn't required), right?

, what's with the servers? Every time I log onto this site it always seems to be down.


SETI@Home is, and always has been a work in progress. Progress is good.


Since Pro is the opposite of Con and Progress is good....

Then what does that make Congress?

:)

ID: 644482 · Report as offensive