Is there a way to have SETI@Home send data more often?

Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829104 - Posted: 8 Nov 2016, 2:19:24 UTC

Hi,

Is there a way to have SETI@Home send data more often?

and

Is there a place to look this up other than asking questions in a forum? I've looked around for a technical manual, but haven't found one.


Thank you,

Micky Badgero
ID: 1829104 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1829130 - Posted: 8 Nov 2016, 5:57:33 UTC

What are your cache settings?
These have an impact on how often your computer contacts the servers for new work. I run with a 5-day cache and an extra 0.1 or 0.01 on mine. This gives me the full 100 tasks for each CPU (not core or thread, but physical device) and 100 per GPU.
There is a limit of 5 minutes between server contacts for new work.
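
For reference, those two values correspond to work_buf_min_days and work_buf_additional_days in global_prefs_override.xml in the BOINC data directory. A minimal sketch only; the 5.0 / 0.1 figures are simply the ones mentioned above, not a recommendation:

    <global_preferences>
       <work_buf_min_days>5.0</work_buf_min_days>                 <!-- "store at least" days of work -->
       <work_buf_additional_days>0.1</work_buf_additional_days>   <!-- "store up to an additional" days -->
    </global_preferences>

After editing the file, tell BOINC Manager to read the local prefs file (or restart the client) so the override takes effect.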


If you want more in-depth discussion about the workings of SETI@Home's software then head to the Number Crunching forum (http://setiathome.berkeley.edu/forum_forum.php?id=10).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1829130 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829202 - Posted: 8 Nov 2016, 14:38:19 UTC - in response to Message 1829130.  

Thanks Bob,

My cache is set to 10 days with 5 extra. I am not running CPU tasks, just GPU, and my CPU is still at 100% most of the time. But I am running eight tasks at once on the GPU, and on Tuesdays I have to jack the task count up to 100 with manual updates. I have hit 28,000 credits a day a couple of times, but can't sustain the speeds without the feeds :)

If it updates automatically, it will send 16-20 tasks and get back 8-12, so the count keeps going down. Last week the server went down for maintenance before I could jack up the task number and I only had about 35 tasks in the queue. That is only enough for 3.5 hours if they are blc vlars, and if they are the short ones (some of the 22mr and other tasks), they run out much faster. So if I leave my computer on when I go to work on Tuesday, it can sit idle for six or seven hours waiting for the servers to come back up.

I have looked at the Number Crunching posts, and they are very useful. They taught me how to change the app_config.xml file to use my GPU for more than one task. And that may be how to have SETI@Home update every half hour or so, but I haven't been able to find any documentation on the settings themselves. The Client Configuration page (https://boinc.berkeley.edu/wiki/Client_configuration) has some info, but few details. The details have to come from somewhere. I just haven't found them yet.
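
For anyone finding this thread later, the multiple-tasks-per-GPU setting lives in app_config.xml in the project directory. A rough sketch only; the app name and the exact fractions here are assumptions for illustration, and the Number Crunching threads have values tuned to specific cards:

    <app_config>
      <app>
        <name>setiathome_v8</name>            <!-- assumed app name; check client_state.xml for yours -->
        <gpu_versions>
          <gpu_usage>0.125</gpu_usage>        <!-- 1/8 of the GPU per task = eight tasks at once -->
          <cpu_usage>0.04</cpu_usage>         <!-- CPU fraction reserved per GPU task -->
        </gpu_versions>
      </app>
    </app_config>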

BTW you have a very nice computer:)


Thanks again,

Micky Badgero
ID: 1829202 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1829208 - Posted: 8 Nov 2016, 15:33:16 UTC

One thing that does give the sort of problems you are seeing is having the "additional days" far too big. This may sound strange, even perverse, but keeping this value small actually improves cache management and allows it to settle at the "100 per" level. Five additional days is far too big; try a fraction of a day.
Just now credits are in one of their periodic tumble sessions (it's a "design feature"), and one of my systems that was regularly topping 60k is struggling to stay over 50k, even though it is actually going through work faster than when it was at 60k...

(BTW - The one that reports as having three gtx1080 actually has one gtx1080 and a pair of gtx970s, the reporting error is another "feature" of the way BOINC reports such things...)
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1829208 · Report as offensive
Jord
Volunteer tester
Avatar

Send message
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1829213 - Posted: 8 Nov 2016, 15:55:40 UTC - in response to Message 1829208.  

In BOINC 7 it is not the "additional days" amount that decides when to contact the project, but the "store at least" amount. Having it set to 10 days isn't really useful here, although I understand the thought behind it: because of the maximum of 100 tasks per hardware resource, the idea is that setting "store at least" to 10 days will prompt more contacts asking for more work than a much lower amount would.

However, because Seti has the 100-task limit per hardware resource, the server tells BOINC to back off for a certain amount of time once it has reached 100 tasks in progress. With 10 days as the low-water mark, BOINC will probably run the cache nearly empty before asking for more work, and only at that point will it report the finished tasks and ask for new ones.

Reducing the "store at least" value to something lower still allows the full 100 tasks to be stored, and the cache will be topped off again as soon as more tasks are run, uploaded and reported.
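
A rough sketch of the idea (simplified pseudo-logic, not the actual BOINC client code):

    # Simplified sketch of the work-fetch idea described above, not the real BOINC scheduler.
    def should_ask_for_work(buffered_hours, tasks_in_progress,
                            store_at_least_days, per_resource_limit=100):
        # Server-side limit reached: the scheduler tells the client to back off.
        if tasks_in_progress >= per_resource_limit:
            return False
        # Otherwise ask when buffered work drops below the "store at least" level.
        return buffered_hours < store_at_least_days * 24

    print(should_ask_for_work(10, 100, 10))    # False - already at the 100-task ceiling
    print(should_ask_for_work(8, 60, 0.5))     # True  - below a 12-hour low-water mark

With "store at least" set to 10 days, a ~10-hour / 100-task buffer is always below the mark, so the client keeps bumping into the 100-task ceiling and backing off; with a small value the cache is simply topped back up as tasks are reported.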
ID: 1829213 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829262 - Posted: 9 Nov 2016, 7:00:59 UTC

Thanks, both of you,

Yes, 10 days and 5 days make no difference when 100 tasks can last no more than 10 hours. I changed it to 2 days and 0.2 days. You manage what you measure, and this is not a good measure.

I hope the 100 limit will be changed. It made sense when CPUs were as fast as GPUs, but it makes no sense now that GPUs are 10 to 100 times faster than CPUs.

Bob, your computer makes me wonder about the other computers and GPUs being reported. I was wondering why there is such a broad range of credits for similar systems. This could be one of the reasons. I don't understand how they are calculating the credits, or why they change the value for the same processing time on the same computer. This would seem to devalue future work and discourage people who don't have high-end equipment from even trying to help here.


Best regards,

Micky Badgero
ID: 1829262 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1829281 - Posted: 9 Nov 2016, 9:14:21 UTC

There has been a lot of discussion about the credits system in Number Crunching - there is almost always at least one thread running. The general consensus is that the basic concept is OK, but there are a number of detailed design and implementation flaws that cause the wild fluctuations and progressive degradation in credits per "standard task".
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1829281 · Report as offensive
OzzFan (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Avatar

Send message
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1829337 - Posted: 9 Nov 2016, 16:02:11 UTC - in response to Message 1829262.  

I hope the 100 limit will be changed. It made sense when CPUs were as fast as GPUs, but it makes no sense now that GPUs are 10 to 100 times faster than CPUs.


The 100 limit isn't a byproduct of old times with slow GPUs and fast CPUs. The 100 limit was put in place because the project's infrastructure (namely the database) was unable to manage the number of tasks in progress for all 500,000 hosts times the number of in-the-wild workunits. The strain was crippling their systems and caused database crashes nearly every week for months. The crashes stopped as soon as they put the 100 limit in place and stability returned to the project.

There's discussion on increasing the 100 limit, which was chosen somewhat arbitrarily, but the Project Scientists are working on far bigger plans for the infrastructure right now (see David Anderson's news item here).
ID: 1829337 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829606 - Posted: 10 Nov 2016, 15:43:23 UTC - in response to Message 1829337.  

I hope the 100 limit will be changed. It made sense when CPUs were as fast as GPUs, but it makes no sense now that GPUs are 10 to 100 times faster than CPUs.


The 100 limit isn't a byproduct of old times with slow GPUs and fast CPUs. The 100 limit was put in place because the project's infrastructure (namely the database) was unable to manage the number of tasks in progress for all 500,000 hosts times the number of in-the-wild workunits. The strain was crippling their systems and caused database crashes nearly every week for months. The crashes stopped as soon as they put the 100 limit in place and stability returned to the project.

There's discussion on increasing the 100 limit, which was chosen somewhat arbitrarily, but the Project Scientists are working on far bigger plans for the infrastructure right now (see David Anderson's news item here).


Hi OzzFan,

Thanks for the info and the very interesting links. So the problem is millions of computers doing the front-end calculations and overwhelmed servers trying to keep up on the back-end.

Hasn't the funding situation changed now with Breakthrough Listen? Would GPU back end processing help at the servers? (Something like nVidia's DGX-1)

At the end of 'Nebula: architecture', I am getting an error for the three code links from Firefox:

"The owner of setisvn.ssl.berkeley.edu has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website."

So I cannot see what any of the algorithms are.

It sounds like a lot of stuff is going on in the back end. I don't know what the AEI Atlas cluster is, but is this the database that was crashing before the 100 limit?

It also sounds like signals are only being looked for if they are on one bandwidth. Is this correct? Even my wireless router uses frequency hopping and spread-spectrum now, so it would seem that you would have to comb multiple bandwidths at once. I am pretty sure you are using fast Fourier transforms. Are you also using something like principal component analysis? Seems to me FFT was mentioned when I started seti@home in 2001. Then they closed the code. (Because people were cheating it to get high 'scores' without doing the calculations?) That was before BOINC and about the time I stopped doing seti@home. For many years, as it turned out, but I am back now, which speaks to Anderson's point about users appearing and disappearing unpredictably.

I have one of the fastest current GPUs and one day's processing would be about 240 blc2's or the equivalent. It also appears that the fastest CPU cores can only do about 25 a day. You might want to consider number of cores, instead of number of CPUs. So it seems to me that dropping the CPU limit to 25 per core and upping the GPU limit to 250 should balance it out.
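
Back-of-the-envelope, using the throughput figures above (rough arithmetic only):

    # How long a full 100-task cache lasts at a given completion rate.
    def cache_hours(task_limit, tasks_per_day):
        return 24.0 * task_limit / tasks_per_day

    print(cache_hours(100, 240))   # ~10 hours on a fast GPU
    print(cache_hours(100, 25))    # ~96 hours (about four days) on a fast CPU core

So the same 100-task cap covers roughly ten hours on a fast GPU but about four days on a fast CPU core.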


Best regards,

Micky Badgero
ID: 1829606 · Report as offensive
OzzFan (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Avatar

Send message
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1829622 - Posted: 10 Nov 2016, 17:22:14 UTC - in response to Message 1829606.  

Hasn't the funding situation changed now with Breakthrough Listen?


Most of that money is not going to SETI@home but to SETI research in general (there are dozens of SETI projects besides SETI@home). It is my understanding that some of the money will go to S@H. I don't know what portion or how much it will help.

Would GPU back end processing help at the servers? (Something like nVidia's DGX-1)


That would require a significant rewrite of the backend, which is something that David wanted to avoid having to do. Various processes would have to be recompiled to support GPGPU processing.

At the end of 'Nebula: architecture', I am getting an error for the three code links from Firefox:

"The owner of setisvn.ssl.berkeley.edu has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website."

So I cannot see what any of the algorithms are.


That's due to an improperly implemented security certificate. You should be able to click on an Advanced option then click Continue Anyway (or something similar).

It sounds like a lot of stuff is going on in the back end. I don't know what the AEI Atlas cluster is, but is this the database that was crashing before the 100 limit?


No, the AEI Atlas cluster is what they are going to be moving to, which has far more horsepower than the existing infrastructure.

It also sounds like signals are only being looked for if they are on one bandwidth. Is this correct? Even my wireless router uses frequency hopping and spread-spectrum now, so it would seem that you would have to comb multiple bandwidths at once.


SETI@home MultiBeam searches for narrowband signals. AstroPulse searches for broadband signals. They are searching around the waterhole frequency as the logical starting point for any civilization targeting a signal to another civilization. Yes, more advanced civilizations would have more advanced communication methods, but SETI@home isn't designed to "listen in" on their existing communications infrastructure. Those signals would likely be too hard to detect and would have dissipated before they had a chance to reach us. Rather, SETI@home is looking for a targeted broadcast signal that happens to sweep by our location.

I am pretty sure you are using fast Fourier transforms. Are you also using something like principal component analysis? Seems to me FFT was mentioned when I started seti@home in 2001. Then they closed the code. (Because people were cheating it to get high 'scores' without doing the calculations?) That was before BOINC and about the time I stopped doing seti@home. For many years, as it turned out, but I am back now, which speaks to Anderson's point about users appearing and disappearing unpredictably.


For clarity's sake: you are using the term 'you', but I am not a member of the Project staff or administration. I am a volunteer helper, like you. The information I'm responding with is from data I've gleaned over the years; my information shouldn't be taken as authoritative.

That said, I can't answer your question about whether they are using principal component analysis. I do not recall that question being asked before, or, if it was, what the answer may have been.

I have one of the fastest current GPUs and one day's processing would be about 240 blc2's or the equivalent. It also appears that the fastest CPU cores can only do about 25 a day. You might want to consider number of cores, instead of number of CPUs. So it seems to me that dropping the CPU limit to 25 per core and upping the GPU limit to 250 should balance it out.


I don't have any input or sway into changing that limit, though I doubt any changes are going to be made before the backend changes are in place to ensure stability. Users with more powerful machines are encouraged to join other BOINC projects to keep their systems crunching data.
ID: 1829622 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1829624 - Posted: 10 Nov 2016, 17:23:55 UTC

There are a number of problems with the servers, none of which would be cured by using hardware like the DGX-1. Two come immediately to mind: one is that the main database is now far too large to reside in memory, so its performance is dominated by the access speed of the mountain of hard disks; the second is the rate at which data can be transferred out to the internet without crashing the rest of the campus network.

It is well worth reading David Anderson's paper discussed in this thread: http://setiathome.berkeley.edu/forum_thread.php?id=80469 as it gives some of the background on what is going on behind the scenes and looks into the future (where devices such as the DGX-1 may be of use).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1829624 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829654 - Posted: 10 Nov 2016, 19:24:35 UTC - in response to Message 1829622.  

Thanks OzzFan,

...
That's due to an improperly implemented security certificate. You should be able to click on an Advanced option then click Continue Anyway (or something similar).


I could and probably will as long as the certificate is the only problem.

That said, I can't answer your question about whether they are using principal component analysis. I do not recall that question being asked before, or, if it was, what the answer may have been.


Sorry, you (and rob smith) are shown as Volunteer testers, you both seem to be very knowledgeable, and I thought maybe you had influence with the group.


Best regards
ID: 1829654 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829666 - Posted: 10 Nov 2016, 19:59:35 UTC - in response to Message 1829624.  

There are a number of problems with the servers, none of which would be cured by using hardware like the DGX-1. Two come immediately to mind: one is that the main database is now far too large to reside in memory, so its performance is dominated by the access speed of the mountain of hard disks; the second is the rate at which data can be transferred out to the internet without crashing the rest of the campus network.

It is well worth reading David Anderson's paper discussed in this thread: http://setiathome.berkeley.edu/forum_thread.php?id=80469 as it gives some of the background on what is going on behind the scenes and looks into the future (where devices such as the DGX-1 may be of use).


Thanks Bob,

Yes, hard disk speed is dominant for the DB. But it does sound like there is a lot of processing on the back end too.

Thanks for the link. This is the link that OzzFan gave me earlier.


Best regards,

Micky Badgero
ID: 1829666 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829670 - Posted: 10 Nov 2016, 20:04:13 UTC - in response to Message 1829622.  
Last modified: 10 Nov 2016, 20:50:15 UTC

Btw OzzFan,

I don't have any input or sway into changing that limit, though I doubt any changes are going to be made before the backend changes are in place to ensure stability. Users with more powerful machines are encouraged to join other BOINC projects to keep their systems crunching data.


I have tried other projects and was surprised to find that the climate project does not use GPUs.

Sometime in the future, I might start my own BOINC project for deep learning or something similar in AI to use the extra time. Doesn't look like there is an AI project out there that uses GPU processing.


Best regards,

Micky Badgero
ID: 1829670 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829683 - Posted: 10 Nov 2016, 21:24:48 UTC
Last modified: 10 Nov 2016, 21:43:31 UTC

The server status page (http://setiathome.berkeley.edu/sah_status.html) refers to Breakthrough Listen under Splitter Status.

Are all the blc tasks Breakthrough Listen?

Best regards,

Micky Badgero
ID: 1829683 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1829696 - Posted: 10 Nov 2016, 21:55:11 UTC

Yes
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1829696 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1829708 - Posted: 10 Nov 2016, 22:40:01 UTC - in response to Message 1829696.  

Thanks, Bob.
ID: 1829708 · Report as offensive
BilBg
Volunteer tester
Avatar

Send message
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1830044 - Posted: 12 Nov 2016, 10:41:42 UTC - in response to Message 1829654.  

I could and probably will as long as the certificate is the only problem.

If this means you are just wary (and don't have a technical problem choosing [Proceed anyway] or whatever button exists in your browser):

This is a general warning from the browser because it doesn't know what this page will do - e.g. whether it will ask for credit card info.

A page like this is mostly just text (and IMHO should be accessible over http, but unfortunately it redirects automatically to https):
https://setisvn.ssl.berkeley.edu/trac/browser/seti_science/nebula/tables.h

On the warning page in Chromium 35 (SRWare Iron 35) there are about 20 lines of explanation, but the main essence is:
"The site's security certificate is not trusted!"
"In this case, the certificate has not been verified by a third party that your computer trusts."

They (at berkeley.edu) self-sign the certificate (because otherwise it costs money):
"
Issued to: setisvn.ssl.berkeley.edu
Issued by: setisvn.ssl.berkeley.edu

This CA Root certificate is not trusted because it is not in the Trusted Root Certification Authorities store.
"
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1830044 · Report as offensive
BilBg
Volunteer tester
Avatar

Send message
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1830046 - Posted: 12 Nov 2016, 10:44:30 UTC - in response to Message 1829683.  

Are all the blc tasks Breakthrough Listen?


"Understanding what the BLC name means":
http://setiathome.berkeley.edu/forum_thread.php?id=80380
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1830046 · Report as offensive
Micky Badgero

Send message
Joined: 26 Jul 16
Posts: 44
Credit: 21,373,673
RAC: 83
United States
Message 1830221 - Posted: 13 Nov 2016, 5:43:48 UTC

Thanks, BilBg,

"If this means you are just afraid (and not have a technical problem (?) to choose [Proceed anyway] or whatever button exists in your browser):"

It means I am concerned about the security of the web site. I have fairly good antivirus and antispyware software. But I also work some with Kali Linux, so I know how easy it is to get in if a user makes a small error. I work from home occasionally on this computer and I don't want to have to rebuild it because someone configured their server wrong.

And thank you very much for the BLC link. Very interesting.


Best regards,

Micky Badgero
ID: 1830221 · Report as offensive