SETI orphans


Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13832
Credit: 208,696,464
RAC: 304
Australia
Message 2035918 - Posted: 5 Mar 2020, 4:48:15 UTC - in response to Message 2035913.  
Last modified: 5 Mar 2020, 4:50:32 UTC

Strange, MW is one of the few set-and-forget, never-have-to-mess-with-it projects I run. It has a fixed limit on the number of tasks it allows on a host, and reasonable deadlines I have never missed.
If it had a reasonable resource share it would behave, but if I gave it a tiny one just to keep it around in the background, it wouldn't. With shares down to 2% it would get 300+ tasks, then go into panic mode to make the deadline as it approached. Maybe I didn't give it enough time, but after about 2 months it would get on my nerves. Wound up setting NNT (No New Tasks) and manually connecting every so often to get about 150 tasks, then setting it back to NNT.
Running more than one project, you are better off with next to no cache. The larger the cache, the longer it takes for the Manager to figure out how long it takes to process WUs and do things in accordance with your resource share settings. Particularly so since it takes 6-8 weeks here at Seti for your RAC to get to its nominal level, and joining up to a new project will result in debt being owed to the project, but no work history and only a very rough guess as to how long work will take to process. And the more people try to micro-manage things, the longer it takes for them to (eventually) settle down.

But as you say, a few more weeks and it won't matter.
Grant
Darwin NT
W-K 666
Volunteer tester

Joined: 18 May 99
Posts: 19308
Credit: 40,757,560
RAC: 67
United Kingdom
Message 2035924 - Posted: 5 Mar 2020, 5:53:52 UTC - in response to Message 2035913.  

Strange, MW is one of the few set-and-forget, never-have-to-mess-with-it projects I run. It has a fixed limit on the number of tasks it allows on a host, and reasonable deadlines I have never missed.
If it had a reasonable resource share it would behave, but if I gave it a tiny one just to keep it around in the background, it wouldn't. With shares down to 2% it would get 300+ tasks, then go into panic mode to make the deadline as it approached. Maybe I didn't give it enough time, but after about 2 months it would get on my nerves. Wound up setting NNT (No New Tasks) and manually connecting every so often to get about 150 tasks, then setting it back to NNT.

You're right, it has never been greedy to the point of overcommitting what the system is capable of, just not respecting its boundaries. I guess it's more of a BOINC client problem than MW, but I don't think that will be an issue in the future, because I wouldn't be as protective of other projects as I am of SETI. As long as they work it out in the end, I'll let them fight it out among themselves for the most part.

Einstein will do the same if you give it a small resource share, because it has a 7-day deadline. It insists on filling up to your cache size, then doesn't run for 5 days or so, then panics and has to do more work than the resource share allows. Then comes the Seti outage and it refills the cache; wash, rinse, repeat.
halfempty
Joined: 2 Jun 99
Posts: 97
Credit: 35,236,901
RAC: 114
United States
Message 2035940 - Posted: 5 Mar 2020, 7:37:08 UTC - in response to Message 2035918.  

Running more than one project, you are better off with next to no cache.

Einstein will do the same if you give it a small resource share

Thanks for the suggestions and the warnings. I'll be doing more research and once the SETI tasks stop flowing I'll probably add the other projects all with the same resource share.
Cameron
Joined: 27 Nov 02
Posts: 110
Credit: 5,082,471
RAC: 17
Australia
Message 2035942 - Posted: 5 Mar 2020, 7:56:03 UTC - in response to Message 2035924.  

Or Einstein guesses after too small an initial controlled sample, like 1 or 2 tasks, and then fills the cache.

I attached my new machine to Einstein, just for GPU work, shortly before learning of the announcement.

Currently I've got about 80 still to work through by Monday. I'll see by the weekend how many I may have to abort [I've done that before after initial connections with Einstein]. I've usually hung around and done a number equivalent to the aborts before NNTing.
Not that it matters, if it was just part of the process previously.

I intend to use Milkyway like previously for the spare CPU threads.

Milkyway, I think, now has a 30-WU-per-thread limit, and it's been pretty reliable at managing its deadlines.
It always was good, even in the early days when the limit was 6.
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2035993 - Posted: 5 Mar 2020, 12:47:05 UTC - in response to Message 2035736.  

I've been looking at Einstein as a new home. They were friendly enough to open a spot in their boards to let us chat on Tuesdays, so that earns them bonus points in my book. How is their uptime? Do they have a weekly outage? Do I need to configure my system differently?


I have E@H set up as a backup project.
1) The GPU tasks are not major time burners.
2) The CPU tasks run for huge numbers of hours.

If you just add the project it will "run" but you can fine-tune it.

Tom
A proud member of the OFA (Old Farts Association).
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2035995 - Posted: 5 Mar 2020, 12:53:57 UTC - in response to Message 2035993.  

1) The GPU tasks are not major time burners.

Seems like we need petri to pay a visit to their code.
petri33
Volunteer tester

Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 2036035 - Posted: 5 Mar 2020, 15:09:17 UTC - in response to Message 2035742.  

I'll go to a project that has open source code for Nvidia GPUs and an off-line test suite similar to Seti@home.

Hey! I'm sure your optimizations will be greatly welcomed!!

To follow my own interests: Go Einstein@Home? ;-)


Keep searchin',
Martin


Hi,
Einstein seems to have open source code, and it uses 45-88% GPU on my system. Seems like it could need some optimizing. But, but... if it is OpenCL, then I'd have to turn it into CUDA first ...
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2036038 - Posted: 5 Mar 2020, 15:20:25 UTC - in response to Message 2036035.  

I'll go to a project that has open source code for Nvidia GPUs and an off-line test suite similar to Seti@home.

Hey! I'm sure your optimizations will be greatly welcomed!!

To follow my own interests: Go Einstein@Home? ;-)


Keep searchin',
Martin


Hi,
Einstein seems to have open source code, and it uses 45-88% GPU on my system. Seems like it could need some optimizing. But, but... if it is OpenCL, then I'd have to turn it into CUDA first ...


Hi petri, keep in mind they have two different types of tasks there; the low-GPU-utilization ones must be the Gravity Wave tasks. They require a lot of spare CPU support; I've seen 150% of a CPU thread per GPU. The Gamma Ray tasks pull only 100% of a CPU thread and have a good 98% GPU utilization.

I would love to see if optimizations are possible there! Tilt the scales back towards the Nvidia cards :). As it stands, the project heavily favors AMD cards.
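
If you want the client to actually budget for that CPU support, an app_config.xml in the Einstein project folder can reserve CPUs per GPU task. A sketch only - the <name> must match whatever appears in your client_state.xml, and einstein_O2MD1 here is just an illustrative guess at the GW GPU app's name:

<app_config>
   <app>
      <name>einstein_O2MD1</name>
      <gpu_versions>
         <!-- run one task per GPU -->
         <gpu_usage>1.0</gpu_usage>
         <!-- reserve 1.5 CPU threads per GPU task, per the observation above -->
         <cpu_usage>1.5</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

The client picks it up on restart, or via Options -> Read config files in the Manager.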
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

rob smith
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22436
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2036043 - Posted: 5 Mar 2020, 15:34:21 UTC

From what I hear, the reason AMD is favoured over nVidia on Einstein is that AMD have a better DP (double-precision) float processor than nVidia.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2036048 - Posted: 5 Mar 2020, 15:57:26 UTC - in response to Message 2036043.  
Last modified: 5 Mar 2020, 15:58:02 UTC

From what I hear, the reason AMD is favoured over nVidia on Einstein is that AMD have a better DP (double-precision) float processor than nVidia.


That's only a small portion of the WU process; the bulk of it is SP (single precision).

In *general*, AMD usually offers a better DP:SP performance ratio, especially at the same price point, but that still varies from card to card. And AMD has started to reduce the DP:SP ratio in their more modern mainstream cards (their RX cards run at only a 1:16 ratio).

An Nvidia Titan V, for example, has more DP performance (7.45 TFlops, 1:2 ratio) than any AMD card; cost is about $2500.
The Nvidia Quadro GP100 also has more DP performance (5.16 TFlops, 1:2 ratio) than any AMD card; it can be had on eBay for less than $2000.
The AMD Radeon VII is the best card for DP in their lineup, at 3.36 TFlops, 1:4 ratio; it can be had for reasonable prices, ~$500-600.
However, the Nvidia Tesla K40 is a strong DP contender, even though it's very old, at 1.62 TFlops, 1:3 ratio, and it can be had for about $100.

But you don't really see these strong Nvidia DP cards on their leaderboards, because DP is only a small component.

Milkyway, however, is all DP, and DP is king there; that's why you see Titan Vs dominating.
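
To line the quoted figures up in one place, here's a quick back-of-the-envelope Python sketch. The SP peaks are approximate spec-sheet numbers (the K40's is back-calculated from the 1.62 TFlops DP figure above, and the RX 580 stands in for "their RX cards"), so treat the output as rough:

# Rough DP throughput from SP peak and the DP:SP ratios quoted above.
cards = {
    # name: (approx. SP TFlops, DP as a fraction of SP)
    "Nvidia Titan V":      (14.90, 1 / 2),
    "Nvidia Quadro GP100": (10.32, 1 / 2),
    "AMD Radeon VII":      (13.44, 1 / 4),
    "Nvidia Tesla K40":    (4.86,  1 / 3),
    "AMD RX 580":          (6.17,  1 / 16),
}

for name, (sp, frac) in cards.items():
    # peak DP = peak SP times the card's DP:SP fraction
    print(f"{name:20s} SP {sp:5.2f} TFlops -> DP {sp * frac:4.2f} TFlops")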
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

tazzduke
Volunteer tester

Joined: 15 Sep 07
Posts: 190
Credit: 28,269,068
RAC: 5
Australia
Message 2036208 - Posted: 6 Mar 2020, 8:49:51 UTC - in response to Message 2036048.  

Well, I ain't shutting up shop now that Seti is sleeping, as they say; I'm already over at Einstein, testing the NVIDIA cards and doing okay.

Will give GPUGRID a go; it's medical research, so that's a good thing.

Might give Milkyway a go, but I really should get a couple of 2nd-hand RX 570s - maybe.

Just for kicks and a bit of fun: Primegrid - I like maths a bit, lol.

Happy Crunching and Cheerio :-)
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 2036277 - Posted: 6 Mar 2020, 16:29:28 UTC

Now I remember why I stopped doing Einstein... One of my Android devices - on which every other project only ever got 4 tasks in, because it has only 4 cores and the BOINC version on it has no option to change the amount of work requested - just got 17 Binary Pulsar tasks from Einstein. Absolutely outrageous. It'll be swamped with those for the next days. I immediately set E@H back to NNT on all devices.
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2036322 - Posted: 6 Mar 2020, 19:34:56 UTC - in response to Message 2036277.  

The trick to managing work at E@H is to set a really, really small cache, like 0.1 days. Then you basically run on a "turn one in, get one" schedule.
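
For anyone who prefers a file to the Manager's dialogs, that cache setting lives in global_prefs_override.xml in the BOINC data directory. A minimal sketch for a 0.1-day cache (the 0.01 top-up value is just an illustration):

<global_preferences>
   <!-- store at least this many days of work -->
   <work_buf_min_days>0.1</work_buf_min_days>
   <!-- store up to this many additional days -->
   <work_buf_additional_days>0.01</work_buf_additional_days>
</global_preferences>

Have the client re-read the local prefs file (or restart it) for the change to take effect.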
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Freewill
Joined: 19 May 99
Posts: 766
Credit: 354,398,348
RAC: 11,693
United States
Message 2036339 - Posted: 6 Mar 2020, 21:05:36 UTC - in response to Message 2036322.  

Seems like quite the opposite of the problem at S@H as of late. I saw the same thing: E@H was like drinking from a fire hose when I allowed new tasks. I must have my cache set too high.
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2036348 - Posted: 6 Mar 2020, 21:47:25 UTC - in response to Message 2036339.  

I took my slowest host over to E@H exclusively last year, with a 0.2-day cache. It's running the GW CPU app and the GW and GRP GPU apps, and it doesn't need to be babysat.

I have around half a day's worth of CPU work on it, and the same for the GPUs.

It is actually coexisting quite well with GPUGrid running concurrently on it. The resource share is E@H (50) to GPUGrid (75), and the GPUGrid app runs mostly on 2 of the 3 GPUs.
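
For anyone wondering how those numbers combine: resource shares are relative weights, not percentages, so E@H(50) against GPUGrid(75) comes out to 40/60. A trivial Python sketch of the arithmetic:

# Each project's long-run fraction of compute time is its share
# divided by the sum of all shares.
shares = {"E@H": 50, "GPUGrid": 75}
total = sum(shares.values())
for project, share in shares.items():
    print(f"{project}: {share / total:.0%}")
# E@H: 40%
# GPUGrid: 60%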
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 2036360 - Posted: 6 Mar 2020, 22:24:00 UTC - in response to Message 2036322.  
Last modified: 6 Mar 2020, 22:24:18 UTC

The trick to managing work at E@H is to set a really, really small cache, like 0.1 days. Then you basically run on a "turn one in, get one" schedule.
The 7.4.53 Android client does not have the option to set a cache, and by default asks for 0.1 days plus an additional 0.2 days' worth of work. I cannot upgrade to the 7.16.3 version that allows setting a cache, because for me that version doesn't work (it crashes immediately).
Cameron
Joined: 27 Nov 02
Posts: 110
Credit: 5,082,471
RAC: 17
Australia
Message 2036367 - Posted: 6 Mar 2020, 23:11:39 UTC

Well, it looks like my cache of Einstein GPU GWs will be handled by deadline, without my having to do anything drastic.

The cache is currently set to 5 days, so I'll wind that back to 2 or 3 when done and just crunch SETI for what time is available.
I won't really need to smooth over scheduled weekly outages.
Cavalary

Joined: 15 Jul 99
Posts: 104
Credit: 7,507,548
RAC: 38
Romania
Message 2036491 - Posted: 7 Mar 2020, 11:47:20 UTC

I'm wondering about this. In these nearly 21 years of SETI@home, I've only quickly dabbled in a few other projects when S@h didn't generate work for longer stretches, and those were climateprediction.net, MilkyWay@home and World Community Grid's Clean Energy Project - Phase 2 (which has since finished).
Really, the only reason I'll continue crunching is out of my commitment to SETI@home, the idea being that it created BOINC and made distributed computing known, and I should honor that; so I am only looking for something that works just the same. I only do CPU work, so I go through WUs slowly, but recently I also can't be connected that much, so I need even more of a buffer. I always liked to have a full quota anyway, which lasted 4-5 days or so when it was 100, more now that it's 150.
Also, considering the computer I have, low resource use is very important: SETI@home uses some 75 MB of RAM for the 2 WUs it can process (it has just 2 cores), and writes only tiny amounts to disk, so the idea would be similar limits. I may make an exception for an environmental project, in terms of disk space at least, though RAM use would still need to be that low - and there are hardly any environmental projects out there, unless you count cpdn. Other than that, astronomy is the other field I'd stick with, of course. I'm not going anywhere else, nor to anything that may attach me to multiple projects rather than letting me stick to one and essentially transfer all that commitment.
So it's definitely critical for the project I switch to to produce enough work for the foreseeable future and to be enough on its own, without me finding that it doesn't generate enough work, or has longer downtimes, or risks ending as well, putting me back in this situation in a few years.
Richard Haselgrove
Volunteer tester

Joined: 4 Jul 99
Posts: 14672
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2036494 - Posted: 7 Mar 2020, 12:00:52 UTC

It's hard to advise right now. I've picked up two responses from other projects: one said 'bring it on - we've got a lot of work to get through at the moment', and the other said 'we've coped with the occasional challenge or pentathlon - I'm sure we'll cope'.

We don't know how many people will be looking for new homes when the hibernation actually happens. I suspect that both of those projects may change their views when the reality hits. For all our good-humoured grumbling here, SETI has actually coped remarkably well with a huge number of volunteers over the years, some of them with very high-powered computers. I think that may have bigger consequences than are yet apparent for the other projects. I think people here may have to be willing to make several moves before they can settle down in a new comfort zone.
Unixchick
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2036584 - Posted: 7 Mar 2020, 21:54:17 UTC

One of my machines (my husband's) is going to Folding@home. Is there a SETI orphans group over there to join?