Message boards : Number crunching : SETI orphans
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13847 · Credit: 208,696,464 · RAC: 304

Running more than one project, you are better off with next to no cache. The larger the cache, the longer it takes for the Manager to work out how long WUs take to process and to schedule work in accordance with your resource share settings. Particularly so since it takes 6-8 weeks here at SETI for your RAC to reach its nominal level, and joining a new project results in debt being owed to that project, but with no work history and only a very rough guess as to how long its work will take to process. And the more people try to micro-manage things, the longer it takes for them to (eventually) settle down.

> Strange, MW is one of the few set-and-forget, never-have-to-mess-with-it projects I run. It has a fixed limit on the number of tasks it allows on a host and reasonable deadlines I have never missed.

If it had a reasonable resource share it would behave, but when I gave it a tiny one just to keep it around in the background, it wouldn't. With the share down to 2% it would fetch 300+ tasks, then go into panic mode as the deadlines approached. Maybe I didn't give it enough time, but after about two months it got on my nerves. I wound up setting No New Tasks (NNT) and manually connecting every so often to fetch about 150 tasks, then setting it back to NNT. But as you say, a few more weeks and it won't matter.

Grant
Darwin NT
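A toy model of Grant's point about caches (a sketch only, nothing like BOINC's real scheduler; the core count, runtimes and initial guess below are invented for illustration): with a large cache, an over-optimistic runtime estimate for a brand-new project turns into far more real work than the cache setting suggests.

```python
# Minimal sketch (not BOINC's actual code) of why a large cache plus a
# brand-new project misbehaves: the client fetches cache_days worth of work
# based on its *estimated* task duration, which starts as a rough guess.

def tasks_fetched(cache_days, est_task_hours, cores=4):
    """Tasks requested to fill the cache, given the current duration estimate."""
    buffered_hours = cache_days * 24 * cores
    return int(buffered_hours / est_task_hours)

real_task_hours = 8.0   # what tasks actually take (assumed figure)
rough_guess = 1.0       # the client's optimistic initial estimate

for cache_days in (0.1, 1, 5, 10):
    fetched = tasks_fetched(cache_days, rough_guess)
    real_days = fetched * real_task_hours / (24 * 4)
    print(f"{cache_days:>4} day cache -> {fetched:>3} tasks, "
          f"actually {real_days:.1f} days of work")
```

With these made-up numbers, a 10-day cache fetches 960 tasks that are really 80 days of work; a 0.1-day cache fetches 9 tasks, which a bad estimate can't turn into a disaster.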
W-K 666 · Joined: 18 May 99 · Posts: 19369 · Credit: 40,757,560 · RAC: 67

> Strange, MW is one of the few set-and-forget, never-have-to-mess-with-it projects I run. It has a fixed limit on the number of tasks it allows on a host and reasonable deadlines I have never missed.

> If it had a reasonable resource share it would behave, but when I gave it a tiny one just to keep it around in the background, it wouldn't. With the share down to 2% it would fetch 300+ tasks, then go into panic mode as the deadlines approached. Maybe I didn't give it enough time, but after about two months it got on my nerves. I wound up setting NNT and manually connecting every so often to fetch about 150 tasks, then setting it back to NNT.

Einstein will do the same if you give it a small resource share, because it has a 7-day deadline. It insists on filling up to your cache size, then doesn't run for 5 days or so, then panics and has to do more work than the resource share allows. Then comes the SETI outage, it refills the cache, and it's wash, rinse, repeat.
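The arithmetic behind that panic behaviour can be sketched as follows (a simplified model, not the real client logic; the task count, runtime and core count are assumptions): at a tiny resource share a project only gets a sliver of each day, so a cache-full of 7-day-deadline tasks cannot finish within its share, and the client has to override the share and run them high-priority.

```python
# Rough sketch of the "panic mode" arithmetic (simplified; BOINC's real
# scheduler is more involved). At a tiny resource share a project gets only
# a sliver of each day, so work that fits the deadline at 100% share does
# not fit at 2%, forcing the client into high-priority (deadline) mode.

def days_to_finish(task_hours, n_tasks, share_fraction, cores=4):
    hours_per_day = 24 * cores * share_fraction  # project's daily allowance
    return n_tasks * task_hours / hours_per_day

deadline_days = 7
for share in (1.0, 0.10, 0.02):
    d = days_to_finish(task_hours=2.0, n_tasks=300, share_fraction=share)
    verdict = "OK" if d <= deadline_days else "panic: run high-priority"
    print(f"share {share:>5.0%}: {d:7.1f} days to finish -> {verdict}")
```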
halfempty · Joined: 2 Jun 99 · Posts: 97 · Credit: 35,236,901 · RAC: 114

> Running more than one project, you are better off with next to no cache.

> Einstein will do the same if you give it a small resource share

Thanks for the suggestions and the warnings. I'll be doing more research, and once the SETI tasks stop flowing I'll probably add the other projects, all with the same resource share.
Cameron · Joined: 27 Nov 02 · Posts: 110 · Credit: 5,082,471 · RAC: 17

Or Einstein guesses from a too-small initial controlled sample of 1 or 2 tasks and then fills the cache.

I attached my new machine to Einstein, just for GPU work, shortly before learning of the announcement. I've currently got about 80 tasks still to work through by Monday. I'll see by the weekend how many I may have to abort [I've done that before after initial connections with Einstein]. I've usually hung around and done an equivalent number to the aborts before setting NNT. Not that it matters if it was just part of the process previously.

I intend to use Milkyway as before for the spare CPU threads. Milkyway I think now has a 30-WU-per-thread limit, and it's been pretty reliable at managing its deadlines. It was always good, even in the early days when the limit was 6.
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462

> I've been looking at Einstein as a new home. They were friendly enough to open a spot in their boards to let us chat on Tuesdays, so that earns them bonus points in my book. How is their uptime? Do they have a weekly outage? Do I need to configure my system differently?

I have E@H set up as a backup project.
1) The GPU tasks are not major time burners.
2) The CPU tasks run huge amounts of hours.
If you just add the project it will "run", but you can fine-tune it.

Tom
A proud member of the OFA (Old Farts Association).
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799

> 1) The GPU tasks are not major time burners.

Seems like we need petri to pay a visit to their code.
petri33 · Joined: 6 Jun 02 · Posts: 1668 · Credit: 623,086,772 · RAC: 156

> I'll go to a project that has open source code for Nvidia GPUs and an off-line test suite similar to Seti@home.

Hi,
Einstein seems to have open source code, and it uses 45-88% of the GPU on my system. Seems like it could use some optimizing. But, but... if it is OpenCL then I'd have to turn it into CUDA first...

To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
Ian&Steve C. Send message Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640 |
I'll go to a project that has an open source code for Nvidia GPUS and an off-line test suite similar to Seti@home. Hi petri, keep in mind they have two different types of tasks there, the low GPU utilized tasks must be the Gravity Wave task. it requires a lot of spare CPU support. I've seen 150% CPU thread per GPU. The Gamma Ray tasks pull only 100% CPU thread and have good 98% GPU utilization. I would love to see if optimizations are possible there! tilt the scales back towards the nvidia cards :). as it stands, the project heavily favors AMD cards. Seti@Home classic workunits: 29,492 CPU time: 134,419 hours |
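For anyone who wants to budget that CPU support explicitly, BOINC's per-project app_config.xml has standard `<gpu_usage>` and `<cpu_usage>` fields; below is a hedged sketch. The 1.5-thread figure comes from Ian's observation, and the app names are my assumptions about Einstein's current apps - check the names your own client actually reports (e.g. in the event log) before using anything like this.

```python
# Hedged sketch: reserve CPU threads per Einstein GPU task via BOINC's
# app_config.xml. The <gpu_usage>/<cpu_usage> fields are standard BOINC;
# the app names below are ASSUMED and must be checked against your client.
from pathlib import Path

APP_CONFIG = """\
<app_config>
  <app>
    <name>einstein_O2MD1</name>   <!-- assumed Gravity Wave GPU app name -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- one task per GPU -->
      <cpu_usage>1.5</cpu_usage>  <!-- budget 1.5 CPU threads per GPU task -->
    </gpu_versions>
  </app>
  <app>
    <name>hsgamma_FGRPB1G</name>  <!-- assumed Gamma Ray GPU app name -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
"""

# app_config.xml belongs in the project's data directory, e.g.
# <BOINC data dir>/projects/einstein.phys.uwm.edu/ -- adjust for your setup.
Path("app_config.xml").write_text(APP_CONFIG)
```

Note that `<cpu_usage>` only changes how many CPU threads the client sets aside for scheduling; it doesn't change what the app itself consumes.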
rob smith · Joined: 7 Mar 03 · Posts: 22504 · Credit: 416,307,556 · RAC: 380

From what I hear, the reason AMD is favoured over nVidia on Einstein is that AMD has better double-precision (DP) floating-point performance than nVidia.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Ian&Steve C. Send message Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640 |
From what I hear the reason for AMD being favoured over nVidia on Einstein is that AMD are have a better DP float processor than nVidia. that's only a small portion of the WU process. the bulk of it is SP. in *general* AMD usually offers better SP:DP performance ratio, especially at the same price point, but that still varies from card to card. and AMD has started to reduce the SP:DP performance ratio in their more modern mainstream cards (their RX cards only at a 1:16 ratio) An Nvidia Titan V for example has more DP performance (7.45 TFlops, 1:2 ratio) than any AMD card. cost is about $2500 The Nvidia Quadro GP100 also has more DP performance (5.16 TFlops, 1:2 ratio) than any AMD card. can be had on ebay for less than $2000 The AMD Radeon VII is the best card for DP in their lineup at 3.36 TFlops, 1:4 ratio. can be had for reasonable prices ~$5-600 However the Nvidia Tesla K40 is a strong DP contender, even though its very old, at 1.62 TFlops, 1:3 ratio and can be had for about $100. but you don't really seen these strong nvidia DP cards on their leaderboards because it is only a small component. Milkyway however is all DP, and DP is king there, that's why you see Titan Vs dominating there. Seti@Home classic workunits: 29,492 CPU time: 134,419 hours |
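As a quick sanity check on those figures: with a DP:SP ratio of 1:N, peak DP throughput is just peak SP divided by N (equivalently, SP = DP × N). The sketch below derives the implied SP peaks from the DP numbers and ratios quoted in the post; the SP values are arithmetic, not looked-up specs.

```python
# Derive each card's implied peak SP throughput from the DP figure and the
# 1:N DP:SP ratio given in the post above (SP = DP * N).

cards = [
    # (name,               DP TFLOPS, N in the 1:N DP:SP ratio)
    ("Nvidia Titan V",      7.45, 2),
    ("Nvidia Quadro GP100", 5.16, 2),
    ("AMD Radeon VII",      3.36, 4),
    ("Nvidia Tesla K40",    1.62, 3),
]

for name, dp, n in cards:
    print(f"{name:<20} DP {dp:5.2f} TFLOPS (1:{n}) "
          f"-> implied SP ~{dp * n:5.2f} TFLOPS")
```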
tazzduke · Joined: 15 Sep 07 · Posts: 190 · Credit: 28,269,068 · RAC: 5

Well I ain't shutting up shop now that SETI is sleeping, as they say. I'm already over at Einstein, testing the NVIDIA cards and doing okay.

Will give GPUGRID a go - it's medical research, so that's a good thing.

Might give Milkyway a go, but really should get a couple of 2nd-hand RX 570s first - maybe.

Just for kicks and a bit of fun - Primegrid. Like maths a bit lol.

Happy Crunching and Cheerio :-)
Jord · Joined: 9 Jun 99 · Posts: 15184 · Credit: 4,362,181 · RAC: 3

Now I remember why I stopped doing Einstein... One of my Android devices - on which every project so far has fetched just 4 tasks, because it only has 4 cores and the BOINC version on it has no option to change the amount of work requested - just got 17 Binary Pulsar tasks from Einstein. Absolutely outrageous. It'll be swamped with those for days to come. I immediately set E@H back to NNT on all devices.
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

The trick to managing work at E@H is to set a really, really small cache, like 0.1 days. Then you basically run on a "turn one in - get one" schedule.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
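A back-of-envelope model of why that works (simplified, not the actual client logic; one-hour tasks and a 4-core host are assumed): the client asks for just enough seconds of work to top the buffer back up, so with a 0.1-day buffer each reported task opens up roughly one task's worth of shortfall.

```python
# Simplified model of Keith's 0.1-day trick (not the actual client logic).
# The client requests enough seconds of work to refill the buffer, so with a
# tiny buffer each completed task leaves roughly one task's worth of
# shortfall: "turn one in - get one".

def request_seconds(buffer_days, buffered_task_seconds, ncpus=4):
    target = buffer_days * 86400 * ncpus          # seconds of work wanted
    return max(0.0, target - buffered_task_seconds)

task_len = 3600.0           # assume one-hour tasks
buffered = 9 * task_len     # nine tasks on hand, ~9.6 wanted at 0.1 days
print(request_seconds(0.1, buffered))  # ~2160 s shortfall: under one task
buffered -= task_len                   # report one task...
print(request_seconds(0.1, buffered))  # ~5760 s shortfall: about one task
```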
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

I took my slowest host over to exclusive E@H last year with a 0.2-day cache. It's running the GW CPU app plus the GW and GRP GPU apps, and it doesn't need to be babysat. I have around half a day's worth of CPU work on it, and the same for the GPUs. It is actually coexisting quite well with GPUGrid running concurrently on it. The resource share is E@H (50) to GPUGrid (75), and the GPUGrid app runs mostly on 2 of the 3 GPUs.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
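For reference, BOINC turns those share numbers into time fractions by simple normalisation: each project gets share / sum(shares) of compute time, so E@H(50) against GPUGrid(75) works out to a 40/60 split.

```python
# How BOINC converts resource shares into time fractions:
# each project gets share / sum(all shares) of compute time.

shares = {"Einstein@Home": 50, "GPUGrid": 75}
total = sum(shares.values())
for project, share in shares.items():
    print(f"{project}: {share}/{total} = {share / total:.0%} of compute time")
# Einstein@Home: 50/125 = 40%
# GPUGrid: 75/125 = 60%
```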
Jord · Joined: 9 Jun 99 · Posts: 15184 · Credit: 4,362,181 · RAC: 3

> The trick to managing work at E@H is to set a really, really small cache, like 0.1 days. Then you basically run on a "turn one in - get one" schedule.

The 7.4.53 Android client does not have the option to set a cache, and by default asks for 0.1 days plus an additional 0.2 days worth of work. I cannot upgrade to the 7.16.3 version that allows setting a cache, because for me that version doesn't work (it crashes immediately).
Cameron · Joined: 27 Nov 02 · Posts: 110 · Credit: 5,082,471 · RAC: 17

Well, it looks like my cache of Einstein GPU GW tasks will be finished by the deadline without my having to do anything drastic. The cache is currently set to 5 days, so I'll wind it back to 2 or 3 when they're done. With just SETI to crunch in the time available, I won't really need to smooth over the scheduled weekly outages.
Cavalary · Joined: 15 Jul 99 · Posts: 104 · Credit: 7,507,548 · RAC: 38

I'm wondering about this. In these nearly 21 years of SETI@home, I've only quickly dabbled in a few other projects when S@h didn't generate work for a longer time, and those were climateprediction.net, MilkyWay@home and World Community Grid's Clean Energy Project - Phase 2 (which has since finished). Really the only reason I'll continue crunching is out of my commitment to SETI@home, in the idea that it created BOINC and made distributed computing known and I should honor that, but I am only looking for something that works just the same.

I only do CPU work, so I go through WUs slowly, but recently I also can't be connected that much, so I need even more of a buffer. I always liked to have a full quota anyway, which lasted 4-5 days or so when it was 100 WUs, more now that it's 150.

Also, considering the computer I have, low resource use is very important: SETI@home uses some 75 MB of RAM for the 2 WUs it can process, having just 2 cores, and writes only tiny amounts to disk, so the idea would be similar limits. I may make an exception for an environmental project, in terms of disk space at least, though RAM use would still need to be that low - and there are hardly any environmental projects out there, unless you count cpdn. Other than that, astronomy is the other field I'd stick with, of course.

I'm not going anywhere else, nor to anything that may attach me to multiple projects without letting me stick to one, essentially switching over all that commitment. So it's definitely critical that the project I switch to produces enough work for the foreseeable future to be enough on its own, without me finding that it doesn't generate enough, or has longer downtimes, or risks ending as well, putting me back in this situation in a few years.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874

It's hard to advise right now. I've picked up two responses from other projects: one said 'bring it on - we've got a lot of work to get through at the moment', and the other said 'we've coped with the occasional challenge or Pentathlon - I'm sure we'll cope'. We don't know how many people will be looking for new homes when the hibernation actually happens. I suspect that both of those projects may change their views when the reality hits.

For all our good-humoured grumbling here, SETI has actually coped remarkably well with a huge number of volunteers over the years, some of them with very high-powered computers. I think that may have bigger consequences than are yet apparent for the other projects. I think people here may have to be willing to make several moves before they can settle down in a new comfort zone.
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22

One of my machines (my husband's) is going to Folding@home. Is there a SETI orphans group over there to join?