Message boards :
Number crunching :
The Server Issues / Outages Thread - Panic Mode On! (119)
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
The answer should be obvious: CREDITS! lol ;) What can you do with BOINC credits? You can't even win a toaster anymore. :( We are all here for the same reason, to try to find ET; we just have hosts of different capacities. And some have the capacity to use the "open source software" better than others, but that is the way the world works. Some fly in their private jets and others go by bus, and that doesn't make anybody better or worse than the others. So please stop complaining about other people's hosts and concentrate on making your own host crunch and return all the work it can pick up in the next days, as fast as possible, within the deadlines. In the end it will all be gone in a month or two. BTW I'm starting to shut down some of my GPUs and underclock the others today to save some electric power. The remaining WUs can be crunched in the next week with only 25% of the GPUs active. |
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
Yep! Bragging rights. That's the only thing you get from any of these projects. Saying this project gives more credits is like saying exchanging a US dollar for Philippine pesos gives you more money. You haven't changed the value, only the denomination. Let me rephrase: you have two choices in front of you. Spend your own hard-earned money helping a cause you believe in achieve its goals, or throw your money at something which promises to make you look good. That's really what we're talking about. If you believe in some goal, do something to move toward the goal. |
Ville Saari Send message Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
The answer should be obvious, CREDITS! lol ;) Every time I see that SETI has tasks to go out, I hit Update and try to get a few of them, but alas, the scheduler backs me off for 30 minutes again and again. :( You won't get any tasks if you update manually before the cooldown is over, even if the scheduler had tasks to give. So if you want tasks, don't do that while the 30 minute timer is running. If the timer is over but BOINC has not decided to contact the server on its own, then it is safe to hit Update. |
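The back-off rule described above can be sketched in a few lines. This is a hypothetical client-side helper, not the real BOINC client API; the constant and function names are illustrative assumptions:

```python
import time
from typing import Optional

# Sketch of the scheduler back-off rule described in the post: a manual
# update issued before the 30-minute cooldown has elapsed returns no
# tasks and restarts the timer, so only update once it has passed.
COOLDOWN_SECONDS = 30 * 60  # 30 minute scheduler back-off

def safe_to_update(last_contact: float, now: Optional[float] = None) -> bool:
    """True only once the 30-minute back-off since the last scheduler contact has elapsed."""
    if now is None:
        now = time.time()
    return (now - last_contact) >= COOLDOWN_SECONDS
```

With timestamps in seconds, `safe_to_update(last_contact)` is `False` inside the 30-minute window and `True` afterwards, which matches the advice: wait out the timer before hitting Update.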
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
And how the He** can he still get so many tasks? Yesterday for example a whole bunch. . . My only theory is that many of that high 'in progress' number are ghosts and that he received batches of ghost resends on 2 consecutive days, 13th and 14th. There may be others but it takes too long to delve past the first few pages. Stephen ? ? |
Dave Stegner Send message Joined: 20 Oct 04 Posts: 540 Credit: 65,583,328 RAC: 27 |
Interesting that the SSP numbers dropped like a rock. Yet DB purging does not seem to be able to get below 64/65K?? Dave |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Maybe a wise thing to do is abort and resend, with a short deadline, all the WUs from hosts whose last contact was before April 1. . . +1 here :) Stephen :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Maybe a wise thing to do is abort and resend, with a short deadline, all the WUs from hosts whose last contact was before April 1. . . The way things are, the deadlines will come first :) Stephen :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
At the current rate of return on the SSP, most of the results should be back in about 17 days. . . True of those that are actually being crunched, even by slow machines. But the unfortunately large number sitting in limbo assigned to defunct rigs will be there till the last deadline passes. Stephen :( |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13752 Credit: 208,696,464 RAC: 304 |
I think it is getting a Canonical Result into the Science Database which is the higher priority. So they need to reduce replication, decrease deadlines. The result can't be assimilated into the science database until all Tasks for a WU have been returned. At the current rate of return on the SSP, most of the results should be back in about 17 days. And that won't happen; the return rate will gradually decay. As I mentioned in a previous post, most of my work still pending/valid/inconclusive etc. is being held up by Tasks not due to be resent until late May or early June. Grant Darwin NT |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
They might decide to cancel Tasks in the next week or so if they get a Canonical Result, which might upset a few people who have cached thousands of WUs :) but it might upset some very slow crunchers too. . . Juan's proposal was that machines which have NOT contacted the servers since the official end of splitting have their outstanding tasks annulled. Other machines, fast or slow, that are regularly returning results would not be affected. But it is merely hypothetical anyway. . . As for second guessing the motivation for the massive bunkers being reported, the inference would be the latter option. But a third option occurs to me. Perhaps they anticipated, like most of us, that the clean up stage would last for months, and they wanted enough work to keep their machines occupied for that time. I can understand the desire to keep a machine busy to justify the power used to run it 24/7, this is why I am running E@H, but I chose a method which would not contribute to slowing the whole clean up process. Stephen :) |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13752 Credit: 208,696,464 RAC: 304 |
projects who employ short deadlines don't have to deal with the bloat on their servers that comes from users like that. task distribution absolutely should scale to the relative speed of the system. fast systems should be allotted more work than others since they can get through them faster. my fastest system was capable of over 20,000 WUs per day. there's no reason I shouldn't be able to cache more than someone running a Raspberry Pi or other similarly slow system. Ideally any server side limits should be based not on a number of Tasks, but on hours/days of cache size. That way systems can't get hundreds of thousands of Tasks (unless they have sub-second processing times), and other systems can't end up sitting on a few dozen Tasks with 3 week turnaround times. If/when Seti comes back, and they need to limit the size of the database again, 1 month deadlines on initial replication, 1 week deadlines for resends, and a server side limit on the cache (12hrs if the weekly outage takes 6, 24hrs if it takes 12) would be the way to go. Grant Darwin NT |
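The time-based limit proposed above is easy to illustrate. A minimal sketch, assuming the server already measures each host's recent throughput; the function name and figures are illustrative, not actual BOINC server code:

```python
# Sketch of a cache cap expressed in hours of work rather than a fixed
# task count: the allowance scales with each host's measured throughput,
# so fast and slow hosts both hold roughly the same wall-clock buffer.
def cache_limit_tasks(tasks_per_day: float, cache_hours: float) -> int:
    """Tasks a host may hold for a given cache window, scaled to its daily throughput."""
    return max(1, round(tasks_per_day * cache_hours / 24))
```

Under a 12-hour cap, a host doing 20,000 WUs/day may hold 10,000 tasks, while a Raspberry Pi doing 24/day holds 12; neither hoards weeks of work nor runs dry during a 6-hour outage.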
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13752 Credit: 208,696,464 RAC: 304 |
I can understand the desire to keep a machine busy to justify the power used to run it 24/7. That doesn't make any sense; it only uses heaps of power if it's crunching. If power were an issue they'd have been better off finishing up what they had, and doing what comes along like everyone else. Grant Darwin NT |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
projects who employ short deadlines don't have to deal with the bloat on their servers that comes from users like that. . . Rosetta = 72 hours (wouldn't people here complain about that!), E@H = 14 days. Both manage to function, but I wouldn't recommend trying Rosetta on a really low-end or very old system; it needs at least 8GB of RAM. It might work with only 4GB, but I'm not willing to try it. Stephen :) |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13752 Credit: 208,696,464 RAC: 304 |
it needs at least 8GB ram, it might work with only 4GB but I'm not willing to try it. No, it needs up to 1.3GB per core or thread. So old single or dual core CPUs can do work if they have at least 1.5 or 2.8GB of RAM. And processing power doesn't make any difference; Tasks are run for a set length of time. An extremely low powered CPU would probably need a minimum Target CPU run time of 12hrs to avoid problems with Tasks that take hours to produce any Decoys. But in the vast majority of cases they could do valid work in as little as 4hrs. Grant Darwin NT |
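The arithmetic behind those figures can be made explicit. A back-of-envelope sketch, not a Rosetta specification: ~1.3 GB per running thread plus a small base allowance (~0.2 GB, inferred from the 1.5/2.8 GB examples in the post):

```python
# Rough RAM estimate matching the figures in the post: ~1.3 GB per
# running thread plus ~0.2 GB of base headroom. Both constants are
# inferred assumptions, not documented Rosetta requirements.
def min_ram_gb(threads: int, per_thread_gb: float = 1.3, base_gb: float = 0.2) -> float:
    """Approximate minimum RAM (GB) to run `threads` concurrent tasks."""
    return round(base_gb + threads * per_thread_gb, 1)
```

This reproduces the quoted numbers: one thread needs about 1.5 GB, two threads about 2.8 GB, and an 8-thread system would want roughly 10.6 GB, consistent with the "at least 8GB" rule of thumb being marginal for modern CPUs.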
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
@Stephen There is another possibility for the large bunkered hosts with few reported WUs; some used this in the WoW event. They set their host to report only 1 WU at a time while the host is actually crunching a lot more. By doing this the host continues to receive new WUs, but to the rest of us the host appears to have a lot of uncrunched WUs. The user controls what to report first to avoid the deadlines, and releases the heavily bunkered batch at a time of his choosing. @Grant I always believed the limit of WUs per device penalizes those who have fast hosts; that is why we found a way to bypass the limitation. Thinking of the CPU only: 150 CPU WUs is a huge cache for a single core CPU (not to mention Androids or other devices where such a cache is absolute nonsense), and a meaningless cache for a CPU with 64 or more threads. The same is even worse on the GPU side: a 150 WU cache on a top GPU holds for less than 1-1.5 hours, but is plenty for a low end GPU that crunches a WU in 5-15 min. Trying to apply these limits across such different devices was one of the big problems never solved in S@H. The right way to distribute the work would be as you posted, a limit per day: the host receives what it can crunch in a day. By doing that you avoid both the slow hosts with hundreds of WUs sitting for weeks doing nothing, and the need for the large hosts to spoof to try to keep themselves in work through the outages, for example. |
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
projects who employ short deadlines don't have to deal with the bloat on their servers that comes from users like that. task distribution absolutely should scale to the relative speed of the system. fast systems should be allotted more work than others since they can get through them faster. my fastest system was capable of over 20,000 WUs per day. there's no reason I shouldn't be able to cache more than someone running a Raspberry Pi or other similarly slow system. Ideally any server side limits should be based not on a number of Tasks, but on hours/days of cache size. I would also think that increasing the size of the work unit would be of the highest importance, but I absolutely agree with every point. Having a host check in with progress on a task might also at least show whether progress is being made, or identify tasks which could theoretically be cancelled externally and given to another host. |
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
Again...very speculative on the return of SETI. I'm not feeling good about that prospect coming up. |
Ian&Steve C. Send message Joined: 28 Sep 99 Posts: 4267 Credit: 1,282,604,591 RAC: 6,640 |
If it comes back, it won't be for YEARS. It won't be back in 6 months, or any of the other pie-in-the-sky fantasies some might be holding on to. Seti@Home classic workunits: 29,492 CPU time: 134,419 hours |
AllgoodGuy Send message Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
If it comes back, it won't be for YEARS. It won't be back in 6 months, or any of the other pie-in-the-sky fantasies some might be holding on to. Agreed. Barring some type of major technological breakthrough, like the building of space-based arrays of telescopes, I'm not seeing the current scopes as capable of looking any further back in time to get more signals. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13752 Credit: 208,696,464 RAC: 304 |
If it comes back, it wont be for YEARS. it wont be back in 6 months or any other pie-in-the-sky fantasies some might be holding on to. There is still the Southern Hemisphere yet to be checked with existing radio telescopes. And the more data from the north, the more points of reference for that area. Grant Darwin NT |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.