Message boards :
Number crunching :
The Server Issues / Outages Thread - Panic Mode On! (119)
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> I Agree with Richard, bunkering is not a good option on this last days.

. . Again +1

Stephen :)
Kevin Olley · Joined: 3 Aug 99 · Posts: 906 · Credit: 261,085,289 · RAC: 572

Are you sure? I checked mine this morning; according to the "all tasks" list I had downloaded, processed, returned and validated 18 tasks in a single batch less than an hour previously. When I rechecked about 2 hours later, THREE hours after the original download, they had already been removed from my lists. Resends are getting processed and deleted at a very fast rate.

Kevin
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7381 · Credit: 44,181,323 · RAC: 238

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

Hi Ian,

Sorry to disagree with you here, but I do. Case in point, my current host: when I look at my host list it shows [2] NVIDIA GPUs. Guess what, I HAVE 2 NVIDIA GPUs in this host. Now, SETI upped the amount of tasks per device. I was getting 150 CPU tasks and 300 GPU tasks.

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr - L L & P _\\//
Winders 11 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

Hi Juan,

. . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active, contacting the servers regularly AND returning any work assigned to them promptly, should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. I agree that still doesn't preclude hosts that have massive cached numbers if they are still returning work regularly, but it is hard to cover all cases. Hence Richard's call to those individuals to tend to their rigs. It is a moot point anyway, because I do not foresee the Berkeley guys making such changes even at this stage of the project.

Stephen :(
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799

> . . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. I agree that still doesn't preclude hosts that have massive cached numbers if they are still returning work regularly but it is hard to cover all cases. Hence Richards call to those individuals to tend to their rigs. It is a moot point anyway because I do not foresee the Berkeley guys making such changes even at this stage of the project.

Thanks Stephen, you get the meaning. Yes, I have some problems with the translations; you all know I'm not a native English speaker.

About the GPU number... it's just a simple TAG as posted by Ian, nothing else. To prove this point, please tell me any number (an integer greater than 0, of course, and under 32000) and I will switch mine for you to see that nothing changes on my host. So please stop with this conspiracy theory.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.

. . Well, not ALL tasks in the field, but maybe all those older than a week or thereabouts. So all hosts with tasks less than a week old would still have a day or 2 to clear them. As for an overall limit on task numbers, believe it or not there is such a limit in place, but obviously some have found a way to circumvent it. S@H is not perfect :( But certainly forcing resends on all older tasks would expedite the process of cleaning up the project.

Stephen . .
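For what it's worth, the week-old cutoff suggested above is easy to picture as a simple server-side filter. A minimal sketch, assuming hypothetical `(task_id, sent_time)` records rather than the real BOINC server's `result` table, and leaving the actual cancel-and-reissue step out:

```python
from datetime import datetime, timedelta

def select_for_resend(tasks, now, max_age_days=7):
    """Return IDs of in-flight tasks older than max_age_days,
    i.e. the ones that would be cancelled and reissued with a
    shorter deadline. `tasks` is a list of (task_id, sent_time)."""
    cutoff = now - timedelta(days=max_age_days)
    return [task_id for task_id, sent in tasks if sent < cutoff]

now = datetime(2020, 4, 13)
tasks = [
    (1, datetime(2020, 4, 12)),  # 1 day out  -> host keeps it
    (2, datetime(2020, 4, 1)),   # 12 days out -> cancel and resend
    (3, datetime(2020, 3, 20)),  # 24 days out -> cancel and resend
]
print(select_for_resend(tasks, now))  # -> [2, 3]
```

Hosts holding only recent tasks are untouched, which matches the "still have a day or 2 to clear them" point.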
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7381 · Credit: 44,181,323 · RAC: 238

Hi Kevin,

Yep, I got 1 yesterday, 2 on the 11th, 1 on the 9th and 2 on the 3rd; 6 since the 1st. I even decided to go for CPU tasks instead of just GPU, since I was having problems with Rosetta constantly restarting its tasks and always running in high priority. An administrator said I should just set NNT and do something else, since it will take weeks to months for a fix for the app to work properly and set checkpoints. Others were having that same problem. So I added CPU tasks here in hopes of getting more tasks.

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr - L L & P _\\//
Winders 11 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.

. . And with the very low rate of resends available, it is also meaningless. Even if he was reporting 2 million GPUs, he would not receive more tasks than are available, and that is small numbers indeed. Getting new work at this point is more like winning the lottery, as has been remarked on several occasions. So one does have to wonder why he persists with such a pointless tactic at this time.

Stephen < shrug >
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14690 · Credit: 200,643,578 · RAC: 874

> Now it is crunching the WU with dateline of May 18 04:11:11 AM EST.

One point of view is that the rules changed on 2 Mar 2020, when we were all given 4 weeks' notice that the project was switching from raw data processing to analysis. The extended "anyone can participate here - even the slowest tortoise can join the fun" deadlines became meaningless from that moment. I, too, ran my computers up to the wire, and a little bit longer when the final tapes couldn't be split because the database was full. But my last remaining first-run task was reported complete at 2 Apr 2020, 20:01:50 UTC. I would have expected that we would all be participating in the clean-up by now, but instead we're debating (yet again) our freedoms to bend and interpret the rules however we like. Please let's just agree to differ, finish whatever work we've got (without worrying how we got it), and wish Eric and David the best of luck with their analysis, report-writing, and publishing.
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799

> So, then 64 is also a tag, and all the extra WU's for a spoofed "64" client is just a conspiracy, and the thousands of "extra" WU's on such a host does not exist??

Please do not put wrong words in my mouth. I just say that in my case the 1999 is a simple tag, not related to the number of WU's/GPU's or the way the host works. Nothing more or less. I just changed mine to 2020 for you to see. If you go to E@H and Rosetta you will see the tag there is still 1999.
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7381 · Credit: 44,181,323 · RAC: 238

Hi Juan, Hi Stephen,

Yep, Juan explained what he meant and I agree with him. :)

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr - L L & P _\\//
Winders 11 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

. . Puts hand up ...

. . I'll take a wild guess, Copernicus?

Stephen ? ?
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> My host is not actively crunching.

. . OK, I think this time you are being too literal again. I am sure he was simply concerned that, since he is not getting regular work to crunch, he would not be included in Juan's definition of an 'active cruncher', nothing more. I think we can all move past that confusion ...

Stephen
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> 1337 = leet = elite

. . . D'OH!

Stephen . . Not my world ...
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> Even better, and for no effort on behalf of the servers, he just detaches those two computers then deletes his SETI account. Doing so would show that he at least had some grain of thought for the rest of the SETI community.

. . Or perhaps if he simply set NNT and let it go at that ??

Stephen ? ?
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.
>
> I've seen the code in use, you haven't. So you don't know what you're talking about.

The number is just a tag and has no bearing on the number of tasks he's getting; that's handled by a different process in the code. The OLD method to get more tasks was GPU spoofing, which was very rudimentary and achievable by anyone able to copy/paste data into their coproc_info.xml file (Windows or Linux); there are even YouTube videos explaining the process. But that's not what Ville is doing. He's using a totally different and more sophisticated/elegant method.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
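For readers unfamiliar with the old trick Ian describes: coproc_info.xml is the file where the BOINC client records the GPUs it detected, and the spoof amounted to hand-editing that record. A rough, illustrative fragment (field names are from memory, not a verbatim copy of the real file, which carries many more fields):

```xml
<!-- Sketch of a CUDA GPU entry in the BOINC client's coproc_info.xml. -->
<coproc_cuda>
   <count>1</count>              <!-- number of identical GPUs detected -->
   <name>GeForce GTX 1070</name>
</coproc_cuda>
<!-- The old spoof simply inflated <count>, so the scheduler believed the
     host had more GPUs and granted a larger per-host task allowance.
     Ian's point is that the "1337"/"1999" tags are NOT this mechanism. -->
```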
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

> Thanks Stephen you get the meaning. yes i have some problems with the translations, you all know i'm not a native english speaker.

. . Dude! Walk outside and roll around in the snow for a while (if there is any left), you need to cool off ...

Stephen Sheesh ...
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

You can't base history on what you can see NOW. We already went through this several days ago; just look back in your own messages. You got 50-something tasks on April 3rd, but they have mostly been purged from the task list already, so you can't see them anymore. That doesn't mean they were never there.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

Hi Kevin,

. . Yeah, I have also had issues with Rosetta; it does not seem to play nice with E@H, so I also put it on hold and went to WCG for the CPU work. It's been pretty seamless since then.

Stephen :(
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

don'7 b3 m4d jus7 cuz U Rn7 1337 enuf

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
©2026 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.