Message boards : Number crunching : The Server Issues / Outages Thread - Panic Mode On! (119)
Grumpy Swede · Joined: 1 Nov 08 · Posts: 8170 · Credit: 49,849,242 · RAC: 147

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

Ah well, I could have said something about cheating and Obsessive Crunching Disorder, but I won't, this time... :-) But I will say to the spoofers: spare us the hacker lingo, please...
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

Puts hand up... I'll take a wild guess: Copernicus?

Stephen ??
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7346 · Credit: 44,181,323 · RAC: 540

Hi Juan, Hi Stephen,

Yep, Juan explained what he meant and I agree with him. :)

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr
XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

> So, then 64 is also a tag, and all the extra WU's for a spoofed "64" client is just a conspiracy, and the thousands of "extra" WU's on such a host does not exist??

Please don't put words in my mouth. I'm just saying that in my case, the 1999 is a simple tag, not related to the number of WUs/GPUs or to the way the host works. Nothing more, nothing less. I just changed mine to 2020 for you to see. If you go to E@H and Rosetta you will see the tag there is still 1999.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14114 · Credit: 200,643,578 · RAC: 1,983

> Now it is crunching the WU with dateline of May 18 04:11:11 AM EST.

One point of view is that the rules changed on 2 Mar 2020, when we were all given 4 weeks' notice that the project was switching from raw data processing to analysis. The extended "anyone can participate here - even the slowest tortoise can join the fun" deadlines became meaningless from that moment.

I, too, ran my computers up to the wire, and a little bit longer when the final tapes couldn't be split because the database was full. But my last remaining first-run task was reported complete at 2 Apr 2020, 20:01:50 UTC. I would have expected that we would all be participating in the clean-up by now, but instead we're debating (yet again) our freedoms to bend and interpret the rules however we like.

Please let's just agree to differ, finish whatever work we've got (without worrying how we got it), and wish Eric and David the best of luck with their analysis, report-writing, and publishing.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426

> Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.

And with the very low rate of resends available, it is also meaningless. Even if he were reporting 2 million GPUs, he would not receive more tasks than are available, and those are small numbers indeed. Getting new work at this point is more like winning the lottery, as has been remarked on several occasions. So one does have to wonder why he persists with such a pointless tactic at this time.

Stephen <shrug>
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7346 · Credit: 44,181,323 · RAC: 540

Hi Kevin,

Yep, I got 1 yesterday, 2 on the 11th, 1 on the 9th and 2 on the 3rd: 6 since the 1st. I even decided to go for CPU tasks instead of just GPU, since I was having problems with Rosetta constantly restarting its tasks and always running in high priority. An administrator said I should just set NNT and do something else, since it will take weeks to months for a fix for the app to work properly and set checkpoints. Others were having that same problem. So I added CPU tasks here in hopes of getting more tasks.

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr
XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
Grumpy Swede · Joined: 1 Nov 08 · Posts: 8170 · Credit: 49,849,242 · RAC: 147

> . . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. [...]

So, then 64 is also a tag, and all the extra WU's for a spoofed "64" client are just a conspiracy, and the thousands of "extra" WU's on such a host do not exist?? Geeze, how the cheaters try... Nah, the spoofers should have been banned immediately when it was discovered that they are CHEATING. Basta!!!
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426

> Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. [...]

Well, not ALL tasks in the field, but maybe all those older than a week or thereabouts, so all hosts with tasks less than a week old would still have a day or two to clear them. As for an overall limit on task numbers: believe it or not, there is such a limit in place, but obviously some have found a way to circumvent it. S@H is not perfect :( But certainly forcing resends of all older tasks would expedite the process of cleaning up the project.

Stephen
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

> . . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. [...]

Thanks Stephen, you got the meaning. Yes, I have some problems with the translations; you all know I'm not a native English speaker.

About the GPU number: it's just a simple TAG, as posted by Ian, nothing else. To prove this point, please tell me any number (an integer greater than 0, of course, and under 32000) and I will switch mine to it, for you to see that nothing changes on my host. So please stop with this conspiracy theory.
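For context on why a reported GPU count can serve as an arbitrary tag at all: the BOINC client reports its own coprocessor inventory to the scheduler in the request it sends, and the server has no independent way to verify that figure. A simplified, hypothetical sketch of the relevant part of a scheduler request (element names illustrative, not an exact transcription of the real protocol) might look like:

```xml
<scheduler_request>
  <!-- Coprocessor inventory is self-reported by the client; a
       modified ("spoofed") client can write any count here. -->
  <coprocs>
    <coproc_cuda>
      <count>1999</count>
      <name>GeForce GTX 1070</name>
    </coproc_cuda>
  </coprocs>
</scheduler_request>
```

Since per-host work limits were historically computed per reported GPU, an inflated count could raise a host's task quota; with resends as scarce as described in this thread, though, the number mostly functions as the visible "tag" Juan describes.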
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426

Hi Juan,

I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. I agree that still doesn't preclude hosts that have massive cached numbers if they are still returning work regularly, but it is hard to cover all cases. Hence Richard's call to those individuals to tend to their rigs. It is a moot point anyway, because I do not foresee the Berkeley guys making such changes even at this stage of the project.

Stephen :(
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7346 · Credit: 44,181,323 · RAC: 540

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

Hi Ian,

Sorry to disagree with you here, but I do. Case in point, my current host: when I look at my host list it shows [2] NVIDIA GPUs. Guess what, I HAVE 2 NVIDIA GPUs in this host. Now, SETI upped the amount of tasks per device; I was getting 150 CPU tasks and 300 GPU tasks.

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr
XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
Kevin Olley · Joined: 3 Aug 99 · Posts: 906 · Credit: 261,085,289 · RAC: 1,297

Are you sure? I checked mine this morning; according to the "all tasks" list, I had downloaded, processed, returned and had validated 18 tasks in a single batch less than an hour previously. When I rechecked about two hours later, THREE hours after the original download, they had already been removed from my lists. Resends are getting processed and deleted at a very fast rate.

Kevin
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426

> I agree with Richard, bunkering is not a good option in these last days.

Again, +1.

Stephen :)
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426

> Well, I think we can say the Eagle has landed.

+1
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

> since a host with over 50K tasks is cheating, or more precisely the owner of the host.

FYI, in the past my host ran with an ultra-large buffer 3x that figure, and all the WUs were crunched within the deadlines. Right now it is crunching a WU with a deadline of May 18, 04:11:11 AM EST, so it will report it more than a month ahead of the deadline. So what is the problem with that?

I'm still waiting for someone to show me where this "hidden rule" that says a host can't have more than xxxx WUs is written, so I can read it. What would be wrong is a host downloading more WUs than it has the capacity to crunch on time.

Live long & Prosper!
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5384 · Credit: 192,787,363 · RAC: 1,426

> . . A very long time since neither of the two listed machines seems to have returned any results since April Fool's Day.

Or the SSP changed between the times when each of us looked at it. It does change from time to time, you know...

Stephen :)
rob smith · Joined: 7 Mar 03 · Posts: 18643 · Credit: 416,307,556 · RAC: 863

Even better, and at no effort on the part of the servers: he just detaches those two computers, then deletes his SETI account. Doing so would show that he at least had some grain of thought for the rest of the SETI community.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7346 · Credit: 44,181,323 · RAC: 540

> My host is not actively crunching.

Hi Juan,

Yeah, I have gotten 6 tasks since April 1st. I have one host still connected to SETI. When this host communicates with SETI, the scheduler always does a 30-minute back-off. I have seen the RTS show a few tasks; I manually have BOINC communicate and I get nothing for the next 30 minutes. The next time I look at the servers, there are zero tasks RTS.

How about the servers figuring out that a host has over 50K tasks in progress, taking a bunch of them, and passing them out to hosts with none? That sounds fair to me, since a host with over 50K tasks is cheating, or more precisely the owner of the host is. Yeah, I know, the SETI software checks and says that a host has met its daily allotment. I have seen that on this host, and yet not long afterwards the host will get more tasks. In my mind a host that has > 50K tasks has WAY more than a daily allotment.

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr
XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

> 1337 = leet = elite

Living and learning! Thanks for the info. It's coffee time here too. Enjoy.
©2020 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.