Message boards :
Number crunching :
The Server Issues / Outages Thread - Panic Mode On! (119)
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 3150 · Credit: 1,282,604,591 · RAC: 15,062

1337 = leet = elite. It's internet-culture pseudo-hacker lingo.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

I can confirm, at least in my case: 1999 is the year when BOINC (at least AFAIK) started, so it's some kind of tribute from my POV. And I agree, this number has nothing to do with the host's capacity to receive new WUs, or with the cache size. BTW, I don't know what 1337 means either. LOL
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 3150 · Credit: 1,282,604,591 · RAC: 15,062

> My host is not actively crunching.

This is the second time he's made some kind of claim about his system not working on SETI when his task list clearly shows otherwise. It takes just the smallest amount of effort to verify a claim like this, especially when it can be checked by anyone who can see your list of tasks. Just look, it's not that hard.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 3150 · Credit: 1,282,604,591 · RAC: 15,062

> watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

I can say with certainty that the GPU number you are seeing is a label only, and in no way affects how many tasks he's getting.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Link · Joined: 18 Sep 03 · Posts: 833 · Credit: 1,807,369 · RAC: 1

> Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.

I don't see any reason why slower hosts should be punished just for being slow; I'm absolutely against doing that. I also don't really care at this point about cheaters like Ville Saari, as long as they return the tasks before the deadline. At least we can be pretty sure that he will crunch them. The database is now OK, so it doesn't matter any longer; they should have done something about it before, when it was important to keep the database small. The only thing I agree with is sending resends with shorter deadlines and limiting the number of tasks per host to something very low. The servers will run for a couple of months more anyway, no need to do more than this.
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

> My host is not actively crunching.

Your host shows: last contact 13 Apr 2020, 13:42:52 UTC, and the last WU reported was 12 Apr 2020, 21:56:54 UTC. So it has been actively crunching within the last week at least. By my suggestion it is a candidate to receive the resends. Maybe a week or something similar.
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7346 · Credit: 44,181,323 · RAC: 540

> I agree with Richard, bunkering is not a good option in these last days.

Hi Juan,

This is what you said above: "... who are still actively crunching and returning their work in the last few days ..." That is what I don't agree with. My host is not actively crunching. Now if I HAD seen "active users" I might have been less inclined to disagree, since as you said, my host is still active on SETI, just not actively crunching. The last time I remember getting one task from SETI was 2 or 3 days ago. I got it, it crunched for a few seconds and then uploaded and reported. Obviously it was a GPU task. ;)

As for the rest you were not speaking about, that's fine with me. I had just read about him before getting to your post, and when I saw "actively crunching" in your post, I thought of him and used him as an extreme example of why I disagreed. :)

[edit] BTW, I did read the whole post. I was just responding to that one line. :) [/edit]

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr · XO · L L & P _\\// · USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
Grumpy Swede · Joined: 1 Nov 08 · Posts: 8170 · Credit: 49,849,242 · RAC: 147

> Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.

+1

And spoofing up to 1337 GPUs!!!!!!!!!! to be able to get as many tasks as he wants, that's just ridiculous, and extreme CHEATING!!
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

> I agree with Richard, bunkering is not a good option in these last days.

Please forgive me, but your answer is out of logic for a Vulcan. Surely, as always, you did not read the entire post. I clearly posted "active users" and "posted for about a week or more"... your host is included in that description. What I wish to say, in other words so maybe you can understand better: do not send to inactive hosts, those who have not connected to the project after March 31, or hosts with problems like the many we know of. And in the following part of the message I clearly propose a limit of WUs per host, like 10 WUs (for example only), so it is impossible for all the resends to go to a single host as you wrongly suggest! If the limit is per host!!! Another illogical assumption, BTW. About the rest, I do not care and will not comment on how other users use their hosts. If you have any question about mine, I will be happy to answer.
Alien Seeker · Joined: 23 May 99 · Posts: 56 · Credit: 511,652 · RAC: 73

Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.

Gazing at the skies, hoping for contact... Unlikely, but it would be such a fantastic opportunity to learn. · My alternative profile
Siran d'Vel'nahr · Joined: 23 May 99 · Posts: 7346 · Credit: 44,181,323 · RAC: 540

> I agree with Richard, bunkering is not a good option in these last days.

Hi Juan,

I would not agree with that statement. That would not be fair to those of us who do NOT spoof (cheat) as Ville Saari does. He has over 70K STILL IN PROGRESS! Why allow him to get ALL the resends and not let any through to anyone else? I still have BOINC set to get anything I can from SETI, be it GPU or CPU tasks. If only active hosts are allowed to get resends, then as I said, it is not fair to those of us who do not cheat.

[edit] And, you cannot tell me that Ville Saari is not cheating by spoofing. One of his PCs is said to have 1337 GPUs!!!!!!!!!! He is CHEATING!!!!!!! [/edit]

[edit2] The last time I looked at his PC list, that PC said it had 64 GPUs. He increased the spoofing variable value (or whatever it is called in the software) just so he could get the lion's share of any tasks ready to send. [/edit2]

Have a great day! :)

Siran

CAPT Siran d'Vel'nahr · XO · L L & P _\\// · USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
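[Editor's note] The arithmetic behind the spoofing complaint above can be sketched briefly. The sketch assumes the behavior commonly described for BOINC-based projects: the scheduler caps in-progress GPU tasks at a fixed per-GPU limit multiplied by the number of GPUs the client reports. The per-GPU limit of 100 used here is an illustrative assumption, not SETI@home's actual configuration.

```python
def max_in_progress(reported_gpus: int, per_gpu_limit: int = 100) -> int:
    """Upper bound on GPU tasks a host may hold at once, assuming the
    server multiplies a fixed per-GPU limit by the reported GPU count."""
    return per_gpu_limit * reported_gpus

honest = max_in_progress(4)      # a real 4-GPU machine: 400 tasks
spoofed = max_in_progress(1337)  # a client reporting 1337 GPUs: 133,700 tasks
print(honest, spoofed)
```

Under these assumptions, a host reporting 1337 GPUs could hold over 130,000 tasks at once, which would be more than enough headroom for the ~70K in-progress cache discussed in this thread.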
juan BFP · Joined: 16 Mar 07 · Posts: 9764 · Credit: 572,710,851 · RAC: 8,616

I agree with Richard, bunkering is not a good option in these last days. IMHO what is needed is a way to send the resends only to those hosts that are still actively crunching and have returned work in the last few days. Maybe a week or something similar. And if possible in small batches only, something like 10 WUs max per host. The results in the field are already distributed, but a lot will be expired, and if they are sent again to non-crunching hosts (i.e. a host with an AV problem, a bunker, etc.) it will be a long wait until they reach the deadline again. And BTW, the deadline must be reduced to the minimum possible to expedite the crunch (by making the host enter panic mode in case it runs multiple projects). Something radical like 3 days for a GPU WU and 5-7 days for a CPU WU. My 0.02
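[Editor's note] The resend policy proposed above (active-within-a-week filter, small per-host batches, short deadlines) can be sketched as below. The field names (`last_contact`, `tasks`, `gpu`) and the 6-day CPU deadline are hypothetical illustrations, not actual SETI@home scheduler code.

```python
from datetime import datetime, timedelta

ACTIVE_WINDOW = timedelta(days=7)   # "a week or something similar"
MAX_RESENDS_PER_HOST = 10           # "something like 10 WU max per host"
GPU_DEADLINE = timedelta(days=3)    # "3 days for a GPU WU"
CPU_DEADLINE = timedelta(days=6)    # within the proposed "5-7 days" for CPU

def eligible(host: dict, now: datetime) -> bool:
    """Only hosts that contacted the project within the active window."""
    return now - host["last_contact"] <= ACTIVE_WINDOW

def assign_resends(hosts: list, resends: list, now: datetime) -> list:
    """Hand out expired results to active hosts, a small batch per host,
    with shortened deadlines. Returns the tasks left unassigned."""
    queue = list(resends)
    for host in hosts:
        if not eligible(host, now):
            continue  # inactive hosts (AV problems, bunkers, ...) get nothing
        batch, queue = queue[:MAX_RESENDS_PER_HOST], queue[MAX_RESENDS_PER_HOST:]
        for task in batch:
            task["deadline"] = now + (GPU_DEADLINE if task["gpu"] else CPU_DEADLINE)
            host["tasks"].append(task)
    return queue

# Example: one active host, one host silent for 20 days, 15 resends.
now = datetime(2020, 4, 13)
hosts = [{"last_contact": now - timedelta(days=2), "tasks": []},
         {"last_contact": now - timedelta(days=20), "tasks": []}]
resends = [{"gpu": True} for _ in range(15)]
leftover = assign_resends(hosts, resends, now)
# The active host receives 10 tasks, the silent host none, 5 remain queued.
```

The per-host cap is what prevents the single-host pile-up objected to later in the thread: no matter how large a cache a host advertises, it receives at most `MAX_RESENDS_PER_HOST` tasks per pass.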
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14114 · Credit: 200,643,578 · RAC: 1,983

Well, I think we can say the Eagle has landed.

    Results received in last hour **        0     74   5,762   0m
    Workunits waiting for validation         0      0       0   0m
    Workunits waiting for assimilation       0      0       2   0m
    Workunit files waiting for deletion      0      0       1   0m
    Result files waiting for deletion        0      0      12   0m
    Workunits waiting for db purging         0     60   5,498   0m
    Results waiting for db purging           0    669  60,657   0m
    [SSP as of 13 Apr 2020, 10:30:04 UTC]

The key one is 'assimilation' - that's been effectively zero for a while, while late results continue to trickle in. Like Grant, my personal Valid list shows all 'valid tasks with all workunit tasks reported' have been processed: those that are left are from the period at the end of March when extra replications were created, and some have not yet been returned.

So the message to all bunkerers is: "Thank you for keeping out of the way while the servers recovered from their overload. But that phase is now over. If any of your computers now shows tasks in progress on the web page, please check it and act accordingly."

* if you have switched to another project, please switch at least some resources back to SETI to help finish the run
* if you have the tasks, and the computer is idle, please restart it
* if you don't have the tasks - if they're ghosts - try fetching work at various times of day to see if you can recover them

If you don't take some sort of action like that, you're now part of the problem, rather than part of the project you claim to support.
Link · Joined: 18 Sep 03 · Posts: 833 · Credit: 1,807,369 · RAC: 1

> when I'm really returning 1500 to 2000 results a day.

So due to cheating you still have a cache for well over a month (currently 73,055 tasks), and you are surprised that people blame you?
Ville Saari · Joined: 30 Nov 00 · Posts: 1119 · Credit: 48,373,696 · RAC: 74,889

The db purger seems to be still in the trigger-happy mode it was in back when the db was heavily bloated, so we don't see our returned tasks for 24 hours like we used to before the db bloat issues started. Any workunit that gets its last result returned is purged almost immediately :(

That's probably the reason why people blamed me for not returning anything, when I'm really returning 1500 to 2000 results a day.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 12990 · Credit: 208,696,464 · RAC: 690

Finally. My Valid list is now down to just those waiting on tasks to be returned.

Grant · Darwin NT
Ville Saari · Joined: 30 Nov 00 · Posts: 1119 · Credit: 48,373,696 · RAC: 74,889

When I look at my own pending tasks, I can find lots and lots of computers that stopped contacting the servers at the turn of the month without finishing their caches. And almost all of them are running Windows. I guess a lot of the 2 million tasks still out in the field are in those black holes waiting for their deadlines.
Ville Saari · Joined: 30 Nov 00 · Posts: 1119 · Credit: 48,373,696 · RAC: 74,889

> Another computer with a very high number of tasks in progress (nearly 30k):

That one is at least actively crunching, so the tasks are not in a black hole.
BetelgeuseFive · Joined: 6 Jul 99 · Posts: 157 · Credit: 17,117,787 · RAC: 42

Another computer with a very high number of tasks in progress (nearly 30k): https://setiathome.berkeley.edu/show_host_detail.php?hostid=8568062

Wish I could get some ...

Tom
BetelgeuseFive · Joined: 6 Jul 99 · Posts: 157 · Credit: 17,117,787 · RAC: 42

Well, it seems the assimilation queue has finally been depleted:

    Workunits waiting for assimilation       0      0       5   0m

Still over 2 million results out in the field though.

Tom
©2020 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.