The Server Issues / Outages Thread - Panic Mode On! (119)

Message boards : Number crunching : The Server Issues / Outages Thread - Panic Mode On! (119)
Ian&Steve C.
Joined: 28 Sep 99
Posts: 3150
Credit: 1,282,604,591
RAC: 15,062
United States
Message 2044531 - Posted: 13 Apr 2020, 14:25:41 UTC

1337 = leet = elite

it's like internet-culture-pseudo-hacker lingo.
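The substitution behind "leet" can be sketched in a few lines. This is a hypothetical illustration only — the mapping below is a common convention of the lingo, not a fixed standard:

```python
# Minimal leetspeak sketch: swap letters for look-alike digits.
# The mapping is a common convention, not an official one.
LEET_MAP = {"l": "1", "e": "3", "t": "7", "a": "4", "o": "0"}

def to_leet(text: str) -> str:
    """Replace each letter with its leet digit, if one exists."""
    return "".join(LEET_MAP.get(ch, ch) for ch in text.lower())

print(to_leet("leet"))  # -> 1337
```

Which is why a host "with 1337 GPUs" reads as a joke label rather than a real hardware count.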
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2044531
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044530 - Posted: 13 Apr 2020, 14:23:29 UTC - in response to Message 2044525.  
Last modified: 13 Apr 2020, 14:24:09 UTC

watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

i can say with certainty, the GPU number you are seeing is a label only, and in no way affects how many tasks he's getting.

I can confirm, at least in my case: 1999 is the year when BOINC (at least AFAIK) started, so it's some kind of tribute from my POV.
And I agree, this number has nothing to do with the host's capacity to receive (or not) new WUs, or with the cache size.
BTW, I didn't know what 1337 means either. LOL
ID: 2044530
Ian&Steve C.
Joined: 28 Sep 99
Posts: 3150
Credit: 1,282,604,591
RAC: 15,062
United States
Message 2044527 - Posted: 13 Apr 2020, 14:17:43 UTC - in response to Message 2044523.  

My host is not actively crunching.

Your host shows: Last contact 13 Apr 2020, 13:42:52 UTC
and the last WU reported was: 12 Apr 2020, 21:56:54 UTC
So it was actively crunching within the last week at least.
Under my suggestion it is a candidate to receive the resends.
Maybe a week or something similar.


this is the second time he's made some kind of claim about his system not working on SETI when his task list clearly shows otherwise.

it takes just the smallest amount of effort to verify a claim like this, especially when it can be verified by anyone who can see your list of tasks. just look, it's not that hard.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2044527
Ian&Steve C.
Joined: 28 Sep 99
Posts: 3150
Credit: 1,282,604,591
RAC: 15,062
United States
Message 2044525 - Posted: 13 Apr 2020, 14:15:26 UTC

watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

i can say with certainty, the GPU number you are seeing is a label only, and in no way affects how many tasks he's getting.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2044525
Profile Link
Joined: 18 Sep 03
Posts: 833
Credit: 1,807,369
RAC: 1
Germany
Message 2044524 - Posted: 13 Apr 2020, 14:14:02 UTC - in response to Message 2044512.  
Last modified: 13 Apr 2020, 14:15:39 UTC

Agreed, they should cancel on the server side all tasks still in the field and re-send them with much shorter deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic per-host limit is indeed also better, and should always have existed on top of the per-CPU and per-GPU limits.

I don't see any reason why slower hosts should be punished just for being slow; I'm absolutely against doing that. I also don't really care at this point about cheaters like Ville Saari, as long as they return the tasks before the deadline. At least we can be pretty sure that he will crunch them. The database is now OK, so it doesn't matter any longer; they should have done something about it before, when it was important to keep the database small. The only thing I agree with is sending resends with shorter deadlines and limiting the number of tasks per host to something very low. The servers will run for a couple of months more anyway, no need to do more than this.
ID: 2044524
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044523 - Posted: 13 Apr 2020, 14:09:16 UTC - in response to Message 2044521.  
Last modified: 13 Apr 2020, 14:12:34 UTC

My host is not actively crunching.

Your host shows: Last contact 13 Apr 2020, 13:42:52 UTC
and the last WU reported was: 12 Apr 2020, 21:56:54 UTC
So it was actively crunching within the last week at least.
Under my suggestion it is a candidate to receive the resends.
Maybe a week or something similar.

ID: 2044523
Profile Siran d'Vel'nahr
Volunteer tester
Joined: 23 May 99
Posts: 7346
Credit: 44,181,323
RAC: 540
United States
Message 2044521 - Posted: 13 Apr 2020, 13:54:47 UTC - in response to Message 2044514.  
Last modified: 13 Apr 2020, 13:59:59 UTC

I agree with Richard, bunkering is not a good option in these last days.
IMHO what needs to be done is find a way to send the resends only to those hosts that are still actively crunching and have been returning their work in the last few days. Maybe a week or something similar. And if possible in small batches of files only, something like 10 WUs max per host.
The results in the field are already distributed, but a lot will expire, and if they are sent again to non-crunching hosts (i.e. a host with an AV problem, a bunker, etc.) it will be a long wait until they reach the deadline again.
And BTW the deadline must be reduced to the minimum possible to expedite the crunch (by making the host enter panic mode in case it runs multiple projects). Something radical like 3 days for a GPU WU and 5-7 days for a CPU WU.
My 0.02

Hi Juan,

I would not agree with that statement. That would not be fair to those of us who do NOT spoof (cheat) as Ville Saari does. He has over 70K STILL IN PROGRESS! Why allow him to get ALL the resends and not let any through to anyone else? I still have BOINC set to get anything I can from SETI, be it GPU or CPU tasks. If only active hosts are allowed to get resends, then as I said, it is not fair to those of us who do not cheat.

[edit] And, you cannot tell me that Ville Saari is not cheating by spoofing. One of his PCs is said to have 1337 GPUs!!!!!!!!!! He is CHEATING!!!!!!! [/edit]
[edit2] The last time I looked at his PC list, that PC said it had 64 GPUs. He increased the spoofing variable value (or whatever it is called in the software) just so he could get the lion's share of any tasks ready to send. [/edit2]


Have a great day! :)

Siran

Please forgive me, but your answer is out of logic for a Vulcan.
Sure, as always, you did not read the entire post. I clearly wrote active users, and active for about a week or more... your host is included in that description.
What I wish to say, in other words, so maybe you can understand better: do not send to inactive hosts, those that have not contacted the project since March 31, or hosts with problems like the many we know about.
And in the following part of the msg I clearly say a limit of WUs per host, like 10 WUs (for example only), so it is impossible for all the resends to go to a single host as you wrongly suggest! The limit is per host!!! Another illogical assumption, btw.

About the rest, I don't care and will not comment on how other users use their hosts. If you have any question about mine, I will be happy to answer.

Hi Juan,

This is what you said above:
"... who are still actively crunching and returning their work in the last few days ..."

That is what I don't agree with. My host is not actively crunching. Now, if I HAD seen "active users" I might have been less inclined to disagree, since as you said, my host is still active on SETI, just not actively crunching. The last time I remember getting a task from SETI was 2 or 3 days ago. I got it, it crunched for a few seconds and then uploaded and reported. Obviously it was a GPU task. ;)

As for the rest that you're not speaking about, that's fine with me. I had just read about him before getting to your post, and when I saw "actively crunching" in your post, I thought of him and used him as an extreme example of why I disagreed. :)

[edit] BTW, I did read the whole post. I was just responding to that one line. :) [/edit]

Have a great day! :)

Siran
CAPT Siran d'Vel'nahr XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
ID: 2044521
Grumpy Swede
Volunteer tester
Joined: 1 Nov 08
Posts: 8170
Credit: 49,849,242
RAC: 147
Sweden
Message 2044519 - Posted: 13 Apr 2020, 13:39:13 UTC - in response to Message 2044512.  
Last modified: 13 Apr 2020, 13:41:03 UTC

Agreed, they should cancel on the server side all tasks still in the field and re-send them with much shorter deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic per-host limit is indeed also better, and should always have existed on top of the per-CPU and per-GPU limits.

+1

And spoofing up to 1337 GPUs!!!!!!!!!!, to be able to get as many tasks as he wants, that's just ridiculous, and extreme CHEATING!!
ID: 2044519
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044514 - Posted: 13 Apr 2020, 13:04:53 UTC - in response to Message 2044506.  

I agree with Richard, bunkering is not a good option in these last days.
IMHO what needs to be done is find a way to send the resends only to those hosts that are still actively crunching and have been returning their work in the last few days. Maybe a week or something similar. And if possible in small batches of files only, something like 10 WUs max per host.
The results in the field are already distributed, but a lot will expire, and if they are sent again to non-crunching hosts (i.e. a host with an AV problem, a bunker, etc.) it will be a long wait until they reach the deadline again.
And BTW the deadline must be reduced to the minimum possible to expedite the crunch (by making the host enter panic mode in case it runs multiple projects). Something radical like 3 days for a GPU WU and 5-7 days for a CPU WU.
My 0.02

Hi Juan,

I would not agree with that statement. That would not be fair to those of us who do NOT spoof (cheat) as Ville Saari does. He has over 70K STILL IN PROGRESS! Why allow him to get ALL the resends and not let any through to anyone else? I still have BOINC set to get anything I can from SETI, be it GPU or CPU tasks. If only active hosts are allowed to get resends, then as I said, it is not fair to those of us who do not cheat.

[edit] And, you cannot tell me that Ville Saari is not cheating by spoofing. One of his PCs is said to have 1337 GPUs!!!!!!!!!! He is CHEATING!!!!!!! [/edit]
[edit2] The last time I looked at his PC list, that PC said it had 64 GPUs. He increased the spoofing variable value (or whatever it is called in the software) just so he could get the lion's share of any tasks ready to send. [/edit2]


Have a great day! :)

Siran

Please forgive me, but your answer is out of logic for a Vulcan.
Sure, as always, you did not read the entire post. I clearly wrote active users, and active for about a week or more... your host is included in that description.
What I wish to say, in other words, so maybe you can understand better: do not send to inactive hosts, those that have not contacted the project since March 31, or hosts with problems like the many we know about.
And in the following part of the msg I clearly say a limit of WUs per host, like 10 WUs (for example only), so it is impossible for all the resends to go to a single host as you wrongly suggest! The limit is per host!!! Another illogical assumption, btw.

About the rest, I don't care and will not comment on how other users use their hosts. If you have any question about mine, I will be happy to answer.
ID: 2044514
Alien Seeker
Joined: 23 May 99
Posts: 56
Credit: 511,652
RAC: 73
France
Message 2044512 - Posted: 13 Apr 2020, 12:49:03 UTC - in response to Message 2044503.  

Agreed, they should cancel on the server side all tasks still in the field and re-send them with much shorter deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic per-host limit is indeed also better, and should always have existed on top of the per-CPU and per-GPU limits.
Gazing at the skies, hoping for contact... Unlikely, but it would be such a fantastic opportunity to learn.

My alternative profile
ID: 2044512
Profile Siran d'Vel'nahr
Volunteer tester
Joined: 23 May 99
Posts: 7346
Credit: 44,181,323
RAC: 540
United States
Message 2044506 - Posted: 13 Apr 2020, 12:19:19 UTC - in response to Message 2044503.  
Last modified: 13 Apr 2020, 12:33:07 UTC

I agree with Richard, bunkering is not a good option in these last days.
IMHO what needs to be done is find a way to send the resends only to those hosts that are still actively crunching and have been returning their work in the last few days. Maybe a week or something similar. And if possible in small batches of files only, something like 10 WUs max per host.
The results in the field are already distributed, but a lot will expire, and if they are sent again to non-crunching hosts (i.e. a host with an AV problem, a bunker, etc.) it will be a long wait until they reach the deadline again.
And BTW the deadline must be reduced to the minimum possible to expedite the crunch (by making the host enter panic mode in case it runs multiple projects). Something radical like 3 days for a GPU WU and 5-7 days for a CPU WU.
My 0.02

Hi Juan,

I would not agree with that statement. That would not be fair to those of us who do NOT spoof (cheat) as Ville Saari does. He has over 70K STILL IN PROGRESS! Why allow him to get ALL the resends and not let any through to anyone else? I still have BOINC set to get anything I can from SETI, be it GPU or CPU tasks. If only active hosts are allowed to get resends, then as I said, it is not fair to those of us who do not cheat.

[edit] And, you cannot tell me that Ville Saari is not cheating by spoofing. One of his PCs is said to have 1337 GPUs!!!!!!!!!! He is CHEATING!!!!!!! [/edit]
[edit2] The last time I looked at his PC list, that PC said it had 64 GPUs. He increased the spoofing variable value (or whatever it is called in the software) just so he could get the lion's share of any tasks ready to send. [/edit2]


Have a great day! :)

Siran
CAPT Siran d'Vel'nahr XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
ID: 2044506
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044503 - Posted: 13 Apr 2020, 11:49:43 UTC
Last modified: 13 Apr 2020, 12:05:35 UTC

I agree with Richard, bunkering is not a good option in these last days.
IMHO what needs to be done is find a way to send the resends only to those hosts that are still actively crunching and have been returning their work in the last few days. Maybe a week or something similar. And if possible in small batches of files only, something like 10 WUs max per host.
The results in the field are already distributed, but a lot will expire, and if they are sent again to non-crunching hosts (i.e. a host with an AV problem, a bunker, etc.) it will be a long wait until they reach the deadline again.
And BTW the deadline must be reduced to the minimum possible to expedite the crunch (by making the host enter panic mode in case it runs multiple projects). Something radical like 3 days for a GPU WU and 5-7 days for a CPU WU.
My 0.02
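The policy sketched above could be expressed roughly like this. This is a hypothetical illustration of the proposal, not actual BOINC scheduler code; the function and field names are made up, and the one-week window and 10-WU cap are the numbers from the post:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the proposed resend policy: only hosts that
# contacted the project within the last week get resends, capped per host.
ACTIVE_WINDOW = timedelta(days=7)
MAX_RESENDS_PER_HOST = 10

def eligible_for_resends(last_contact: datetime, wus_in_progress: int,
                         now: datetime) -> int:
    """Return how many resend WUs this host may receive (0 if inactive)."""
    if now - last_contact > ACTIVE_WINDOW:
        return 0  # host went dark: don't park resends in a black hole
    return max(0, MAX_RESENDS_PER_HOST - wus_in_progress)

now = datetime(2020, 4, 13)
print(eligible_for_resends(datetime(2020, 4, 12), 3, now))   # active host -> 7
print(eligible_for_resends(datetime(2020, 3, 30), 0, now))   # inactive host -> 0
```

The per-host cap is what makes it impossible for one host to take all the resends, which is the point of contention in the replies above.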
ID: 2044503
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14114
Credit: 200,643,578
RAC: 1,983
United Kingdom
Message 2044497 - Posted: 13 Apr 2020, 10:47:58 UTC

Well, I think we can say the Eagle has landed.
Results received in last hour **      0     74   5,762   0m
Workunits waiting for validation      0      0       0   0m
Workunits waiting for assimilation    0      0       2   0m
Workunit files waiting for deletion   0      0       1   0m
Result files waiting for deletion     0      0      12   0m
Workunits waiting for db purging      0     60   5,498   0m
Results waiting for db purging        0    669  60,657   0m
[SSP as of 13 Apr 2020, 10:30:04 UTC]

The key one is 'assimilation' - that's been effectively zero for a while, while late results continue to trickle in. Like Grant, my personal Valid list shows all 'valid tasks with all workunit tasks reported' have been processed: those that are left are from the period at the end of March when extra replications were created, and some have not yet been returned.

So the message to all bunkerers is: "Thank you for keeping out of the way while the servers recovered from their overload. But that phase is now over. If any of your computers now shows tasks in progress on the web page, please check it and act accordingly."

* if you have switched to another project, please switch at least some resources back to SETI to help finish the run
* if you have the tasks, and the computer is idle, please restart it
* if you don't have the tasks - if they're ghosts - try fetching work at various times of day to see if you can recover them

If you don't take some sort of action like that, you're now part of the problem, rather than part of the project you claim to support.
ID: 2044497
Profile Link
Joined: 18 Sep 03
Posts: 833
Credit: 1,807,369
RAC: 1
Germany
Message 2044496 - Posted: 13 Apr 2020, 10:43:49 UTC - in response to Message 2044490.  
Last modified: 13 Apr 2020, 10:46:24 UTC

when I'm really returning 1500 to 2000 results a day.

So due to cheating you still have a cache for well over a month (currently 73,055 tasks), and you are surprised that people blame you?
ID: 2044496
Ville Saari
Joined: 30 Nov 00
Posts: 1119
Credit: 48,373,696
RAC: 74,889
Finland
Message 2044490 - Posted: 13 Apr 2020, 10:23:05 UTC

The db purger seems to still be in the trigger-happy mode it was in back when the db was heavily bloated, so we don't see our returned tasks for 24 hours like we used to before the db bloat issues started. Any workunit that gets its last result returned is purged almost immediately :(

That's probably the reason why people blamed me for not returning anything when I'm really returning 1500 to 2000 results a day.
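The difference Ville describes — purging a completed workunit immediately instead of holding it visible for a day — can be sketched as a grace-period check. This is a hypothetical illustration, not the actual SETI@home db_purge code; the function name and parameters are made up:

```python
# Hypothetical sketch: a purger with a grace period keeps completed
# workunits visible for a while; a "trigger-happy" purger uses zero grace.
GRACE_PERIOD_HOURS = 24  # roughly what users were used to seeing

def should_purge(all_results_returned: bool, hours_since_last_return: float,
                 grace_hours: float = GRACE_PERIOD_HOURS) -> bool:
    """A workunit is purgeable once complete and past the grace period."""
    return all_results_returned and hours_since_last_return >= grace_hours

print(should_purge(True, 2.0))                  # normal mode -> False (still visible)
print(should_purge(True, 2.0, grace_hours=0))   # trigger-happy mode -> True
```

With a zero grace period, a returned task vanishes from the web pages almost at once, which would explain why an active host can look like it is returning nothing.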
ID: 2044490
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 12990
Credit: 208,696,464
RAC: 690
Australia
Message 2044489 - Posted: 13 Apr 2020, 10:11:29 UTC

Finally. My Valid list is now just down to those waiting on Tasks to be returned.
Grant
Darwin NT
ID: 2044489
Ville Saari
Joined: 30 Nov 00
Posts: 1119
Credit: 48,373,696
RAC: 74,889
Finland
Message 2044487 - Posted: 13 Apr 2020, 9:48:58 UTC

When I look at my own pending tasks, I can find lots and lots of computers that stopped contacting the servers at the turn of the month without finishing their caches. And almost all of them are running Windows. I guess a lot of the 2 million tasks still out in the field are in those black holes waiting for their deadlines.
ID: 2044487
Ville Saari
Joined: 30 Nov 00
Posts: 1119
Credit: 48,373,696
RAC: 74,889
Finland
Message 2044482 - Posted: 13 Apr 2020, 9:41:17 UTC - in response to Message 2044479.  

Another computer with a very high number of tasks in progress (nearly 30k):

https://setiathome.berkeley.edu/show_host_detail.php?hostid=8568062

Wish I could get some ...
That one is at least actively crunching so the tasks are not in a black hole.
ID: 2044482
BetelgeuseFive Project Donor
Volunteer tester

Joined: 6 Jul 99
Posts: 157
Credit: 17,117,787
RAC: 42
Netherlands
Message 2044479 - Posted: 13 Apr 2020, 9:34:39 UTC

Another computer with a very high number of tasks in progress (nearly 30k):

https://setiathome.berkeley.edu/show_host_detail.php?hostid=8568062

Wish I could get some ...

Tom
ID: 2044479
BetelgeuseFive Project Donor
Volunteer tester

Joined: 6 Jul 99
Posts: 157
Credit: 17,117,787
RAC: 42
Netherlands
Message 2044477 - Posted: 13 Apr 2020, 9:14:22 UTC

Well, it seems the assimilation queue has finally been depleted:

Workunits waiting for assimilation 0 0 5 0m

Still over 2 million results out in the field, though.

Tom
ID: 2044477

©2020 University of California
SETI@home and AstroPulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.