Allowed WU by user

Questions and Answers : Getting started : Allowed WU by user
Profile marsinph

Joined: 7 Apr 01
Posts: 118
Credit: 17,466,168
RAC: 38,374
Belgium
Message 1806113 - Posted: 31 Jul 2016, 17:51:08 UTC

I see that some people have 500, 600 or many more WUs in progress. One of them has more than a thousand WUs "waiting" with a single core running under XP!
But those people have a very low RAC, or haven't connected in months! Yet they still receive new tasks on a very slow or disconnected computer, and without any reported tasks!
My computer is connected 24/7, 365 days a year, and my turnaround time is about two days.
But I stay capped at 200 WUs. My setting is 10 days of extra work (the maximum). So how is it possible for a computer to receive thousands of WUs without any results sent for months?
Who can explain how such people can receive so many WUs?
Best regards
Philippe
rob smith
Volunteer tester

Joined: 7 Mar 03
Posts: 14926
Credit: 230,773,308
RAC: 382,726
United Kingdom
Message 1806115 - Posted: 31 Jul 2016, 17:53:34 UTC

There is a limit, imposed by the servers, of 100 tasks for your CPU and 100 tasks per GPU.

Some folks have managed to exceed this limit by various nefarious means, or by virtue of having suffered communications problems at a critical moment.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806135 - Posted: 31 Jul 2016, 18:43:24 UTC - in response to Message 1806115.  

There is a limit, imposed by the servers, of 100 tasks for your CPU and 100 tasks per GPU.
Some folks have managed to exceed this limit by various nefarious means, or by virtue of having suffered communications problems at a critical moment.

Nefarious?!?
For those with more than 12 cores crunching full time (multiple CPUs), a 100-task limit is insufficient for Tuesday's maintenance... so it is totally normal/acceptable that some have found ways around such an unrealistic constraint.

Also, from what I understand, the server doesn't distinguish between having just reached the limit of 100 tasks and having gone well over it.
So if having a queue over 100 is "nefarious", then maybe they should fix the server/client code.

With the S@H WoW event 2016 starting in two weeks, expect the situation to get more "nefarious" with Mr Kevvy's GUPPIrescheduler.exe.
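A rough sizing of the claim above can be sketched as follows; the outage length and per-task runtime are illustrative assumptions, not project figures.

```python
# Tasks a host chews through while the servers are down, i.e. how deep
# its buffer must be to bridge the outage.  Outage length and average
# task runtime are made-up numbers for illustration.

def tasks_consumed(cores: int, outage_hours: float, avg_task_hours: float) -> int:
    """Tasks finished (and thus needing to be buffered) during an outage."""
    return int(cores * outage_hours / avg_task_hours)

print(tasks_consumed(12, 6, 1.0))   # 72  -- still fits under a 100-task cap
print(tasks_consumed(24, 8, 1.5))   # 128 -- a 100-task cap runs dry
```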
rob smith
Volunteer tester

Joined: 7 Mar 03
Posts: 14926
Credit: 230,773,308
RAC: 382,726
United Kingdom
Message 1806142 - Posted: 31 Jul 2016, 19:24:50 UTC

Nefarious?!?
For those with more than 12 cores crunching full time (multiple CPUs), a 100-task limit is insufficient for Tuesday's maintenance... so it is totally normal/acceptable that some have found ways around such an unrealistic constraint.


What you are saying is totally and utterly incorrect, and shows that you have NO respect for the rules of SETI@Home. Such "ways around" are, by the very definition of the word, "nefarious".

If a user finds that they are running out of CPU work during the Tuesday outage, then they should take note of the guidance offered by the project administration team and resort to running back-up projects.


(There is an issue with one particular version of BOINC, which will under some very normal situations cause the servers to continuously deliver tasks, with most of these tasks not being run by their deadline.)
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15468
Credit: 53,909,193
RAC: 13,668
United States
Message 1806169 - Posted: 31 Jul 2016, 20:52:56 UTC - in response to Message 1806135.  

There is a limit, imposed by the servers, of 100 tasks for your CPU and 100 tasks per GPU.
Some folks have managed to exceed this limit by various nefarious means, or by virtue of having suffered communications problems at a critical moment.

Nefarious?!?
For those with more than 12 cores crunching full time (multiple CPUs), a 100-task limit is insufficient for Tuesday's maintenance... so it is totally normal/acceptable that some have found ways around such an unrealistic constraint.

Also, from what I understand, the server doesn't distinguish between having just reached the limit of 100 tasks and having gone well over it.
So if having a queue over 100 is "nefarious", then maybe they should fix the server/client code.

With the S@H WoW event 2016 starting in two weeks, expect the situation to get more "nefarious" with Mr Kevvy's GUPPIrescheduler.exe.


The limit was put into place because the number of outstanding results grew so big that the database was crashing constantly. The project ran out of funds long ago, so they can't afford to hire a full-time database admin to run constant maintenance, nor a modern server with enough CPU power and RAM, nor do they have the funds for a newer database application that supports databases as large as SETI@home's. The only other option at their disposal was to limit the number of workunits in progress.

So before you point the finger at an unrealistic limit or broken server code, understanding the history and the 'why' goes a long way.
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806171 - Posted: 31 Jul 2016, 21:02:52 UTC - in response to Message 1806142.  

nefarious = wicked or criminal.

I obviously don't get your point, as there is nothing "wicked or criminal" about having more than 100 tasks for a specific device... especially for multi-CPU rigs, since BOINC has yet to treat each CPU as a separate device.
If the tasks get processed before their deadline, where's the harm?

FYI, I'm all for having a debate on the issue... but not if the basis of the debate is founded on judgemental language, which "wicked and criminal" certainly is.
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806173 - Posted: 31 Jul 2016, 21:28:38 UTC - in response to Message 1806169.  

The limit was put into place because the number of outstanding results grew so big that the database was crashing constantly. The project ran out of funds long ago, so they can't afford to hire a full-time database admin to run constant maintenance, nor a modern server with enough CPU power and RAM, nor do they have the funds for a newer database application that supports databases as large as SETI@home's. The only other option at their disposal was to limit the number of workunits in progress.

So before you point the finger at an unrealistic limit or broken server code, understanding the history and the 'why' goes a long way.

Thanks for the history, OzzFan.
I still stand by my position that there is currently a situation with multi-CPU rigs that justifies building a cache/queue above the 100-task limit.
This will become an issue that will need to be resolved somehow... and we might want to discuss it now.

Also, there is an additional scenario that I haven't mentioned yet.
With nVidia's Pascal GPUs, we will soon have an app that will make them much more productive... and the day will soon come when a 100-task limit will be too limiting for rigs that can't be connected to the net 24/7/365.
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15468
Credit: 53,909,193
RAC: 13,668
United States
Message 1806234 - Posted: 1 Aug 2016, 1:23:12 UTC - in response to Message 1806173.  

The limit was put into place because the number of outstanding results grew so big that the database was crashing constantly. The project ran out of funds long ago, so they can't afford to hire a full-time database admin to run constant maintenance, nor a modern server with enough CPU power and RAM, nor do they have the funds for a newer database application that supports databases as large as SETI@home's. The only other option at their disposal was to limit the number of workunits in progress.

So before you point the finger at an unrealistic limit or broken server code, understanding the history and the 'why' goes a long way.

Thanks for the history, OzzFan.
I still stand by my position that there is currently a situation with multi-CPU rigs that justifies building a cache/queue above the 100-task limit.
This will become an issue that will need to be resolved somehow... and we might want to discuss it now.

Also, there is an additional scenario that I haven't mentioned yet.
With nVidia's Pascal GPUs, we will soon have an app that will make them much more productive... and the day will soon come when a 100-task limit will be too limiting for rigs that can't be connected to the net 24/7/365.


Not sure this thread is the appropriate place for that discussion. You would really need to grab the attention of the Project Scientists and Project Administrators.

My personal opinion is that there really can't be a discussion unless the project receives more funds to alleviate the issues I outlined previously. I think they have beefier servers now, but I'm not so sure about the stability of the database. I'd rather run out of work than see SETI go down for weeks at a time while they try to recover from corrupted-database issues like we saw about 6-8 years ago.

And don't forget that the project has never guaranteed work to anyone. There are plenty of other worthwhile BOINC projects one can donate processing power to for times when SETI is out of work or unable to give it.
AMDave
Volunteer tester

Joined: 9 Mar 01
Posts: 234
Credit: 6,273,131
RAC: 12,287
United States
Message 1806357 - Posted: 1 Aug 2016, 13:08:12 UTC - in response to Message 1806113.  

Additional info:

And then there's this practice:


So before you point the fingers at an unrealistic limit or broken server code, understanding the history and the 'why' goes a long way.

OzzFan is correct. From day one, the premise of S@h classic, then BOINC, was to utilize idle computing cycles from volunteers (John & Jane Q. Public). Back then, multi-core, multi-socket rigs were not the norm for home use, nor are they today.

Historical perspective
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806569 - Posted: 2 Aug 2016, 7:07:43 UTC - in response to Message 1806357.  

Thanks for the links AMDave.
I had read all of those in the past except for "Bunkering". lol

For those competing in events like WoW, is there a name for the opposite act of caching a huge number of tasks (let's say thousands) with the plan of not processing them before the end of the event? ...so that wingmen don't get points for processing those tasks on their end.

As for the topic of a 100-task limit, I can't wait to see what will happen once Petri's app-in-dev hits the streets.
I think we'll have a quite diff convo then.

Cheers,
Rob :-)
Mark Stevenson (Project Donor)
Volunteer tester
Joined: 8 Sep 11
Posts: 1532
Credit: 131,495,453
RAC: 91,473
United Kingdom
Message 1806570 - Posted: 2 Aug 2016, 7:11:52 UTC
Last modified: 2 Aug 2016, 7:12:18 UTC

For those competing in events like WoW, is there a name for the opposite act of caching a huge number of tasks (let's say thousands) with the plan of not processing them before the end of the event? ...so that wingmen don't get points for processing those tasks on their end.


Yeah, it's exactly the same as the hoarders, and that is "f-ing pink oboe players!!"
Life is what you make of it :-)

When i'm good i'm very good , but when i'm bad i'm shi#eloads better ;-) In't I " buttercups " p.m.s.l at authoritie !!;-)
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806595 - Posted: 2 Aug 2016, 9:11:06 UTC

...and while we've been debating caching more than 100 tasks,
there are 4 PCs that have ~47,000 "in progress"! lol

If situations like that don't set off alarm bells, my guess is the project staff doesn't mind having a few optimizers cache a bit more than they're "allowed", as long as the tasks don't expire.
Mark Stevenson (Project Donor)
Volunteer tester
Joined: 8 Sep 11
Posts: 1532
Credit: 131,495,453
RAC: 91,473
United Kingdom
Message 1806596 - Posted: 2 Aug 2016, 9:21:47 UTC

If situations like that don't set off alarm bells, my guess is the project staff doesn't mind having a few optimizers cache a bit more than they're "allowed", as long as the tasks don't expire.


Maybe being understaffed and underfunded, with the other work they do and their priorities coming first, doesn't mean that they are "happy" about the hoarder situation, and I don't think you should read into it that they don't mind, like you are doing!
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806605 - Posted: 2 Aug 2016, 10:49:01 UTC - in response to Message 1806596.  

doesn't mean that they are "happy" about the hoarder situation, and I don't think you should read into it that they don't mind, like you are doing!

If I understand your definition of "hoarders", it includes someone who is caching, let's say, 200 tasks for a high-end GPU when that 200th task in the queue will be processed within the next 48 hours.

If so, what do you call someone with a very slow rig who sets their BOINC to
"Store up to an additional 10 days of work"? The turnaround is ~5 times longer!
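The comparison can be made concrete with a quick sketch; the per-task runtimes here are invented for illustration, since real rates vary wildly by hardware and app.

```python
# How many days a queue of a given depth lasts for a host that finishes
# one task every `minutes_per_task` minutes.  Runtimes are hypothetical.

def drain_days(num_tasks: int, minutes_per_task: float) -> float:
    """Days needed to work through the queue at a steady rate."""
    return num_tasks * minutes_per_task / (60 * 24)

print(round(drain_days(200, 15), 1))   # 2.1  -- fast GPU host, 200 tasks
print(round(drain_days(100, 144), 1))  # 10.0 -- slow CPU core, 100 tasks
```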
Mark Stevenson (Project Donor)
Volunteer tester
Joined: 8 Sep 11
Posts: 1532
Credit: 131,495,453
RAC: 91,473
United Kingdom
Message 1806610 - Posted: 2 Aug 2016, 11:24:31 UTC - in response to Message 1806605.  
Last modified: 2 Aug 2016, 11:51:23 UTC

doesn't mean that they are "happy" about the hoarder situation, and I don't think you should read into it that they don't mind, like you are doing!

If I understand your definition of "hoarders", it includes someone who is caching, let's say, 200 tasks for a high-end GPU when that 200th task in the queue will be processed within the next 48 hours.

If so, what do you call someone with a very slow rig who sets their BOINC to
"Store up to an additional 10 days of work"? The turnaround is ~5 times longer!


That doesn't make much sense. The limits are 100 tasks for the CPU and 100 tasks per GPU. I've got two computers that have 400 tasks each: 100 on the CPU and 300 for the GPUs, because 100 * 3 is 300 (or it was when I went to school).
By your definition that makes me a hoarder.

A hoarder is a person who works around the server-set limits so they have shi#eloads more tasks than the machine should be allowed to have. It has nothing to do with how fast they get through the work units, and since the limits were imposed, "10 days + 10 extra days" doesn't come into it, because the imposed limits won't allow the server to send that many to most computers.
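The per-host arithmetic described above can be sketched in a few lines; the 100-per-device figure is taken from this thread, and nothing here is actual SETI@home scheduler code.

```python
# Per-host in-progress cap as described in this thread: a flat 100 for
# the CPU (regardless of core count) plus 100 for each GPU.  A sketch of
# the arithmetic only, not official scheduler logic.

def host_task_cap(num_gpus: int, per_device_limit: int = 100) -> int:
    """One CPU allowance plus one allowance per GPU."""
    return per_device_limit * (1 + num_gpus)

print(host_task_cap(0))  # 100: CPU-only host
print(host_task_cap(3))  # 400: 100 for the CPU + 3 x 100 for the GPUs
```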
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806697 - Posted: 3 Aug 2016, 0:26:01 UTC - in response to Message 1806610.  

Hey Mark,
If I understand your definition of "hoarders", it includes someone who is caching, let's say, 200 tasks for a high-end GPU when that 200th task in the queue will be processed within the next 48 hours.

A hoarder is a person who works around the server-set limits so they have shi#eloads more tasks than the machine should be allowed to have. It has nothing to do with how fast they get through the work units, and since the limits were imposed, "10 days + 10 extra days" doesn't come into it, because the imposed limits won't allow the server to send that many to most computers.
"a high-end GPU" = 1 GPU

If the turnaround is half the maximum number of days for really slow PCs (10 / 2 = 5 days),
I don't think it is hoarding by definition, since there is a plan to use the increased cache in the near future.
I think we'll just have to agree to disagree on this one...until the project staff issues a clear statement.
Happy WoW 2016!
Cheers,
Rob :-)
Profile Ageless
Joined: 9 Jun 99
Posts: 14183
Credit: 3,471,282
RAC: 1,498
Netherlands
Message 1806762 - Posted: 3 Aug 2016, 10:22:00 UTC - in response to Message 1806697.  
Last modified: 3 Aug 2016, 11:11:06 UTC

"a high-end GPU" = 1 GPU

BOINC isn't able to tell a high-end GPU from a cheap P.o.S. GPU. While it will read the peak FLOPS the GPU can do, it doesn't do anything with that information, and neither does the project server.

The project is likewise not able to tell a high-end GPU from a cheap P.o.S. GPU.

So for that purpose, it doesn't matter what GPU you have in the system.
Of course a high-end GPU returns the work faster than a cheap P.o.S. GPU, but then it will also refill its cache faster.

There's really no need for even a high-end GPU to have thousands of tasks cached; what if the inevitable happens and that machine dies? Or just another SNAFU happens?

Besides, I'd expect that for a competition you'd want to return your tasks ASAP when done, not have them sit in a cache waiting their turn, only to be reported when the long minimum (store for N days) is met. (Late edit: unless the competition is to have as many tasks in cache as humanly possible...?)
Jord

Ancient Astronaut Theorists suggest that in many ways, you can be considered an alien conspiracy!
rob smith
Volunteer tester

Joined: 7 Mar 03
Posts: 14926
Credit: 230,773,308
RAC: 382,726
United Kingdom
Message 1806810 - Posted: 3 Aug 2016, 15:53:57 UTC

I think for some users the objective is to hoard as many tasks as they can. As you say, Jord, the inevitable will happen: they will have a hard disk fail, or a CPU croak, when they are not in a position to get the replacement up and running before a substantial number of tasks have timed out. This can happen even with a "legal"-sized cache. I recently had a CPU fail ("it'll be covered by warranty"), but after a month the argument is still raging on; I've bought a new CPU and am now entering into a small-claims case against the parties concerned. This failure probably resulted in a couple of hundred tasks timing out (from a cache of 300), but if the cache had been a hoard of 3,000, that figure would have been in the thousands...
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1806851 - Posted: 3 Aug 2016, 19:58:23 UTC - in response to Message 1806810.  

Thanks for the replies.

There's really no need for even a high-end GPU to have thousands of tasks cached

I only took an advocate's position on caching enough, for multi-CPU rigs and very high-end GPUs, to cover the Tuesday maintenance (+ ~6 hrs). I never used "thousands" in that context.

I think for some users the objective is to hoard as many tasks as they can.
Why? How prevalent is this scenario?

The 100-per-device limit is reaching its own limit thanks to new hardware.
I would prefer to see it increased slightly every two months, in increments, and the deadline reduced slightly (also in increments), so that the impact on the database is nil. Any thoughts on the feasibility of such a scenario, or variants of it?

Cheers,
Rob :-)
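One way to reason about why that cap/deadline trade-off could be database-neutral is Little's law: the number of results sitting in the database is roughly the rate at which results are issued times the average time each one stays outstanding. A sketch with made-up numbers:

```python
# Little's law (L = lambda x W): results in flight = issue rate x average
# time outstanding.  Bigger per-host caches stretch turnaround; tighter
# deadlines squeeze it back, so the two knobs can offset each other.
# All figures below are invented for illustration.

def results_in_flight(results_per_day: float, avg_days_outstanding: float) -> float:
    """Steady-state number of result rows the database must hold."""
    return results_per_day * avg_days_outstanding

print(results_in_flight(200_000, 3.0))  # 600000.0 rows in flight
print(results_in_flight(200_000, 1.5))  # 300000.0: same throughput, shorter deadlines
```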
AMDave
Volunteer tester

Joined: 9 Mar 01
Posts: 234
Credit: 6,273,131
RAC: 12,287
United States
Message 1806877 - Posted: 3 Aug 2016, 21:37:32 UTC - in response to Message 1806851.  

The 100-per-device limit is reaching its own limit thanks to new hardware.
I would prefer to see it increased slightly every two months, in increments, and the deadline reduced slightly (also in increments), so that the impact on the database is nil. Any thoughts on the feasibility of such a scenario, or variants of it?

From what I've read, it will not happen prior to SETI's hardware being overhauled. Until then, "it is what it is."

Message 1784211


 
©2017 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.