Limits

Message boards : Number crunching : Limits

1 · 2 · Next

Lionel

Send message
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1311995 - Posted: 7 Dec 2012, 0:30:58 UTC

Are the limits still in place ???
ID: 1311995 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1311997 - Posted: 7 Dec 2012, 0:33:10 UTC - in response to Message 1311995.  

Are the limits still in place ???

Until otherwise noted.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1311997 · Report as offensive
Lionel

Send message
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1312005 - Posted: 7 Dec 2012, 0:56:42 UTC - in response to Message 1311997.  


oh well, that just means a lot of boxes are going to keep knocking on the front door adding to the general level of congestion whilst they can't get work ...
ID: 1312005 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1312042 - Posted: 7 Dec 2012, 5:07:51 UTC - in response to Message 1312005.  


oh well, that just means a lot of boxes are going to keep knocking on the front door adding to the general level of congestion whilst they can't get work ...

The limits were put in place because there were too many results in one of the database tables, so any time someone requested or reported work there was a good chance of a timeout, or something along those lines. It was announced that there is a plan to make workunits/tasks 4 times their current size, but that development will take a few months. So we might be sitting on the 100-task limit until then.

Other projects have a hard limit on tasks in progress and they are doing fine, so I don't really see an issue, other than my 24-core box that can only get a cache of about 12 hours. I have a backup project or two, so really my machines are good.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1312042 · Report as offensive
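
For anyone wondering what such a cap actually does on the scheduler side, here is a rough Python sketch of the idea; it is not the real BOINC scheduler code, and the 100/100 figures are simply the limits discussed in this thread.

[code]
# Simplified model of a per-host "tasks in progress" cap (illustrative only,
# not the actual BOINC scheduler). Limits as discussed in this thread:
# 100 CPU tasks plus another 100 if the host has a usable GPU.
CPU_LIMIT = 100
GPU_LIMIT = 100

def tasks_to_send(req_cpu, req_gpu, in_progress_cpu, in_progress_gpu, has_gpu):
    """How many new tasks the server would hand out for each resource."""
    send_cpu = min(req_cpu, max(0, CPU_LIMIT - in_progress_cpu))
    send_gpu = min(req_gpu, max(0, GPU_LIMIT - in_progress_gpu)) if has_gpu else 0
    return send_cpu, send_gpu

# A fast GPU host already holding 100 GPU tasks gets nothing more for the GPU,
# no matter how large a cache it asks for.
print(tasks_to_send(req_cpu=0, req_gpu=500,
                    in_progress_cpu=40, in_progress_gpu=100, has_gpu=True))  # -> (0, 0)
[/code]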
Ianab
Volunteer tester

Send message
Joined: 11 Jun 08
Posts: 732
Credit: 20,635,586
RAC: 5
New Zealand
Message 1312086 - Posted: 7 Dec 2012, 8:31:44 UTC - in response to Message 1312042.  

Exactly, with the limits in place hopefully the database stays small enough that it's got a good response time and is stable. So all those extra requests can at least be handled, and the system keeps humming along. The only thing missing is the big cache for the high-end boxes.

OR - they can open the floodgates, and let folks fill 10-day caches, have the database grow back to 6 million WUs, and grind to a halt again.

I know which option I'd go with.

As Hal says, there is a medium-term plan of creating bigger WUs, which is sensible as machines are now more powerful than when the current WUs were designed. This then shrinks the database to 25% of its current size, and things are back under control until the next iteration of Moore's law (a few years?).

Ian
ID: 1312086 · Report as offensive
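
A quick back-of-the-envelope check of that 25% figure, using the rough 6 million number quoted above (illustrative arithmetic only):

[code]
# If each workunit is made 4x longer, the same crunching throughput produces
# 1/4 as many results, so the in-flight result table shrinks proportionally.
results_in_flight = 6_000_000   # rough table size mentioned above (illustrative)
size_factor = 4                 # planned increase in workunit length
print(results_in_flight // size_factor)   # -> 1500000 rows at the same throughput
[/code]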
Mark Lybeck

Send message
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1315675 - Posted: 15 Dec 2012, 18:53:28 UTC

100 WUs currently means 4 hours of buffer on one of my hosts. 100 WUs for the CPU is enough, but not for the GPU. 1000 WUs would mean 40 hours for the GPU. 24 hours' worth of buffer would in most cases be enough to cover the weekly maintenance break, but 4 hours is too little.

ID: 1315675 · Report as offensive
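
Mark's figures imply roughly 100 / 4 = 25 tasks per hour on that GPU host. A rough sketch of the cache size needed to cover a given outage, using his numbers:

[code]
# Cache size needed to ride out an outage at a steady crunch rate.
def tasks_needed(tasks_per_hour, outage_hours):
    return int(tasks_per_hour * outage_hours)

gpu_rate = 100 / 4                  # ~25 tasks/hour, from the 4-hour figure above
print(tasks_needed(gpu_rate, 24))   # -> 600 tasks to cover a 24-hour break
print(tasks_needed(gpu_rate, 40))   # -> 1000 tasks, i.e. the 40-hour estimate
[/code]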
kittyman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Avatar

Send message
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1315677 - Posted: 15 Dec 2012, 18:55:50 UTC - in response to Message 1315675.  

100 WUs currently means 4 hours of buffer on one of my hosts. 100 WUs for the CPU is enough, but not for the GPU. 1000 WUs would mean 40 hours for the GPU. 24 hours' worth of buffer would in most cases be enough to cover the weekly maintenance break, but 4 hours is too little.

Wayyyyyyy too little for the kitty crunching farm!
1000 would work until things get sorted and limits can be lifted again.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1315677 · Report as offensive
Mark Stevenson (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Avatar

Send message
Joined: 8 Sep 11
Posts: 1736
Credit: 174,899,165
RAC: 91
United Kingdom
Message 1315689 - Posted: 15 Dec 2012, 19:20:43 UTC

+1
ID: 1315689 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1315735 - Posted: 15 Dec 2012, 20:58:39 UTC - in response to Message 1312086.  

OR - they can open the floodgates, and let folks fill 10-day caches, have the database grow back to 6 million WUs, and grind to a halt again

Or better yet, they reduce the limit to 50, but make it per core / per GPU instance.
That way people will actually be able to cache more than a couple of hours' work, but won't be able to get 10+ days' worth.
The database remains small & people are able to cache work. Sounds good to me.

Grant
Darwin NT
ID: 1315735 · Report as offensive
Profile zoom3+1=4
Volunteer tester
Avatar

Send message
Joined: 30 Nov 03
Posts: 65745
Credit: 55,293,173
RAC: 49
United States
Message 1315900 - Posted: 16 Dec 2012, 7:02:26 UTC - in response to Message 1315735.  
Last modified: 16 Dec 2012, 7:02:46 UTC

OR - they can open the floodgates, and let folks fill 10-day caches, have the database grow back to 6 million WUs, and grind to a halt again

Or better yet, they reduce the limit to 50, but make it per core / per GPU instance.
That way people will actually be able to cache more than a couple of hours' work, but won't be able to get 10+ days' worth.
The database remains small & people are able to cache work. Sounds good to me.

That would work for Me, the 50 per gpu that is.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1315900 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1315905 - Posted: 16 Dec 2012, 7:49:23 UTC - in response to Message 1315900.  

That would work for Me, the 50 per gpu that is.

As long as it's per instance.
i.e. my GTX 560 Ti runs 3 at a time, so it should get 150; the GTX 460 runs 2 at a time, so it would get 100. My Core 2 Duo would get 100, my i7 would get 400.
Ideally the GPUs should get 5 times the number the CPUs get, but the way things are, that'd be too much to hope for.

Grant
Darwin NT
ID: 1315905 · Report as offensive
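
Grant's arithmetic, spelled out (a sketch of the proposal only; the 50-per-slot figure and the concurrency counts are taken from his posts above):

[code]
# Grant's proposed cap: 50 tasks per CPU core and 50 per concurrent GPU task slot.
PER_SLOT = 50

print(PER_SLOT * 3)   # GTX 560 Ti running 3 at a time -> 150
print(PER_SLOT * 2)   # GTX 460 running 2 at a time    -> 100
print(PER_SLOT * 2)   # Core 2 Duo, 2 cores            -> 100
print(PER_SLOT * 8)   # i7, 8 threads                  -> 400
[/code]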
kittyman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Avatar

Send message
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1316017 - Posted: 16 Dec 2012, 14:10:54 UTC - in response to Message 1315905.  
Last modified: 16 Dec 2012, 14:11:22 UTC

That would work for Me, the 50 per gpu that is.

As long as it's per instance.
i.e. my GTX 560 Ti runs 3 at a time, so it should get 150; the GTX 460 runs 2 at a time, so it would get 100. My Core 2 Duo would get 100, my i7 would get 400.
Ideally the GPUs should get 5 times the number the CPUs get, but the way things are, that'd be too much to hope for.

Not positive, but I don't think there is a mechanism in BOINC to tell the servers how many WUs you run concurrently on a given GPU, just the number of active GPUs in the rig.
So I would still vote for the 1000 per active GPU.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1316017 · Report as offensive
juan BFP (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Avatar

Send message
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1316049 - Posted: 16 Dec 2012, 15:30:09 UTC - in response to Message 1316017.  
Last modified: 16 Dec 2012, 15:35:46 UTC

So I would still vote for the 1000 per active GPU.

I'm with the kitties, 1000/GPU is good; 100 on a 2x690 host, for example, is simply ridiculous.

PS: I hate limits!
ID: 1316049 · Report as offensive
Profile zoom3+1=4
Volunteer tester
Avatar

Send message
Joined: 30 Nov 03
Posts: 65745
Credit: 55,293,173
RAC: 49
United States
Message 1316071 - Posted: 16 Dec 2012, 15:49:52 UTC - in response to Message 1316049.  

So I would still vote for the 1000 per active GPU.

I'm with the kitties, 1000/GPU is good; 100 on a 2x690 host, for example, is simply ridiculous.

PS: I hate limits!

Doesn't everybody?
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1316071 · Report as offensive
Profile Khangollo
Avatar

Send message
Joined: 1 Aug 00
Posts: 245
Credit: 36,410,524
RAC: 0
Slovenia
Message 1316083 - Posted: 16 Dec 2012, 16:10:50 UTC

100 WUs is not even enough for a little GT 440 for 1 day, BTW.
ID: 1316083 · Report as offensive
Team kizb

Send message
Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1316089 - Posted: 16 Dec 2012, 16:23:17 UTC

So is this 100 WU limit per computer or per account?
My Computers:
█ Blue Offline
█ Green Offline
█ Red Offline
ID: 1316089 · Report as offensive
Profile ivan
Volunteer tester
Avatar

Send message
Joined: 5 Mar 01
Posts: 783
Credit: 348,560,338
RAC: 223
United Kingdom
Message 1316091 - Posted: 16 Dec 2012, 16:24:31 UTC - in response to Message 1316089.  

So is this 100 WU limit per computer or per account?

Per computer, plus another 100 if it has at least one usable GPU.
ID: 1316091 · Report as offensive
Team kizb

Send message
Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1316094 - Posted: 16 Dec 2012, 16:28:29 UTC - in response to Message 1316091.  

So is this 100 WU limit per computer or per account?

Per computer, plus another 100 if it has at least one usable GPU.


So currently the most I'd be able to get is 100 WUs for the CPU and 100 WUs for GPUs regardless of the number of GPUs in the computer?
My Computers:
█ Blue Offline
█ Green Offline
█ Red Offline
ID: 1316094 · Report as offensive
Profile Bill G (Special Project $75 donor)
Avatar

Send message
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1316095 - Posted: 16 Dec 2012, 16:30:25 UTC - in response to Message 1316094.  

So is this 100 WU limit per computer or per account?

Per computer, plus another 100 if it has at least one usable GPU.


So currently the most I'd be able to get is 100 WUs for the CPU and 100 WUs for GPUs regardless of the number of GPUs in the computer?

That is correct.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1316095 · Report as offensive
mikeej42

Send message
Joined: 26 Oct 00
Posts: 109
Credit: 791,875,385
RAC: 9
United States
Message 1316096 - Posted: 16 Dec 2012, 16:30:58 UTC - in response to Message 1316091.  

So is this 100 WU limit per a computer or per an account?

Per computer, plus another 100 if it has at least one usable GPU.


Hopefully they never change to a per-account limit system. For those of us stuck with CPU-only systems, that would be a pretty serious restriction.
ID: 1316096 · Report as offensive



 