Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore

Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore

Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1812541 - Posted: 25 Aug 2016, 17:06:53 UTC - in response to Message 1812538.  

Richard, could you provide a link or the title to that thread.  I'd like to refresh my memory.

Sorry, it's in this thread - but you can't see it here under - ahem - 'current circumstances'.

You can see the UserID of the thread originator on the index page. Open the account page of one of your friends, or of some other poster like me (but not your own account page), and replace the userid in the browser address bar with the one from the index. Then you can read from "Message boards 344 posts".
ID: 1812541
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1812650 - Posted: 26 Aug 2016, 2:17:00 UTC - in response to Message 1812527.  

A few options come to mind.

1.
The CPU limit could be applied based on the host's processor count and a project-determined value. Something along the lines of (host # of processors / 4) * task limit.
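As a rough sketch of that formula (illustrative only, not actual BOINC scheduler code; the baseline of 100 tasks and the divisor of 4 are taken from the formula above):

```python
# Hypothetical sketch of option 1: scale the per-host CPU task limit
# by the host's processor count, using (processors / 4) * task limit.
# The names and the 100-task baseline are illustrative assumptions.

BASE_TASK_LIMIT = 100  # the current flat per-host limit

def cpu_task_limit(num_processors: int) -> int:
    """A 4-core host keeps the old limit; bigger hosts scale up."""
    return max(1, (num_processors // 4) * BASE_TASK_LIMIT)

# e.g. a 4-core host stays at 100, an 8-thread i7 gets 200,
# and a 56-thread machine gets 14 * 100 = 1400.
```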

2.
Perhaps the JobLimits options could be modified to allow limits for specified ranges of host processor counts.

Something along the lines of:
<project>
	<cpu_limit>       <!-- if set, the limit applies to all hosts unless a more specific limit matches -->
		<jobs>N</jobs>
	</cpu_limit>
	<cpu_limit_16>    <!-- if set, the limit applies to hosts with 16+ processors -->
		<jobs>N</jobs>
	</cpu_limit_16>
	<cpu_limit_32>    <!-- if set, the limit applies to hosts with 32+ processors -->
		<jobs>N</jobs>
	</cpu_limit_32>
	<cpu_limit_64>    <!-- if set, the limit applies to hosts with 64+ processors -->
		<jobs>N</jobs>
	</cpu_limit_64>
	<cpu_limit_128>   <!-- if set, the limit applies to hosts with 128+ processors -->
		<jobs>N</jobs>
	</cpu_limit_128>
</project>
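On the scheduler side, picking the applicable tier could look something like this sketch (the tier values are made up for illustration; this is not actual BOINC server code):

```python
# Hypothetical sketch of option 2: given tiered limits like the config
# above, choose the most specific tier that matches the host's
# processor count, falling back to the plain <cpu_limit> value.

TIERS = [  # (minimum processors, jobs limit), most specific first
    (128, 3200),
    (64, 1600),
    (32, 800),
    (16, 400),
]
DEFAULT_JOBS = 100  # the plain <cpu_limit> fallback

def jobs_limit(num_processors: int) -> int:
    for min_procs, jobs in TIERS:
        if num_processors >= min_procs:
            return jobs
    return DEFAULT_JOBS
```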


3.
A graduated max CPU tasks in progress could be derived from the "Number of tasks today" figure for the CPU apps. It might be necessary to take that figure and build an average daily number of tasks from it. Then, using a value set by the project, a dynamic limit could be applied based on how productive the machine actually is, rather than on its reported number of processors.
I think this might be the most complicated method, and with each app version change the average would be reset.
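A sketch of what such a productivity-based dynamic limit might look like (the decay factor, multiplier, and floor are all assumptions for illustration, not project policy):

```python
# Hypothetical sketch of option 3: derive a dynamic cap from a running
# average of tasks completed per day, so productive hosts earn a larger
# cache while everyone keeps at least the flat limit.

class DynamicLimit:
    def __init__(self, scale: float = 0.5, floor: int = 100):
        self.avg_daily = 0.0   # exponentially weighted average of tasks/day
        self.scale = scale     # project-set multiplier on the average
        self.floor = floor     # never drop below the flat per-host limit

    def record_day(self, tasks_completed: int, alpha: float = 0.2) -> None:
        # Blend today's count into the running average.
        self.avg_daily = (1 - alpha) * self.avg_daily + alpha * tasks_completed

    def limit(self) -> int:
        return max(self.floor, int(self.avg_daily * self.scale))
```

An app version change would reset `avg_daily` to zero, which is exactly the drawback noted above: the host falls back to the floor until the average rebuilds.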
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1812650
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 1812676 - Posted: 26 Aug 2016, 3:56:05 UTC - in response to Message 1812650.  

Any increase in allocation limits should come with greater cuts in allocations for systems that have high percentages of invalids/errors relative to their work in progress.
There's no point giving these systems more work to mangle.
Grant
Darwin NT
ID: 1812676
MarkJ Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 17 Feb 08
Posts: 1139
Credit: 80,854,192
RAC: 5
Australia
Message 1812841 - Posted: 26 Aug 2016, 22:28:22 UTC
Last modified: 26 Aug 2016, 22:29:38 UTC

Most of my machines are i7s, so they get 100 / 8 threads = 12.5 WU per thread.

If we expand on that by giving each host 12.5 x its number of threads, it would work for the smaller (single-thread) machines as well as the larger (56-thread) ones. I would suggest we make it something like 15 per thread. It's simple and achievable with the current infrastructure.

A longer-term approach might be to increase that number based upon the average turnaround time if the host is considered reliable. It could also be applied the other way, reducing the number if the host is unreliable.
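A rough sketch of a per-thread limit with a reliability adjustment (the 15-per-thread figure is from above; the error-rate threshold, turnaround threshold, and scaling factors are invented for illustration, not actual scheduler policy):

```python
# Hypothetical sketch: start from a per-thread allowance and scale it
# by host reliability and turnaround. All thresholds are assumptions.

PER_THREAD = 15  # suggested tasks per thread

def host_limit(threads: int, error_rate: float, avg_turnaround_days: float) -> int:
    limit = PER_THREAD * threads
    if error_rate > 0.10:            # unreliable host: cut the allocation
        limit //= 2
    elif avg_turnaround_days < 1.0:  # fast, clean returner: modest bonus
        limit = int(limit * 1.5)
    return max(limit, 1)

# e.g. a clean 8-thread i7 returning work within a day gets 180 tasks,
# while the same machine with 20% errors is cut to 60.
```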
BOINC blog
ID: 1812841
Al Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1812855 - Posted: 26 Aug 2016, 23:35:49 UTC - in response to Message 1812841.  

A longer-term approach might be to increase that number based upon the average turnaround time if the host is considered reliable. It could also be applied the other way, reducing the number if the host is unreliable.

Honestly, I really think this would be the best way: it doesn't care about cores, CPU or GPU speed, or anything else. It just sees that you are returning X results per minute/hour/day/whatever, and ramps the limit up (to a reasonable ceiling, of course) based upon reliability and productivity, which in the end is all that really matters. And as mentioned before, a corresponding decrease for those who are returning pretty much nothing but junk.

ID: 1812855
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1812863 - Posted: 27 Aug 2016, 0:12:12 UTC - in response to Message 1812841.  

Most of my machines are i7s, so they get 100 / 8 threads = 12.5 WU per thread.

If we expand on that by giving each host 12.5 x its number of threads, it would work for the smaller (single-thread) machines as well as the larger (56-thread) ones. I would suggest we make it something like 15 per thread. It's simple and achievable with the current infrastructure.

A longer-term approach might be to increase that number based upon the average turnaround time if the host is considered reliable. It could also be applied the other way, reducing the number if the host is unreliable.

The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1812863
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 1812866 - Posted: 27 Aug 2016, 0:34:35 UTC - in response to Message 1812863.  

The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient.

I don't recall ever having per processor or per core WU limits before.
I do remember them making the GPU limit per GPU instead of for all GPUs.
Grant
Darwin NT
ID: 1812866
BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1812896 - Posted: 27 Aug 2016, 2:28:43 UTC

There are at least 3 ('Advanced') methods I know of to overcome the 100+100 task limits.
Does anyone dare to list them? ;)

- ALF - "Find out what you don't do well ..... then don't do it!" :)
ID: 1812896
Al Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1812916 - Posted: 27 Aug 2016, 3:52:52 UTC - in response to Message 1812896.  

Ruh Roh! We don't want the Banhammer swinging around today now, do we? ;-) lol

ID: 1812916
Dr Who Fan
Volunteer tester
Joined: 8 Jan 01
Posts: 3206
Credit: 715,342
RAC: 4
United States
Message 1812925 - Posted: 27 Aug 2016, 4:40:35 UTC - in response to Message 1812916.  

Ruh Roh! We don't want the Banhammer swinging around today now, do we? ;-) lol

No, we do not want the whole thread vanishing into the ether at Berkeley.
Let's just say a skilled person knowing how & what to look for on their favorite search tool should be able to find the magic ways.
ID: 1812925
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1813062 - Posted: 27 Aug 2016, 21:11:34 UTC - in response to Message 1812866.  

The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient.

I don't recall ever having per processor or per core WU limits before.
I do remember them making the GPU limit per GPU instead of for all GPUs.

I believe it was around the end of 2011 or the start of 2012. I seem to recall that when they first set the task limits they had accidentally set 50 total per host, then changed it to per processor for CPU. After some time the limits were removed, the db went splat again, and then the limits were implemented again.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1813062
AMDave
Volunteer tester

Joined: 9 Mar 01
Posts: 234
Credit: 11,671,730
RAC: 0
United States
Message 1813070 - Posted: 27 Aug 2016, 22:52:43 UTC - in response to Message 1813062.  
Last modified: 27 Aug 2016, 22:53:31 UTC

Some links on the WU limit:

ID: 1813070
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1813082 - Posted: 27 Aug 2016, 23:31:16 UTC - in response to Message 1813070.  

I'd suggest adding message 1307567 to that list.
ID: 1813082
Jeff Buck Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1813085 - Posted: 27 Aug 2016, 23:41:07 UTC

I think it would be useful to know what kind of hit the DB took when the change was made from 100 GPU tasks per host to 100 tasks per GPU. Whatever that increase was, and how well the DB handled it, might be informative for the current discussion. However, I don't think that was ever looked at, or if it was, I don't remember ever seeing it mentioned here.
ID: 1813085
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1813479 - Posted: 29 Aug 2016, 16:57:49 UTC - in response to Message 1813070.  

Some links on the WU limit:


I was thinking of posts a bit older.
1185411
1197674
1229214

The posts I can find where the staff told us the values of the task limits are from 2010. Other posts are just notes like "task limits were raised".
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1813479
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.