Did the work unit limit change?

Message boards : Number crunching : Did the work unit limit change?

Profile mr.mac52
Joined: 18 Mar 03
Posts: 67
Credit: 245,882,461
RAC: 0
United States
Message 1515004 - Posted: 12 May 2014, 18:08:56 UTC

I noticed that my work unit count is above the 600 mark for the first time in a long time. I currently show 837 tasks in progress, when my three systems are usually limited to 600.

I just checked again and I'm at 900 tasks.

What has changed or should I just keep my mouth shut???

John
ID: 1515004
Claggy
Volunteer tester

Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1515015 - Posted: 12 May 2014, 18:40:26 UTC - in response to Message 1515004.  

I noticed that my work unit count is above the 600 mark for the first time in a long time. I currently show 837 tasks in progress, when my three systems are usually limited to 600.

I just checked again and I'm at 900 tasks.

What has changed or should I just keep my mouth shut???

John

There was talk, and a changeset, about changing the limit to per GPU type, i.e. 100 tasks per Nvidia GPU, 100 per ATI/AMD GPU, 100 per Intel GPU, and it looks as if it's in place.
All three of your hosts have dual GPUs (from the same vendor), and they all have just short of 300 tasks. I wasn't quite expecting that, but it's a move in the right direction (if it works correctly).
Now we need to see a host with GPUs from multiple vendors.

Claggy
ID: 1515015
MikeN

Joined: 24 Jan 11
Posts: 319
Credit: 64,719,409
RAC: 85
United Kingdom
Message 1515031 - Posted: 12 May 2014, 19:05:01 UTC - in response to Message 1515015.  

That's probably why I cannot get any WUs at present: all the big hosts which were limited to 200 are grabbing them all.
ID: 1515031
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1515036 - Posted: 12 May 2014, 19:13:19 UTC
Last modified: 12 May 2014, 19:23:19 UTC

I'm getting HTTP errors, and the "crickets" have died after a massive initial splurge....
One of my crunchers was reporting about 600 "in progress", but this has dropped back to a more normal 200ish (and a rough count agrees with ~200)




[edit to cure finger trouble]
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1515036
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1515041 - Posted: 12 May 2014, 19:17:51 UTC - in response to Message 1515015.  

My three hosts have two cards apiece and they are still stuck below 200.
ID: 1515041
Claggy
Volunteer tester

Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1515079 - Posted: 12 May 2014, 19:58:12 UTC
Last modified: 12 May 2014, 20:33:50 UTC

I re-enabled Intel GPU work fetch and got my i5-3210M/GT650M/Intel_Graphics_HD4000 to ask for work. It now has 300 tasks, but they are a mix of CPU and Nvidia tasks, with none for the Intel GPU,
so it's probably a project-side scheduler limit change, rather than a scheduler change that introduces limits per vendor.

http://setiathome.berkeley.edu/results.php?hostid=7054027&offset=0&show_names=0&state=0&appid=0

Edit: I've reported this to the boinc_dev list, the changeset was:

http://boinc.berkeley.edu/gitweb/?p=boinc-v2.git;a=commit;h=9889ee8fb64f52e145f889b8fb62c6004f293f0a

scheduler: enforce GPU job limits separately for each GPU type

Previously, if a project specified a limit on GPU jobs in progress,
it would be enforced across GPU types.
This could lead to starvation for hosts with multiple GPU types.
E.g. the limit is 10, and a host has 10 NVIDIA jobs and no AMD jobs.

Fix this by enforcing limits separately for each GPU type.


Claggy
ID: 1515079
Dave Stegner
Volunteer tester
Joined: 20 Oct 04
Posts: 540
Credit: 65,583,328
RAC: 27
United States
Message 1515101 - Posted: 12 May 2014, 20:20:04 UTC

Claggy,

I have been meaning to ask someone. I recently started a new machine crunching:
http://setiathome.berkeley.edu/show_host_detail.php?hostid=7272773
4 core i5.

Is the limit per processor or per core?

I am getting a limit of 100, which equates to about 5 days of AP (5 hr × 100 tasks ÷ 4 cores ÷ 24 hr ≈ 5.2 days).

Seems like number of cores should be in the equation.

TIA
Dave

ID: 1515101
Batter Up
Joined: 5 May 99
Posts: 1946
Credit: 24,860,347
RAC: 0
United States
Message 1515126 - Posted: 12 May 2014, 20:38:38 UTC - in response to Message 1515101.  

Seems like number of cores should be in the equation.
Even a 12-thread i7-3970X takes a long time to crunch WUs, especially when processors have to be used to help the GPUs crunch AP units. I never ran out of CPU work, but regularly ran out of GPU work with the 200 limit. Before my meltdown I ran 8 GPUs.
ID: 1515126
Dave Stegner
Volunteer tester
Joined: 20 Oct 04
Posts: 540
Credit: 65,583,328
RAC: 27
United States
Message 1515128 - Posted: 12 May 2014, 20:42:02 UTC - in response to Message 1515126.  

I don't run GPU, CPU only.

Seems like I should be able to get 10 days' worth.
Dave

ID: 1515128
Batter Up
Joined: 5 May 99
Posts: 1946
Credit: 24,860,347
RAC: 0
United States
Message 1515142 - Posted: 12 May 2014, 21:00:46 UTC - in response to Message 1515128.  

I don't run GPU, CPU only.

Seems like I should be able to get 10 days' worth.

Most people would be timing out tasks if they were given more than 100 WUs per CPU, even a very fast one. Most people crunch casually and don't pay much attention to their machines. 10 days is a long time between connections for a serious cruncher. There were no complaints about running out of CPU work, but many about GPU work.
ID: 1515142
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1515143 - Posted: 12 May 2014, 21:01:25 UTC - in response to Message 1515079.  

I re-enabled Intel GPU work fetch and got my i5-3210M/GT650M/Intel_Graphics_HD4000 to ask for work. It now has 300 tasks, but they are a mix of CPU and Nvidia tasks, with none for the Intel GPU,
so it's probably a project-side scheduler limit change, rather than a scheduler change that introduces limits per vendor.

http://setiathome.berkeley.edu/results.php?hostid=7054027&offset=0&show_names=0&state=0&appid=0

Edit: I've reported this to the boinc_dev list, the changeset was:

http://boinc.berkeley.edu/gitweb/?p=boinc-v2.git;a=commit;h=9889ee8fb64f52e145f889b8fb62c6004f293f0a

scheduler: enforce GPU job limits separately for each GPU type

Previously, if a project specified a limit on GPU jobs in progress,
it would be enforced across GPU types.
This could lead to starvation for hosts with multiple GPU types.
E.g. the limit is 10, and a host has 10 NVIDIA jobs and no AMD jobs.

Fix this by enforcing limits separately for each GPU type.


Claggy

Two of my Dual ATI Hosts have just gone above 200...
http://setiathome.berkeley.edu/results.php?hostid=6796475
http://setiathome.berkeley.edu/results.php?hostid=6796479
The third one is an old host I'm currently reviving, which seems to have scheduling issues, but it is receiving more GPU tasks...
ID: 1515143
Dave Stegner
Volunteer tester
Joined: 20 Oct 04
Posts: 540
Credit: 65,583,328
RAC: 27
United States
Message 1515149 - Posted: 12 May 2014, 21:05:12 UTC - in response to Message 1515142.  

10 days is a long time between connections for a serious cruncher.


I am not worried about the time between connections; it is the time between work being available.

The project has been very stable lately but, who knows.
Dave

ID: 1515149
Wedge009
Volunteer tester
Joined: 3 Apr 99
Posts: 451
Credit: 431,396,357
RAC: 553
Australia
Message 1515172 - Posted: 12 May 2014, 21:28:55 UTC

The change must have happened overnight for me. I noticed there was a bit of server down-time. I don't see any limit breaks on my hosts yet but I'll check again when I get back from work.
Soli Deo Gloria
ID: 1515172
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1515183 - Posted: 12 May 2014, 21:50:00 UTC - in response to Message 1515142.  

I don't run GPU, CPU only.

Seems like I should be able to get 10 days' worth.

Most people would be timing out tasks if they were given more than 100 WUs per CPU, even a very fast one. Most people crunch casually and don't pay much attention to their machines. 10 days is a long time between connections for a serious cruncher. There were no complaints about running out of CPU work, but many about GPU work.

On my old i7-860 machines, 100 AP tasks do work out to something like an 8-9 day cache. My i5-4670 will run through 100 in more like 5-6 days.

I have voiced my opinion about the 100 CPU task limit several times. When running MB only on a large multi-core system like my 24-core server, 100 tasks can be less than a day's worth of work, just like on some GPUs.
My 24-core server and i7-860s just switch over to their backup projects when we do run out of work, so it isn't a huge deal anyway.

If task limits per CPU socket were an option, that would be nice, but I have since reconfigured BOINC to run 1 instance per NUMA node on my 24-core system, which looks like it may be better to do anyway.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1515183
Profile petri33
Volunteer tester

Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1515191 - Posted: 12 May 2014, 22:05:28 UTC - in response to Message 1515004.  

I'm running a single host.
It has 103 AP and 397 MB tasks in progress.
The 1 CPU and 4 GPUs seem to equal 500 WUs.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1515191
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1515292 - Posted: 13 May 2014, 2:44:29 UTC - in response to Message 1515079.  
Last modified: 13 May 2014, 2:47:19 UTC

I re-enabled Intel GPU work fetch and got my i5-3210M/GT650M/Intel_Graphics_HD4000 to ask for work. It now has 300 tasks, but they are a mix of CPU and Nvidia tasks, with none for the Intel GPU,
so it's probably a project-side scheduler limit change, rather than a scheduler change that introduces limits per vendor.

http://setiathome.berkeley.edu/results.php?hostid=7054027&offset=0&show_names=0&state=0&appid=0

Edit: I've reported this to the boinc_dev list, the changeset was:

http://boinc.berkeley.edu/gitweb/?p=boinc-v2.git;a=commit;h=9889ee8fb64f52e145f889b8fb62c6004f293f0a

scheduler: enforce GPU job limits separately for each GPU type

Previously, if a project specified a limit on GPU jobs in progress,
it would be enforced across GPU types.
This could lead to starvation for hosts with multiple GPU types.
E.g. the limit is 10, and a host has 10 NVIDIA jobs and no AMD jobs.

Fix this by enforcing limits separately for each GPU type.


Claggy

Yeah, something is a bit off. I just noticed my HTPC is up to 300 tasks in progress, up from 200. It looks like even though my ATI card is not asking for work, it is being granted the limit for both cards.
I am going to try sticking it in a venue that has "Use ATI GPU" disabled to see if it goes back down.
The intended change is not what is occurring.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1515292
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1515308 - Posted: 13 May 2014, 3:29:34 UTC
Last modified: 13 May 2014, 3:32:38 UTC

Apparently they finally heard our prayers and changed the limit to 100 WUs per GPU.
ID: 1515308
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1515310 - Posted: 13 May 2014, 3:32:42 UTC - in response to Message 1515292.  

I'd like to know if this is permanent before I go swapping cards around. I think I can get two 6850s and a 7750 to work in my Mac, but I'd have to swap cards in 3 machines to arrive at that goal. I'm really getting tired of swapping cards...
ID: 1515310
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1515311 - Posted: 13 May 2014, 3:34:30 UTC - in response to Message 1515308.  

Apparently they finally heard our prayers and changed the limit to 100 WUs per GPU.

The changeset indicates that it should still be 100 tasks, but per GPU type. So if you have NVIDIA, ATI, and Intel GPUs you could have 300 GPU tasks, but it is not working that way at present.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1515311
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1515346 - Posted: 13 May 2014, 5:12:04 UTC

I've just checked, and my cruncher with two GTX780s has 300 tasks, so it is per GPU.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1515346


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.