Message boards : Number crunching : GPU task limits
HAL9000 | Joined: 11 Sep 99 | Posts: 6534 | Credit: 196,805,888 | RAC: 57
It looks like they are working on getting the GPU limits that prevent starvation working correctly. My systems with ATI & Intel GPUs are now at the limit with 200 tasks instead of 300, as it was meant to be. Looking at other machines, it looks like it went from (limit * total # of GPUs) to (limit * # of GPUs from a single vendor) at the moment. So those with multiple GPUs from the same vendor are still getting a higher limit, as before. They might still be working on it, so it could go to (limit * # of GPU vendors), which is what I think the original purpose of changing the BOINC code was.
SETI@home classic workunits: 93,865 | CPU time: 863,447 hours | Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
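To put the formulas above in concrete terms, here is a minimal sketch (Python, purely illustrative; neither the function names nor the host make-up come from the actual BOINC server code) of how each policy plays out for a hypothetical host with 2 ATI GPUs and 1 Intel iGPU:
[code]
# Illustrative sketch only; not actual BOINC server code.
PER_GPU_LIMIT = 100  # the project's per-GPU in-progress task limit

def old_limit(gpus_by_vendor):
    # (limit * total # of GPUs): every GPU counts, whatever the vendor
    return PER_GPU_LIMIT * sum(gpus_by_vendor.values())

def current_limit(gpus_by_vendor):
    # (limit * # of GPUs from a single vendor): the largest vendor wins
    return PER_GPU_LIMIT * max(gpus_by_vendor.values())

def per_vendor_limit(gpus_by_vendor):
    # (limit * # of GPU vendors): one quota per vendor, regardless of count
    return PER_GPU_LIMIT * len(gpus_by_vendor)

host = {"ati": 2, "intel": 1}    # hypothetical mixed-vendor host
print(old_limit(host))           # 300
print(current_limit(host))       # 200
print(per_vendor_limit(host))    # 200
[/code]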
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
Hope that doesn't cause starvation in the reverse direction... If my GPUs start sitting around half the day waiting, I'll be forced to move them to projects where they can feed.
Edit: Which then would make me wonder about using more advanced GPUs... I'd have to consider using slower GPUs, or just build a rig with lower-end GPUs that is tailored to the amount of work units given out.
Phil Burden | Joined: 26 Oct 00 | Posts: 264 | Credit: 22,303,899 | RAC: 0
[quote]It looks like they are working on getting the GPU limits that prevent starvation working correctly. My systems with ATI & Intel GPUs are now at the limit with 200 tasks instead of 300, as it was meant to be.[/quote]
It's been 100 per CPU/GPU for at least a year, from my limited knowledge. I did enable my Intel GPU recently and picked up another 100 WUs. So I guess the limit is what it is ;-)
P.
HAL9000 | Joined: 11 Sep 99 | Posts: 6534 | Credit: 196,805,888 | RAC: 57
[quote]It looks like they are working on getting the GPU limits that prevent starvation working correctly. My systems with ATI & Intel GPUs are now at the limit with 200 tasks instead of 300, as it was meant to be.[/quote]
Yes, the project set a limit of 100 CPU & 100 GPU tasks quite a long while ago. However, somewhat recently (a few months ago?) Dr. Anderson made a change to the BOINC server code. The intended effect was that systems with mixed GPUs would be able to fetch work for each vendor type, so that a machine with, say, ATI & Nvidia would not download 100 tasks for the ATI & leave the Nvidia GPU idle. The change was implemented, but in an unintended way which would still lead to GPU starvation: you could download 200 tasks for the ATI GPU and the Nvidia would still sit idle. It has now been changed so that the starvation issue looks to have been corrected.
Also, you will not pick up another 100 tasks for your ATI card by enabling your Intel GPU, but if you are using your iGPU it should be able to download up to 100 tasks for it to process. If you were to have 2 ATI GPUs it looks like you would still download 200 GPU tasks... for now.
SETI@home classic workunits: 93,865 | CPU time: 863,447 hours | Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
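A toy model of the fetch behavior described above (again an illustration, not the scheduler's real logic): with one shared GPU quota, whichever vendor is served first can consume the whole allowance and starve the other, while a per-vendor quota lets each GPU type fill independently.
[code]
# Toy model of the starvation bug; not actual BOINC scheduler code.
def fetch_shared_quota(vendors, quota):
    """Unintended behavior: a single pool shared by all GPU vendors."""
    assigned = {}
    for v in vendors:  # the first vendor served takes everything
        assigned[v] = quota - sum(assigned.values())
    return assigned

def fetch_per_vendor(vendors, quota):
    """Corrected behavior: an independent quota per vendor type."""
    return {v: quota for v in vendors}

print(fetch_shared_quota(["ati", "nvidia"], 200))  # {'ati': 200, 'nvidia': 0}
print(fetch_per_vendor(["ati", "nvidia"], 100))    # {'ati': 100, 'nvidia': 100}
[/code]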
Grant (SSSF) | Joined: 19 Aug 99 | Posts: 13869 | Credit: 208,696,464 | RAC: 304
[quote]If you were to have 2 ATI GPUs it looks like you would still download 200 GPU tasks...[/quote]
As it should be.
[quote]for now.[/quote]
And hopefully remains so, until they double the numbers. Then double them again. Then once more, so we can once again keep a full cache.
Grant
Darwin NT
betreger | Joined: 29 Jun 99 | Posts: 11421 | Credit: 29,581,041 | RAC: 66
[quote]And hopefully remains so, until they double the numbers. Then double them again.[/quote]
Probably won't happen until something major changes in the database.
Grant (SSSF) | Joined: 19 Aug 99 | Posts: 13869 | Credit: 208,696,464 | RAC: 304
[quote]And hopefully remains so, until they double the numbers. Then double them again.[/quote]
Or they score some serious hardware. Or both.
Grant
Darwin NT
HAL9000 | Joined: 11 Sep 99 | Posts: 6534 | Credit: 196,805,888 | RAC: 57
[quote]And hopefully remains so, until they double the numbers. Then double them again.[/quote]
I think the db limit might be a software issue rather than hardware related. When we hit the limit of results out in the field, the db would just barf all over itself. I don't recall the number, but something in the range of 4-4.5 million seems familiar. Maybe a 40-core 3GHz server with 1TB+ of RAM would solve that, if it is hardware related.
I have not seen any major things pop up on the GPU Users Group equipment fundraiser page in a while, so I would guess their wishlist doesn't include hardware at the moment.
SETI@home classic workunits: 93,865 | CPU time: 863,447 hours | Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Richard Haselgrove | Joined: 4 Jul 99 | Posts: 14680 | Credit: 200,643,578 | RAC: 874
[quote]And hopefully remains so, until they double the numbers. Then double them again.[/quote]
"Results out in the field" reached 10,725,146 on Sunday 4 November 2012, and was rising by a million a day. That was the point when I asked for an emergency shutdown of the splitters - it probably peaked a little higher before the email got through. From memory, it took us about a week and a half to recover from that event.
Details are buried somewhere in http://setiathome.berkeley.edu/forum_thread.php?id=69890 (about three-quarters of the way back to the beginning of the thread).
HAL9000 | Joined: 11 Sep 99 | Posts: 6534 | Credit: 196,805,888 | RAC: 57
[quote]And hopefully remains so, until they double the numbers. Then double them again.[/quote]
With people connecting data centers to SETI@home, we should probably be very glad for the limits.
SETI@home classic workunits: 93,865 | CPU time: 863,447 hours | Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
betreger | Joined: 29 Jun 99 | Posts: 11421 | Credit: 29,581,041 | RAC: 66
[quote]With people connecting data centers to SETI@home, we should probably be very glad for the limits.[/quote]
I find it ironic that the success of this project limits what we as individuals are allowed to contribute.
Grant (SSSF) | Joined: 19 Aug 99 | Posts: 13869 | Credit: 208,696,464 | RAC: 304
[quote]With people connecting data centers to SETI@home, we should probably be very glad for the limits.[/quote]
Success brings its own problems, hence my suggestion of improved hardware and/or software to deal with it. Server-side limits on hosts are just a workaround, not a solution.
Grant
Darwin NT
FalconFly | Joined: 5 Oct 99 | Posts: 394 | Credit: 18,053,892 | RAC: 0
The host limit got me into another issue a few times: on a system with mixed GPUs, with the host at the limit for tasks in progress, it consistently favored refilling a slower Nvidia card while letting a 4x-as-powerful ATI card basically run dry - leaving the Nvidia card heavily overcommitted on tasks and the fast card eventually running idle. I had to manually micromanage it via host venues to set up ATI-only tasks, just to refill it occasionally.
I often wished that workunits went into BOINC without being hard-coded to the platform (specific GPU type or CPU) they'll complete on; the binding would then occur when the task is launched. Or something like a local workunit pool, as in the old SETI classic days (I don't remember the name of the utility that allowed setting that up). All that would have saved me a high number of sleepless hours micromanaging frequent over/undercommitments of GPUs.
Grant (SSSF) | Joined: 19 Aug 99 | Posts: 13869 | Credit: 208,696,464 | RAC: 304
[quote]Or something like a local workunit pool, as in the old SETI classic days (I don't remember the name of the utility that allowed setting that up).[/quote]
SETI Queue was the one I used, from memory.
Grant
Darwin NT
juan BFP | Joined: 16 Mar 07 | Posts: 9786 | Credit: 572,710,851 | RAC: 3,799
@FalconFly
There is an easy way to solve your problem: run 2 instances of BOINC on the host. On each one you allow work for only one of your GPUs; then you will be able to maintain the cache filled separately, with WUs for each GPU.
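For anyone curious what this looks like in practice, here is one possible sketch for a host with one ATI and one Nvidia card. The directory layout and port number are placeholders, and the option names should be double-checked against the BOINC client configuration documentation before relying on them:
[code]
<!-- instance1/cc_config.xml: this client ignores ATI device 0 -->
<cc_config>
  <options>
    <ignore_ati_dev>0</ignore_ati_dev>
  </options>
</cc_config>

<!-- instance2/cc_config.xml: this client ignores Nvidia device 0 -->
<cc_config>
  <options>
    <ignore_nvidia_dev>0</ignore_nvidia_dev>
  </options>
</cc_config>
[/code]
The second client would then be started with its own data directory and RPC port, something like "boinc --dir /path/to/instance2 --allow_multiple_clients --gui_rpc_port 31418", so that each instance fetches and caches work for its own GPU independently.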
HAL9000 | Joined: 11 Sep 99 | Posts: 6534 | Credit: 196,805,888 | RAC: 57
[quote]There is an easy way to solve your problem: run 2 instances of BOINC on the host.[/quote]
Not required, with the per-vendor limits being fixed.
SETI@home classic workunits: 93,865 | CPU time: 863,447 hours | Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
juan BFP | Joined: 16 Mar 07 | Posts: 9786 | Credit: 572,710,851 | RAC: 3,799
[quote]Not required, with the per-vendor limits being fixed.[/quote]
Could be, but if you carefully know how to configure & use the 2 instances, you could actually double the buffer capacity for each GPU (from 100 per GPU to 200 per GPU), and you don't have to care about one GPU being faster than the other, so you can optimize the parameters for each GPU separately and squeeze the best out of each model. This option gives you better control of your resources, especially if your GPUs are different models. Just my 2 cents.
<edit> BTW, I'm not saying it's easy to do, but it really works.
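As a sketch of the per-GPU tuning mentioned here, each instance could carry its own app_config.xml in the project directory; the app name and values below are placeholders to be tuned per GPU model, not recommendations:
[code]
<!-- app_config.xml for one instance (all values illustrative) -->
<app_config>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>   <!-- run 2 tasks at once on this GPU -->
      <cpu_usage>0.25</cpu_usage>  <!-- CPU fraction reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>
[/code]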
FalconFly | Joined: 5 Oct 99 | Posts: 394 | Credit: 18,053,892 | RAC: 0
Agreed. I only took a quick glimpse at running multiple BOINC instances, but refrained from it for the sake of reliability and easy management. I don't like fiddling around that deep anymore; I'd rather get along with the established BOINC standards.
The issue was just new to me, as I've never before pushed a mixed host so hard against the task limit as I did for the WoW!2014 race ;)
Now I know what to do, and next time it could indeed already be fixed :)
Cruncher-American | Joined: 25 Mar 02 | Posts: 1513 | Credit: 370,893,186 | RAC: 340
FYI: I first noticed a change in the # of WUs sent to me around May 12; the change was going from a total of 100 WUs for GPU computing to 100/GPU (I use dual NV GPUs in my 2 crunchers).
HAL9000 | Joined: 11 Sep 99 | Posts: 6534 | Credit: 196,805,888 | RAC: 57
[quote]FYI: I first noticed a change in the # of WUs sent to me around May 12; the change was going from a total of 100 WUs for GPU computing to 100/GPU (I use dual NV GPUs in my 2 crunchers).[/quote]
I guess you missed the thread "Did the work unit limit change?" & the other places it was mentioned back then?
SETI@home classic workunits: 93,865 | CPU time: 863,447 hours | Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]