Message boards :
Number crunching :
Increase the Units Cache amount
Author | Message |
---|---|
t94xr Send message Joined: 22 Mar 12 Posts: 2 Credit: 3,911,149 RAC: 0 |
Hey, I've noticed the project has been down "under maintenance" a lot lately, and my system runs out of units to process before the site comes back online. Is there any way I can increase the number of cached units? My GPU processes a unit in about 9-10 minutes, so I need a larger cache so my machine doesn't finish everything before the project is back up and then sit idle for several hours. t94xr - Taupo, New Zealand. | cameronwalker.nz - server |
Bill G Send message Joined: 1 Jun 01 Posts: 1282 Credit: 187,688,550 RAC: 182 |
No. You can only bank 100 units for the CPU and 100 for each GPU that you might have. SETI@home classic workunits 4,019 SETI@home classic CPU time 34,348 hours |
petri33 Send message Joined: 6 Jun 02 Posts: 1668 Credit: 623,086,772 RAC: 156 |
Hi, My Titan V does guppi vlars in 39-48 seconds. I run out of work in 1 h 20 mins with a 100 WU cache. Shorties take 17 seconds. To overcome Heisenbergs: "You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones |
W3Perl Send message Joined: 29 Apr 99 Posts: 251 Credit: 3,696,783,867 RAC: 12,606 |
There are many ways to solve this problem:
- you can modify the BOINC code to remove this 100 WU limit (as Petri did)
- you can use an old BOINC version which doesn't have this limitation (BOINC 6.10.58, as kittyman does)
- you can use a script to fill the cache before the maintenance (as I do)
You can download my Perl tool cpu2gpu.pl at http://www.w3perl.com/seti/ Hope it helps!
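As a rough illustration of the "fill the cache before the maintenance" idea (this is not W3Perl's cpu2gpu.pl), here is a hedged Python sketch that repeatedly asks the BOINC client to contact the project via the real boinccmd CLI, stopping once a target number of tasks is reported. The project URL, target count, and the assumption that each task in `boinccmd --get_tasks` output carries one "name:" line are mine, not from the thread.

```python
# Hedged sketch (not W3Perl's cpu2gpu.pl): top up the BOINC cache before
# a maintenance window by repeatedly asking the client to contact the
# project. Assumptions not from the thread: boinccmd is on PATH, and
# each task in `boinccmd --get_tasks` output has one "name: ..." line.
import subprocess
import time

PROJECT_URL = "http://setiathome.berkeley.edu/"  # adjust to your attach URL
TARGET_TASKS = 100  # the per-device server-side cap discussed above

def count_tasks(get_tasks_output: str) -> int:
    """Count tasks by counting 'name:' lines in --get_tasks output."""
    return sum(1 for line in get_tasks_output.splitlines()
               if line.strip().startswith("name:"))

def top_up(target: int = TARGET_TASKS, attempts: int = 10) -> int:
    """Request project updates until `target` tasks are cached."""
    have = 0
    for _ in range(attempts):
        out = subprocess.run(["boinccmd", "--get_tasks"],
                             capture_output=True, text=True).stdout
        have = count_tasks(out)
        if have >= target:
            break
        # Ask the client to contact the scheduler for more work.
        subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"])
        time.sleep(60)  # give the scheduler request time to finish
    return have

if __name__ == "__main__":
    print("tasks cached:", top_up())
```

Run it from cron shortly before the scheduled outage; the server-side per-request cap still applies, so this only keeps the cache topped up, it cannot exceed the limits described in this thread.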
t94xr Send message Joined: 22 Mar 12 Posts: 2 Credit: 3,911,149 RAC: 0 |
> No. You can only bank 100 units for the CPU and 100 for each GPU that you might have.
So it will increase to 200 for my second GPU when that's installed. Hmm, thanks.
> Hi,
You sir, I tip my hat. :)
> There are many ways to solve this problem:
That's an excellent idea - find the 100 limit in the code. I'll do some investigation about this. t94xr - Taupo, New Zealand. | cameronwalker.nz - server |
rob smith Send message Joined: 7 Mar 03 Posts: 22200 Credit: 416,307,556 RAC: 380 |
Server side, not local. And it is not too obvious in its location.... Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
W3Perl Send message Joined: 29 Apr 99 Posts: 251 Credit: 3,696,783,867 RAC: 12,606 |
Yes, I was wrong about a 100 WU limit in the code... instead you can increase the number of GPUs detected in the code, so BOINC will send 100 extra WUs for each card you virtually add.
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Server side code still hard limits the number of tasks in your cache to 100 per device per scheduler work request. The other limitation is in the client side code, which limits the number of tasks held on any host to 1000 for BOINC versions newer than 7.2.0 when bunkering via rescheduling is being used. The limit was put in place in work_fetch.cpp in this code:

    // don't request work from projects w/ > 1000 runnable jobs
    //
    int job_limit = 1000;
    for (unsigned int i=0; i<gstate.projects.size(); i++) {
        PROJECT* p = gstate.projects[i];
        if (p->pwf.n_runnable_jobs > job_limit && !p->pwf.cant_fetch_work_reason) {
            p->pwf.cant_fetch_work_reason = CANT_FETCH_WORK_TOO_MANY_RUNNABLE;
        }
    }

Earlier BOINC versions like the 6.10.58/60 mentioned above, or modified client code, allow up to 3000 tasks to be held per host. "Bunkering" or rescheduling is the only way to have enough tasks to make it through our typical 10-12 hour maintenance period. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association)
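For readers who want to see how close a host is to that client-side limit, here is a hedged Python sketch mirroring the quoted work_fetch.cpp check by counting result entries in client_state.xml. The assumption that every task held by the client appears as one `<result>` element in that file is mine, not from the thread.

```python
# Hedged sketch: approximate the work_fetch.cpp runnable-job check by
# counting <result> elements in client_state.xml. Assumption (not from
# the thread): each task held by the client appears as one <result>.
import xml.etree.ElementTree as ET

JOB_LIMIT = 1000  # mirrors `int job_limit = 1000;` in work_fetch.cpp

def runnable_jobs(client_state_xml: str) -> int:
    """Count <result> elements anywhere in a client_state.xml document."""
    root = ET.fromstring(client_state_xml)
    return len(root.findall(".//result"))

def cant_fetch_work(client_state_xml: str, job_limit: int = JOB_LIMIT) -> bool:
    """True when the client would refuse to request more work."""
    return runnable_jobs(client_state_xml) > job_limit
```

With a stock 7.x client, crossing that threshold is what sets CANT_FETCH_WORK_TOO_MANY_RUNNABLE in the snippet above.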
Chris Adamek Send message Joined: 15 May 99 Posts: 251 Credit: 434,772,072 RAC: 236 |
> There are many ways to solve this problem:
This should work on a Mac, right? I don't know if the folder structure is the same between Linux and a Mac. Thanks, Chris
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Doesn't matter. The Perl script has a line where you input the BOINC folder location on your system. You just have to adjust Laurent's script: comment out some superfluous lines and uncomment another line to enable the task move at the end of the script. You do have to install some version of Perl for your OS first. [Edit] You might want to begin here to learn the genesis of the various Reschedulers. GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association)
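To make the rescheduling idea concrete, here is a toy Python sketch of what such scripts do: with the BOINC client stopped, rewrite which plan class a task is bound to in client_state.xml, then restart the client. The tag names and the "clear the plan class to fall back to the CPU app" behavior are hypothetical simplifications on my part; real reschedulers such as the GUPPI Rescheduler linked above update several fields and handle many more cases.

```python
# Toy sketch of rescheduling: with the BOINC client STOPPED, rewrite
# which plan class a task is bound to in client_state.xml, then restart
# the client. Tag names here are hypothetical simplifications; real
# reschedulers (e.g. the GUPPI Rescheduler) update several fields.
import xml.etree.ElementTree as ET

def move_guppi_to_cpu(client_state_xml: str) -> str:
    """Clear the GPU plan class on every result whose name contains 'guppi'."""
    root = ET.fromstring(client_state_xml)
    for result in root.iter("result"):
        name = result.findtext("name", default="")
        plan = result.find("plan_class")
        if "guppi" in name and plan is not None:
            plan.text = ""  # hypothetically rebinds to the plain CPU app
    return ET.tostring(root, encoding="unicode")
```

The essential point, as Keith says, is that the edit happens outside the running client; editing client_state.xml while BOINC is running will corrupt its state.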
Chris Adamek Send message Joined: 15 May 99 Posts: 251 Credit: 434,772,072 RAC: 236 |
Thanks and sure, I used the rescheduler someone made on here years ago on Windows. I just hadn’t seen a fairly clean/easy option for the Mac. I installed Perl earlier today. I’ll give the script a look and see how it goes. The 1080ti is very thirsty on maintenance days...lol |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Just my opinion, but I think any further discussion of the subject should be via PM and this thread should be locked. Meeeeeeeeeeeeeeeeeeeeeeeow. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
rob smith Send message Joined: 7 Mar 03 Posts: 22200 Credit: 416,307,556 RAC: 380 |
I'm with you Mark, a few are (fairly) responsible with their efforts to break the server limits on how many tasks they hoard, while others may be somewhat less so. All who do so forget that SETI has NEVER promised us full caches all the time. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
Mithotar Send message Joined: 11 Apr 01 Posts: 88 Credit: 66,037,385 RAC: 50 |
I'm still waiting on my toaster............. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.