Message boards : Number crunching : Anything relating to AstroPulse tasks
Louis Loria II Send message Joined: 20 Oct 03 Posts: 259 Credit: 9,208,040 RAC: 24 |
WOOT! That last batch of APs pushed me over 40K RAC. I don't expect it to stay there for long. First time though. My invalids and errors are dropping off also. Looks like my Rig is finally settling in.... |
Speedy Send message Joined: 26 Jun 04 Posts: 1643 Credit: 12,921,799 RAC: 89 |
Out of interest, how do people set the caches for CPU work, e.g. to get 10 days? And does it affect other projects and MB tasks? |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
With modern-day computers you will never get a 10-day cache: you will hit the 100-task limit long before you hit 10 days, and your computer definitely falls in this category. My i5/750 Ti rig runs about 4 days of CPU work and 2 days of GPU work IF they are all AP tasks; with MB tasks it is much less. I hit the 100-task wall, and there is nothing you can do about that. |
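(For reference, the cache Speedy asked about is the BOINC "store at least N days of work" setting, adjustable in the web preferences or locally. A minimal local-override sketch, assuming a standard BOINC client that reads a `global_prefs_override.xml` from its data directory; the 10.0 value here is just Speedy's example, not a recommendation:)

```xml
<!-- global_prefs_override.xml: local override of the work-buffer settings.
     work_buf_min_days is the "store at least" value in days;
     work_buf_additional_days is the extra buffer on top of that. -->
<global_preferences>
   <work_buf_min_days>10.0</work_buf_min_days>
   <work_buf_additional_days>0.5</work_buf_additional_days>
</global_preferences>
```

After editing, use BOINC Manager's "Read local prefs file" option (or restart the client) to apply it. Note that the per-host 100-task server limit Brent mentions still caps what you actually receive, whatever the buffer is set to.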
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
Yeah, with GPUs you'll never get 10 days for those anymore. CPUs... it's still pretty easy to get 10 days for those, depending on AMD/Intel and how many cores you have/use. It's been known for a decade that Intel does significantly better with this project than AMD does, even with the optimized apps. So for Intel CPUs, I wouldn't really expect something with at least 4 cores to be able to hold a 10-day cache worth of APs. And GPUs... I would think most of the fine-tuned high-end setups would struggle to get much more than a single day's cache with how quickly they blast through WUs. Mid-range GPUs probably get 2-3 days at most before the 100-task limit puts a stop to that.

For me, I keep my cache full at 10 days most of the time, simply because I only run on 50% of the cores. The shared-FPU design just wastes time and electricity when two cores try to use one FPU simultaneously. I ran extensive testing on it when I built the rig: despite being 6 cores, running all six gives the effectiveness of about 3.9x a single core. Cinebench and a few others all seemed to agree with that analysis. But when I limit it to 3 cores, it ends up at 2.99x vs. 1 core. Twice the electricity and twice the heat just to get roughly one more core's worth of effectiveness doesn't seem to make sense. Hopefully the upcoming Zen ends up being as amazing as everyone says it needs to be if AMD wants to stay in business.

edit: Unrelated, I was coming here to mention that I'm crunching through a section of my cache that has a lot of 25no11ak APs, with a spate of B3_P1s. I do offline testing on those when they get assigned, to see if I need to suspend the cache and push them to the top... because not all B3_P1s are bad. The two that I tested from 25no11ak were not 100% blanked, so I left them alone. Now that I'm finally getting to that portion of the cache, I've noticed quite a few in the past 24 hours that ended up being 100% blanked.

I've never seen that before on the same tape: some B3_P1s being 100% blanked and some not. It's always been "all or nothing" on those. Conceivably, different tapes from the same day could differ, but never within the same tape. So... that's the first time I've observed that oddity.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
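(The shared-FPU scaling trade-off above can be checked with quick arithmetic. A minimal sketch; the 6-core/3.9x and 3-core/2.99x figures come from the post, everything else is just division:)

```python
# Scaling figures quoted in the post for a 6-core shared-FPU AMD chip.
cores_all, speedup_all = 6, 3.9      # all six cores: 3.9x one core
cores_half, speedup_half = 3, 2.99   # three cores: 2.99x one core

eff_all = speedup_all / cores_all    # per-core effectiveness, all cores
eff_half = speedup_half / cores_half # per-core effectiveness, half the cores

# Doubling from 3 to 6 cores (roughly doubling power and heat) adds
# only about 0.9 cores' worth of extra throughput.
extra = speedup_all - speedup_half

print(f"{eff_all:.2f} {eff_half:.2f} {extra:.2f}")  # -> 0.65 1.00 0.91
```

Which is exactly the poster's point: the second set of three cores buys less than one core of throughput for twice the electricity.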
Speedy Send message Joined: 26 Jun 04 Posts: 1643 Credit: 12,921,799 RAC: 89 |
Interesting. Is it easy to test units offline? |
uglybiker Send message Joined: 6 Dec 02 Posts: 32 Credit: 11,417,951 RAC: 42 |
Now I've got 4 APs in my cache that have been sitting there for almost 2 days. Is there some way I can prioritize BOINC to run them faster? I've got wingmen waiting! |
Phil Burden Send message Joined: 26 Oct 00 Posts: 264 Credit: 22,303,899 RAC: 0 |
Now I've got 4 APs in my cache that have been sitting there for almost 2 days. You can suspend all other tasks; BOINC will then run your non-suspended tasks. But why not just let them run when they're due? Your wingmen probably won't even notice either way. P. |
rob smith Send message Joined: 7 Mar 03 Posts: 22227 Credit: 416,307,556 RAC: 380 |
All too often, when one tries to force BOINC to run a few tasks before their time, the result is that other tasks fail to run. It is far, far safer to let BOINC do the scheduling for you and accept that tasks may sit for longer than YOU would want. Don't worry about two days; even with my crunchers I see tasks waiting to run for several days. And don't forget that, unlike many other projects, SETI@Home has deadlines set to take slow return of results into account. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
Louis Loria II Send message Joined: 20 Oct 03 Posts: 259 Credit: 9,208,040 RAC: 24 |
All too often when one tries to force BOINC to run a few tasks before their time the result is that other tasks fail to run. It is far, far safer to allow BOINC to do the scheduling for you, and accept that task may sit for longer than YOU would want. Personally, I'll suspend all other tasks in order to push APs through, but only if it is a few APs and I have time to monitor the progress. Otherwise I must agree: let BOINC take care of the scheduling. |
WezH Send message Joined: 19 Aug 99 Posts: 576 Credit: 67,033,957 RAC: 95 |
Again on this issue, I am receiving WUs for the CPU and the Haswell Intel GPU but nada for the Nvidia GPU. That's on both Haswell CPUs. The Xeon machine, on the other hand, is receiving CPU and Nvidia GPU work steadily... I have noticed the same kind of problem with my two hosts that have both an AMD APU GPU and an Nvidia GPU: first BOINC loads the AMD GPU full of WUs, after that the Nvidia (and if the Nvidia cache were ever full (never), then the CPU). This host has two 750 Tis; average crunch time is about 53 min/WU, while on the integrated AMD APU GPU it is about 2 h 58 min/WU. This host has two 560 Tis; average crunch time is about 33 min/WU, while on the integrated AMD APU GPU it is about 3 h 27 min/WU. So why does BOINC load the slower single GPU to the max when there are two faster GPUs waiting? |
Louis Loria II Send message Joined: 20 Oct 03 Posts: 259 Credit: 9,208,040 RAC: 24 |
WOOT! That last batch of APs pushed me over 40K RAC. I don't expect it to stay there for long. First time though. My invalids and errors are dropping off also. Looks like my Rig is finally settling in.... O.K. So what magic is this?...I broke 42K.... |
Louis Loria II Send message Joined: 20 Oct 03 Posts: 259 Credit: 9,208,040 RAC: 24 |
What is up with the APs? I don't know all of the ins and outs as some of you do, but where are all of these APs coming from? Resends or what? |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
They seem to be doing AP splitting to get (my thought) a lot of hours of work out there before maintenance. And it seems to be working well. |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
As of late, all the APs have been from 2011 tapes. I'm guessing that most of the tapes that do end up producing actual WUs for us either were never processed by AP in the first place back when they were fresh, or, because of the way radar blanking was done back then, a lot of the work needs to be redone with the different/better method we now have for handling that. There are a lot of 2011 tapes the splitters chew through without producing any APs, because those WUs already exist in the database since we've done them before.

It has been said a few times over the past 5+ years that there are LOTS of tapes in off-site storage that can be dusted off and used in times of limited fresh data, and the past year and a half has been limited on fresh data. I don't imagine there are too many of these "saved for a rainy day" tapes left in the off-site stockpiles, but the good news is that we may be getting pretty close to beginning to process the many... many terabytes of Green Bank data.

I don't recall the exact post or time period, but I remember Matt said something about there being 12 TB of data from GBT right after GBT got up and running, and I would imagine tons of it has been piling up elsewhere since then (at least a year now, maybe two). It's just a matter of doing what Matt has been working on lately, which is re-coding the splitters and all the other processes to accept different kinds of work from different data sources. At one point, I think the plan was to have a separate set of splitters, processes, and databases for the GBT data, but it seems it's probably more economical to adapt what we already have to accept multiple sources and kinds of data. It's all good news pretty much no matter how you look at it.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
uglybiker Send message Joined: 6 Dec 02 Posts: 32 Credit: 11,417,951 RAC: 42 |
Okay, I found the problem. Since my GPU is set up to run three tasks at once, MB tasks show up as (0.04 CPUs + 0.33 NVIDIA GPUs), while AP tasks show up as (0.04 CPUs + 1 NVIDIA GPUs). I suspended all my other GPU tasks and the AP started. As long as I have ANY regular GPU work, the AP will not start, as it needs the resources of the entire card. This is annoying. |
Bill G Send message Joined: 1 Jun 01 Posts: 1282 Credit: 187,688,550 RAC: 182 |
Okay. I found the problem. Since my GPU is set up to run three tasks at once, they show up as this: (.04 CPUs + .33 NVIDIA GPUs) Change the AP entry to 0.04 CPU + 0.5 GPU; this will allow 2 APs to run. Better, I would think, would be 1 CPU and 0.5 GPU for APs, or 0.33 GPU to run 3 at a time. SETI@home classic workunits 4,019 SETI@home classic CPU time 34,348 hours |
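(As an aside: stock BOINC 7.x clients can also set these fractions without hand-editing app_info.xml, via an `app_config.xml` in the project directory. A minimal sketch, assuming the app name `astropulse_v7` used in the app_info excerpts later in this thread; values here match Bill's 2-at-a-time suggestion:)

```xml
<!-- app_config.xml in projects/setiathome.berkeley.edu/ :
     run 2 AP tasks per GPU, each task reserving 0.5 of a GPU. -->
<app_config>
   <app>
      <name>astropulse_v7</name>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
         <cpu_usage>0.04</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
```

Apply with BOINC Manager's "Read config files" (or a client restart). For anonymous-platform installs such as Lunatics, the app_info.xml coproc settings discussed in the following posts are the usual place to make this change.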
uglybiker Send message Joined: 6 Dec 02 Posts: 32 Credit: 11,417,951 RAC: 42 |
I already have it set to .33 GPU. That's my problem: BOINC is saying the APs require 1 GPU. Is it possible I missed something when I changed the app? |
Jimbocous Send message Joined: 1 Apr 13 Posts: 1853 Credit: 268,616,081 RAC: 1,349 |
I already have it set to .33 GPU. That's my problem as BOINC is saying the APs require 1 GPU. Is it possible I missed something when I changed the app? Assuming you're under Lunatics 0.43b? Some excerpts from my app_info file:

    <app_version>
      <app_name>astropulse_v7</app_name>
      <version_num>710</version_num>
      <platform>windows_x86_64</platform>
      <avg_ncpus>0.04</avg_ncpus>
      <max_ncpus>0.2</max_ncpus>   <!-- for APs; for MBs it will be 0.04 -->
      ...
      <coproc>
        <type>CUDA</type>
        <count>0.5</count>   <!-- only parameter you ever change:
                                  1-1, .05=2, .033=3, .025=4 jobs -->
      </coproc>

If this doesn't help, I guess I'm not understanding your issue? Jim ... |
Jimbocous Send message Joined: 1 Apr 13 Posts: 1853 Credit: 268,616,081 RAC: 1,349 |
I already have it set to .33 GPU. That's my problem as BOINC is saying the APs require 1 GPU. Is it possible I missed something when I changed the app? So I totally screwed the above up, sorry! It should be:

      <coproc>
        <type>CUDA</type>
        <count>0.5</count>   <!-- only parameter you ever change:
                                  1=1, 0.5=2, 0.33=3, 0.25=4 jobs -->
      </coproc>

Thanks to Rob for smacking me upside the head and getting my mind right... ;) |
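(The corrected comment follows a simple rule: `<count>` is the fraction of the GPU each task reserves, i.e. roughly 1 divided by the number of tasks you want running per GPU, rounded to two places. A quick sketch:)

```python
# <count> = fraction of GPU per task ~= 1 / (concurrent tasks per GPU),
# rounded to two places as in the corrected app_info comment.
for jobs in (1, 2, 3, 4):
    count = round(1 / jobs, 2)
    print(f"{count}={jobs}")  # -> 1.0=1, 0.5=2, 0.33=3, 0.25=4
```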
uglybiker Send message Joined: 6 Dec 02 Posts: 32 Credit: 11,417,951 RAC: 42 |
You had it right. The goofup was mine. When I went into the file, I wasn't running APs so anything marked astropulse I left alone. duh. Now I think I'll go out to the garage and do something more befitting my technical skills. Like clean the points on my Studebaker. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.