1)
Message boards :
Number crunching :
Optimize your GPU. Find the value the easy way.
(Message 1282702)
Posted 12 Sep 2012 by Sunny129 Post:
[scratches head]hmmm....[/scratches head]
2)
Message boards :
Number crunching :
Optimize your GPU. Find the value the easy way.
(Message 1281757)
Posted 9 Sep 2012 by Sunny129 Post:

This brings up another question: Since there is no Lunatics NVIDIA GPU app for Astropulse (as of yet...), will my machine run GPU Astropulse WU's using the standard SETI app, or do I need to add something to the app_info file? And if so, what? The answer may be out there somewhere, but a search here and on the Lunatics site hasn't turned it up...

yeah, i wasn't thinking about the fact that you were already running an app_info.xml to pass parameters to the nVidia Multibeam app. because you are employing an app_info.xml file, your host will only download work for the apps specified in it. so i stand corrected - you must have an entry even for the stock nVidia OpenCL Astropulse app, since you're already using an app_info.xml. at any rate, i agree that it's best to start a new thread, since that isn't the topic of this one. that being said, if you just cut & paste the section of code i posted above into your app_info.xml, you should be good to go. i wouldn't worry about the <avg_ncpus>0.04</avg_ncpus> and <max_ncpus>0.2</max_ncpus> parameter values, as these have been pretty well established. the only thing you need to manipulate is <count>. the executable of the app itself is the same despite my running Windows 7 x64 and your running WinXP 32-bit, so that doesn't need to be changed either.
3)
Message boards :
News :
New AstroPulse applications for GPUs.
(Message 1281069)
Posted 7 Sep 2012 by Sunny129 Post:

Isn't it possible to cut down on the AP splitters so they don't finish long before the MB's are done?

what would that solve, though? the project server only receives splittable AP data when Arecibo (or wherever it comes from) has data to send. and that data comprises only a minority of all the SETI data that must be crunched (which is why there is approximately 1 AP task for every 20 or so Multibeam tasks). if the AP splitters are intentionally slowed down, we'll see AP work less often than we do now...
4)
Message boards :
Number crunching :
Optimize your GPU. Find the value the easy way.
(Message 1281018)
Posted 7 Sep 2012 by Sunny129 Post:

Or you could just download MSI Afterburner and design your own custom fan speed vs. GPU temp curve.

yes, it'll work on just about any system. nevertheless, some people occasionally have trouble with it, and have to resort to an alternative method of GPU monitoring. so thank you for posting a link to the nVidia System Tools with ESA Support, as i'm sure some folks will find it to be a viable alternative to MSI Afterburner.
5)
Message boards :
Number crunching :
Optimize your GPU. Find the value the easy way.
(Message 1280972)
Posted 7 Sep 2012 by Sunny129 Post:

so i suppose you could try 3 simultaneous tasks to see if they finish in less than three times the run time of a single task by itself.

yeah, that's what i originally thought and seemed to remember...then i saw that the SETI tasks weren't consuming as much VRAM as i thought they were. then i started to think about all the other projects i'm involved in, and figured i might be confusing SETI's VRAM requirements with those of another project...and so i started to second-guess myself. but as i mentioned in a previous post, i was only able to run 2 Multibeam tasks at a time on either of my GTX 560 Ti's.
6)
Message boards :
Number crunching :
Optimize your GPU. Find the value the easy way.
(Message 1280880)
Posted 7 Sep 2012 by Sunny129 Post:

the <count>n</count> statement is the one that controls the number of tasks running in parallel, where n=1 corresponds to 1 task, n=0.5 corresponds to 2 tasks, n=0.33 corresponds to 3 tasks, and so on and so forth...

hmm...it appears you have some room to push. i could have sworn that 3 simultaneous tasks would require more than 1GB of VRAM, but 2 simultaneous tasks on your GT 520 appear to be using only 466MB...i must be mixing up SETI with some other project as far as VRAM consumption is concerned. so i suppose you could try 3 simultaneous tasks to see if they finish in less than three times the run time of a single task by itself.
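the <count> mapping described above can be sketched as a quick check. this is a hedged illustration only - the function name is mine, not part of BOINC; BOINC itself simply fits as many tasks on the card as will sum to a whole GPU, which works out to 1/<count> tasks:

```python
# Rough sketch (not BOINC code): how the app_info.xml <count> value maps
# to the number of simultaneous GPU tasks. Each task claims a <count>
# fraction of the GPU, so roughly 1/<count> tasks fit on the card at once.

def tasks_for_count(count: float) -> int:
    """Number of simultaneous tasks a given <count> value yields."""
    return int(round(1.0 / count))

print(tasks_for_count(1.0))   # 1 task at a time
print(tasks_for_count(0.5))   # 2 tasks at a time
print(tasks_for_count(0.33))  # 3 tasks at a time
```

note that 0.33 is used rather than an exact 1/3; the rounding makes either value come out to 3 tasks.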
7)
Message boards :
Number crunching :
Optimize your GPU. Find the value the easy way.
(Message 1280870)
Posted 7 Sep 2012 by Sunny129 Post:

yes, once you enable AP tasks via your web preferences, your host should eventually download the stock nVidia OpenCL Astropulse binaries, and tasks will of course follow when they become available. perhaps it isn't supposed to download the executable and the associated files until new AP tasks are actually ready to be sent to your host...i really don't know. i would try manually updating the project from within BOINC before trying to deselect and re-select AP tasks in the web preferences. worst case, it tells you that AP tasks aren't available at this time, and you'll get the binaries when tasks become available.

you really only need an entry for it in the app_info.xml if you want to run multiple tasks in parallel, increase GPU utilization, decrease CPU utilization, mitigate GUI lag, etc. come to think of it, i'm not entirely sure it would even be worth it to try to run more than a single task at a time on a GT 520. your card may have enough VRAM to run more than one task at a time, but a single task just might come close to maxing out your GPU utilization. really, there's only one way to find out - run a single AP task, and then try two at a time. if they finish in less than twice the run time of the task that ran by itself, then your card can benefit from multiple tasks at once. rinse and repeat...although i can tell you right away that the 1GB of VRAM on GPUs like yours (and even on my otherwise much more powerful GTX 560 Ti's) will not be enough to run 3 tasks in parallel...not without over-utilizing VRAM and increasing run times.
at any rate, here's how the AP nVidia section of my app_info.xml reads:

<app_info>
    <app>
        <name>astropulse_v6</name>
    </app>
    <file_info>
        <name>AP6_win_x86_SSE2_OpenCL_NV_r1316.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>astropulse_v6</app_name>
        <version_num>604</version_num>
        <avg_ncpus>0.04</avg_ncpus>
        <max_ncpus>0.2</max_ncpus>
        <plan_class>cuda_fermi</plan_class>
        <cmdline></cmdline>
        <coproc>
            <type>CUDA</type>
            <count>0.5</count>
        </coproc>
        <file_ref>
            <file_name>AP6_win_x86_SSE2_OpenCL_NV_r1316.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>

the <count>n</count> statement is the one that controls the number of tasks running in parallel, where n=1 corresponds to 1 task, n=0.5 corresponds to 2 tasks, n=0.33 corresponds to 3 tasks, and so on and so forth...
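the "run one task, then try two" test described above boils down to a throughput comparison, which can be sketched like this. the run-time numbers below are made-up examples for illustration, not measurements:

```python
# Rough sketch of the benchmark logic: running n tasks in parallel is a
# win only if they finish in less than n times the run time of a single
# task run by itself (i.e., total throughput goes up).

def parallel_is_faster(single_runtime_s: float, n: int,
                       parallel_runtime_s: float) -> bool:
    """True if n simultaneous tasks beat n sequential runs of one task."""
    return parallel_runtime_s < n * single_runtime_s

# Hypothetical example: one AP task takes 3600 s alone, and two run in
# parallel take 5400 s each. 5400 < 2 * 3600, so throughput improved.
print(parallel_is_faster(3600, 2, 5400))   # parallel pays off
# If two parallel tasks each took 7500 s, the card is over-committed.
print(parallel_is_faster(3600, 2, 7500))   # stick with one at a time
```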
8)
Message boards :
Number crunching :
Optimize your GPU. Find the value the easy way.
(Message 1280858)
Posted 7 Sep 2012 by Sunny129 Post:

This brings up another question: Since there is no Lunatics NVIDIA GPU app for Astropulse (as of yet...), will my machine run GPU Astropulse WU's using the standard SETI app, or do I need to add something to the app_info file? And if so, what? The answer may be out there somewhere, but a search here and on the Lunatics site hasn't turned it up...

yes, once you enable AP tasks via your web preferences, your host should eventually download the stock nVidia OpenCL Astropulse binaries, and tasks will of course follow when they become available. you really only need an entry for it in the app_info.xml if you want to run multiple tasks in parallel, increase GPU utilization, decrease CPU utilization, mitigate GUI lag, etc.
9)
Message boards :
News :
Bug in server affecting older BOINC clients with NVIDIA GPUs.
(Message 1279463)
Posted 3 Sep 2012 by Sunny129 Post:

How do I do this?

that's because you're running the stock application, which only allows 1 GPU task to run at a time. in order to run more than one task on a single GPU simultaneously, you must employ what is known as an app_info.xml file in your SETI@Home project folder. there is a line of code in this file that you must manipulate in order to change the number of tasks you'd like your GPU to crunch simultaneously. but that's a topic for an entirely different thread. search the forums, as there's plenty of info to be found on app_info.xml files and how to employ and manipulate them.
10)
Message boards :
News :
Bug in server affecting older BOINC clients with NVIDIA GPUs.
(Message 1279298)
Posted 2 Sep 2012 by Sunny129 Post: thanks for the insight Claggy...i looked at his platform details, but it didn't dawn on me to check his actual tasks...so i kind of assumed he was trying to run 14 tasks in parallel on his GTX 550 Ti LOL...my mistake. i see now that he's only running 2 at a time on his dual-core CPU and one at a time on the GPU. that being said, don't you think Paulie might stand to bring down his GPU task run times by trying to run 2 GPU tasks in parallel? his GPU has enough VRAM for it...i think it's just a matter of whether or not the GPU core itself becomes overloaded with only 2 tasks running. i think it might be worth a shot...after all, i get reduced run times when running 2 GPU tasks in parallel vs. only 1 task at a time on my GTX 560 Ti's.
11)
Message boards :
News :
Bug in server affecting older BOINC clients with NVIDIA GPUs.
(Message 1279164)
Posted 2 Sep 2012 by Sunny129 Post:

Just to verify, running about 14 SETIs at one time is OK ?

no...definitely not on a single GTX 550 Ti. with 2 Multibeam tasks running in parallel on one of my GTX 560 Ti's, VRAM consumption sometimes peaks above 700MB. some quick math tells us that trying to run 3 tasks in parallel will almost certainly require more than the 1GB of VRAM on the GPU. some more quick math tells us that 14 Multibeam tasks would require in the neighborhood of ~5GB of VRAM. whenever you over-allocate your GPU's core and/or memory resources, you sacrifice crunching efficiency and increase run times.
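the quick math above, spelled out as a sketch. the per-task figure is a rough estimate inferred from the ~700 MB peak observed with 2 tasks, not a measured constant:

```python
# Back-of-the-envelope VRAM check, assuming roughly 350 MB per Multibeam
# task (estimated from ~700 MB peak with 2 tasks) on a 1 GB card.

PER_TASK_MB = 350      # rough per-task estimate, not a measured value
CARD_VRAM_MB = 1024    # GTX 550 Ti / 560 Ti with 1 GB of VRAM

def vram_needed_mb(n_tasks: int) -> int:
    """Estimated VRAM for n simultaneous Multibeam tasks."""
    return n_tasks * PER_TASK_MB

print(vram_needed_mb(2))   # ~700 MB: fits on the card
print(vram_needed_mb(3))   # ~1050 MB: already over the 1 GB limit
print(vram_needed_mb(14))  # ~4900 MB: in the ~5 GB neighborhood
```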
12)
Message boards :
Number crunching :
Anyone else not getting work???
(Message 1279069)
Posted 2 Sep 2012 by Sunny129 Post: yes, it's entirely possible that switching drivers can affect the run times of GPU tasks. i don't know about the specific driver versions you speak of, but there might be a thread here that talks about driver versions and which ones work best for specific applications and/or GPU architectures. if you're talking about one of the latest driver releases, i don't know if there will be much info in the server database yet...but a search should dig up something.
13)
Message boards :
Number crunching :
Anyone else not getting work???
(Message 1277718)
Posted 30 Aug 2012 by Sunny129 Post:

After my shutdown and restart of the BOINC program I got a whole slew of Einstein's and some LHC's. But only 5 new SETI's: 1 6.04, and 4 6.09's. I haven't changed my settings since synching them all up months ago, creating essentially 4 types: none, home, school and work. That way I could assign each machine into a category that best fit what it had and how long it takes to run things.

as BillBG previously mentioned in the Bug in server affecting older BOINC clients with NVIDIA GPUs thread, BOINC is supposed to respect resource share in the long term, which means that its effects won't be immediately noticeable. as i also previously mentioned in that same thread, i used to have 90% of the CPU in one of my machines allocated to LHC@Home SixTrack (in hopes that on the rare occasion SixTrack WU's became available, my host would download a bunch and give them priority), and the remaining 10% allocated to a handful of other projects. it actually worked the exact opposite of how i expected - BOINC wouldn't even bother to download LHC@H work despite the massive 90% resource allocation to the project, and all the other projects would continue downloading/crunching/uploading/reporting/invoking scheduler requests like they always do. i then decided to split the resources evenly between all projects, and all of a sudden i started getting LHC@H work! perhaps you should try splitting resource share evenly between all projects on that host, instead of giving 70% to SETI and the remaining 30% to all other projects, and see what happens? it might not do anything for you, but it sure couldn't hurt to try...
14)
Message boards :
Number crunching :
Panic Mode On (76) Server Problems?
(Message 1277682)
Posted 30 Aug 2012 by Sunny129 Post:

Got some astro's :) But a task from another project has about another 40 hours to go till it's finished :(

let me guess - LHC@Home Classic (SixTrack), right?
15)
Message boards :
Number crunching :
Panic Mode On (76) Server Problems?
(Message 1275814)
Posted 26 Aug 2012 by Sunny129 Post:

What to do, what to do? :-)

go to your "community preferences" and set the maximum number of posts per page to something manageable on your machine.
16)
Message boards :
News :
Bug in server affecting older BOINC clients with NVIDIA GPUs.
(Message 1275374)
Posted 25 Aug 2012 by Sunny129 Post:

Was I wrong in believing I had read that you could force BOINC through the resource share option to partition out the work in the percentage that you wanted?

well, that's the idea...but when was the last time BOINC actually did something you wanted it to? that function doesn't work correctly for me either. two of my machines crunch LHC@Home Classic along with a handful of other projects. i used to allocate 90% of the CPU to LHC@H and the remaining 10% to the other projects because LHC@H hardly ever sees new work...so i wanted to be sure that my machines would give priority to LHC@H work whenever it was available. one would think that BOINC would prioritize LHC@H work over all other work whenever it's available, but does it? nope. instead, LHC@H doesn't even download new work on the rare occasion that it's available, and all the projects that appear to have a minimal CPU resource share continue to grab more work...

...interestingly enough, one day i decided to reallocate the CPU resources so that all projects had an even share...and what do you know, all of a sudden LHC@H started getting work when it was available...go figure.
17)
Message boards :
Number crunching :
Panic Mode On (76) Server Problems?
(Message 1273874)
Posted 22 Aug 2012 by Sunny129 Post:

So, how to get this new app for Nvidia OpenCL AstroPulse if you are running current Lunatics stuff on an anonymous platform?

i'm not sure i understand what you're saying...the r1316 build didn't appear until 7/6/12, long after the most recent Lunatics installer (v0.40) was released (which included no OpenCL apps, let alone any apps on the r1316 build). if Keith is running the latest Lunatics installer, then he has an app_info.xml and is running an anonymous platform. won't that prevent his host from automatically downloading OpenCL AP tasks for his nVidia GPU, even if APv6 tasks are checked in his web preferences? doesn't this new OpenCL AP app have to be manually placed in the SETI@Home data directory, and isn't an entry required in the app_info.xml file in order to reference the new app and receive work for it?

Keith, see Raistmer's first post in THIS thread for a link to the Lunatics webpage that has the new OpenCL app available for download. the thread i linked you to also has a sample app_info.xml entry for nVidia GPUs in the 2nd post.
18)
Message boards :
Number crunching :
Panic Mode On (76) Server Problems?
(Message 1273789)
Posted 21 Aug 2012 by Sunny129 Post:

Why? And yes I did receive some for the Nvidia gpu during the last server hiccup. Did those too on the gpu. Run times were posted in an earlier post.

because a number of us haven't seen any VLARs crunch on our GPUs since the "fix" was put in place. in the Bug in server affecting older BOINC clients with NVIDIA GPUs thread, someone was asking troubleshooting questions a few days ago. after solving his problem, he made mention of some VLARs, which briefly led Eric Korpela to think that the VLARs were broken again. but it was quickly pointed out that his VLARs occurred on August 11th, before the fixes were put into place. so it seemed like a false alarm, but then you mentioned having VLARs on your nVidia GPU, so now i'm wondering again... to be clear, i'm not having problems with this phenomenon myself.
19)
Message boards :
Number crunching :
Panic Mode On (76) Server Problems?
(Message 1273782)
Posted 21 Aug 2012 by Sunny129 Post:

Then you should define what "most" is/means

to elaborate on what Mike is saying about "most" nVidia GPU hosts not receiving VLARs: i believe that the few nVidia GPU hosts out there that do occasionally (or regularly) receive VLARs are receiving them in error. you see, the server is coded to take angle range into account and prevent VLARs from going out to nVidia GPU hosts. i don't know whether the error lies within the server or within the handful of nVidia GPU hosts that accidentally receive VLARs.

*EDIT* - Bill, out of curiosity, when was the last time you saw a VLAR running on one of your nVidia GPUs?
20)
Message boards :
Number crunching :
Panic Mode On (76) Server Problems?
(Message 1273387)
Posted 20 Aug 2012 by Sunny129 Post:

And it looks like we are starting to come back up....

thanks...that got my reporting working again.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.