Posts by Sunny129


1) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1282702)
Posted 716 days ago by Profile Sunny129

Results: Device: 0, device count: 3, average time / count: 2161, average time on device: 720 Seconds (12 Minutes, 0 Seconds)

Results: Device: 0, device count: 4, average time / count: 2825, average time on device: 706 Seconds (11 Minutes, 46 Seconds)

>> The best average time found: 720 Seconds (12 Minutes, 0 Seconds), with count: 0.33 (3)

[scratches head]hmmm....[/scratches head]
2) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1281757)
Posted 719 days ago by Profile Sunny129
This brings up another question: Since there is no Lunatics NVIDIA GPU app for Astropulse (as of yet...), will my machine run GPU Astropulse WU's using the standard SETI app, or do I need to add something to the app_info file? And if so, what? The answer may be out there somewhere, but a search here and on the Lunatics site hasn't turned it up...

yes, once you enable AP tasks via your web preferences, your host should eventually download the stock nVidia OpenCL Astropulse binaries, and tasks will of course follow when they become available. you really only need an entry for it in the app_info.xml if you want to run multiple tasks in parallel/increase GPU utilization/decrease CPU utilization/mitigate GUI lag/etc.

Well, that does not appear to be the case as I received the following message:

SETI@home: notice from server
your app_info.xml file doesn't have a usable entry of AstroPulse V6

So obviously, I need to add an entry (or entries) to the app_info file -- but what?? I think that I will start a new thread on this and see if anyone can help...

yeah, i wasn't thinking about the fact that you were already running an app_info.xml to pass parameters to the nVidia Multibeam app. b/c you are employing an app_info.xml file, your host will only download work for the apps specified in the app_info.xml. so i stand corrected - you must have an entry for even the stock nVidia OpenCL Astropulse app since you're already using an app_info.xml. at any rate, i agree that it's best to start a new thread since that's not the topic of this thread. that being said, if you just cut & paste the section of code i posted above into your app_info.xml, you should be good to go. i wouldn't worry about the <avg_ncpus>0.04</avg_ncpus> and <max_ncpus>0.2</max_ncpus> parameter values, as these have been pretty well established. the only thing you need to manipulate is <count>. the executable of the app itself is the same despite my running Windows 7 x64 and your running WinXP x32, so that doesn't need to be changed either.
3) Message boards : News : New AstroPulse applications for GPUs. (Message 1281069)
Posted 720 days ago by Profile Sunny129
Isn't it possible to cut down on the AP splitters so they don't finish long before the MB's are done?

what would that solve though? the project server only receives splittable AP data when Arecibo (or wherever it comes from) has data to send. and that data comprises only a minority of all SETI data that must be crunched (which is why there is approx. 1 AP task for every 20 or so Multibeam tasks). if the AP splitters are intentionally slowed down, we'll see AP work less often than we do now...
4) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1281018)
Posted 720 days ago by Profile Sunny129
Or you could just download MSI Afterburner and design your own custom fan speed vs. GPU temp curve.

Max fan speed 100%.


I do not have an MSI motherboard or MSI graphics card. Will it work on any system?

yes, it'll work on just about any system. nevertheless, some people occasionally have trouble with it, and have to resort to an alternative method of GPU monitoring. so thank you for posting a link to the nVidia System Tools w/ SEA Support, as i'm sure some folks will find it to be a viable alternative to MSI Afterburner.
5) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1280972)
Posted 721 days ago by Profile Sunny129
so i suppose you could try 3 simultaneous tasks to see if they finish in less than three times the run time of a single task by itself.

If you look at the numbers posted in this thread, you'll see that for most video cards 2 is the number.
Only highend cards get any benefit from 3, on most cards you end up doing less work.

yeah, that's what i originally thought and seemed to remember...then i saw that the SETI tasks weren't consuming as much VRAM as i thought they were. then i started to think about all the other projects i'm involved in, and figured i might be confusing SETI's VRAM requirements w/ that of another project...and so i started to second guess myself. but as i mentioned in a previous post, i was only able to run 2 Multibeam tasks at a time on either of my GTX 560 Ti's.
6) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1280880)
Posted 721 days ago by Profile Sunny129
the <count>n</count> statement is the one that controls the number of tasks running in parallel, where n=1 corresponds to 1 task, n=0.5 corresponds to 2 tasks, n=0.33 corresponds to 3 tasks, and so on and so forth...

Well, I know about <count> and I finally found some info on <flops>, but there is stuff in there that I don't entirely understand (for example, what's <avg_ncpus> mean?). It would be nice if the parameters in the app_info file were documented someplace...

I am currently running two SETI tasks on the GT 520 -- they appear to complete in somewhat less than twice the time for a single task, so it looks like I am coming out ahead. GPU-Z shows 99% GPU Load, 74% Memory Controller Load, 466 MB Memory Used and a GPU Temp of 79 deg C when running two SETI enhanced tasks in parallel.

But it would be nice if I could get Fred's test program to run on my machine...

hmm...it appears you have some room to push. i could have sworn that 3 simultaneous tasks would require more than 1GB of VRAM, but 2 simultaneous tasks on your GT 520 appear to be using only 466MB...i must be mixing up SETI w/ some other project as far as VRAM consumption is concerned. so i suppose you could try 3 simultaneous tasks to see if they finish in less than three times the run time of a single task by itself.
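the "finish in less than N times the run time of a single task" test above is just arithmetic; here's a quick sketch of it (the run times below are made-up placeholder numbers, not measurements from this thread):

```python
def parallel_is_worthwhile(single_task_secs, n_tasks, parallel_batch_secs):
    """Running n tasks at once pays off only if the whole batch finishes
    in less than n times the run time of a single task by itself."""
    return parallel_batch_secs < n_tasks * single_task_secs

# hypothetical numbers: one task alone takes 720 s;
# two tasks side by side each finish in 1300 s
print(parallel_is_worthwhile(720, 2, 1300))  # True: 1300 < 2 * 720

# three tasks side by side each taking 2250 s would be a net loss
print(parallel_is_worthwhile(720, 3, 2250))  # False: 2250 > 3 * 720
```

same idea works for any task count: if the inequality fails, you're over-committing the GPU and losing throughput.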
7) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1280870)
Posted 721 days ago by Profile Sunny129
yes, once you enable AP tasks via your web preferences, your host should eventually download the stock nVidia OpenCL Astropulse binaries, and tasks will of course follow when they become available.

Thanks for the reply. The stock binaries haven't appeared yet, but maybe I need to de-select AP, update, and then re-select AP.

perhaps it isn't supposed to download the executable and the associated files until new AP tasks are actually ready to be sent to your host...i really don't know. i would try manually updating the project from within BOINC before i try deselecting and re-selecting AP tasks in the web preferences. worst case it tells you that AP tasks aren't available at this time, and you'll get the binaries when tasks become available.


you really only need an entry for it in the app_info.xml if you want to run multiple tasks in parallel/increase GPU utilization/decrease CPU utilization/mitigate GUI lag/etc.

This brings up another question -- where can I find info on what all can be done via settings in the app_info.xml file??? Can multiple stock AP tasks be executed in the GPU? I didn't think that was possible...

come to think of it, i'm not entirely sure if it would even be worth it to try to run more than a single task at a time on a GT 520. your card may have enough VRAM to run more than one task at a time, but a single task just might come close to maxing out your GPU utilization. really there's only one way to find out - run a single AP task, and then try two at a time. if they finish in less than twice the run time of the task that ran by itself, then your card can benefit from multiple tasks at once. rinse and repeat...although i can tell you right away that the 1GB of VRAM on GPUs like yours (and even my otherwise much more powerful GTX 560 Ti's) will not be enough to run 3 tasks in parallel...not without over-utilizing VRAM and increasing run times. at any rate, here's how the AP nVidia section of my app_info.xml reads:

<app_info>
    <app>
        <name>astropulse_v6</name>
    </app>
    <file_info>
        <name>AP6_win_x86_SSE2_OpenCL_NV_r1316.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>astropulse_v6</app_name>
        <version_num>604</version_num>
        <avg_ncpus>0.04</avg_ncpus>
        <max_ncpus>0.2</max_ncpus>
        <plan_class>cuda_fermi</plan_class>
        <cmdline></cmdline>
        <coproc>
            <type>CUDA</type>
            <count>0.5</count>
        </coproc>
        <file_ref>
            <file_name>AP6_win_x86_SSE2_OpenCL_NV_r1316.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>

the <count>n</count> statement is the one that controls the number of tasks running in parallel, where n=1 corresponds to 1 task, n=0.5 corresponds to 2 tasks, n=0.33 corresponds to 3 tasks, and so on and so forth...
8) Message boards : Number crunching : Optimize your GPU. Find the value the easy way. (Message 1280858)
Posted 721 days ago by Profile Sunny129
This brings up another question: Since there is no Lunatics NVIDIA GPU app for Astropulse (as of yet...), will my machine run GPU Astropulse WU's using the standard SETI app, or do I need to add something the the app_info file? And if so, what? The answer may be out there somewhere, but a search here and on the Lunatics site hasn't turned it up...

yes, once you enable AP tasks via your web preferences, your host should eventually download the stock nVidia OpenCL Astropulse binaries, and tasks will of course follow when they become available. you really only need an entry for it in the app_info.xml if you want to run multiple tasks in parallel/increase GPU utilization/decrease CPU utilization/mitigate GUI lag/etc.
9) Message boards : News : Bug in server affecting older BOINC clients with NVIDIA GPUs. (Message 1279463)
Posted 725 days ago by Profile Sunny129
How do I do this?

I clicked on run while computer in use and see I'm only running two SETIs, the others are apparently on standby ...

that's b/c you're running the stock application, which only allows for 1 GPU task to run at a time. in order to run more than one task on a single GPU simultaneously, you must employ what is known as an app_info.xml file in your SETI@Home project folder. there is a line of code in this file that you must manipulate in order to change the number of tasks you'd like your GPU to crunch simultaneously. but that's a topic for an entirely different thread. search the forums, as there's plenty of info to be found on app_info.xml files, and how to employ and manipulate them.
10) Message boards : News : Bug in server affecting older BOINC clients with NVIDIA GPUs. (Message 1279298)
Posted 725 days ago by Profile Sunny129
thanks for the insight Claggy...i looked at his platform details, but it didn't dawn on me to check his actual tasks...so i kind of assumed he was trying to run 14 tasks in parallel on his GTX 550 Ti LOL...my mistake. i see now that he's only running 2 at a time on his dual core CPU and one at a time on the GPU. that being said, don't you think Paulie might stand to bring down his GPU task run times by trying to run 2 GPU tasks in parallel? his GPU has enough VRAM for it...i think it's just a matter of whether or not the GPU core itself becomes overloaded w/ only 2 tasks running. i think it might be worth a shot...after all, i get reduced run times when running 2 GPU tasks in parallel vs only 1 task at a time on my GTX 560 Ti's.
11) Message boards : News : Bug in server affecting older BOINC clients with NVIDIA GPUs. (Message 1279164)
Posted 726 days ago by Profile Sunny129
Just to verify, running about 14 SETIs at one time is OK ?

no...definitely not on a single GTX 550 Ti. w/ 2 Multibeam tasks running in parallel on one of my GTX 560 Ti's, VRAM consumption sometimes maxes out above 700MB. some quick math tells us that trying to run 3 tasks in parallel will almost certainly require more than the 1GB of VRAM on the GPU. some more quick math tells us that 14 Multibeam tasks would require in the neighborhood of ~5GB of VRAM. whenever you over-allocate either your GPU's core and/or memory resources, you sacrifice crunching efficiency and increase run times.
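that "quick math" spelled out, using a rough ~350 MB per task figure implied by the "2 tasks above 700 MB" observation (an estimate, not a measured constant):

```python
per_task_vram_mb = 350   # rough estimate: 2 tasks sometimes peaked above 700 MB
card_vram_mb = 1024      # a 1 GB card like the GTX 550 Ti / 560 Ti

def max_parallel_tasks(card_mb, task_mb):
    """How many tasks fit in VRAM without over-allocating it."""
    return card_mb // task_mb

print(max_parallel_tasks(card_vram_mb, per_task_vram_mb))  # 2
print(14 * per_task_vram_mb / 1024)  # ~4.8 GB needed for 14 tasks
```

so 3 tasks would already overflow a 1 GB card, and 14 tasks would need roughly 5 GB of VRAM.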
12) Message boards : Number crunching : Anyone else not getting work??? (Message 1279069)
Posted 726 days ago by Profile Sunny129
yes, it's entirely possible that switching drivers can affect the run times of GPU tasks. i don't know about the specific driver versions you speak of, but there might be a thread here that talks about driver versions and which ones work best for specific applications and/or GPU architectures. if you're talking about one of the latest driver releases, i don't know if there will be much info in the server database yet...but a search should dig up something.
13) Message boards : Number crunching : Anyone else not getting work??? (Message 1277718)
Posted 728 days ago by Profile Sunny129
After my shutdown and restart of the BOINC program I got a whole slew of Einstein's and some LHC's, but only 5 new SETI's: one 6.04 and four 6.09's. I haven't changed my settings since syncing them all up months ago, creating essentially 4 types: none, home, school and work. That way I could assign each machine to a category that best fit what it had and how long it takes to run things.

My primary workstation has dual CPUs with dual cores and hyperthreading for 8 CPU cores, 16 gigs of RAM, a GeForce GTS 250 (the first that runs OpenCL), and U320 15,000 RPM SCSI hard disks.

In the past I was doing around 4000-4400 credit average for just SETI. Since Aug 8th when I added the extra projects the number of SETI 6.03-6.09 WU's has dropped and dropped with the only things coming through being the Astropulse 6.01's. Yet I set the RESOURCE SHARE to 1400 on SETI to force supposedly 70% to SETI and 5% to everything else but it runs more like I've set SETI to 5%.

I know this is the 2nd time I've put this out here but I've been pretty much trouble free for a decade. If it means I have to drop these other projects I will in order to maintain my SETI ever increasing WU count.

James

as BillBG previously mentioned in the Bug in server affecting older BOINC clients with NVIDIA GPUs thread, BOINC is supposed to respect resource share in the long term, which means that its effects won't be immediately noticeable. as i also previously mentioned in that same thread, i used to have 90% of the CPU in one of my machines allocated to LHC@Home SixTrack (in hopes that on the rare occasion SixTrack WU's became available, my host would download a bunch and give them priority), and the remaining 10% allocated to a handful of other projects. it actually worked the exact opposite of how i expected - BOINC wouldn't even bother to download LHC@H work despite the massive 90% resource allocation to the project, and all other projects would continue downloading/crunching/uploading/reporting/invoking scheduler requests like they always do. i then decided to split the resources evenly between all projects, and all of a sudden i started getting LHC@H work!

perhaps you should try splitting resource share evenly between all projects on that host, instead of giving 70% to SETI and the remaining 30% to all other projects, and see what happens? it might not do anything for you, but it sure couldn't hurt to try...
14) Message boards : Number crunching : Panic Mode On (76) Server Problems? (Message 1277682)
Posted 728 days ago by Profile Sunny129
Got some astro's :) But a task from another project has about another 40 hours to go till it's finished :(

let me guess - LHC@Home Classic (SixTrack), right?
15) Message boards : Number crunching : Panic Mode On (76) Server Problems? (Message 1275814)
Posted 732 days ago by Profile Sunny129
What to do, what to do? :-)

go to your "community preferences" and set the maximum number of posts per page to something manageable on your machine.
16) Message boards : News : Bug in server affecting older BOINC clients with NVIDIA GPUs. (Message 1275374)
Posted 734 days ago by Profile Sunny129
Was I wrong in believing I had read that you could force BOINC through the resource share option to partition out the work in the percentage that you wanted?


James

well, that's the idea...but when was the last time BOINC actually did something you wanted it to? that function doesn't work correctly for me either. two of my machines crunch LHC@Home Classic along with a handful of other projects. i used to allocate 90% of the CPU to LHC@H and the remaining 10% to the other projects b/c LHC@H hardly ever sees new work...so i wanted to be sure that my machines would give priority to LHC@H work whenever it is available. so one would think that BOINC would prioritize LHC@H work over all other work whenever it's available, but does it? nope. instead, LHC@H doesn't even download new work on the rare occasion that it's available, and all the projects that appear to have a minimal CPU resource share continue to grab more work...

...interestingly enough, one day i decided to reallocate the CPU resources such that all projects have an even share...and what do you know, all of a sudden LHC@H starts getting work when it's available...go figure.
17) Message boards : Number crunching : Panic Mode On (76) Server Problems? (Message 1273874)
Posted 737 days ago by Profile Sunny129
So, how to get this new app for Nvidia OpenCL AstroPulse if you are running current Lunatics stuff on an anonymous platform?

Keith

Why would you want to change, you're already running r1316, Stock 6.04 is r1316 too.

Claggy

i'm not sure i understand what you're saying...the r1316 build didn't appear until 7/6/12, long after the most current Lunatics installer v0.40 was released (which included no OpenCL apps, let alone any apps on the r1316 build). if Keith is running the latest Lunatics installer, then he has an app_info.xml and is running anonymous platform. won't that prevent his host from automatically DLing OpenCL AP tasks for his nVidia GPU, even if APv6 is checked in his web preferences? doesn't this new OpenCL AP app have to be manually placed in the SETI@Home data directory, and isn't an entry required in the app_info.xml file in order to reference this new app and receive work for it?

Keith, see Raistmer's first post in THIS thread for a link to the Lunatics webpage that has the new OpenCL app available for download. the thread i linked you to also has a sample app_info.xml entry for nVidia GPUs in the 2nd post.
18) Message boards : Number crunching : Panic Mode On (76) Server Problems? (Message 1273789)
Posted 737 days ago by Profile Sunny129
Why? And yes I did receive some for the Nvidia gpu during the last server hiccup. Did those too on the gpu. Run times were posted in an earlier post.

b/c a number of us haven't seen any VLARs crunch on our GPUs since the "fix" was put in place. in the Bug in server affecting older BOINC clients with NVIDIA GPUs thread, someone was asking troubleshooting questions a few days ago. after solving his problem, he made mention of some VLARs, which briefly led Eric Korpela to think that the VLARs were broken again. but it was quickly pointed out that his VLARs occurred on August 11th, before the fixes were put into place. so it seemed like a false alarm, but then you mentioned having VLARs on your nVidia GPU, so now i'm wondering again...

to be clear, i'm not having problems with this phenomenon myself.
19) Message boards : Number crunching : Panic Mode On (76) Server Problems? (Message 1273782)
Posted 737 days ago by Profile Sunny129
Then you should define what "most" is/means
because my Nvidias from the GTX/GTS 460/450
on up have always been able to run them.

Not as fast as some might like, but work is work.

Unless one is into cherry picking. Don't know what
else it can be called. Flushing some work units that can
be crunched because they run slower than other work units
is cherry picking in my book.

If they cause errors or cause too great a lag
so that it interferes with what the PC is normally
used for, then one has a valid reason to abort them.

Just because they run "slow" is not one of them,IMHO.

to elaborate on what Mike is saying about "most" nVidia GPU hosts not receiving VLARs, i believe that the few nVidia GPU hosts out there that do occasionally (or regularly) receive VLARs are receiving them in error. you see, the server is coded to take angle range into account and prevent VLARs from going out to nVidia GPU hosts. i don't know if the error lies within the server or the handful of nVidia GPU hosts that accidentally receive VLARs.

*EDIT* - Bill, out of curiosity, when was the last time you saw a VLAR running on one of your nVidia GPUs?
20) Message boards : Number crunching : Panic Mode On (76) Server Problems? (Message 1273387)
Posted 738 days ago by Profile Sunny129
And it looks like we are starting to come back up....
Crickets showing some life again, just completed some uploads and reported.

yeah, an hour ago i had 100 or so tasks waiting to upload...they've all uploaded since then. however i'm still having problems reporting...in fact i probably have several hundred tasks waiting to report.

Have you tried <max_tasks_reported>100</max_tasks_reported> in your cc_config.xml file?

thanks...that got my reporting working again.
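for anyone else hitting the same reporting backlog: the <max_tasks_reported> option quoted above goes in the <options> section of cc_config.xml in the BOINC data directory; a minimal sketch (the 100 is just the value suggested in the tip):

```xml
<cc_config>
    <options>
        <!-- cap how many finished tasks are reported per scheduler request -->
        <max_tasks_reported>100</max_tasks_reported>
    </options>
</cc_config>
```

restart the client (or re-read the config file) for the change to take effect.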



Copyright © 2014 University of California