Nvidia GTX 750 Ti

Questions and Answers : GPU applications : Nvidia GTX 750 Ti
Profile Graeme J

Joined: 12 Apr 01
Posts: 15
Credit: 50,821,397
RAC: 6
Australia
Message 1501277 - Posted: 8 Apr 2014, 8:06:04 UTC

I had been running an ASUS GTX 550 Ti for 3 years and it decided to die. I now have a Palit GTX 750 Ti. I have an Intel i5. Do I need to do anything to maximize GPU performance?
Profile arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1501395 - Posted: 8 Apr 2014, 15:51:16 UTC

How many WUs were you running on the 550 Ti?

You should be able to run at least 2 at a time on the 750Ti.

Profile Graeme J

Joined: 12 Apr 01
Posts: 15
Credit: 50,821,397
RAC: 6
Australia
Message 1501500 - Posted: 8 Apr 2014, 21:00:30 UTC

How do you do that? I have just been running it as is; I haven't been doing anything special. I just put in the card and started crunching.
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1501525 - Posted: 8 Apr 2014, 21:41:14 UTC - in response to Message 1501500.  

Ok, first: how comfortable are you with modifying a program's configuration? If you aren't comfortable at all, then I would say stop here and just let the computer do its thing. If you are up to working through it, there are two options.

One is to download and run the Lunatics optimized applications for your computer. The other is to write a cc_config.xml by hand. The first requires you to know something about your system (i.e. 32- versus 64-bit) and which CUDA version you want (in your case you might want to stick with cuda42, but you can use cuda50 if you want; either works fine). Once it installs, your estimated times are going to look way out of proportion, but they will come down significantly after a few days. This will speed up your computer and get you better results. But to get more than one work unit on each GPU, you will need to open the app_info.xml that Lunatics installed with Notepad, scroll through it and change some things. I can go into more detail, but that would make this a really long post. If you don't want to do that, you can stop there and just let the Lunatics apps run.

For the second option, someone might post the code for a simple cc_config.xml that you could copy into Notepad. You would then need to save it as an .xml file and place it in the BOINC data directory (which tends to be hidden, so you may have to unhide it first to find the correct folder). After this, restart your computer and then run BOINC. If all goes well, two work units will start on your GPU; if not, only one (which means the cc_config was either the wrong file type or placed in the wrong directory).

Both of these methods require more than a basic understanding of computers. Do you want to try either of them? I didn't do either for a while (3 months); it took a friend walking me through most of it before I felt like I could try. The choice is yours.
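For readers who have never seen one: a minimal cc_config.xml might look like the sketch below. The <use_all_gpus> option shown is only an illustrative example of what such a file can contain (an assumption on my part, not necessarily the file Zalster had in mind); running two work units per GPU is actually configured through an app_config.xml like the one arkayn posts further down.

```xml
<cc_config>
    <options>
        <!-- Without this, BOINC only uses the most capable GPU in the box -->
        <use_all_gpus>1</use_all_gpus>
    </options>
</cc_config>
```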
Profile arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1501544 - Posted: 8 Apr 2014, 23:21:08 UTC - in response to Message 1501500.  
Last modified: 8 Apr 2014, 23:22:42 UTC

How do you do that? I have just been running it as is; I haven't been doing anything special. I just put in the card and started crunching.


You can get some info from this thread.
http://setiathome.berkeley.edu/forum_thread.php?id=72918#1421963
I have this running as my app_config.xml

<app_config>
   <app>
      <name>setiathome_v7</name>
      <gpu_versions>
         <gpu_usage>0.49</gpu_usage>
         <cpu_usage>0.04</cpu_usage>
      </gpu_versions>
   </app>
   <app>
      <name>astropulse_v6</name>
      <gpu_versions>
         <gpu_usage>0.51</gpu_usage>
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
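A note on why those fractions yield two tasks at once: BOINC starts GPU tasks as long as their combined gpu_usage fits within 1.0 GPU, so 0.49 lets two MB tasks share a card while 0.51 keeps AP tasks to one per card. A rough sketch of that arithmetic (`tasks_per_gpu` is a hypothetical helper for illustration, and the packing rule is my reading of the behaviour, not BOINC source code):

```python
import math

def tasks_per_gpu(gpu_usage: float) -> int:
    # BOINC starts GPU tasks as long as their combined gpu_usage
    # fits in 1.0 GPU, so the count per card is floor(1 / gpu_usage).
    return math.floor(1.0 / gpu_usage)

print(tasks_per_gpu(0.49))  # MB setting above -> 2 tasks per GPU
print(tasks_per_gpu(0.51))  # AP setting above -> 1 task per GPU
```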


Profile Graeme J

Joined: 12 Apr 01
Posts: 15
Credit: 50,821,397
RAC: 6
Australia
Message 1501628 - Posted: 9 Apr 2014, 8:07:13 UTC

Thank you, Arkayn and Zalster. The GPU is now running 2 WUs.
Hawkeye
Volunteer tester
Joined: 30 Apr 00
Posts: 22
Credit: 13,295,346
RAC: 0
United States
Message 1518571 - Posted: 20 May 2014, 4:16:50 UTC

I added a GTX 750 Ti to my existing computer and I am only seeing a ~5,000 RAC increase over what the CPU and ATI HD 5850 were putting out (~18,000).

My NAS has a 750 Ti in it as well, and in 2 days it's already producing 10k on MB7 alone (no CPU work being done, though some old WUs may still get validated by wingmen).

I have 2 computers with a 650 Ti in each of them that are around ~15,000 with AMD CPUs. I would have expected the 750 Ti to put out more than only 5k...

http://setiathome.berkeley.edu/show_host_detail.php?hostid=7243181 Computer in question
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1518578 - Posted: 20 May 2014, 4:58:10 UTC - in response to Message 1518571.  

You are running both an Nvidia and an ATI card on the same computer, right? I have no idea how that would influence the productivity of the 750 Ti. Some of the others run mixed GPUs, but I've never heard how their RACs are affected by sharing resources.
Hawkeye
Volunteer tester
Joined: 30 Apr 00
Posts: 22
Credit: 13,295,346
RAC: 0
United States
Message 1518595 - Posted: 20 May 2014, 5:48:46 UTC - in response to Message 1518578.  

I have CPU cores set aside for both cards via my app_info.xml. My CPU is overclocked to a reasonable 4.1 GHz, so I doubt there would be a bottleneck.
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1542003 - Posted: 15 Jul 2014, 12:07:20 UTC - in response to Message 1518595.  
Last modified: 15 Jul 2014, 12:27:13 UTC

I have been reading, and it may be that you need to set 2 CPUs per GPU instance to maximize production.

Certainly that is what the "OpenCL" txt file is saying.

So if you are running, say, 2 instances on each GPU, then you would need to devote 4 CPU cores to it. Something like this, I think.

app_config.xml
----------------

<app_config>
   <app>
      <name>astropulse_v6</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>2.0</cpu_usage>
      </gpu_versions>
   </app>
   <app>
      <name>setiathome_v7</name>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
         <cpu_usage>2.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

Don't get me wrong, empirical experimentation is king.

HTH,
Tom
A proud member of the OFA (Old Farts Association).
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1542012 - Posted: 15 Jul 2014, 12:32:18 UTC - in response to Message 1542003.  

Now there is a question. When we refer to using CPUs to feed GPUs: I read somewhere that you need to reduce the number of CPUs available to SETI in order to free up the CPUs.

So you go into your local SETI client, and if you have 8 CPUs/cores (including hyperthreading) available and you want to make 1 available to "feed" the GPU, you would change that from 100% of multiple CPUs to 0.875.

So I have been running a high feed number but not "freeing up" a CPU. Hmmm...

Tom
A proud member of the OFA (Old Farts Association).
Profile Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1542018 - Posted: 15 Jul 2014, 12:42:01 UTC - in response to Message 1542012.  
Last modified: 15 Jul 2014, 12:42:10 UTC

So you go into your local SETI client, and if you have 8 CPUs/cores (including hyperthreading) available and you want to make 1 available to "feed" the GPU, you would change that from 100% of multiple CPUs to 0.875.

No, you change "On multiprocessors, use at most X% of the processors" to any number between 87.5% and 99%. The value is an integer, so even setting 99% will mean BOINC uses 7 CPU cores, and leaves one free.
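Tom's 0.875 and the percentage preference describe the same calculation; as a quick sketch (the helper name and the floor rounding are my assumptions based on the behaviour described above, not BOINC internals):

```python
import math

def cores_left_for_boinc(total_cores: int, percent: int) -> int:
    # The preference is an integer percentage of all processors; the
    # resulting core count is truncated (floor rounding is an assumption
    # matching the description above; this is not BOINC source code).
    return max(1, math.floor(total_cores * percent / 100))

# On an 8-core host, any integer setting from 88% to 99% leaves one core free:
print(cores_left_for_boinc(8, 99))   # -> 7
print(cores_left_for_boinc(8, 100))  # -> 8
```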
Darrell Wilcox Project Donor
Volunteer tester

Joined: 11 Nov 99
Posts: 303
Credit: 180,954,940
RAC: 118
Vietnam
Message 1542824 - Posted: 16 Jul 2014, 23:54:11 UTC
Last modified: 17 Jul 2014, 0:00:06 UTC

GPUs do NOT need dedicated CPUs. Stock AP WUs will use ALL of a CPU and part of a GPU, but the Lunatics r2180 build uses much less when -sleep is used. (NOTE: I have not yet adjusted my app_config.xml to reflect the lower CPU needed.) Lunatics MB will use most of a GPU but only a little of a CPU.

Look here:

2 AP + 6 MB GPU WUs and 5 CPU WUs (from Rosetta), 89% CPU and 99% GPU currently (all 4 GTX 750 Ti cards are between 95-99% busy). Notice I am letting BOINC manage ALL the CPUs.

I hope that convinces some readers that at least the GTX750Ti cards running on an i7-4770K do NOT need dedicated CPUs.

[EDIT: to reflect app_config.xml still has old value for AP WUs]
OzzFan Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1542827 - Posted: 17 Jul 2014, 0:02:59 UTC - in response to Message 1542824.  

You're giving bad advice, Mr. Wilcox. It has been proven that freeing up a CPU core to feed the GPUs maximizes the overall performance of the GPUs.

So, depending on your definition of "needing", you're right, it isn't strictly required, but it is highly recommended to do so.
Darrell Wilcox Project Donor
Volunteer tester

Joined: 11 Nov 99
Posts: 303
Credit: 180,954,940
RAC: 118
Vietnam
Message 1542947 - Posted: 17 Jul 2014, 5:08:00 UTC - in response to Message 1542827.  

You're giving bad advice, Mr. Wilcox. It has been proven that freeing up a CPU core to feed the GPUs maximizes the overall performance of the GPUs.

So, depending on your definition of "needing", you're right, it isn't strictly required, but it is highly recommended to do so.


I don't believe I claimed to be maximizing the GPUs, although I do think I am coming pretty close. I give CPU time to other projects and GPU time to SETI. That maximizes MY wants for my machines. I encourage others to do the same, i.e., maximize their wants.

I fully agree with leaving CPU TIME available to feed GPUs. I was addressing the "a CPU core should be left free" statement that is repeated many times without adequate explanation. I have BOINC make time available by not over committing my resources, CPU and GPU. As it is, and as you can see from my screen capture, I am running 8 WUs for SETI and 5 for Rosetta while leaving 11% CPU time free for anything else that needs it, and getting 99% utilization on the GPU I was showing (typical values are 95-99%). This is how BOINC manages based on my app_config.xml parameters.

Instead of having a "free" CPU, I have BOINC use ALL the CPUs to accomplish the work, leaving none "free" so long as work is available. If there is no GPU work, then all CPUs can be busy with other work. Like you, I support Rosetta with my CPU time. Also note that IF a GPU WU comes in AND can start AND needs more CPU time than is available, BOINC will "bump" a CPU task into a "waiting" state and take its CPU time.

It has been proven that freeing up a CPU core to feed the GPUs maximizes overall performance of the GPUs.

Unfortunately, such a rule is too simple-minded to keep my four (4) GPUs busy when AP WUs come along. I have BOINC schedule what is needed, rather than a simple rule that only works for some simpler configurations. Look at the % busy of the graphics card to see it is working hard (i.e., it is getting plenty of CPU time to feed it).

Look back at Arkayn's post of his app_config. It says to use 1 CPU for each AP WU, and 0.04 CPU for each V7 MB WU. I will bet he does not just leave CPUs "free" since he is also supporting Rosetta.

[OVER-THE-TOP WARNING]
To maximize the GPUs, one should strip everything out of the O/S that is not needed to run them. Do not run anything else (e.g., SETI on the CPUs). Use the lowest resolution on your monitor. Don't peek at what the machine is doing any more than the minimum. Run optimized applications (I use Lunatics'). Get the fastest RAM possible, the fastest CPU with the biggest cache, SSDs, .... Overclock the cards. One can go to extremes if one wants to (Dang! There it is again! What one wants!)
[/OVER-THE-TOP]

Seriously, though, if someone is going to the trouble to try to get more out of their system(s) than the stock apps get, then they need good, correct information, and will have to spend some time learning how things work to make the hardware work its hardest. Getting simple rules is OK provided the advice is given as being a simple rule, and not one that will magically make their system get maximum work out. If one is not willing to learn, then ... use the simple rule and get less than maximum.

Mr. OzzFan, I respect your opinion but I rely on facts and data to tune my systems. I have used my CPUs to support SETI for nearly 15 years, but only got my first GPU (a GT-610 castaway from my son) a year ago. Since then, I have gotten quite interested in tuning and optimizing my additional GPUs and CPUs. It is a lot of fun.
OzzFan Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1543017 - Posted: 17 Jul 2014, 10:07:21 UTC - in response to Message 1542947.  

I don't believe I claimed to be maximizing the GPUs, although I do think I am coming pretty close. I give CPU time to other projects and GPU time to SETI. That maximizes MY wants for my machines. I encourage others to do the same, i.e., maximize their wants.

I fully agree with leaving CPU TIME available to feed GPUs. I was addressing the "a CPU core should be left free" statement that is repeated many times without adequate explanation. I have BOINC make time available by not over committing my resources, CPU and GPU. As it is, and as you can see from my screen capture, I am running 8 WUs for SETI and 5 for Rosetta while leaving 11% CPU time free for anything else that needs it, and getting 99% utilization on the GPU I was showing (typical values are 95-99%). This is how BOINC manages based on my app_config.xml parameters.

Instead of having a "free" CPU, I have BOINC use ALL the CPUs to accomplish the work, leaving none "free" so long as work is available. If there is no GPU work, then all CPUs can be busy with other work. Like you, I support Rosetta with my CPU time. Also note that IF a GPU WU comes in AND can start AND needs more CPU time than is available, BOINC will "bump" a CPU task into a "waiting" state and take its CPU time.


If the CPU is busy crunching a workunit, the OS must wait for a context switch (save, load, restore) before it can address the needs of the GPU. Leaving one core free to feed the GPU instead of crunching increases the productivity of the GPU, which is far more efficient than the CPU at crunching.

It has been proven that freeing up a CPU core to feed the GPUs maximizes overall performance of the GPUs.

Unfortunately, such a rule is too simple-minded to keep my four (4) GPUs busy when AP WUs come along. I have BOINC schedule what is needed, rather than a simple rule that only works for some simpler configurations. Look at the % busy of the graphics card to see it is working hard (i.e., it is getting plenty of CPU time to feed it).


There's far more to the situation than looking at CPU and GPU loads. As I explained above, you are slowing down your system due to context switches.

Look back at Arkayn's post of his app_config. It says to use 1 CPU for each AP WU, and 0.04 CPU for each V7 MB WU. I will bet he does not just leave CPUs "free" since he is also supporting Rosetta.


You should ask him. I bet he does leave CPUs free, as I do.

[OVER-THE-TOP WARNING]
To maximize the GPUs, one should strip everything out of the O/S that is not needed to run them. Do not run anything else (e.g., SETI on the CPUs). Use the lowest resolution on your monitor. Don't peek at what the machine is doing any more than the minimum. Run optimized applications (I use Lunatics'). Get the fastest RAM possible, the fastest CPU with the biggest cache, SSDs, .... Overclock the cards. One can go to extremes if one wants to (Dang! There it is again! What one wants!)
[/OVER-THE-TOP]


Yes, because nothing makes a case like an over-exaggeration. /s

Getting simple rules is OK provided the advice is given as being a simple rule, and not one that will magically make their system get maximum work out. If one is not willing to learn, then ... use the simple rule and get less than maximum.

Mr. OzzFan, I respect your opinion but I rely on facts and data to tune my systems. I have used my CPUs to support SETI for nearly 15 years, but only got my first GPU (a GT-610 castaway from my son) a year ago. Since then, I have gotten quite interested in tuning and optimizing my additional GPUs and CPUs. It is a lot of fun.


I assure you, my advice is more than opinion and is based in fact. If you truly relied on facts, instead of just arguing with people and giving them the wrong advice, you would actually try it, measure the results and see for yourself. If you've already done that, I strongly encourage you to share your results with those in the Number Crunching forum as they are every bit the tweakers you are, and they are the ones that also suggest leaving a CPU free to feed the GPU.

The fact is more than just a simple rule. It is sound advice based upon factual data. Try looking it up sometime and you can see for yourself.
Profile Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1543088 - Posted: 17 Jul 2014, 14:06:57 UTC - in response to Message 1542947.  
Last modified: 17 Jul 2014, 14:12:02 UTC

I see that you were running Einstein as well. At Einstein it has been proven that leaving one CPU core free per GPU speeds up the GPU calculations enormously and your system shows that perfectly:

- your system, Intel Core i7-4770K CPU @ 3.50GHz, number of processors 8, coprocessors [4] NVIDIA GeForce GTX 750 Ti (2048MB) driver: 33523, INTEL Intel(R) HD Graphics 4600 (1297MB)
- another person's computer, Intel Core i5-2400 CPU @ 3.10GHz, number of processors 4, coprocessors NVIDIA GeForce GTX 550 Ti (1024MB) driver: 33523

Your system shows that you ran one BRP5-cuda32-nv301 task on the GTX 750 Ti in 15,768.14 seconds. Now compare that to any of the same tasks that your competitor ran on a GTX 550 Ti: on average around 10,300 seconds. That's about 5,500 seconds faster on a slower GPU.

That's what leaving CPU cores free to feed the GPU does.
With the OpenCL applications there it's even more striking, as those show a speed-up of 10 times when leaving a core free. So then tasks run in 3,000 seconds instead of 30,000.
Profile Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1543797 - Posted: 18 Jul 2014, 11:28:28 UTC - in response to Message 1543088.  

In the meantime, the wingman has brought the task below home as well.
Your system shows that you ran one BRP5-cuda32-nv301 task on the GTX 750 Ti in 15,768.14 seconds.

443819786 11352749 3 Jul 2014 15:26:39 UTC 4 Jul 2014 5:30:25 UTC   Completed and validated 15,768.14 3,596.65 42.61 3,333.00 	Binary Radio Pulsar Search (Perseus Arm Survey) v1.39 (BRP5-cuda32-nv301)
445987799 4555772 17 Jul 2014 10:08:53 UTC 18 Jul 2014 10:52:03 UTC Completed and validated 10,289.42 2,455.14 20.00 3,333.00 	Binary Radio Pulsar Search (Perseus Arm Survey) v1.39 (BRP5-cuda32-nv301)

task 443819786 wrote:
Activated exception handling...
[07:26:11][3984][INFO ] Starting data processing...
[07:26:11][3984][INFO ] CUDA global memory status (initial GPU state, including context):
------> Used in total: 101 MB (1948 MB free / 2049 MB total) -> Used by this application (assuming a single GPU task): 0 MB
[07:26:11][3984][INFO ] Using CUDA device #2 "GeForce GTX 750 Ti" (0 CUDA cores / 0.00 GFLOPS)
[07:26:11][3984][INFO ] Version of installed CUDA driver: 6000
[07:26:11][3984][INFO ] Version of CUDA driver API used: 3020

task 445987799 wrote:
[22:27:17][8460][INFO ] Starting data processing...
[22:27:17][8460][INFO ] CUDA global memory status (initial GPU state, including context):
------> Used in total: 263 MB (762 MB free / 1025 MB total) -> Used by this application (assuming a single GPU task): 0 MB
[22:27:17][8460][INFO ] Using CUDA device #0 "GeForce GTX 550 Ti" (192 CUDA cores / 729.60 GFLOPS)
[22:27:17][8460][INFO ] Version of installed CUDA driver: 6000
[22:27:17][8460][INFO ] Version of CUDA driver API used: 3020

Darrell Wilcox Project Donor
Volunteer tester

Joined: 11 Nov 99
Posts: 303
Credit: 180,954,940
RAC: 118
Vietnam
Message 1544108 - Posted: 19 Jul 2014, 2:46:13 UTC - in response to Message 1543797.  

To Ageless:

These are somewhat interesting one-off results. I have not been optimizing Einstein on the machine with the GTX 750 Ti cards because I rarely run it there. I probably accidentally resumed it, but then suspended it again. The other result is from a different machine, also not yet optimized for ANY project.

My Einstein app_config for this first machine is:

<app>
   <name>einsteinbinary_BRP5</name>
   <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.25</cpu_usage>
   </gpu_versions>
</app>

Perhaps the CPU value is too low. Perhaps running another WU alongside this one is too much for the GPU. I will know more when I do the optimization.

I suggest caution in trying to prove a position is correct based on only one or two samples, especially without knowing the conditions or configurations under which they were made.

By the way, my Einstein app_config for the second machine tells BOINC to allocate 0.5 CPU for those WUs:

<app>
   <name>einsteinbinary_BRP5</name>
   <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
   </gpu_versions>
</app>

NB: I run TWO WUs at a time on that single GPU, mostly SETI. Perhaps that is what caused the extended ELAPSED time, though it doesn't affect the CPU time. Perhaps it was the nature of the WU. Or perhaps it was something else.

I do not argue against having CPU time available to feed GPUs: I argue that it is less than optimal to dedicate a CPU just to feed GPUs. On my systems, ALL CPUs and ALL GPUs are given to BOINC to manage and schedule based on my app_config files. When my parameters reflect the actual usage of the WUs (i.e., optimized), it does a splendid job of keeping a balance.

And now to ask a favor of you: please explain (by private message or here) how to make a link to another page with text of your own choosing, as you did with your "your system" and "another person's computer" links.

Thanks in advance for your consideration and interest in helping.
Profile Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1544205 - Posted: 19 Jul 2014, 7:15:04 UTC - in response to Message 1544108.  

And now to ask a favor of you: please explain (private message or here) how to get a link to another page with text of your own choosing as you did with your "Your system" and "another person's computer,".

[url=http://einstein.phys.uwm.edu/show_host_detail.php?hostid=11352749]your system[/url]
[url=http://einstein.phys.uwm.edu/show_host_detail.php?hostid=4555772]another person's computer[/url]
[quote=task 443819786]text[/quote]

©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.