V8 CUDA for Linux?

Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1775790 - Posted: 2 Apr 2016, 18:59:46 UTC - in response to Message 1775787.  

Everything is working fine except the GeForce GTX 560, perhaps because it's factory overclocked.


Yeah, some factory-overclocked 560s and 560 Tis require a small core voltage bump for stability. Not sure if that's possible under Linux, short of modifying the firmware.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1775790 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1775795 - Posted: 2 Apr 2016, 19:24:20 UTC - in response to Message 1775787.  

If you wish to use the 560 for SETI work, you might consider swapping it with the 460 currently in the XP machine. You should be able to find a tool in XP that will let you change the settings so the 560 produces valid work. Judging by the 460's tasks, it is only slightly above the posted reference clock rate and should work fine in Linux.

The other cards in Linux appear to be running slower than they could. You might be able to help them by adding a setting to the cc_config.xml file described here:
http://boinc.berkeley.edu/wiki/Client_configuration
You would want to add the line <no_priority_change>1</no_priority_change>. This runs all the apps at nice level zero, including the CPU tasks. The cc_config.xml file would look similar to:
<cc_config>
  <log_flags>
  </log_flags>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <no_priority_change>1</no_priority_change>
  </options>
</cc_config>
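The file goes in the BOINC data directory (often /var/lib/boinc-client for a repository install, though that's an assumption about your setup). If BOINC is already running, it can be re-read without a restart:

boinccmd --read_cc_config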

That should speed up the GPUs a little.
ID: 1775795 · Report as offensive
Profile Francesco Forti
Avatar

Send message
Joined: 24 May 00
Posts: 334
Credit: 204,421,005
RAC: 15
Switzerland
Message 1775918 - Posted: 3 Apr 2016, 8:30:54 UTC - in response to Message 1775795.  

Great advice, but my son is coming back soon, so I don't want to change the hardware in his PC.

I see that I am now getting a greater number of valid results (18 at the moment); they are only a few seconds of work each, but give good credit. Using NVIDIA X Server Settings I see that the GTX 560 is about 10-20% busy most of the time (roughly 95% of it) and sometimes reaches 50% or more. Once I saw that as the GPU reached 80%, the task ended.
At the moment I see 31 valid results from the 8 CPUs and 18 from the 1 GPU, so 36% of the credit on this host comes from the GPU. It should be better, but in two weeks I'm buying new hardware, and I won't choose an overclocked card!

About priority, for me it's better to run SETI@home at low priority.
ID: 1775918 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1776006 - Posted: 3 Apr 2016, 19:13:15 UTC - in response to Message 1775918.  
Last modified: 3 Apr 2016, 19:26:15 UTC

About priority, for me it's better to run SETI@home at low priority.

OK. I was just trying to determine why there is such a difference between my two old cards and yours. My two cards were running with the no_priority_change setting here: http://setiweb.ssl.berkeley.edu/beta/results.php?hostid=72013&offset=160
I was also trying the -poll command, which is why some of them show such high CPU usage.

Looking a little closer at the GTX 560, I'm thinking this has to be a gamer setting rather than a factory setting. The EVGA GeForce GTX 560 Superclocked only went to 1700 MHz, http://www.evga.com/articles/00630/, quite a bit lower than 1850. Also, according to nVidia the range is 1620 to 1900, http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-560/specifications. So it's unlikely a vendor would sell a card that's nearly pegged on the overclock scale, with an overclock almost three times as large as the Superclocked Edition's.
That means you should be able to downclock it with the tools on the machine.
Did you try the settings in the second guide? It's a bit different from the first one: http://www.phoronix.com/scan.php?px=MTY1OTM&page=news_item

ID: 1776006 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1776037 - Posted: 3 Apr 2016, 22:08:19 UTC

BTW, while looking at my inconclusive results I ran across another 560 running at a clock rate of 1850 MHz in Windows. He is running stock apps and his results are basically the same: the only tasks validating are the real overflows, and everything else is trash. He has already completed the 10-task requirement, so he is receiving the normal credit for his valid overflows... which isn't much:

State: All (1283) · In progress (47) · Validation pending (488) · Validation inconclusive (523) · Valid (66) · Invalid (159) · Error (0)

http://setiathome.berkeley.edu/results.php?hostid=7969746
ID: 1776037 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1776079 - Posted: 4 Apr 2016, 2:04:07 UTC - in response to Message 1776037.  

Yes, it's starting to look (to me) like, now that the different supported devices are generally in much closer agreement than they were before v8, the individual problem hosts stand out more clearly than they used to.

There are different options to improve this, both app-side and server-side. Increasing server-side complexity to deal with them would probably be an unreasonable demand on project/BOINC resources.

What I'll probably do (already planned) is design in some very low-cost sanity checks, so that the app can do a temporary exit when it's uncertain (eventually errored out by the client if that's repeated).

The trick will probably be to choose a delay such that genuine glitches (such as cosmic-ray-induced bit flips) get through, while hard faults slow the rate at which the host burns through work. More balancing acts.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1776079 · Report as offensive
Profile Francesco Forti
Avatar

Send message
Joined: 24 May 00
Posts: 334
Credit: 204,421,005
RAC: 15
Switzerland
Message 1776200 - Posted: 4 Apr 2016, 16:07:37 UTC - in response to Message 1776006.  

Of course, I need to know exactly how to reset the GTX 560 to its default (non-OC) parameters.
ID: 1776200 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1776239 - Posted: 4 Apr 2016, 20:15:24 UTC - in response to Message 1776200.  

Of course, I need to know exactly how to reset the GTX 560 to its default (non-OC) parameters.


This should activate the extra controls you need in the nvidia-settings application (though I've only used the fan control with my 680):
http://www.phoronix.com/scan.php?px=MTY1OTM&page=news_item
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1776239 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1776248 - Posted: 4 Apr 2016, 21:02:53 UTC - in response to Message 1776239.  

I haven't had any experience with it myself. Since I currently don't have a recent nVidia card in Linux, all I can do is make assumptions based on what I've read.
The first assumption is that the settings should revert to defaults on every reboot. Since that doesn't appear to be happening, I would guess a startup option has been added, such as in this post:
https://www.phoronix.com/forums/forum/linux-graphics-x-org-drivers/nvidia-linux/42368-nvidia-releases-337-linux-driver-with-overclocking-better-egl?p=531219#post531219
Clock frequency settings are not saved/restored automatically by default to avoid potential stability and other problems that may be encountered if the chosen frequency settings differ from the defaults qualified by the manufacturer. You can use the command line below in '~/.xinitrc' to automatically apply custom clock frequency settings when the X server is started:

# nvidia-settings -a GPUOverclockingState=1 -a GPU2DClockFreqs=<GPU>,<MEM> -a GPU3DClockFreqs=<GPU>,<MEM>

Here '<GPU>' and '<MEM>' are the desired GPU and video memory frequencies (in MHz), respectively.

I suppose if such an option has been added then you will just have to reset the clocks after each reboot...purely speculation.
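Purely as an illustration of how that would be used (the syntax comes from the quote above; the numbers are placeholders, not a recommendation), a single line like this in ~/.xinitrc would reapply the clocks every time X starts. Substitute the reference GPU and memory clocks for your own card:

nvidia-settings -a GPUOverclockingState=1 -a GPU3DClockFreqs=810,2004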

To edit xorg.conf you have to be root. To avoid the terminal as much as possible, I would open a terminal and enter: gksu nautilus
This will open the file browser as root. Then go to Computer/etc/X11 and open xorg.conf with gedit. Check whether Option "Coolbits" "8" has been added in the Device section; note that yours must be 8, not 1.
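The Device section would look something like this (just a sketch; keep the Identifier, BusID and any other lines your xorg.conf already has, and only add the Coolbits option):

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option         "Coolbits" "8"
EndSection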


If the Option isn't there, add it. Then close everything out and reboot. The next time you open nVidia Settings the OC section should be there.
Good Luck ;-)
ID: 1776248 · Report as offensive
Juha
Volunteer tester

Send message
Joined: 7 Mar 04
Posts: 388
Credit: 1,857,738
RAC: 0
Finland
Message 1776705 - Posted: 6 Apr 2016, 20:06:08 UTC - in response to Message 1773563.  

Yes, it's a mystery to me why the driver version is not detected/reported by the client on Linux.


Seems to have been fixed sometime during the 7.4 series.


Good to know at least newer clients can use familiar scheduler logic.


Addendum: It was first fixed in v7.3.16. And then yesterday it was fixed some more.
ID: 1776705 · Report as offensive
Profile Francesco Forti
Avatar

Send message
Joined: 24 May 00
Posts: 334
Credit: 204,421,005
RAC: 15
Switzerland
Message 1778298 - Posted: 12 Apr 2016, 8:39:31 UTC

I'm going to replace the host with the overclocked 560.

Now I'm going to build a system based on an Intel® Core™ i7-6700 processor and a GTX 750 Ti.

I see that the CPU includes a GPU: Intel® HD Graphics 530.
Can this GPU work with CUDA on Linux Mint (SETIv8_Linux_CUDA42.7z) like a normal GPU?

Of course, if I don't need the 750 GPU, I save some money and can install some water cooling.

Any idea?
ID: 1778298 · Report as offensive
Profile Francesco Forti
Avatar

Send message
Joined: 24 May 00
Posts: 334
Credit: 204,421,005
RAC: 15
Switzerland
Message 1778299 - Posted: 12 Apr 2016, 8:42:46 UTC

Question:

How can I tell, under Linux, whether I can run more than one CUDA task at a time on these GPUs?

NVIDIA GeForce GTX 260
NVIDIA GeForce GT 640
NVIDIA GeForce GTX 750
NVIDIA GeForce GTS 250

And if so, which setting do I need, and where do I put it?

Thanks
ID: 1778299 · Report as offensive
W3Perl Project Donor
Volunteer tester

Send message
Joined: 29 Apr 99
Posts: 251
Credit: 3,696,783,867
RAC: 12,606
France
Message 1778314 - Posted: 12 Apr 2016, 11:56:32 UTC - in response to Message 1778299.  

From Videocard Benchmarks :

NVIDIA GeForce GTX 260 : 1120
NVIDIA GeForce GT 640 : 1300
NVIDIA GeForce GTX 750 : 3240
NVIDIA GeForce GTS 250 : 900
Intel HD 530 : 960

So the Intel GPU is about as slow as your GTS 250 card but will require more cooling.
I also have an i7-6700K, but its Intel GPU is not activated (AFAIK it requires a recent Linux kernel, > 4.4, which will be available in the forthcoming distro release).

You should use cuda42 for pre-Fermi cards; it should also work on current graphics cards.
ID: 1778314 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1778406 - Posted: 12 Apr 2016, 18:30:38 UTC - in response to Message 1778298.  
Last modified: 12 Apr 2016, 18:35:07 UTC

I'm going to replace the host with the overclocked 560.
Now I'm going to build a system based on an Intel® Core™ i7-6700 processor and a GTX 750 Ti.

I see that the CPU includes a GPU: Intel® HD Graphics 530.
Can this GPU work with CUDA on Linux Mint (SETIv8_Linux_CUDA42.7z) like a normal GPU?

Of course, if I don't need the 750 GPU, I save some money and can install some water cooling.

Any idea?

The 560 is a fast card, faster than the 750 Ti; it just uses almost three times the power. It would probably work fine if you could downclock it to around 1700 MHz. Over/underclocking works using the Coolbits setting; I've tried it on my GTX 750 Ti. If you can't get it to work, my only guess would be that someone edited the ROM. There are ROMs available, even one that will clock your 560 to 1850, http://www.techpowerup.com/vgabios/110700/asus-gtx560-1024-110801 (it's not an Asus, is it?). Just find a ROM that will work with your card and flash it back to being just Superclocked: http://www.techpowerup.com/vgabios/?architecture=NVIDIA&manufacturer=&model=GTX+560&interface=&memType=&memSize= Beware, flashing the ROM can be tricky; perhaps your son could help.

Intel® HD Graphics is usually much slower than a real video card, usually slows down the CPUs when running, and usually produces large numbers of inconclusive results. It also only works with OpenCL, not CUDA.

The pre-Fermi cards don't gain any advantage from running multiple tasks. I've been watching this host for about a month: https://setiathome.berkeley.edu/results.php?hostid=5940343&offset=160 When he started, he was running a shorty in about 650 seconds and a 0.42 in about 1000. Then he decided to run 3 at a time, and now the 0.42s are taking over 3000 seconds to finish. The only gain is that we now know the 275 can successfully run 3 tasks at a time; it isn't any faster though, if anything it seems slower. The other ways to produce more work on a pre-Fermi would be the priority setting already mentioned, and the -poll command line. Using the poll setting will increase the GPU load to around 95% and use a full CPU. I've tried the poll setting on two of my cards:
GTX 750Ti with CUDA 42 - http://setiweb.ssl.berkeley.edu/beta/results.php?hostid=72013
GTX 950 with CUDA 65 on a Mac - http://setiweb.ssl.berkeley.edu/beta/results.php?hostid=63959
It seems to speed up the newer cards a little more than the pre-Fermi cards, and you should only run ONE task at a time, since it pushes the GPU load to near max with just one task.
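If you do want to try more than one task at a time on a Fermi-or-newer card, the usual way is an app_config.xml in the project folder (projects/setiathome.berkeley.edu). A sketch, assuming the Multibeam app name in your client_state.xml is setiathome_v8 (check yours, that name is the part most likely to differ):

<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.25</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

With gpu_usage at 0.5 the client schedules two tasks per GPU; have BOINC re-read its config files, or restart it, for the change to take effect.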
ID: 1778406 · Report as offensive
Profile Francesco Forti
Avatar

Send message
Joined: 24 May 00
Posts: 334
Credit: 204,421,005
RAC: 15
Switzerland
Message 1779030 - Posted: 14 Apr 2016, 13:54:15 UTC

By the way, what happens to the SETI tasks if I move my SSD with Linux Mint from the current host to the new one?
ID: 1779030 · Report as offensive
Profile William
Volunteer tester
Avatar

Send message
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1779263 - Posted: 15 Apr 2016, 9:03:11 UTC

When BOINC starts up, it reads the host's stats. Unless you have tasks on board for a device type not present in the new rig, it carries straight on. You keep the old APR values, so estimated run times may take a long while to settle to their actual new values, which messes with how much work you actually cache.

You may want to check that the new rig is getting tasks for all the device types it has (e.g. if you now have an Intel GPU that the previous machine didn't).
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1779263 · Report as offensive
Profile Francesco Forti
Avatar

Send message
Joined: 24 May 00
Posts: 334
Credit: 204,421,005
RAC: 15
Switzerland
Message 1779324 - Posted: 15 Apr 2016, 16:04:04 UTC

Hi, problems again.

The new host is ready: ID: 7978786

BOINC from the repository. 8 CPUs crunching.

GTX 750 Ti installed, with NVIDIA driver 352.63.

As usual I get "No usable GPUs found".

So I looked for nvidia-modprobe, but nothing... "not found".

Usually I need it if the driver is newer than 304.

But in this new installation I don't see any 304 driver to install,
and no nvidia-modprobe.

I'm stuck.
ID: 1779324 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1779335 - Posted: 15 Apr 2016, 16:46:57 UTC - in response to Message 1779324.  

Have you tried installing it from the Terminal?
sudo apt-get install nvidia-modprobe

Personally I'd recommend removing the Repository driver and installing 361.42 from nVidia. It seems to be working the best with my 750Ti.
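Either way, once a driver is in place, a quick sanity check from the terminal is nvidia-smi (it normally installs alongside the driver):

nvidia-smi

If it lists the 750 Ti and the driver version, the card is at least visible to the system before you restart BOINC.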
ID: 1779335 · Report as offensive
Profile Francesco Forti
Avatar

Send message
Joined: 24 May 00
Posts: 334
Credit: 204,421,005
RAC: 15
Switzerland
Message 1779357 - Posted: 15 Apr 2016, 18:30:11 UTC - in response to Message 1779335.  
Last modified: 15 Apr 2016, 18:30:44 UTC

1) Yes, I tried that before writing here: the nvidia-modprobe package can't be found.
2) How do I install 361.42?
ID: 1779357 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1779366 - Posted: 15 Apr 2016, 19:17:52 UTC - in response to Message 1779357.  
Last modified: 15 Apr 2016, 19:31:36 UTC

Download the driver, http://www.nvidia.com/Download/driverResults.aspx/101423/en-us,
then move it to your home folder and make sure the execute permission is set.
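One way to set it from a terminal (the filename matches the 361.42 download above; adjust it if yours differs):

chmod +x NVIDIA-Linux-x86_64-361.42.run
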
Close all programs, then hit Ctrl+Alt+F1 to switch to the console and log in.

Stop the XServer;
sudo service lightdm stop

Remove the Repository driver;
sudo apt-get remove --purge nvidia*

Remove the Leftovers;
sudo apt-get autoremove

Install the Driver;
sudo ./NVIDIA-Linux-x86_64-361.42.run

Reboot;
sudo reboot

Just watch out for any errors; sometimes files left over from other installs will cause one.
If you get an error, uninstall the driver:
sudo ./NVIDIA-Linux-x86_64-361.42.run --uninstall
then run the installer again.

BTW, Option "Coolbits" "8" also works well with driver 361.
ID: 1779366 · Report as offensive