Titan V and GTX1060s

Message boards : Number crunching : Titan V and GTX1060s

TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1967933 - Posted: 30 Nov 2018, 3:07:02 UTC - in response to Message 1967932.  
Last modified: 30 Nov 2018, 3:17:00 UTC

I've got a better idea: stay away from any new release that refuses to load the vendor's driver. Eventually they will fix it. One year will be up in April; it usually works after about a year or so.

Look how long it took the AMD drivers to work in 16.04, about two years wasn't it?

Hmmm, I think I'll try the CUDA 10 Toolkit for 18.04 and see how that works, just for S&Gs.
ID: 1967933 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20252
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1967935 - Posted: 30 Nov 2018, 3:14:59 UTC - in response to Message 1967933.  
Last modified: 30 Nov 2018, 3:50:43 UTC

I jumped into AMD GPUs about a year ago after the Linus scorching that set the present amdgpu work in motion.

nVidia meanwhile are still doing things 'their way'...


I haven't looked at nVidia since, but the usual things to check are kernel version compatibility, whether the CUDA toolkit is in place, and whatever other 'quirks'.

You also have the new joker of whatever it is that systemd expects/requires to be in place to play...

Hopefully some pointers there to search on.

And... Take a look at what the ppa does and where?


Good luck.

Happy super fast crunchin'!
Martin

PS: Also check that your user (or the boinc user) has write access to the /dev/nvidia... device nodes.
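Martin's PS can be checked with a quick sketch like the one below. The device-node names listed are the typical ones, not an exhaustive set, and vary with the driver version:

```shell
# Sketch: check that the NVIDIA device nodes exist and what their
# permissions are. The node names are typical, not exhaustive.
for dev in /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm; do
    if [ -e "$dev" ]; then
        ls -l "$dev"            # who owns the node and its mode bits
    else
        echo "missing: $dev"    # driver module not loaded, or node not created
    fi
done
```

If the nodes exist but your BOINC user can't write to them, adding that user to the group that owns the nodes (often `video`) is the usual fix.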
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1967935 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1967945 - Posted: 30 Nov 2018, 3:43:56 UTC - in response to Message 1967925.  

So, why did 18.04 remove the option to install gksu anyway? Seems to be just another move to annoy the user to me.

Yes, that was an annoying development for me also. Something to do with the GNOME desktop manager. There is a workaround:
gedit admin:///etc/default/apport
If you preface the file path in the gedit call with
admin:///
you are able to edit the file as root.
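For reference, that workaround, plus a terminal-only alternative (`sudoedit`, which is standard on Ubuntu), can be sketched as below. The commands are echoed rather than executed, so nothing is modified until you drop the `echo`:

```shell
# Two ways to edit a root-owned file on 18.04 now that gksu is gone.
# Echoed for safety; remove the leading 'echo' to actually run them.
echo gedit admin:///etc/default/apport    # GNOME's GVfs admin backend
echo sudoedit /etc/default/apport         # terminal alternative, honors EDITOR
```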
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1967945 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1967946 - Posted: 30 Nov 2018, 3:46:15 UTC - in response to Message 1967933.  

I've got a better idea: stay away from any new release that refuses to load the vendor's driver. Eventually they will fix it. One year will be up in April; it usually works after about a year or so.

Look how long it took the AMD drivers to work in 16.04, about two years wasn't it?

Hmmm, I think I'll try the CUDA 10 Toolkit for 18.04 and see how that works, just for S&Gs.

Just remember that the CUDA toolkit installs its own graphics driver. From multiple posts on the topic, it appears you can't mix and match the toolkit's bundled driver with a standalone driver.
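If you do want the toolkit without its bundled driver, the runfile installers of that era accepted flags to install only the toolkit. The filename and flags below are illustrative, so confirm against `sh <runfile> --help` for your actual download; the command is echoed rather than run:

```shell
# Sketch: toolkit-only install from the .run file, skipping the bundled
# driver so it cannot clobber a separately installed standalone driver.
# Echoed for safety; remove 'echo' (and verify the flags) to run for real.
echo sudo sh cuda_10.0.130_410.48_linux.run --silent --toolkit
```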
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1967946 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1967948 - Posted: 30 Nov 2018, 3:51:10 UTC

I got the official Nvidia .run installer to install the drivers properly in Ubuntu 18.04, AS LONG as I had purged all remnants of the Nvidia PPA drivers AND done an autoremove. I also had to start from the Nouveau drivers. But they did install eventually. Too much work for me, since the PPA drivers have always worked for me. The missing OpenCL driver on some versions is easily handled with:
sudo apt-get install ocl-icd-libopencl1
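The purge-and-clean sequence described above can be sketched as follows. The commands are echoed so you can review them first; the `nvidia-*` pattern is broad, and apt will list what it matches before acting:

```shell
# Sketch of the PPA-driver cleanup before running the .run installer.
# Echoed for safety; drop the 'echo' prefixes to execute with sudo.
echo sudo apt-get purge 'nvidia-*'              # remove PPA driver remnants
echo sudo apt-get autoremove                    # sweep orphaned packages
echo sudo apt-get install ocl-icd-libopencl1    # restore the OpenCL ICD loader
```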

Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1967948 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1967952 - Posted: 30 Nov 2018, 3:56:40 UTC - in response to Message 1967933.  
Last modified: 30 Nov 2018, 4:04:27 UTC

...I think I'll try the CUDA 10 Toolkit for 18.04 and see how that works, just for S&Gs.
Same problem. All the nVidia software from the toolkit says the driver isn't loaded. Additional Drivers says a manually installed driver is in use. BOINC says no usable GPUs were found. Hmmm, perhaps that's why I seem to be the only one who can get a mining board to work correctly in Ubuntu. I think I'm the only one using the nVidia driver from nVidia in 16.04. It works nicely with 11 GPUs in 16.04.

Oh, I never installed the repository driver on this system. I went straight from the nouveau driver to the downloaded drivers, installed from Recovery Mode. I didn't have any trouble enabling networking in Recovery Mode; it took about 3 seconds, and there were no problems installing the drivers there. I took a hint from Lubuntu: I ran autoremove in Lubuntu once and it seemed to remove half the OS... I had to reinstall after that one.

I think I'm going to give up for now, and keep using 16.04 for a while.
ID: 1967952 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1967958 - Posted: 30 Nov 2018, 4:16:53 UTC - in response to Message 1967952.  

I think I'm going to give up for now, and keep using 16.04 for a while.

Whatever works for you and causes the least amount of drama?
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1967958 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1967962 - Posted: 30 Nov 2018, 4:30:00 UTC - in response to Message 1967958.  

Yep, need to stay away from the drama. Else you might end up like these people: https://devtalk.nvidia.com/default/topic/1042520/driver/-when-will-the-nvidia-web-drivers-be-released-for-macos-mojave-10-14-/post/5293903/#5293903 From all that you'd think everything prior to Mojave had suddenly stopped working. High Sierra still works as well as 16.04, which is more than you can say for the nVidia drivers in 18.04 and Mojave. Someday they will get it working; until then, everything else is still working fine.
ID: 1967962 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1968020 - Posted: 30 Nov 2018, 14:21:31 UTC

So, I decided to try installing the repository 390 driver to see if that changed anything, seeing as how nothing from nVidia would load. That worked as expected (missing OpenCL), and when I went to remove 390, autoremove only removed a few dozen files, somewhat better than the Lubuntu autoremove experience. Prior to 18.04, running autoremove after the uninstall command would only remove a handful of files; now it removes dozens of leftover driver-related files.

That seemed to work! I booted into Recovery Mode, ran the 410.78 installer, and after rebooting, 410.78 loaded without any trouble. Something that happened during the repository driver install/uninstall triggered the nVidia driver to load. Kinda weird. Anyway, I finally got the downloaded driver to load and it seems to work normally. Right now everything is working in 18.04.1.

Strange that after 7 months the repository driver still can't install a driver with working OpenCL. Also, having to jump through hoops to get the downloaded driver to load after 7 months is worrisome. Hopefully they will get it working normally by April...
ID: 1968020 · Report as offensive
Tod

Send message
Joined: 17 Apr 99
Posts: 27
Credit: 143,685,603
RAC: 0
United States
Message 1968034 - Posted: 30 Nov 2018, 16:38:12 UTC - in response to Message 1967880.  

@Ian&Steve

No, it's actually an old EVGA X99 system.
ID: 1968034 · Report as offensive
Tod

Send message
Joined: 17 Apr 99
Posts: 27
Credit: 143,685,603
RAC: 0
United States
Message 1968035 - Posted: 30 Nov 2018, 16:41:13 UTC - in response to Message 1967958.  

@Keith

I ended up installing 4.10 last night. Granted, it's only been running about 12 hours or so, but on the Titan V the results compared to 4.15 are pretty much identical (about 38-39 seconds per work unit). I also increased both the memory and GPU clocks (by about 300 each), and neither really seemed to matter.
ID: 1968035 · Report as offensive
Ian&Steve C.
Avatar

Send message
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1968037 - Posted: 30 Nov 2018, 16:46:29 UTC - in response to Message 1968034.  
Last modified: 30 Nov 2018, 16:48:54 UTC

Your host info for the Fedora system says the CPU is an i7-4820K, which is an LGA 2011 CPU and uses DDR3 memory.

X99 is LGA 2011-v3 and uses DDR4 memory.

It must be EVGA X79, no? Your Windows host looks to be running an X99 platform, though.

In any case, it was just curiosity whether it was ASUS or not. Most of the afflicted boards seem to be ASUS, but I suppose it can happen to any brand of board, depending on what components it has and how the BIOS is set up.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1968037 · Report as offensive
Tod

Send message
Joined: 17 Apr 99
Posts: 27
Credit: 143,685,603
RAC: 0
United States
Message 1968038 - Posted: 30 Nov 2018, 17:02:18 UTC - in response to Message 1968037.  

You're correct! I ran lspci and sure enough, it's X79, but it is an EVGA board. I got the Windows and Fedora boxes mixed up. My bad :-)

It's getting close to being recycled. I'll likely replace it with an X299 board and a Core i9.
ID: 1968038 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1968043 - Posted: 30 Nov 2018, 17:55:57 UTC - in response to Message 1968035.  

@Keith

I ended up installing 4.10 last night. Granted, it's only been running about 12 hours or so, but on the Titan V the results compared to 4.15 are pretty much identical (about 38-39 seconds per work unit). I also increased both the memory and GPU clocks (by about 300 each), and neither really seemed to matter.

So what does the Nvidia X Server Settings app show for Graphics Clock and Memory Transfer Rate for the Titan V on the PowerMizer tab? I think you can get a lot more clock on the memory side than 300. I wonder whether you are actually seeing a +300 increase in the core clock over the default P2 zero-offset clock. I'm not at all familiar with the clocks on the Titan V, so I wonder whether it does the GPU Boost 3.0 thing or not.

If I whack a +100 core clock offset into any of my cards, they just laugh it off and maybe rise 30 MHz over the stock P2 zero-offset clocks. The thermal headroom is not high enough to allow such a massive increase in clocks, and the card just drags the clocks back to what it knows it can actually manage.
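One way to see what the driver actually granted after an offset request is to read the clocks back. A sketch, echoed for safety: the `gpu:0` target and the `[3]` performance-level index are assumptions that vary by card, and the `-a` offset line requires Coolbits to be enabled in xorg.conf:

```shell
# Sketch: request a core clock offset, then read back the real clocks.
# Echoed for safety; remove 'echo' to run (the -a line needs Coolbits).
echo nvidia-settings -q '[gpu:0]/GPUCurrentClockFreqs'
echo nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'
echo nvidia-smi --query-gpu=clocks.gr,clocks.max.gr --format=csv
```

Comparing the requested offset against what `nvidia-smi` reports is exactly the "laugh it off" check: if the reported graphics clock barely moves, the card is clamping.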
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1968043 · Report as offensive
Tod

Send message
Joined: 17 Apr 99
Posts: 27
Credit: 143,685,603
RAC: 0
United States
Message 1968047 - Posted: 30 Nov 2018, 18:38:45 UTC - in response to Message 1968043.  
Last modified: 30 Nov 2018, 18:41:21 UTC

The Titan has 3 power states: P0, P1, and P2. I set the dropdown to Max Performance. Watching over the course of a work unit, it mostly hovers at P1, with an occasional switch to P2.

      GPU                      Memory
P0    135 min - 135 max        1700 min - 1700 max
P1    135 min - 1335 max       1700 min - 1700 max
P2    135 min - 1912 max       1700 min - 1700 max

ID: 1968047 · Report as offensive
Ian&Steve C.
Avatar

Send message
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1968050 - Posted: 30 Nov 2018, 18:46:12 UTC - in response to Message 1968047.  

Looks like the memory doesn't have different clocks for your max power state, which is unlike the GeForce cards. Maybe it's a Titan V thing.

It does look like you're getting a heavy gimp on that core clock, though, if it's staying at P1 most of the time (the occasional switch to P2, the max power state, is normal when BOINC switches to a new WU).
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1968050 · Report as offensive
Tod

Send message
Joined: 17 Apr 99
Posts: 27
Credit: 143,685,603
RAC: 0
United States
Message 1968052 - Posted: 30 Nov 2018, 19:08:00 UTC - in response to Message 1968050.  

I've placed the GPU offset at 600 just to test for a few hours. Watching a few of the WUs, I don't see any change, but I'll give it time. All this leads me to believe the bottleneck might be the old hardware the card is installed in. But we'll see soon enough :-)
ID: 1968052 · Report as offensive
Ian&Steve C.
Avatar

Send message
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1968055 - Posted: 30 Nov 2018, 19:18:44 UTC - in response to Message 1968052.  

See here: https://devtalk.nvidia.com/default/topic/1036962/linux/titan-v-max-clock-speed-locked-to-1-335-mhz-and-underperforms-titan-xp-ubuntu-16-04-nvidia-390-amp-396-/

Looks like the same thing Keith and I were talking about, but instead of gimping the memory clocks (like they do on Pascal), they have gimped your core clocks.

One thing to look out for: when you apply that clock offset, it may be applied during the brief switches to the higher power state, trying to add 600 MHz to that 1900 value. Several of us have seen issues and system crashes from this behavior.

If you run into issues, check out the "keepP2" script that petri created. See here: NVIDIA P0, P2 states and overclocking 1080, 1080Ti and VOLTA in Linux
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1968055 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1968057 - Posted: 30 Nov 2018, 19:44:02 UTC

It looks like the suggested driver fix for Volta didn't happen in November like the Nvidia representative said it would. Still locked with the 410.78 drivers. Did you see the core clock locked at 1335 MHz with the 415 drivers?

It looks like it is pointless to try to use a core or memory clock offset on Volta, as the driver is hard-locking the clocks to the defaults. You still get the advantage of much better FP16 and FP32 performance over consumer cards, and the fastest performance on the special app of any card in play so far.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1968057 · Report as offensive
Tod

Send message
Joined: 17 Apr 99
Posts: 27
Credit: 143,685,603
RAC: 0
United States
Message 1968058 - Posted: 30 Nov 2018, 19:44:21 UTC - in response to Message 1968055.  
Last modified: 30 Nov 2018, 19:45:42 UTC

Thanks, I'll try installing that tonight :-)

I'll revert to 415 tonight as well and report back on the clocks.
ID: 1968058 · Report as offensive


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.