Message boards :
Number crunching :
GPU + Linux + Seti = Work???
Author | Message |
---|---|
ausymark Send message Joined: 9 Aug 99 Posts: 95 Credit: 10,175,128 RAC: 0 |
Hi Team I have recently set up my Mandriva Linux box with Boinc/Seti 6.10.58. I am also running the nvidia173-CUDA core and CUDA-OPENCL libraries. (Have an nVidia 9800 GT.) Boinc is requesting GPU work units but isn't receiving any (it has been asking for 7 days now). My SETI account does not list any CUDA application to distribute work units for. I have only the following applications set: SETI@home Enhanced: yes, Astropulse v5: yes, Astropulse v5.05: yes. I don't have any other options. Suggestions please. Cheers Mark |
arkayn Send message Joined: 14 May 99 Posts: 4438 Credit: 55,006,323 RAC: 0 |
You will have to manually install the Linux CUDA app as one has not been made by the SETI admins. http://calbe.dw70.de/seti.html Only caveat is you will have to manually update the app if a newer one is created. |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
SETI@home has no stock Linux CUDA application (http://setiathome.berkeley.edu/apps.php). You would need to install the application manually. I have no experience with Linux; you should find info here: http://lunatics.kwsn.net/3-linux/index.0.html. There are also optimized applications for the CPU available here: http://lunatics.kwsn.net/index.php?module=Downloads;catd=1. Strange, currently only for AstroPulse and not for SETI@home Enhanced? Enhanced (also named MultiBeam) from here (e.g. 32-bit): http://calbe.dw70.de/linux32.html? Maybe someone else with Linux experience has an answer. |
elgordodude Send message Joined: 2 Jun 10 Posts: 13 Credit: 7,550,388 RAC: 0 |
Just went through this installing an 8800 GT. The 173 driver won't work; you want the 260, and from what I found you need to be running 64-bit. Most, if not all, of the 32-bit MultiBeam (MB) apps were removed due to massive failures. If you can run 64-bit, download the current Nvidia driver, remove all active Nvidia drivers, blacklist nouveau, install the Nvidia driver, reboot, and see what happens. If the 260 driver installation goes well, optimize from http://calbe.dw70.de/linux64.html and good luck! It took most of my holiday to figure that out. |
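Those steps can be sketched as a short script. Package name (`nvidia173`) and the installer filename are assumptions for Mandriva and a 260-series download from nvidia.com; the blacklist file is written to a temp directory here so the sketch is safe to dry-run, but on the real box it belongs in /etc/modprobe.d:

```shell
# 1. Blacklist the open-source nouveau driver so it cannot claim the card.
#    CONF_DIR is a temp dir here for a safe dry-run; on the real machine
#    write the file to /etc/modprobe.d as root.
CONF_DIR=${CONF_DIR:-$(mktemp -d)}
printf 'blacklist nouveau\noptions nouveau modeset=0\n' > "$CONF_DIR/blacklist-nouveau.conf"
cat "$CONF_DIR/blacklist-nouveau.conf"

# 2. On the real box, as root, you would then:
#      urpme nvidia173                       # remove the old 173-series driver
#      sh NVIDIA-Linux-x86_64-260.19.36.run  # install the 260-series driver
#      reboot                                # let it load on a clean boot
```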
ML1 Send message Joined: 25 Nov 01 Posts: 20326 Credit: 7,508,002 RAC: 20 |
If you can run 64bit, download the current Nvidia driver, remove all active Nvidia drivers, blacklist nouveau, install the Nvidia driver, reboot, and see what happens. You should be using 64-bit in any case. Pointless to use only 1/2 - 1/4 of your CPU hardware with 32-bit! Select the 'proprietary' nVidia driver from the Mandriva Control Centre: "Configure your computer" -> Hardware -> Set up the graphical server. Then select the nVidia driver "GeForce 6100 and later" ... And away you go! Depending on how much graphics RAM your card has, for s@h you may need to reduce your number of virtual desktops to just one if you only have 512 MByte RAM, rather than the default four desktops. Check out the Lunatics optimised apps! Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
ausymark Send message Joined: 9 Aug 99 Posts: 95 Credit: 10,175,128 RAC: 0 |
Hi Team I have installed the latest Mandriva 2010.2 x64. After some fussing about, my Nvidia 9800 GT card is now recognised as a CUDA-capable device - yaaaay. However, what is now happening is that seti@home is trying to download the: [error] No URL for file transfer of libcudart.so.3 [error] No URL for file transfer of libcufft.so.3 files. I am sure there is a fix for this, which I am currently investigating. So thanks for all your help so far, and hopefully I can get this fixed so that I can get on with crunching :) Cheers Mark :) |
geyser Send message Joined: 7 Oct 04 Posts: 8 Credit: 64,645,821 RAC: 201 |
I am working through the same problem as you. If you run ldd on the CUDA executable and there are no errors, then you can modify your app_info.xml and remove the sections for those libraries, and the errors: [error] No URL for file transfer of libcudart.so.3 [error] No URL for file transfer of libcufft.so.3 should go away. In my case the configuration in the app_info.xml file, after removing those unneeded library sections, was in good enough shape to pull down work units, but they processed too fast, so now I am trying to find a way to debug the CUDA app to see what I need to do next. |
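The ldd check above can be scripted. BIN defaults to /bin/ls purely so the sketch runs anywhere; on a real install you would point it at the CUDA app binary in your BOINC project directory (whatever it is named there):

```shell
# Check whether a binary resolves every shared library it links against.
# BIN=/bin/ls is a stand-in so this is safe to dry-run; set BIN to the
# real CUDA app binary from your BOINC project directory instead.
BIN=${BIN:-/bin/ls}
if ldd "$BIN" | grep -q "not found"; then
    echo "$BIN: missing libraries - keep the library <file_info> entries in app_info.xml"
else
    echo "$BIN: all libraries resolved - the libcudart/libcufft entries can be removed"
fi
```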
ML1 Send message Joined: 25 Nov 01 Posts: 20326 Credit: 7,508,002 RAC: 20 |
I am working through the same problem as you. Try putting in soft links "ln -s" for those in the boinc directory? You'll likely need something like: (Go into a command line terminal) cd to_wherever_it_is_you_have_boinc ln -s /usr/lib64/libcudart.so.3 ln -s /usr/lib64/libcufft.so.3 No "64" if you're only 32-bit. But then again, why would you be 32-bit?! Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
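Martin's symlink fix, wrapped in a loop with a guard so it warns instead of silently creating dangling links. Both directory paths are assumptions (/usr/lib64 is the usual 64-bit library location; the BOINC directory varies by install), so adjust them for your box:

```shell
# Symlink the CUDA runtime libraries into the BOINC directory.
# LIB_DIR and BOINC_DIR are assumptions - adjust for your install.
LIB_DIR=${LIB_DIR:-/usr/lib64}
BOINC_DIR=${BOINC_DIR:-/var/lib/boinc}
for lib in libcudart.so.3 libcufft.so.3; do
    if [ -e "$LIB_DIR/$lib" ]; then
        ln -sf "$LIB_DIR/$lib" "$BOINC_DIR/$lib"
        echo "linked $lib"
    else
        echo "warning: $LIB_DIR/$lib not found - is the CUDA toolkit installed?" >&2
    fi
done
```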
Brkovip Send message Joined: 18 May 99 Posts: 274 Credit: 144,414,367 RAC: 0 |
I just wish they would have something 64-bit that would work with the 400 series of Nvidia cards in Linux. The 64-bit 6.08 version will just toss errors all day. :( |
ML1 Send message Joined: 25 Nov 01 Posts: 20326 Credit: 7,508,002 RAC: 20 |
I just wish they would have something 64 bit that would work with the 400 line of Nvidia cards in Linux. The 64 bit 6.08 version will just toss errors all day. :( Mmmm... I've been meaning to get my nVidia 4xx running since the Christmas break... And pick up some threads... Time flies too quick! Happy crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
I just wish they would have something 64 bit that would work with the 400 line of Nvidia cards in Linux. The 64 bit 6.08 version will just toss errors all day. :( At MW@h it wasn't a big deal to unlock the CUDA app for Fermi GPUs. Get in contact with Crunch3r and ask him to do the same with his S@h 6.08 CUDA app. Please let me know if it works. |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14653 Credit: 200,643,578 RAC: 874 |
I just wish they would have something 64 bit that would work with the 400 line of Nvidia cards in Linux. The 64 bit 6.08 version will just toss errors all day. :( The Fermi incompatibility found in SETI Windows CUDA applications v6.08 and v6.09 had nothing to do with anything called "locking". It was, instead, the non-declaration of certain variables as "volatile" - a 'common optimisation' practiced by NVidia themselves in those early pre-Fermi CUDA builds. See paragraph 1.2.2 of the Fermi™ Compatibility Guide v1.3 for details. |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
At MW@h it wasn't a big deal to unlock the CUDA app for Fermi GPUs. I was talking about the MW@h Windows CUDA app. AFAIK & IIRC, he changed only one or two entries in the code of the stock MW@h_0.24_cuda23 app and the app worked on Fermi chips. The.. major = x minor = x ..part - or something similar. I'm not a coder, so maybe I remember/describe it wrong. I don't know if it will be as easy for his Linux S@h CUDA app. If someone is interested in getting a Linux S@h Fermi CUDA app, they should get in contact with Crunch3r. Here is his S@h account: http://setiathome.berkeley.edu/show_user.php?userid=11044. But I guess his website/forum (http://calbe.dw70.de) is the better way.. EDIT: Or maybe via the Lunatics site (http://lunatics.kwsn.net/3-linux/index.0.html). |
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
At MW@h it wasn't a big deal to unlock the CUDA app for Fermi GPUs. I think you'll find Crunch3r has lost interest in Seti, Claggy |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
I think you'll find Crunch3r has lost interest in Seti, Maybe if a few people ask, he would do it.. ..but I'm not up-to-date. |
aaronh Send message Joined: 27 Oct 99 Posts: 169 Credit: 1,442,686 RAC: 0 |
The Fermi incompatibility found in SETI Windows CUDA applications v6.08 and v6.09 had nothing to do with anything called "locking". Wow, good one. Throwing a few volatiles into the SVN source led to what seem to be *valid* results (GTX 460, Ubuntu 64-bit)! Sure, it's not Fermi *optimised*, but first steps are first steps... |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14653 Credit: 200,643,578 RAC: 874 |
The Fermi incompatibility found in SETI Windows CUDA applications v6.08 and v6.09 had nothing to do with anything called "locking". Congratulations - well done. Have a word with Urs Echternacht, either here or at lunatics - he can help you with test tools to confirm those validations properly. If you can put some quality assurance behind the work you've done so far, the Linux community will be grateful. As you say, get it working - reliably and with confidence - first: optimisation is the icing on the cake. |
aaronh Send message Joined: 27 Oct 99 Posts: 169 Credit: 1,442,686 RAC: 0 |
or at lunatics - he can help you with test tools to confirm those validations properly. If you can put some quality assurance behind the work you've done so far, the Linux community will be grateful. (BTW, anyone know why lunatics doesn't allow new accounts to be created?) |
ML1 Send message Joined: 25 Nov 01 Posts: 20326 Credit: 7,508,002 RAC: 20 |
(BTW, anyone know why lunatics doesn't allow new accounts to be created?) You need to contact a lunatic offlist or via PM to be invited/added. Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
aaronh Send message Joined: 27 Oct 99 Posts: 169 Credit: 1,442,686 RAC: 0 |
Throwing a few volatiles into the SVN source led to what seem to be *valid* results (GTX 460, Ubuntu 64-bit)! Sure, it's not Fermi *optimised*, but first steps are first steps... I've validated it against quite a few results computed locally on the CPU, and have confidence. I've adjusted several things and have gotten it to the point where it runs about 12% faster than the stock Windows client (which itself seems to run a little faster than the Lunatics client? How strange), completing a WU in an average of 10:42 on my card, versus around 12:00 on Windows. It took me a while to whittle the video RAM usage down far enough to find out whether it would run two WUs on 768MB of video RAM. X seems to occupy a lot more than it actually needs; I'm guessing it's sort of like the way Linux memory tends to fill with disk buffers/cache. It is faster to run two at once, but I think the difference might be greater on higher-end cards - the client simply utilises my card too heavily already, so perhaps one with more multiprocessors (the 460 only has 7) would gain more. nvidia-smi reports 95% GPU utilisation and 60% memory utilisation; the latter is not memory in use, but rather memory transfers (cudaMemcpy). Average run times (not including early-exit (-9) WUs): 642 seconds x1 WU; 1096 seconds x2 WU = 548 seconds per WU. Including (-9) WUs: 484 seconds x1 WU. I've started running it locally with an app_info.xml, and thus far have 55 consecutive valid tasks and about 90 awaiting validation. No Invalids or Errors yet, although I'm expecting one (copied the wrong binary; caught it after the first task errored out). Unfortunately, I've tried making a 32-bit Linux version, but it has too many errors. Still tracking that down. 64-bit only, for now. Time to clean up the code and make it presentable. Congratulations - well done. Have a word with Urs Echternacht, either here or at lunatics - he can help you with test tools to confirm those validations properly. If you can put some quality assurance behind the work you've done so far, the Linux community will be grateful. I've still yet to contact anyone... argh. |
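For readers following along: running two WUs per GPU, as described above, is controlled by the <count> element in app_info.xml. A minimal sketch of the relevant fragment - the app name and version number are assumptions based on the 6.08 app discussed in this thread, and a real file needs the full <app>, <file_info> and <file_ref> entries for the actual binary:

```xml
<!-- Fragment only: the enclosing <app_info> must also declare the app
     and the binary's <file_info>/<file_ref> entries. -->
<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <coproc>
        <type>CUDA</type>
        <!-- 0.5 = each task claims half a GPU, so BOINC runs two at once -->
        <count>0.5</count>
    </coproc>
</app_version>
```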
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.