Linux x64 Cuda Multibeam (x41g)

Message boards : Number crunching : Linux x64 Cuda Multibeam (x41g)

Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1181309 - Posted: 29 Dec 2011, 8:05:30 UTC
Last modified: 29 Dec 2011, 8:05:58 UTC

Linux x64 build of Cuda Multibeam x41g
Available now in the Lunatics 'Multibeam for Linux' Downloads section, mirrored at Arkayn's Site

This build comes directly from the X-branch, so differences from prior Linux builds directly reflect current development efforts. This is part of an ongoing effort to gradually bring Linux support into greater focus in 2012.

Baseline Cuda X-branch features include:
- Tighter validation against CPU builds (accuracy)
- Fewer '-12' triplet-related errors
- Fault-tolerant behaviour
- 'V7-readiness'
- The Linux build uses application code identical to the Windows build (the only differences are OS-, driver-, and library-related)

Please see the readme for full release notes. READ THEM before you install.

Use at your own risk - 'your mileage may vary'

Thanks go to Aaron Haviland for the port and polish that went into the X-branch for Linux. His work should mean that Linux support for the Cuda builds can continue to develop as further refinements are introduced into the main codebase. Planned future upgrades currently include improvements to pulsefinding (VLAR behaviour) and removal of the prevalent remaining triplet-related '-12' limitations.

Thanks,
Jason G

Extra note, to ALL users:
With Linux development gradually expanding, and with the experience gleaned from Windows development in recent years, user feedback has played a pivotal role in refinement in all areas. Along those lines, user suggestions (thanks Sunu!) indicate that Linux development will require 'opening up' the Lunatics site somewhat. While there are immediate technical obstacles to doing so, the needs of both Linux developers and users are under detailed consideration at this time, along with planned modernisation of other platform information and support mechanisms.

For the time being, while these changes are under careful planning and implementation, if you have feedback or need help and are not a registered user at Lunatics, please create a thread here on the Number Crunching forum. Requests for registration at Lunatics will continue to be considered on a case-by-case basis, and Development/PreRelease status is offered only at the discretion of the developers in those areas.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1181309 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20265
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1181334 - Posted: 29 Dec 2011, 10:44:03 UTC - in response to Message 1181309.  

Excellent stuff there from the efforts of an excellent few.

I'll see if I can make time to resurrect an automatic installer...


Happy New Year

And happy fast crunchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1181334 · Report as offensive
Terror Australis
Volunteer tester

Send message
Joined: 14 Feb 04
Posts: 1817
Credit: 262,693,308
RAC: 44
Australia
Message 1181371 - Posted: 29 Dec 2011, 15:47:44 UTC

Works well on a single-card machine; crunching times are comparable to the x41g Windows app. Still trying to get a dual-Fermi machine operational.

Nothing to do with the app: I have to get the desktop working and drivers installed first. (Mutter, grumble... KDE 4 has set Linux back 10 years.)

T.A.
ID: 1181371 · Report as offensive
Profile Khangollo
Avatar

Send message
Joined: 1 Aug 00
Posts: 245
Credit: 36,410,524
RAC: 0
Slovenia
Message 1181437 - Posted: 29 Dec 2011, 18:54:15 UTC

I can confirm that it works very well on GT 440 and GTX 285 (using drivers 280.13).
It is noticeably faster on VHARs (and that's what really counts) than both previous applications.
ID: 1181437 · Report as offensive
Profile ivan
Volunteer tester
Avatar

Send message
Joined: 5 Mar 01
Posts: 783
Credit: 348,560,338
RAC: 223
United Kingdom
Message 1181459 - Posted: 29 Dec 2011, 20:25:45 UTC - in response to Message 1181437.  

I can confirm that it works very well on GT 440 and GTX 285 (using drivers 280.13).
It is noticeably faster on VHARs (and that's what really counts) than both previous applications.

Got it running this afternoon on an 8800 GTS 512 on a machine newly-installed with Scientific Linux CERN 6. (Bit of a change from 12 months ago when I was trying to compile up my own version from Jason's cvs but the Uni had turned off the heating over Christmas and we had an exceptionally cold winter; it was 9 C in my office and I couldn't control the mouse because I was shivering so much!) Running 10 WUs/hour at the moment, five have validated so it looks OK.

We're upgrading all our GPU systems to CUDA 4.1 at the moment, so it'd be nice to see the project move ahead to that level (some minor changes to APIs, etc. mean that problems are to be expected).

Moved a Palit GTX 460 with a broken fan (their blades are fragile!) into my work GRID server yesterday -- a Supermicro 1U server with forced airflow for two fanless GPUs. There's a C1060 in the other slot (you remove a plastic sheet over a grille cut-out in the casing to use a fan-equipped GPU in the server). It's running at 43 C under no load at the moment according to nvidia-smi. If the 8800 doesn't turn up any problems I'll try running on that machine too.
ID: 1181459 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1181512 - Posted: 30 Dec 2011, 0:32:00 UTC - in response to Message 1181459.  

...We're upgrading all our GPU systems to CUDA 4.1 at the moment, so it'd be nice to see the project move ahead to that level (some minor changes to APIs, etc. mean that problems are to be expected)...


Hi Ivan. Glad to hear you're back in the game :) Yeah, development Cuda 4.1rc2 builds for Windows already exist under NDA. Either building from the current X-branch svn head against the 4.1rc2 SDK, or pestering Aaron for a test Linux build, should work, assuming you're registered with nVidia and so can get the SDK legally.

Jason

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1181512 · Report as offensive
Profile aaronh
Volunteer tester
Avatar

Send message
Joined: 27 Oct 99
Posts: 169
Credit: 1,442,686
RAC: 0
United States
Message 1181525 - Posted: 30 Dec 2011, 1:31:11 UTC

Terror Australis: I'm seeing a lot of inconclusives for your GTS250. Re-running a few of the tasks locally on my GTX460, my results agree with your wing-mate, not you.

I'm going to keep watch, because it might just be your card and not the application. (At this time, ivan has no inconclusives with a similar card.)
ID: 1181525 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1181528 - Posted: 30 Dec 2011, 1:53:28 UTC

I can see TA playing around with OC there ;). Don't know about TA's card & system, but a friend of mine had a couple & they were pretty hot runners, so I'd let TA finish stabilising it before looking too much deeper.

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1181528 · Report as offensive
Terror Australis
Volunteer tester

Send message
Joined: 14 Feb 04
Posts: 1817
Credit: 262,693,308
RAC: 44
Australia
Message 1181545 - Posted: 30 Dec 2011, 5:07:11 UTC - in response to Message 1181525.  

Terror Australis: I'm seeing a lot of inconclusives for your GTS250. Re-running a few of the tasks locally on my GTX460, my results agree with your wing-mate, not you.

I'm going to keep watch, because it's possible that it might just be your card, and not the application. (At this time, ivan has no inconclusives, with a similar card.)

I did have some heat problems with that machine, mainly the card and the memory. I've increased the ventilation and lowered the OC.

The card is running at stock speeds and crunching on GPU only.

I'll fire it back up and see how it goes. Keep an eye on units crunched after today.

Hopefully I'll have the Fermi machine (2xGTX580's) running sometime today. I'm doing a clean install on a wiped HDD. I've a theory that my problems were due to the new install picking up residual files from the previous one.

T.A.
ID: 1181545 · Report as offensive
Profile ivan
Volunteer tester
Avatar

Send message
Joined: 5 Mar 01
Posts: 783
Credit: 348,560,338
RAC: 223
United Kingdom
Message 1181623 - Posted: 30 Dec 2011, 18:14:08 UTC - in response to Message 1181512.  

...We're upgrading all our GPU systems to CUDA 4.1 at the moment, so it'd be nice to see the project move ahead to that level (some minor changes to APIs, etc. mean that problems are to be expected)...


Hi Ivan. Glad to hear you're back in the game :) Yeah development Cuda 4.1rc2 builds for Windows already exist under NDA, either building from the current X-branch svn head against 4.1rc2 SDK, or pestering Aaron for a test Linux build should work, assuming you're registered with nVidia so can get the SDK legally.

Jason

Sorry about the lack of communication; 2011 wasn't a year of great motivation, following on from job and health matters in 2010. My job situation wasn't settled until March, but the latest info is that I appear to be funded up until about a year from my nominal retirement in five years' time. My colleague and office-mate is registered for the 4-series CUDA (I'd registered years ago, not sure if that still works), so I just use his downloads to get the SDK. Battling a memory-allocation problem in my cufft plans in the holographic movies at the moment; something's changed and I need to start looking at the memory map again (4Kx4K holograms are now failing on 512 MB cards, but still work on a C2070).

ID: 1181623 · Report as offensive
Profile ausymark

Send message
Joined: 9 Aug 99
Posts: 95
Credit: 10,175,128
RAC: 0
Australia
Message 1181762 - Posted: 31 Dec 2011, 2:29:29 UTC

Hi Team

Now running x41g under Ubuntu 11.10. It seems to be working fine, and is between 10% and 30% faster than previously.

Will see how we go, but so far so good.

Great work team! :)

Cheers

ID: 1181762 · Report as offensive
Terror Australis
Volunteer tester

Send message
Joined: 14 Feb 04
Posts: 1817
Credit: 262,693,308
RAC: 44
Australia
Message 1181827 - Posted: 31 Dec 2011, 6:04:13 UTC - in response to Message 1181525.  

Terror Australis: I'm seeing a lot of inconclusives for your GTS250. Re-running a few of the tasks locally on my GTX460, my results agree with your wing-mate, not you.

I'm going to keep watch, because it's possible that it might just be your card, and not the application. (At this time, ivan has no inconclusives, with a similar card.)

It looks like I scored about 50/50 on those inconclusives. There are still 2 to go where the card found 30 spikes, which I think will go against me. I think we can blame them on overheating.

Since I restarted the machine there has been only one so far. The wingman and I both found 4, 0, 1, 0, so why it's an inconclusive I'm not sure.

Still battling to get the 2-card machine running; I can't get the OS to recognise both cards. Wish I could remember how I did it last time (bloody Alzheimer's)

T.A.
ID: 1181827 · Report as offensive
Profile ivan
Volunteer tester
Avatar

Send message
Joined: 5 Mar 01
Posts: 783
Credit: 348,560,338
RAC: 223
United Kingdom
Message 1181882 - Posted: 31 Dec 2011, 12:09:53 UTC - in response to Message 1181827.  

Still battling to get the 2 card machine running, I can't get the OS to recognise both cards. Wish I could remember how I did it last time (bloody Alzheimer's)

T.A.

What do you see when you run nvidia-smi? I get:
[eesridr:BOINC] > nvidia-smi
Sat Dec 31 11:55:53 2011       
+------------------------------------------------------+                       
| NVIDIA-SMI 2.290.10   Driver Version: 290.10         |                       
|-------------------------------+----------------------+----------------------+
| Nb.  Name                     | Bus Id        Disp.  | Volatile ECC SB / DB |
| Fan   Temp   Power Usage /Cap | Memory Usage         | GPU Util. Compute M. |
|===============================+======================+======================|
| 0.  Tesla C1060               | 0000:02:00.0  Off    |       N/A        N/A |
|  35%   57 C  P8    Off /  Off |   0%    3MB / 4095MB |    0%     Default    |
|-------------------------------+----------------------+----------------------|
| 1.  GeForce GTX 460           | 0000:03:00.0  N/A    |       N/A        N/A |
|  51%   45 C  N/A   N/A /  N/A |   2%   13MB /  767MB |  N/A      Default    |
|-------------------------------+----------------------+----------------------|
| Compute processes:                                               GPU Memory |
|  GPU  PID     Process name                                       Usage      |
|=============================================================================|
|  1.           ERROR: Not Supported                                          |
+-----------------------------------------------------------------------------+

In the BOINC startup-code I have:
28-Dec-2011 13:42:05 [---] NVIDIA GPU 0 (not used): Tesla C1060 (driver version unknown, CUDA version 4010, compute capability 1.3, 4096MB, 622 GFLOPS peak)
28-Dec-2011 13:42:05 [---] NVIDIA GPU 1: GeForce GTX 460 (driver version unknown, CUDA version 4010, compute capability 2.1, 767MB, 605 GFLOPS peak)

Now, I had to do some faffing around when I installed the C1060 to get the driver activated without putting it in the xorg.conf file (still using the onboard Matrox VGA as the C1060 doesn't have video output), and I haven't removed that kludge at all even though the 460 is now in the X setup. Looks like I got the clues from http://forums.nvidia.com/index.php?showtopic=52629. The script goes in /etc/init.d/cuda -- I can't remember if I had to do anything else.
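For reference, the approach in that NVIDIA forum thread boils down to loading the kernel module and creating the /dev/nvidia* device nodes by hand, since no X server is managing the headless card. A hedged sketch of an /etc/init.d/cuda-style script (must run as root at boot; the device-counting logic is illustrative):

```shell
#!/bin/sh
# Load the NVIDIA kernel module and create the device nodes that X would
# normally create, so headless CUDA cards (like the C1060) are usable
# without an xorg.conf entry.
/sbin/modprobe nvidia || exit 1

# One node per GPU: count 3D and VGA controllers reported by lspci
NVDEVS=$(lspci | grep -i NVIDIA)
N3D=$(echo "$NVDEVS" | grep -c "3D controller")
NVGA=$(echo "$NVDEVS" | grep -c "VGA compatible controller")
N=$((N3D + NVGA - 1))

i=0
while [ "$i" -le "$N" ]; do
    mknod -m 666 "/dev/nvidia$i" c 195 "$i"
    i=$((i + 1))
done

# The control device is always minor number 255
mknod -m 666 /dev/nvidiactl c 195 255
```

This fragment needs root and NVIDIA hardware, so treat it as a starting point rather than something to run as-is.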

ID: 1181882 · Report as offensive
Profile ivan
Volunteer tester
Avatar

Send message
Joined: 5 Mar 01
Posts: 783
Credit: 348,560,338
RAC: 223
United Kingdom
Message 1182132 - Posted: 1 Jan 2012, 14:04:55 UTC - in response to Message 1181882.  

Now, I had to do some faffing around when I installed the C1060 to get the driver activated without putting it in the xorg.conf file (still using the onboard Matrox VGA as the C1060 doesn't have video output), and I haven't removed that kludge at all even though the 460 is now in the X setup. Looks like I got the clues from http://forums.nvidia.com/index.php?showtopic=52629. The script goes in /etc/init.d/cuda -- I can't remember if I had to do anything else.

B*gger! When I worked up enough courage to try the GTX 460 in the Supermicro, it turned out Scientific Linux CERN 5 is too out-of-date to run the programme:
[eesridr:setiathome.berkeley.edu] > ldd setiathome_x41g_x86_64-pc-linux-gnu_cuda32 
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /lib64/libc.so.6: version `GLIBC_2.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
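A quick way to compare what a binary demands against what the host provides (a sketch; /bin/ls stands in for the setiathome executable here):

```shell
# What glibc does this system ship?
ldd --version | head -n 1

# Which GLIBC symbol versions does a given binary reference?
# grep -a scans the binary's version strings directly (no binutils needed);
# substitute the setiathome executable for /bin/ls.
grep -ao 'GLIBC_[0-9.]*' /bin/ls | sort -u -V | tail -n 1
```

If the highest version the binary asks for exceeds what `ldd --version` reports, you get exactly the `version not found` errors above.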

ID: 1182132 · Report as offensive
Profile Crunch3r
Volunteer tester
Avatar

Send message
Joined: 15 Apr 99
Posts: 1546
Credit: 3,438,823
RAC: 0
Germany
Message 1182226 - Posted: 1 Jan 2012, 21:16:07 UTC - in response to Message 1182132.  

Now, I had to do some faffing around when I installed the C1060 to get the driver activated without putting it in the xorg.conf file (still using the onboard Matrox VGA as the C1060 doesn't have video output), and I haven't removed that kludge at all even though the 460 is now in the X setup. Looks like I got the clues from http://forums.nvidia.com/index.php?showtopic=52629. The script goes in /etc/init.d/cuda -- I can't remember if I had to do anything else.

B*gger! When I worked up enough courage to try the GTX 460 in the Supermicro, it turned out Scientific Linux CERN 5 is too out-of-date to run the programme:
[eesridr:setiathome.berkeley.edu] > ldd setiathome_x41g_x86_64-pc-linux-gnu_cuda32 
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /lib64/libc.so.6: version `GLIBC_2.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)


wanted to run it on my ION machine but got the same message here...

./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /lib64/libc.so.6: version `GLIBC_2.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)


Unfortunately I was unable to find a link to the source code to compile it myself on an old SuSE 11.2 (and it's not listed at http://lunatics.kwsn.net/index.php?module=Downloads;catd=2 either, which is a bit odd since it's GPLed software)

Join BOINC United now!
ID: 1182226 · Report as offensive
Profile ivan
Volunteer tester
Avatar

Send message
Joined: 5 Mar 01
Posts: 783
Credit: 348,560,338
RAC: 223
United Kingdom
Message 1182239 - Posted: 1 Jan 2012, 21:58:29 UTC - in response to Message 1182226.  



wanted to run it on my ION machine but got the same message here...

./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /lib64/libc.so.6: version `GLIBC_2.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)


Unfortunately I was unable to find a link to the source code to compile it myself on an old SuSE 11.2 (and it's not listed at http://lunatics.kwsn.net/index.php?module=Downloads;catd=2 either, which is a bit odd since it's GPLed software)

Due to the way we distribute CMS software, I do have an installation of gcc 4.6.1 on the machine; pointing LD_LIBRARY_PATH at that got rid of the two warnings for GLIBCXX but the GLIBC error remains. Since there's no libc.so.* under the gcc libs, I presume that it's associated with the OS version rather than the compiler, so it may not be possible to get it running (my SLC5 installation is fully up-to-date with the CERN repositories). So maybe the only possibility is to compile on the target machine -- hopefully someone will chime in with a pointer to the source. Then I can upgrade the SLC5 machine to CUDA4.1 and have at compiling it (probably on the SLC6 machine first, since the 5 machine is a production server and our students use it for their PhD thesis work).
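For anyone trying the same workaround: before pointing LD_LIBRARY_PATH at a newer gcc's runtime, you can check which GLIBCXX versions a given libstdc++ actually exports (a sketch; the gcc install path in the comment is hypothetical):

```shell
# Find the system libstdc++ and list the newest GLIBCXX symbol versions it
# exports. grep -a treats the shared library as text, so binutils isn't needed.
PATH="$PATH:/sbin:/usr/sbin"
LIB=$(ldconfig -p | grep -m1 'libstdc++\.so\.6 ' | awk '{print $NF}')
grep -ao 'GLIBCXX_[0-9.]*' "$LIB" | sort -u -V | tail -n 3

# Then put a newer runtime first on the loader's search path, e.g.
# (hypothetical path):
# export LD_LIBRARY_PATH=/opt/gcc-4.6.1/lib64:$LD_LIBRARY_PATH
```

As ivan found, this only helps with libstdc++/GLIBCXX; a too-old libc.so.6 belongs to the OS itself and can't be swapped out this way.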
ID: 1182239 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1182272 - Posted: 1 Jan 2012, 23:25:22 UTC - in response to Message 1182226.  
Last modified: 2 Jan 2012, 0:21:38 UTC

unfortunately i was unable to find a link to the source code to compile it myself on an old SuSE 11.2 (and it's not listed there -> http://lunatics.kwsn.net/index.php?module=Downloads;catd=2 either which is a bit odd since it's GPLed software)


Why would you point the bone like that, when you were given full svn access from the start? You know full well that the requirement is to make the source for release builds available on request. Both you and ivan have full access to even newer, unreleased code than is required.

If you need updated sources and have forgotten how to use the SVN, or would prefer a different method, just ask... and don't ever try to beat me over the head with the GPL again; I have actually read it.

Happy New Year!

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1182272 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1182318 - Posted: 2 Jan 2012, 2:45:02 UTC
Last modified: 2 Jan 2012, 3:05:34 UTC

On the basis of lack of active development participation, I have posted x41g sources at http://lunatics.kwsn.net/index.php?module=Downloads;sa=dlview;id=313

Both Crunch3r's and Ivan's development privileges, on svn and at Lunatics, will now be revoked.

Try to communicate better if you want something, and do not attack me. You have now lost prerelease/development access by attempting to throw the GPL in my face without even reading it.

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1182318 · Report as offensive
Profile aaronh
Volunteer tester
Avatar

Send message
Joined: 27 Oct 99
Posts: 169
Credit: 1,442,686
RAC: 0
United States
Message 1182331 - Posted: 2 Jan 2012, 3:32:31 UTC - in response to Message 1182132.  

B*gger! When I worked up enough courage to try the GTX 460 in the Supermicro, it turned out Scientific Linux CERN 5 is too out-of-date to run the programme:
[eesridr:setiathome.berkeley.edu] > ldd setiathome_x41g_x86_64-pc-linux-gnu_cuda32 
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /lib64/libc.so.6: version `GLIBC_2.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)


I thought I had checked that my build system would produce binaries that work on recent releases of the major distributions (Ubuntu, Fedora, SuSE). I confess there were a few distros whose library versions I had not checked. (However, it appears it should work on the 6.x series of Scientific Linux.)

From the README.txt:
Recent Linux distribution providing:
* linux kernel 2.6.15 or later
* libc6 (>= 2.11)
* libgcc1 (>= 4.1.1)
* libstdc++6 (>= 4.4.0)

For most of the distros I checked, this means anything released since 2009... it's now 2011... 2012, I mean! Since we're dealing with a minimum CUDA driver that was released in Dec 2010 (260.19.26), I assumed my bases were covered.

I will definitely keep this in mind for future builds. I can eliminate the libstdc++/GLIBCXX issues by using an older gcc, or statically linking. However to fix the libc6 dependency, I would need to set up a new build environment... not happening this week.
ID: 1182331 · Report as offensive
Profile ivan
Volunteer tester
Avatar

Send message
Joined: 5 Mar 01
Posts: 783
Credit: 348,560,338
RAC: 223
United Kingdom
Message 1182457 - Posted: 2 Jan 2012, 19:52:58 UTC - in response to Message 1182331.  

B*gger! When I worked up enough courage to try the GTX 460 in the Supermicro, it turned out Scientific Linux CERN 5 is too out-of-date to run the programme:
[eesridr:setiathome.berkeley.edu] > ldd setiathome_x41g_x86_64-pc-linux-gnu_cuda32 
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)
./setiathome_x41g_x86_64-pc-linux-gnu_cuda32: /lib64/libc.so.6: version `GLIBC_2.11' not found (required by ./setiathome_x41g_x86_64-pc-linux-gnu_cuda32)


I thought I had checked that my build-system would provide binaries that would work on the many recent releases of major distributions (Ubuntu, Fedora, SuSe). I confess, there were a few distros that I had not checked versions against. (However, it appears it should work against the 6.x series of Scientific Linux)

From the README.txt:
Recent Linux distribution providing:
* linux kernel 2.6.15 or later
* libc6 (>= 2.11)
* libgcc1 (>= 4.1.1)
* libstdc++6 (>= 4.4.0)

For most of the distros I checked, this means anything released since 2009... it's now 2011.. 2012 I mean! Since we're dealing with a minimum CUDA driver that was released in Dec 2010 (260.19.26), I assumed my bases were covered.

I will definitely keep this in mind for future builds. I can eliminate the libstdc++/GLIBCXX issues by using an older gcc, or statically linking. However to fix the libc6 dependency, I would need to set up a new build environment... not happening this week.

Not your fault -- the HEP community is too darned conservative. We do have beta releases of our CMS software under gcc4.5.1 and 4.6.1 but the bulk of the work is done under gcc4.3.4. :-( You're right, the prog does work under SLC6beta, but until they start (re-)releasing all the CMS software under the slc6_amd64 architecture I can't upgrade the departmental server -- hopefully we'll be out of beta for slc6 in a few months... If you can point me at the particular codebase, I can try my own build on SLC6/CUDA4.2 and then SLC5/CUDA4.2, if that's not treading on too many toes. (There's no particular point in me trying to run it on my Ubuntu Core2Quad until I seriously upgrade its 8500 GT card; similarly my home SLC5 system.)
ID: 1182457 · Report as offensive



 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.