Running SETI@home on an nVidia Fermi GPU


Previous · 1 . . . 9 · 10 · 11 · 12 · 13 · 14 · 15 · Next

TheFreshPrince a.k.a. BlueTooth76
Joined: 4 Jun 99
Posts: 210
Credit: 10,315,944
RAC: 0
Netherlands
Message 1016719 - Posted: 17 Jul 2010, 13:55:28 UTC - in response to Message 1016709.  

Ah, yes, the "use all gpus" option! Almost forgot that one.

Tomorrow I'll try my 9800GTX+ in my other PC and see if it works with the Fermi client. If it does, it should be possible to add it to my Crunchy.
Rig name: "x6Crunchy"
OS: Win 7 x64
MB: Asus M4N98TD EVO
CPU: AMD X6 1055T 2.8(1,2v)
GPU: 2x Asus GTX560ti
Member of: Dutch Power Cows
ID: 1016719 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14676
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1016782 - Posted: 17 Jul 2010, 16:12:40 UTC - in response to Message 1016719.  

Ah, yes, the "use all gpus" option! Almost forgot that one.

Tomorrow I'll try my 9800GTX+ in my other PC and see if it works with the Fermi client. If it does, it should be possible to add it to my Crunchy.

Switched to the stock v6.10_fermi on host 2901600, which is where my 9800GTX+ is currently living (Windows 7, driver 257.21). First task turned out to be a -9, which was a bit worrying, but all the rest seem to be running OK.

Dropped a few tasks during the transition, because I forgot to change the version numbers in the app_info I nicked from the Fermi host (bit tired at the moment). Apologies to wingmates, will try and de-/re-attach when finished.
ID: 1016782 · Report as offensive
Rich-E
Volunteer tester

Joined: 23 Feb 01
Posts: 41
Credit: 1,685,487
RAC: 0
United States
Message 1016957 - Posted: 18 Jul 2010, 0:43:54 UTC - in response to Message 1015889.  

...

At the beginning of the outage it seems you had 23 CUDA tasks in progress, so I'll assume those have been completed and are ready to upload when the outage ends. That should simplify the transition a bit.

...

Thanks for all your help. Everything got installed, albeit not in order, I think. I lunged forward when I should have backed up, and lost my 23 CUDA results. However, new tasks are finishing (I think) even faster than before. I hope they are okay, but I guess I will find out when award time comes. I just need to tweak the setup now to make sure I don't run out of work during the blackout periods.

Thanks to y'all for your help and have a good day.

Rich
ID: 1016957 · Report as offensive
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 1017334 - Posted: 18 Jul 2010, 21:12:38 UTC
Last modified: 18 Jul 2010, 21:18:39 UTC

Here's an interesting bit of news regarding VLAR units, from the tech news of 14/7:
We are beta testing a change whereby VLAR WUs are not scheduled onto GPUs. We hope to move this to the main project next week.
I hope this will be a success.


However new tasks are finishing (I think) even faster than before.
Rich

Yes, they are completing successfully.
ID: 1017334 · Report as offensive
Terror Australis
Volunteer tester

Joined: 14 Feb 04
Posts: 1817
Credit: 262,693,308
RAC: 44
Australia
Message 1017827 - Posted: 20 Jul 2010, 1:14:38 UTC

I've finally cracked and ordered a GTX470.
What are the best driver and app versions to install to get maximum performance out of the beast?
It will probably be sharing a MoBo with a GTX285. From reading this thread I see that the drivers and apps should be compatible. How will the performance of the 285 be affected compared to the current install of V191.07 + CUDA V6.09?

With the new drivers, is it possible to run multiple tasks on the 285, or is that strictly a Fermi trick?

OS is Win XP-32

Thanks to all the posters on this thread for the help and advice that's been provided and the efforts made to get Fermi cards usable.

T.A.
ID: 1017827 · Report as offensive
TheFreshPrince a.k.a. BlueTooth76
Joined: 4 Jun 99
Posts: 210
Credit: 10,315,944
RAC: 0
Netherlands
Message 1019145 - Posted: 24 Jul 2010, 10:05:59 UTC

On my 9800GTX+ (a fast card when it first came out) a WU takes about 29 minutes on average...

Incredible when I look at the GTX470... It does 3 WUs at a time in 21 minutes on average...
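For a rough sense of the gap (my arithmetic on the figures above, not a measured benchmark), the hourly rates work out as:

```python
def wu_per_hour(tasks_at_once: int, minutes_per_batch: float) -> float:
    """Work units finished per hour when `tasks_at_once` WUs
    complete together every `minutes_per_batch` minutes."""
    return 60.0 * tasks_at_once / minutes_per_batch

print(round(wu_per_hour(1, 29), 2))  # 9800GTX+: one WU every 29 min
print(round(wu_per_hour(3, 21), 2))  # GTX470: three WUs every 21 min
```

So the GTX470 running three-at-a-time is turning over roughly four times the work of the 9800GTX+.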
Rig name: "x6Crunchy"
OS: Win 7 x64
MB: Asus M4N98TD EVO
CPU: AMD X6 1055T 2.8(1,2v)
GPU: 2x Asus GTX560ti
Member of: Dutch Power Cows
ID: 1019145 · Report as offensive
Profile Helli_retiered
Volunteer tester
Joined: 15 Dec 99
Posts: 707
Credit: 108,785,585
RAC: 0
Germany
Message 1019147 - Posted: 24 Jul 2010, 10:16:13 UTC - in response to Message 1017827.  
Last modified: 24 Jul 2010, 10:27:06 UTC



... is it possible to run multiple tasks on the 285 or is that strictly a Fermi trick ?


Only for Fermi. If you try this on a non-Fermi card, it will roughly quadruple your calculation time (if you run 2 WUs on one GPU).

Edit:
Found it:
http://setiathome.berkeley.edu/forum_thread.php?id=60592&nowrap=true#1011252

Helli
A loooong time ago: First Credits after SETI@home Restart
ID: 1019147 · Report as offensive
hbomber
Volunteer tester

Joined: 2 May 01
Posts: 437
Credit: 50,852,854
RAC: 0
Bulgaria
Message 1019152 - Posted: 24 Jul 2010, 10:34:24 UTC
Last modified: 24 Jul 2010, 10:36:05 UTC

My observations show that the time saved by running three tasks on a Fermi comes mostly from the CPU phase at the beginning of each unit. With three tasks running, the GPU never sits idle, as it otherwise does during the roughly 12 seconds (on my system) while a unit is being prepared. There is some gain from fuller GPU utilisation as well, but only the 4-5% from 95% with one unit to 99% with three.
This still does not explain why two units do not gain a significant speed increase. Perhaps my tests were too short. If two units start at the same time, the CPU preparation time is still wasted with the GPU sitting idle, and with fewer tasks the chance of overlapping the preparation phases is lower.

All of this is especially welcome for VHAR units (taking around two minutes here), where the CPU phase is relatively long compared with the whole crunch time of a unit.
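The observation above can be put into a toy model (an illustration, not a measurement: the ~12 s preparation figure is from this post, and the assumption that the rest of a ~2-minute VHAR unit is pure GPU work is mine). Running one task at a time, the GPU idles during preparation; with enough staggered tasks, preparation hides behind another task's GPU work:

```python
def wu_per_hour(cpu_prep_s: float, gpu_s: float, prep_hidden: bool) -> float:
    """Toy throughput model for one GPU: each WU needs cpu_prep_s of
    CPU-only setup followed by gpu_s of GPU work.  With staggered
    concurrent tasks, the setup can overlap another task's GPU phase."""
    per_wu = gpu_s if prep_hidden else cpu_prep_s + gpu_s
    return 3600.0 / per_wu

# VHAR-ish unit: 12 s prep + 108 s GPU = 2 min total when run alone
print(round(wu_per_hour(12, 108, prep_hidden=False), 1))  # single task
print(round(wu_per_hour(12, 108, prep_hidden=True), 1))   # staggered tasks
```

That is an ~11% gain for a two-minute VHAR, but well under 1% for a half-hour unit where 12 s of prep is negligible, which matches why the short units benefit most.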
ID: 1019152 · Report as offensive
Terror Australis
Volunteer tester

Joined: 14 Feb 04
Posts: 1817
Credit: 262,693,308
RAC: 44
Australia
Message 1019252 - Posted: 24 Jul 2010, 17:01:14 UTC

This app_info worked for me on a system using a Q6600, GTX470 and WinXP-32. The GTX 470 was later moved to a similar system using an E7200 processor with no problems. It's simple, with no AP or anything fancy. You'll have to set your own <flops> values depending on your card and processor; mine were chosen empirically to give correct ETAs with a DCF of 1 in the client_state.xml file. The <avg_ncpus> value of 0.05 works quite well, as the 470 was only using around 2% of CPU time per thread. Note that I'm still using the original AK_v8 app, not AK_v8b; you'll have to change that bit if you're using the later file.

Best of luck with it.

T.A.

<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>AK_v8_win_SSSE3x.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cufft32_30_14.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cudart32_30_14.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>libfftw3f-3-1-1a_upx.dll</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <platform>windows_intelx86</platform>
        <flops>60519353881.00</flops>
        <file_ref>
            <file_name>AK_v8_win_SSSE3x.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <platform>windows_intelx86</platform>
        <avg_ncpus>0.05</avg_ncpus>
        <max_ncpus>0.05</max_ncpus>
        <flops>320000000000</flops>
        <plan_class>cuda</plan_class>
        <file_ref>
            <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
        </file_ref>
        <coproc>
            <type>CUDA</type>
            <count>0.33</count>
        </coproc>
    </app_version>
</app_info>
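On picking the <flops> values T.A. mentions: as I understand it, the client's estimated duration under anonymous platform is roughly rsc_fpops_est / flops, scaled by the DCF, so a value that gives correct ETAs at DCF = 1 can be backed out of an observed runtime. A sketch (the fpops estimate and runtime below are invented example numbers, not taken from any host in this thread):

```python
def flops_for_correct_eta(rsc_fpops_est: float, observed_runtime_s: float,
                          dcf: float = 1.0) -> float:
    """Back out a <flops> value so the client's estimated duration
    (rsc_fpops_est / flops * DCF) matches the runtime actually seen."""
    return rsc_fpops_est * dcf / observed_runtime_s

# e.g. a task estimated at 2.8e13 fpops that your card finishes in 7 minutes
print(f"{flops_for_correct_eta(2.8e13, 7 * 60):.3e}")  # 6.667e+10
```

Averaging this over a good number of tasks of different angle ranges gives a steadier value than any single result.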
ID: 1019252 · Report as offensive
Profile MadMaC
Volunteer tester
Joined: 4 Apr 01
Posts: 201
Credit: 47,158,217
RAC: 0
United Kingdom
Message 1021130 - Posted: 31 Jul 2010, 7:49:49 UTC

How do you configure the flops value if you are running more than one type of Fermi card? I have a 470 and a 480 in the same system.
The last thing I read advised against using flops for the Fermi cards; has this now changed, and is it now the way to go?

Thx
ID: 1021130 · Report as offensive
Fulvio Cavalli
Volunteer tester
Joined: 21 May 99
Posts: 1736
Credit: 259,180,282
RAC: 0
Brazil
Message 1021524 - Posted: 1 Aug 2010, 2:00:06 UTC - in response to Message 1015889.  
Last modified: 1 Aug 2010, 2:04:05 UTC

First, shut down BOINC completely and use Task Manager to be sure boinc.exe is no longer running. Then you can use the Lunatics installer and select only the appropriate AK_v8b CPU S@H Enhanced application plus the Astropulse CPU application. After the install finishes, add the Fermi information to app_info.xml. The needed files should still be in place, just check that every filename in the app_info.xml has a matching file. Then you can restart BOINC.
                                                                 Joe

Hi guys, there is a lot of very nice info in this thread, but it is so much that I may have missed something, mostly due to my lack of language skills :p
I'm finally going for Fermi, with 2 GTX 460s ordered to replace the GTX 295 on one of my i7s.
Basically, after installing the latest BOINC and Nvidia drivers, all I have to do is what Josef says in this nice tip?
Ty all in advance!
ID: 1021524 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1021535 - Posted: 1 Aug 2010, 2:37:08 UTC - in response to Message 1021524.  


First, shut down BOINC completely and use Task Manager to be sure boinc.exe is no longer running. Then you can use the Lunatics installer and select only the appropriate AK_v8b CPU S@H Enhanced application plus the Astropulse CPU application. After the install finishes, add the Fermi information to app_info.xml. The needed files should still be in place, just check that every filename in the app_info.xml has a matching file. Then you can restart BOINC.
                                                                 Joe

Hi guys, there is a lot of very nice info in this thread, but it is so much that I may have missed something, mostly due to my lack of language skills :p
I'm finally going for Fermi, with 2 GTX 460s ordered to replace the GTX 295 on one of my i7s.
Basically, after installing the latest BOINC and Nvidia drivers, all I have to do is what Josef says in this nice tip?
Ty all in advance!

I posted that for Rich-E, who had been running stock, including the 6.10 Fermi version. Your computers are hidden, but I guess from your RAC you are already running optimized and ought to look at other posts for advice.
                                                               Joe
ID: 1021535 · Report as offensive
Fulvio Cavalli
Volunteer tester
Joined: 21 May 99
Posts: 1736
Credit: 259,180,282
RAC: 0
Brazil
Message 1021622 - Posted: 1 Aug 2010, 12:46:54 UTC - in response to Message 1021535.  


First, shut down BOINC completely and use Task Manager to be sure boinc.exe is no longer running. Then you can use the Lunatics installer and select only the appropriate AK_v8b CPU S@H Enhanced application plus the Astropulse CPU application. After the install finishes, add the Fermi information to app_info.xml. The needed files should still be in place, just check that every filename in the app_info.xml has a matching file. Then you can restart BOINC.
                                                                 Joe

Hi guys, there is a lot of very nice info in this thread, but it is so much that I may have missed something, mostly due to my lack of language skills :p
I'm finally going for Fermi, with 2 GTX 460s ordered to replace the GTX 295 on one of my i7s.
Basically, after installing the latest BOINC and Nvidia drivers, all I have to do is what Josef says in this nice tip?
Ty all in advance!

I posted that for Rich-E, who had been running stock, including the 6.10 Fermi version. Your computers are hidden, but I guess from your RAC you are already running optimized and ought to look at other posts for advice.
                                                               Joe


Yes I am, Josef. I'm not sure what to do, but I guess I just need to add the Fermi information to the app_info.xml file and the proper files to my BOINC folder then?
ID: 1021622 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1021698 - Posted: 1 Aug 2010, 16:08:55 UTC - in response to Message 1021622.  

...
I guess I just need to add the Fermi information in the app_info.xml file and the proper files to my BOINC folder then?

Because you're converting from non-Fermi hardware to Fermi hardware, you'll need to ensure that all CUDA work is run using the stock 6.10 application. Assuming there's 6.08 cuda or 6.09 cuda23 work present, you'll need to edit the related app_version sections to use setiathome_6.10_windows_intelx86__cuda_fermi.exe and its associated DLLs. You should end up with an app_info.xml which does not have the old cuda app you were using nor its DLLs. It may have all 3 cuda types, but all must invoke the same processing.
                                                                   Joe
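Joe's edit amounts to pointing every remaining CUDA app_version at the same Fermi executable. Purely as an illustration, a converted 6.08 section might look like the following (filenames as used elsewhere in this thread; your own version numbers and plan classes must match what is already in your app_info.xml):

```xml
<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>cuda</plan_class>
    <file_ref>
        <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
        <main_program/>
    </file_ref>
    <file_ref>
        <file_name>cudart32_30_14.dll</file_name>
    </file_ref>
    <file_ref>
        <file_name>cufft32_30_14.dll</file_name>
    </file_ref>
    <file_ref>
        <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
    </file_ref>
</app_version>
```

A 6.09 cuda23 section would be edited the same way, so that every CUDA plan class invokes identical processing.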
ID: 1021698 · Report as offensive
Fulvio Cavalli
Volunteer tester
Joined: 21 May 99
Posts: 1736
Credit: 259,180,282
RAC: 0
Brazil
Message 1021839 - Posted: 2 Aug 2010, 3:09:05 UTC

Thank you very much for your aid. I will let you know if all goes well when the new hardware arrives. The brave old GTX 295 goes to my trusty Q9550 to make a nice 30k rig :D
ID: 1021839 · Report as offensive
CHARLES JACKSON
Joined: 17 May 99
Posts: 49
Credit: 39,349,563
RAC: 0
United States
Message 1023210 - Posted: 6 Aug 2010, 21:04:53 UTC

Hi, does anyone know when the new Lunatics Unified Installer is due out? Hopefully it will fix the GTX 460-480 problems.
ID: 1023210 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1023216 - Posted: 6 Aug 2010, 21:19:14 UTC - in response to Message 1023210.  

Hi, does anyone know when the new Lunatics Unified Installer is due out? Hopefully it will fix the GTX 460-480 problems.


Hi Charles. We have no schedule, but we're doing our best to make sure every aspect of this major technology leap is under control. It is a primary concern that the next installer releases handle some long-standing issues so far unresolved in previous builds (stock & opt). There are some relatively minor issues to put to rest before release builds can be refined and fielded; installers would follow shortly. How these minor issues pan out will determine how long things take. Thanks for recognising the priority these development tasks should take; some of those things will be under consideration this weekend.

Jason (Lunatics)




"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1023216 · Report as offensive
CHARLES JACKSON
Joined: 17 May 99
Posts: 49
Credit: 39,349,563
RAC: 0
United States
Message 1023234 - Posted: 6 Aug 2010, 22:42:33 UTC - in response to Message 1023216.  

Hi Jason, thank you for your fast reply. Thank you for all your hard work at Lunatics; it is of great value to me.
ID: 1023234 · Report as offensive
Profile Codeman05

Joined: 16 Dec 01
Posts: 33
Credit: 15,457,430
RAC: 0
United States
Message 1023762 - Posted: 8 Aug 2010, 19:09:12 UTC
Last modified: 8 Aug 2010, 19:53:30 UTC

Hey guys,

I just put together a seti box with 2x GTX460s... the performance is pretty poor, however. I understand Fermi support isn't really solid at present, but my results seem worse than what I'm seeing browsing around.

System Info
Intel i7 920
2x GTX460
Windows 7-64bit
BOINC 6.11.4
Lunatics UI 0.36 (MB only)

My Fermis are currently spending ~22-30 minutes per workunit... each.
According to GPUz, they are running at speed with ~75% GPU load.

I've tried running multiple WUs on the GPUs, and that yields really awful results. GPU usage drops to ~40%, and tasks complete only about 0.25% in 5 minutes.

Below is my app_info that I cobbled together from this thread.
Any assistance would be great... or maybe this is to be expected (VLARs?)?

Thank you

<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>AK_v8b_win_x64_SSSE3x.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cufft32_30_14.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cudart32_30_14.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>libfftw3f-3-1-1a_upx.dll</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <platform>windows_intelx86</platform>
        <file_ref>
            <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <platform>windows_x86_64</platform>
        <file_ref>
            <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <platform>windows_intelx86</platform>
        <avg_ncpus>0.200000</avg_ncpus>
        <max_ncpus>0.200000</max_ncpus>
        <plan_class>cuda_fermi</plan_class>
        <file_ref>
            <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
        </file_ref>
        <coproc>
            <type>CUDA</type>
            <count>1</count>  <!-- changed to 0.5 when attempting to run 2 WUs per GPU -->
        </coproc>
    </app_version>
</app_info>
ID: 1023762 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14676
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1023770 - Posted: 8 Aug 2010, 19:48:47 UTC - in response to Message 1023762.  

Hey guys,

I just put together a seti box with 2x GTX460s... the performance is pretty poor, however. I understand Fermi support isn't really solid at present, but my results seem worse than what I'm seeing browsing around.

System Info
Intel i7 920
2x GTX460
Windows 7-64bit
BOINC 6.11.4
Lunatics UI 0.36 (MB only)

My Fermis are currently spending ~30-45 minutes per workunit... each.
According to GPUz, they are running at speed with ~75% GPU load.

I've tried running multiple WUs on the GPUs, and that yields really awful results. GPU usage drops to ~40%, and tasks complete only about 0.25% in 5 minutes.

Where are you reading those figures from? The only host on your account matching those specs is host 5471538, which seems to be spitting out the longer tasks in about 13 minutes each. That feels OK: my slightly more powerful GTX 470, in a Windows XP host (known to be faster because of driver differences), tends to run singletons in about ten minutes, though it can manage 3 in 24.
ID: 1023770 · Report as offensive
©2024 University of California

SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.