Running SETI@home on an nVidia Fermi GPU
TheFreshPrince a.k.a. BlueTooth76 Send message Joined: 4 Jun 99 Posts: 210 Credit: 10,315,944 RAC: 0 |
Ah, yes, the "use all gpus" option! Almost forgot that one. Tomorrow I'll try my 9800GTX+ in my other PC and see if it works with the Fermi client. If it does, it should be possible to add it to my Crunchy. Rig name: "x6Crunchy" OS: Win 7 x64 MB: Asus M4N98TD EVO CPU: AMD X6 1055T 2.8(1,2v) GPU: 2x Asus GTX560ti Member of: Dutch Power Cows |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874 |
Ah, yes, the "use all gpus" option! Almost forgot that one. Switched to the stock v6.10_fermi on host 2901600, which is where my 9800GTX+ is currently living (Windows 7, driver 257.21). First task turned out to be a -9, which was a bit worrying, but all the rest seem to be running OK. Dropped a few tasks during the transition, because I forgot to change the version numbers in the app_info I nicked from the Fermi host (bit tired at the moment). Apologies to wingmates, will try and de-/re-attach when finished. |
Rich-E Send message Joined: 23 Feb 01 Posts: 41 Credit: 1,685,487 RAC: 0 |
... Thanks for all your help. Everything got installed, albeit not in order, I think. I lunged forward when I should have backed up and lost my 23 CUDA results. However, new tasks are finishing (I think) even faster than before. I hope they are okay, but I guess I will find out when award time comes. I just need to tweak the setup now to make sure I don't run out of work during the blackout periods. Thanks to y'all for your help and have a good day. Rich |
Speedy Send message Joined: 26 Jun 04 Posts: 1643 Credit: 12,921,799 RAC: 89 |
Here's an interesting bit of news regarding VLAR units, from tech news 14/7: "We are beta testing a change whereby VLAR WUs are not scheduled onto GPUs. We hope to move this to the main project next week." I hope this will be a success.
Yes, they are completing successfully |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
I've finally cracked and ordered a GTX470. What are the best driver and app versions to install for max. performance out of the beast ? It will probably be sharing a MoBo with a GTX285. From reading this thread I see that the drivers and apps should be compatible. How will the performance of the 285 be affected compared to the current install of V191.07 + CUDA V6.09 ? With the new drivers, is it possible to run multiple tasks on the 285 or is that strictly a Fermi trick ? OS is Win XP-32 Thanks to all the posters on this thread for the help and advice that's been provided and the efforts made to get Fermi cards usable. T.A. |
TheFreshPrince a.k.a. BlueTooth76 Send message Joined: 4 Jun 99 Posts: 210 Credit: 10,315,944 RAC: 0 |
On my 9800GTX+ (a fast card when it first came out) a WU takes about 29 minutes on average... Incredible when I look at the GTX470... It does 3 WUs at a time in 21 minutes on average... Rig name: "x6Crunchy" OS: Win 7 x64 MB: Asus M4N98TD EVO CPU: AMD X6 1055T 2.8(1,2v) GPU: 2x Asus GTX560ti Member of: Dutch Power Cows |
Helli_retiered Send message Joined: 15 Dec 99 Posts: 707 Credit: 108,785,585 RAC: 0 |
Only for Fermi. If you try this on a non-Fermi card it would raise your calculation time about 4 times (..if you run 2 WUs on one GPU). Edit: Found it: http://setiathome.berkeley.edu/forum_thread.php?id=60592&nowrap=true#1011252 Helli A loooong time ago: First Credits after SETI@home Restart |
hbomber Send message Joined: 2 May 01 Posts: 437 Credit: 50,852,854 RAC: 0 |
My observations show that the time saved by running three tasks on a Fermi comes mostly from the time otherwise wasted in the CPU phase at the beginning of each unit's calculation. When three tasks are running, the GPU never sits idle, as it does for those 12 seconds (on my system) while a single unit is being prepared. There is some gain from fuller GPU utilization as well, but only those 4-5% more utilization, from 95% with one unit to 99% with three units. This still does not explain why two units do not gain a significant speed increase. Perhaps my tests were too short. If two units start at the same time, the CPU preparation time is still wasted, with the GPU sitting idle; with fewer tasks, the chance of overlapping the preparation phases is lower. And all this is very welcome for VHAR units (taking around two minutes here), where the CPU phase is relatively long compared with the whole crunch time for a unit. |
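hbomber's overlap argument can be put in numbers with a toy timing model. Only the 12-second preparation time comes from his post; the GPU crunch times below are hypothetical, and the model assumes the best case where concurrent tasks fully hide each other's preparation phases:

```python
# Toy model: GPU throughput when the per-unit CPU "preparation" phase
# can overlap with other units' GPU work (illustration, not a measurement).

def throughput(prep_s, gpu_s, n_tasks):
    """Units per hour. With one task the GPU idles during preparation;
    with several tasks we assume (best case) the prep is fully hidden."""
    if n_tasks == 1:
        cycle = prep_s + gpu_s   # GPU sits idle for prep_s each unit
    else:
        cycle = gpu_s            # prep overlapped with other units' GPU work
    return 3600.0 / cycle

# 12 s prep (from the post); crunch times are hypothetical examples.
print(throughput(12, 600, 1), throughput(12, 600, 3))  # long WU: ~2% gain
print(throughput(12, 108, 1), throughput(12, 108, 3))  # VHAR-like: ~11% gain
```

This matches the observation that the overlap trick helps most on VHAR units, where the fixed preparation time is a large fraction of the total crunch time.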
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
This app_info worked for me on a system using a Q6600, GTX470 and WinXP_32. The GTX 470 was later moved to a similar system using an E7200 processor with no problems. It's simple, no AP or anything fancy. You'll have to set your own <flops> values depending on your card and processor; mine were chosen empirically to give correct ETAs with a DCF of 1 in the client_state.xml file. The "ncpus" value of 0.05 works quite well, as the 470 was only using around 2% of CPU time per thread. Note I'm still using the original AK_v8 app, not AK_v8b; you'll have to change that bit if you're using the later file. Best of luck with it. T.A.

<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <file_info>
    <name>AK_v8_win_SSSE3x.exe</name>
    <executable/>
  </file_info>
  <file_info>
    <name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</name>
    <executable/>
  </file_info>
  <file_info>
    <name>cufft32_30_14.dll</name>
    <executable/>
  </file_info>
  <file_info>
    <name>cudart32_30_14.dll</name>
    <executable/>
  </file_info>
  <file_info>
    <name>libfftw3f-3-1-1a_upx.dll</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>603</version_num>
    <platform>windows_intelx86</platform>
    <flops>60519353881.00</flops>
    <file_ref>
      <file_name>AK_v8_win_SSSE3x.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>610</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>0.05</avg_ncpus>
    <max_ncpus>0.05</max_ncpus>
    <flops>320000000000</flops>
    <plan_class>cuda</plan_class>
    <file_ref>
      <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>cudart32_30_14.dll</file_name>
    </file_ref>
    <file_ref>
      <file_name>cufft32_30_14.dll</file_name>
    </file_ref>
    <file_ref>
      <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
    </file_ref>
    <coproc>
      <type>CUDA</type>
      <count>0.33</count>
    </coproc>
  </app_version>
</app_info> |
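For anyone wondering how to pick a <flops> value "empirically" as T.A. describes: BOINC estimates a task's runtime as roughly rsc_fpops_est / flops × DCF, so with DCF pinned at 1 you can back out a flops figure from an observed runtime. The workunit estimate and runtime below are hypothetical placeholders, not figures from T.A.'s host:

```python
def flops_for_correct_eta(rsc_fpops_est, observed_runtime_s, dcf=1.0):
    """Back out the <flops> value that makes BOINC's ETA
    (rsc_fpops_est / flops * DCF) match the observed runtime."""
    return rsc_fpops_est * dcf / observed_runtime_s

# Hypothetical numbers: a WU with 2.8e13 estimated fpops that the
# GPU actually finishes in 10 minutes (600 s).
flops = flops_for_correct_eta(2.8e13, 600)
print(f"<flops>{flops:.0f}</flops>")   # about 4.7e10
```

With different cards you would repeat this per card; the thread notes this gets awkward when mixed Fermi cards share one app_info.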
MadMaC Send message Joined: 4 Apr 01 Posts: 201 Credit: 47,158,217 RAC: 0 |
How do you configure the flops value if you are running more than one type of Fermi card? I have a 470 and a 480 in the same system. The last thing I read advised against using flops for the Fermi cards - has this now changed, and is it now the way to go? Thx |
Fulvio Cavalli Send message Joined: 21 May 99 Posts: 1736 Credit: 259,180,282 RAC: 0 |
[quote]First, shut down BOINC completely and use Task Manager to be sure boinc.exe is no longer running. Then you can use the Lunatics installer and select only the appropriate AK_v8b CPU S@H Enhanced application plus the Astropulse CPU application. After the install finishes, add the Fermi information to app_info.xml. The needed files should still be in place, just check that every filename in the app_info.xml has a matching file. Then you can restart BOINC. Joe[/quote] Hi guys, there is a lot of very nice info in this thread, but it is so much info that I may have lost something, mostly due to my lack of language skills :p I'm finally going for Fermi, with 2 GTX 460s ordered to replace the GTX 295 on one of my i7s. Basically, after installing the latest BOINC and Nvidia drivers, is what Josef says in this nice tip all I have to do? Ty all in advance! |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
I posted that for Rich-E, who had been running stock, including the 6.10 Fermi version. Your computers are hidden, but I guess from your RAC you are already running optimized and ought to look at other posts for advice. Joe |
Fulvio Cavalli Send message Joined: 21 May 99 Posts: 1736 Credit: 259,180,282 RAC: 0 |
Yes I am, Josef. I'm not sure what to do, but I guess I just need to add the Fermi information to the app_info.xml file and the proper files to my BOINC folder then? |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
... Because you're converting from non-Fermi hardware to Fermi hardware, you'll need to ensure that all CUDA work is run using the stock 6.10 application. Assuming there's 6.08 cuda or 6.09 cuda23 work present, you'll need to edit the related app_version sections to use setiathome_6.10_windows_intelx86__cuda_fermi.exe and its associated DLLs. You should end up with an app_info.xml which does not have the old cuda app you were using nor its DLLs. It may have all 3 cuda types, but all must invoke the same processing. Joe |
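As a sketch of what Joe describes (the version numbers, plan classes and filenames follow the stock naming used in this thread; adapt them to what is actually in your app_info.xml), each remaining CUDA app_version ends up referencing the same Fermi executable and DLLs:

```xml
<!-- One of up to three near-identical CUDA app_version sections
     (608/cuda, 609/cuda23, 610/cuda_fermi): all must point at the
     same Fermi application and its DLLs. -->
<app_version>
  <app_name>setiathome_enhanced</app_name>
  <version_num>608</version_num>
  <platform>windows_intelx86</platform>
  <plan_class>cuda</plan_class>
  <file_ref>
    <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
    <main_program/>
  </file_ref>
  <file_ref>
    <file_name>cudart32_30_14.dll</file_name>
  </file_ref>
  <file_ref>
    <file_name>cufft32_30_14.dll</file_name>
  </file_ref>
  <file_ref>
    <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
  </file_ref>
</app_version>
```

The old non-Fermi executable and its DLLs should no longer appear anywhere in the file.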
Fulvio Cavalli Send message Joined: 21 May 99 Posts: 1736 Credit: 259,180,282 RAC: 0 |
Thank you very much for your aid. I will let you know if all goes well when new hardware arrives. Old brave GTX 295 goes to my trusty Q9550 to make a nice 30k rig :D |
CHARLES JACKSON Send message Joined: 17 May 99 Posts: 49 Credit: 39,349,563 RAC: 0 |
Hi Does anyone know when the new Lunatics Unified Installer is due out? Hopefully it will fix the GTX 460-480 problems
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Hi Does anyone know when the new Lunatics Unified Installer is due out? Hopefully it will fix the GTX 460-480 problems
Hi Charles. We have no schedule, but we're doing our best to make sure every aspect of this major technology leap is under control. It is of primary concern that the next installer releases handle some long-standing issues so far unresolved in previous builds (stock & opt). There are some relatively minor issues to put to rest before release builds can be refined and fielded; installers would shortly follow. How these minor issues pan out will determine how long things take. Thanks for recognising the priority these development tasks should take; some of those things will be under consideration this weekend. Jason (Lunatics) "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions. |
CHARLES JACKSON Send message Joined: 17 May 99 Posts: 49 Credit: 39,349,563 RAC: 0 |
Hi Jason, thank you for your fast reply, and thank you for all your hard work at Lunatics; the work is of great value to me. |
Codeman05 Send message Joined: 16 Dec 01 Posts: 33 Credit: 15,457,430 RAC: 0 |
Hey guys, I just put together a SETI box with 2x GTX 460s... the performance is pretty poor, however. I understand Fermi support isn't really solid at present, but my results seem worse than what I'm seeing browsing around.

System Info:
Intel i7 920
2x GTX 460
Windows 7 64-bit
BOINC 6.11.4
Lunatics UI 0.36 (MB only)

My Fermis are currently spending ~22-30 minutes per workunit... each. According to GPU-Z, they are running at speed with ~75% GPU load. I've tried running multiple WUs on the GPUs, and that yields really awful results: GPU usage drops to ~40%, and they complete about 0.25% in 5 minutes. Below is the app_info that I cobbled together from this thread. Any assistance would be great... or maybe this is to be expected (VLARs?)? Thank you

<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <file_info>
    <name>AK_v8b_win_x64_SSSE3x.exe</name>
    <executable/>
  </file_info>
  <file_info>
    <name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</name>
    <executable/>
  </file_info>
  <file_info>
    <name>cufft32_30_14.dll</name>
    <executable/>
  </file_info>
  <file_info>
    <name>cudart32_30_14.dll</name>
    <executable/>
  </file_info>
  <file_info>
    <name>libfftw3f-3-1-1a_upx.dll</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>603</version_num>
    <platform>windows_intelx86</platform>
    <file_ref>
      <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>603</version_num>
    <platform>windows_x86_64</platform>
    <file_ref>
      <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>610</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>0.200000</avg_ncpus>
    <max_ncpus>0.200000</max_ncpus>
    <plan_class>cuda_fermi</plan_class>
    <file_ref>
      <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>cudart32_30_14.dll</file_name>
    </file_ref>
    <file_ref>
      <file_name>cufft32_30_14.dll</file_name>
    </file_ref>
    <file_ref>
      <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
    </file_ref>
    <coproc>
      <type>CUDA</type>
      <count>1</count> <!-- changed to 0.5 when attempting to run 2x WUs/GPU -->
    </coproc>
  </app_version>
</app_info> |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874 |
Hey guys, Where are you reading those figures from? The only host on your account matching those specs is host 5471538, which seems to be spitting out the longer tasks in about 13 minutes each. That feels OK: my slightly more powerful GTX 470, in a Windows XP host (known to be faster because of driver differences), tends to run singletons in about ten minutes, though it can manage 3 in 24. |
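The per-task arithmetic behind Richard's comparison, using the run times he quotes (singletons in about ten minutes, or three concurrent tasks in about 24 minutes):

```python
# Effective per-task throughput from Richard's GTX 470 figures.
single = 10.0        # minutes per task, run one at a time
triple = 24.0 / 3    # minutes per task, three at once -> 8.0
gain = single / triple - 1
print(f"{gain:.0%} throughput gain running 3-up")   # 25%
```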
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.