Running SETI@home on an nVidia Fermi GPU

Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 1005416 - Posted: 17 Jun 2010, 22:13:20 UTC - in response to Message 1005408.  
Last modified: 17 Jun 2010, 22:15:32 UTC


You might have better luck with version 1.9

Thanks Richard. According to the ReSchedule 1.9 log, no tasks need to be moved:

    User testing for a reschedule
    CPU tasks: 0 (0 VLAR, 0 VHAR)
    GPU tasks: 0 (0 VLAR, 0 VHAR)
    No reschedule needed


I'm not sure I agree with the log. I will let the tasks run slowly on my GPU.


perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 1005432 - Posted: 17 Jun 2010, 23:10:48 UTC - in response to Message 1005404.  

This is the angle range for that work unit: "WU true angle range is: 0.012972". Task 1635808649 is definitely a VLAR.


PROUD MEMBER OF Team Starfire World BOINC
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 1005477 - Posted: 18 Jun 2010, 1:50:52 UTC
Last modified: 18 Jun 2010, 1:59:31 UTC

In that case I'm most surprised that ReSchedule 1.9 says there are no VLARs/VHARs. Under Settings I've set the paths as:
    Boinc main path C:\Program Files\BOINC
    Boinc data path C:\ProgramData\BOINC

I'm using Windows 7 Ultimate. Tasks are taking around 1 hour 22 minutes. Not sure why, but I can't move them to the CPU.


Gundolf Jahn
Joined: 19 Sep 00
Posts: 3184
Credit: 446,358
RAC: 0
Germany
Message 1005601 - Posted: 18 Jun 2010, 7:25:33 UTC - in response to Message 1005477.  

Perhaps reschedule can't find them because they have the "wrong" plan_class (or whatever tag is appropriate).

Regards,
Gundolf
Computers aren't everything in life. (Just a little joke.)

SETI@home classic workunits 3,758
SETI@home classic CPU time 66,520 hours
TheFreshPrince a.k.a. BlueTooth76
Joined: 4 Jun 99
Posts: 210
Credit: 10,315,944
RAC: 0
Netherlands
Message 1005772 - Posted: 18 Jun 2010, 16:30:33 UTC
Last modified: 18 Jun 2010, 16:33:04 UTC

I got work for my Fermi today :)
Now running 2 WUs on a GTX 470 (607 @ 751 MHz and 1.000 V).
Didn't get the red "app_info" messages, and I run the Lunatics apps :)

It's actually an Asus ENGTX470, but I use the MSI Afterburner software for overclocking and fan regulation.

It works perfectly with the Asus, and it's free to download from the MSI site :)
TheFreshPrince a.k.a. BlueTooth76
Joined: 4 Jun 99
Posts: 210
Credit: 10,315,944
RAC: 0
Netherlands
Message 1005781 - Posted: 18 Jun 2010, 16:53:58 UTC - in response to Message 1005775.  
Last modified: 18 Jun 2010, 16:56:14 UTC

I got work for my Fermi today :)
Now running 2 WUs on a GTX 470 (607 @ 751 MHz and 1.000 V).
Didn't get the red "app_info" messages, and I run the Lunatics apps :)

It's actually an Asus ENGTX470, but I use the MSI Afterburner software for overclocking and fan regulation.

It works perfectly with the Asus, and it's free to download from the MSI site :)


AFAIK Lunatics does not have an app that works with Fermi without creating useless results. The only app working with Fermis at the moment is the stock 6.10. You're trashing (erroneous overflow) every WU with that app. That was the main reason for the server-side upgrade we're now suffering from.

Haven't you paid any attention to all the Fermi threads here lately? A post in this very thread clearly says not to use the Lunatics app for Fermi.

Since you have your computer(s) hidden, there's no way to tell what happens to your WUs.

I strongly urge you to stop using the Lunatics app for your Fermi and step back to the stock 6.10 app.


I think I should have been clearer ;)

From what I read, my app_info.xml is correct.
I use the Lunatics apps for the CPU only; the Fermi is using the "standard" app.

<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>AK_v8b_win_x64_SSSE3x.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <platform>windows_intelx86</platform>
        <file_ref>
            <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <platform>windows_x86_64</platform>
        <file_ref>
            <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>libfftw3f-3-1-1a_upx.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cudart32_30_14.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cufft32_30_14.dll</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <avg_ncpus>0.300000</avg_ncpus>
        <max_ncpus>0.300000</max_ncpus>
        <flops>57462450464</flops>
        <plan_class>cuda_fermi</plan_class>
        <file_ref>
            <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
        </file_ref>
        <coproc>
            <type>CUDA</type>
            <count>0.5</count>
        </coproc>
    </app_version>
</app_info>
TheFreshPrince a.k.a. BlueTooth76
Joined: 4 Jun 99
Posts: 210
Credit: 10,315,944
RAC: 0
Netherlands
Message 1005791 - Posted: 18 Jun 2010, 17:17:14 UTC - in response to Message 1005784.  


Your current setup is OK, now that you've told me the whole story :-)



:P Sorry :P
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 1005925 - Posted: 18 Jun 2010, 23:40:59 UTC - in response to Message 1005601.  
Last modified: 18 Jun 2010, 23:41:47 UTC

Perhaps reschedule can't find them because they have the "wrong" plan_class (or whatever tag is appropriate).

How can I tell whether this info has been set, and set correctly?
perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 1005933 - Posted: 18 Jun 2010, 23:52:08 UTC - in response to Message 1005925.  

Speedy, the rescheduler won't work with the Fermi. It only recognizes 6.08 and 6.09; it cannot do the 6.10 Fermi plan_class.


PROUD MEMBER OF Team Starfire World BOINC
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 1006015 - Posted: 19 Jun 2010, 1:48:26 UTC
Last modified: 19 Jun 2010, 1:58:27 UTC

OK, thanks. I'm running them on my GPU; they are taking about 1 hour 24 minutes each. I have another 2 after this one completes (it's 18% done).

So am I correct in saying there's no way to send tasks from a GTX 470 GPU to the CPU?
Questor
Volunteer tester
Joined: 3 Sep 04
Posts: 471
Credit: 230,506,401
RAC: 157
United Kingdom
Message 1006100 - Posted: 19 Jun 2010, 8:19:23 UTC - in response to Message 1005933.  
Last modified: 19 Jun 2010, 8:22:47 UTC

Speedy, the rescheduler won't work with the Fermi. It only recognizes 6.08 and 6.09; it cannot do the 6.10 Fermi plan_class.


My Fermi machine had almost run out of GPU tasks but had plenty of suitable CPU tasks, so I did a bit of testing on this.

The ReSchedule tool seems to work OK with all tasks as long as you are rebranding from GPU to CPU.

The problem occurs when you rebrand from CPU to GPU.

The issue seems to be more about the plan_class entry than the version number.

608 tasks were plan_class cuda
609 tasks are plan_class cuda_23 (although some people have left 609 as cuda)
610 tasks are plan_class cuda_fermi

Moving tasks from CPU to GPU, ReSchedule has to add a plan_class entry, as one does not exist for CPU tasks.

After taking a full backup and stopping network access, I ran ReSchedule to move tasks from CPU to GPU and examined the client_state.xml file.

There are 3 relevant sections in the file - file_info, workunit and result. result has a version number and a plan_class, workunit has a version number, and file_info has neither.

All workunit entries that had been changed to GPU had a version number of 610 - correct.

All result entries that had been changed to GPU had a plan_class entry of >cuda< rather than >cuda_fermi<.
I did a case-sensitive search and replace from >cuda< to >cuda_fermi< using Notepad (case-sensitive because there are 2 other entries containing >CUDA< which should not change), restarted BOINC, and all worked OK - no lost workunits because of missing apps.

I believe the same is true of 609 tasks, since >cuda_23< was adopted - the same fix should apply, but I haven't actually tested it yet.

Anyone continuing to use >cuda< rather than >cuda_23< or >cuda_fermi< "should" find that ReSchedule works OK.

This does of course mean manual intervention in the use of ReSchedule - not ideal, but better than letting your GPU run dry.
i.e.
1. Stop BOINC (including the running applications).
2. Run ReSchedule to move tasks from CPU to GPU.
3. Manually edit the client_state file to change plan_class where necessary (see the fragment below).
4. Restart BOINC.

N.B. If you run ReSchedule while BOINC is running, it will automatically restart BOINC before you have a chance to edit the file, and all incorrect tasks will be dropped.

I have only tested this on one machine, so use caution if you are attempting this - especially with the shortage of new tasks at present (you don't want to lose any precious tasks!) - and definitely take a backup beforehand so that everything can be restored to its pre-reschedule state until you are confident that all is working OK.
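
For illustration, here's roughly what the step-3 edit looks like. This is a hypothetical client_state.xml fragment - the task name is invented and most tags are omitted - showing a result entry as ReSchedule leaves it:

    <result>
        <name>15my10aa.12345.678.9.10.111_2</name>    <!-- hypothetical task name -->
        <version_num>610</version_num>                <!-- ReSchedule gets this right -->
        <plan_class>cuda</plan_class>                 <!-- wrong: edit this to cuda_fermi -->
        ...
    </result>

After the search and replace, that line should read <plan_class>cuda_fermi</plan_class>, matching the plan_class of the Fermi app_version in app_info.xml.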


John.
GPU Users Group



Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 1006111 - Posted: 19 Jun 2010, 8:55:22 UTC

Can I please have an example of what might need to be changed in the client_state file?
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14687
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1006116 - Posted: 19 Jun 2010, 9:09:36 UTC - in response to Message 1005933.  

Speedy, the rescheduler won't work with the Fermi. It only recognizes 6.08 and 6.09; it cannot do the 6.10 Fermi plan_class.

If you would all just read a bit more of the thread - specifically, my conversation with MadMaC on 16 June.

Questor (just now) is absolutely right, and has confirmed what we found then: ReScheduler puts the correct <version_num> into the file, but the wrong <plan_class>.

Manually changing every <plan_class> to cuda_fermi obviously works, but the alternative is to change your app_info file so that BOINC knows how to handle the rescheduled tasks. We got into a slight muddle with x64 applications last time, so here's a slightly different suggestion.

Open your app_info.xml file for editing. (Usual rules - plain text only; Notepad in ANSI mode is fine.) Locate the <app_version> ... </app_version> section containing your Fermi application.

Duplicate the entire section (copy and paste), including the two bracketing tags <app_version> and </app_version>. In one copy, change the <plan_class> from cuda_fermi to cuda; leave the other alone.

Repeat the above for any other app_version sections containing the Fermi application. Save your changes.

And that's it. Rescheduler should work automatically again.
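
As a sketch - using the file names from the app_info posted earlier in this thread, with the second copy abridged - the duplicated pair would end up looking like this:

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <plan_class>cuda_fermi</plan_class>    <!-- original: matches freshly downloaded Fermi tasks -->
        <file_ref>
            <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
            <main_program/>
        </file_ref>
        ...
    </app_version>
    <app_version>
        ... everything identical to the section above, except ...
        <plan_class>cuda</plan_class>          <!-- the copy: catches tasks rebranded by ReScheduler -->
    </app_version>

Both sections point at the same executable, so tasks in either plan_class run the same Fermi application.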
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1643
Credit: 12,921,799
RAC: 89
New Zealand
Message 1006124 - Posted: 19 Jun 2010, 9:34:15 UTC

Thanks Richard. I'm going to leave the client_state file alone, as I'm scared I'll crash my tasks. Thanks all the same.
SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6659
Credit: 121,090,076
RAC: 0
United States
Message 1006181 - Posted: 19 Jun 2010, 13:18:49 UTC

Good morning! Based on careful reading of this thread, I tried to get my app_info file straightened out with the latest file names. I got 3 GPU units, but they errored out instantly, so I realize I need help. This is what I have so far. The CPU and AP portions work perfectly, as they are the result of the Lunatics installer, but clearly the Fermi portions are flawed somehow.

<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>AK_v8b_win_x64_SSSE3x.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <platform>windows_intelx86</platform>
        <file_ref>
            <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <platform>windows_x86_64</platform>
        <file_ref>
            <file_name>AK_v8b_win_x64_SSSE3x.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app>
        <name>astropulse_v505</name>
    </app>
    <file_info>
        <name>ap_5.05r409_SSE.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>astropulse_v505</app_name>
        <version_num>505</version_num>
        <platform>windows_intelx86</platform>
        <file_ref>
            <file_name>ap_5.05r409_SSE.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app_version>
        <app_name>astropulse_v505</app_name>
        <version_num>505</version_num>
        <platform>windows_x86_64</platform>
        <file_ref>
            <file_name>ap_5.05r409_SSE.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>

    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cudart32_30_14.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cufft32_30_14.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>libfftw3f-3-1-1a_upx.dll</name>
        <executable/>
    </file_info>

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <avg_ncpus>0.100000</avg_ncpus>
        <max_ncpus>0.100000</max_ncpus>
        <platform>windows_intelx86_64</platform>
        <plan_class>cuda_fermi</plan_class>
        <file_ref>
            <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
        </file_ref>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
    </app_version>

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <avg_ncpus>0.100000</avg_ncpus>
        <max_ncpus>0.100000</max_ncpus>
        <platform>windows_x86_64</platform>
        <plan_class>cuda_fermi</plan_class>
        <file_ref>
            <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
        </file_ref>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
    </app_version>

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <avg_ncpus>0.100000</avg_ncpus>
        <max_ncpus>0.100000</max_ncpus>
        <platform>windows_intelx86_64</platform>
        <plan_class>cuda_fermi</plan_class>
        <file_ref>
            <file_name>setiathome_6.10_windows_intelx86__cuda_fermi.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_30_14.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>libfftw3f-3-1-1a_upx.dll</file_name>
        </file_ref>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
    </app_version>
</app_info>

Thank you for any help.

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14687
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1006189 - Posted: 19 Jun 2010, 13:43:22 UTC - in response to Message 1006181.  

Good morning! Based on careful reading of this thread, I tried to get my app_info file straightened out with the latest file names. I got 3 GPU units, but they errored out instantly, so I realize I need help. This is what I have so far. The CPU and AP portions work perfectly, as they are the result of the Lunatics installer, but clearly the Fermi portions are flawed somehow.

It would be better if you posted a representative subset of the error messages, so we know what we're looking for.
SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6659
Credit: 121,090,076
RAC: 0
United States
Message 1006194 - Posted: 19 Jun 2010, 13:58:08 UTC - in response to Message 1006189.  

Good morning! Based on careful reading of this thread, I tried to get my app_info file straightened out with the latest file names. I got 3 GPU units, but they errored out instantly, so I realize I need help. This is what I have so far. The CPU and AP portions work perfectly, as they are the result of the Lunatics installer, but clearly the Fermi portions are flawed somehow.

It would be better if you posted a representative subset of the error messages, so we know what we're looking for.


At the time, BOINC hadn't reported yet, and all BOINC said was "computation error". Here is a link to one of the failed units.

http://setiathome.berkeley.edu/result.php?resultid=1638383755

Thank you! I really feel bad causing even one error.

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
Questor
Volunteer tester
Joined: 3 Sep 04
Posts: 471
Credit: 230,506,401
RAC: 157
United Kingdom
Message 1006195 - Posted: 19 Jun 2010, 13:58:15 UTC - in response to Message 1006116.  
Last modified: 19 Jun 2010, 13:59:17 UTC

Speedy, the rescheduler won't work with the Fermi. It only recognizes 6.08 and 6.09; it cannot do the 6.10 Fermi plan_class.

If you would all just read a bit more of the thread - specifically, my conversation with MadMaC on 16 June.

Questor (just now) is absolutely right, and has confirmed what we found then: ReScheduler puts the correct <version_num> into the file, but the wrong <plan_class>.

Manually changing every <plan_class> to cuda_fermi obviously works, but the alternative is to change your app_info file so that BOINC knows how to handle the rescheduled tasks. We got into a slight muddle with x64 applications last time, so here's a slightly different suggestion.

Open your app_info.xml file for editing. (Usual rules - plain text only; Notepad in ANSI mode is fine.) Locate the <app_version> ... </app_version> section containing your Fermi application.

Duplicate the entire section (copy and paste), including the two bracketing tags <app_version> and </app_version>. In one copy, change the <plan_class> from cuda_fermi to cuda; leave the other alone.

Repeat the above for any other app_version sections containing the Fermi application. Save your changes.

And that's it. Rescheduler should work automatically again.


Found your MadMaC conversation now. It's hard keeping up sometimes - I don't know how you do it!
So the rebranded tasks just get processed with the extra >cuda< section of app_info, and all the original un-rebranded GPU tasks are processed with the original >cuda_fermi< section.
GPU Users Group



Questor
Volunteer tester
Joined: 3 Sep 04
Posts: 471
Credit: 230,506,401
RAC: 157
United Kingdom
Message 1006197 - Posted: 19 Jun 2010, 14:05:32 UTC - in response to Message 1005416.  
Last modified: 19 Jun 2010, 14:11:41 UTC


You might have better luck with version 1.9

Thanks Richard. According to the ReSchedule 1.9 log, no tasks need to be moved:

    User testing for a reschedule
    CPU tasks: 0 (0 VLAR, 0 VHAR)
    GPU tasks: 0 (0 VLAR, 0 VHAR)
    No reschedule needed


I'm not sure I agree with the log. I will let the tasks run slowly on my GPU.



Looking back at this post of yours, Speedy - are you still getting 0 tasks showing up when you run the ReSchedule tool? If you actually have CPU/GPU tasks but it shows a count of 0, you may be suffering from a problem I had, where a slight (very difficult to spot) corruption in client_state.xml causes ReSchedule to show 0 tasks even though BOINC works perfectly OK.
GPU Users Group



Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14687
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1006203 - Posted: 19 Jun 2010, 14:23:12 UTC - in response to Message 1006194.  

Good morning! Based on careful reading of this thread, I tried to get my app_info file straightened out with the latest file names. I got 3 GPU units, but they errored out instantly, so I realize I need help. This is what I have so far. The CPU and AP portions work perfectly, as they are the result of the Lunatics installer, but clearly the Fermi portions are flawed somehow.

It would be better if you posted a representative subset of the error messages, so we know what we're looking for.

At the time, BOINC hadn't reported yet, and all BOINC said was "computation error". Here is a link to one of the failed units.

http://setiathome.berkeley.edu/result.php?resultid=1638383755

Thank you! I really feel bad causing even one error.

Steve

IIRC, "Exit status -185 (0xffffffffffffff47)" may mean the correct DLL files are either not linked via app_info or not present in the project directory. But I'm 100 miles away from the nearest CUDA card this weekend, so it's hard to check.

Or:

Is anyone else reading Steve's app_info as having three near-identical Fermi sections, two of them with

<version_num>610</version_num> 
<platform>windows_intelx86_64</platform>
<plan_class>cuda_fermi</plan_class>

I'd have to read back over my conversations with MadMaC, but I think I'd try it with at least one each of:

<platform> windows_intelx86
<plan_class> cuda_fermi

and

<platform> windows_intelx86
<plan_class> cuda

(in the original format, of course: I've just shown it like that to emphasise the changes)
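
For instance, the header of the first Fermi section would become something like this (a sketch only - the file_ref and coproc parts stay exactly as in Steve's post, and the second copy would carry <plan_class>cuda</plan_class> instead):

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <platform>windows_intelx86</platform>   <!-- was windows_intelx86_64, which isn't a standard BOINC platform name -->
        <plan_class>cuda_fermi</plan_class>
        ...
    </app_version>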

The DLL references look OK - just check that the files themselves are still there.