Joined: 25 Sep 06
Just wondering if it would be feasible to use your graphics card/physics card as a large floating-point processor to help speed up SETI@home and/or BOINC.
nVidia already has a C compiler for their GPUs: http://developer.nvidia.com/object/cuda.html
And both ATi/AMD were/are looking at (and possibly already are) using the slave GPU(s) in a multi-GPU setup as a physics processing unit.
Even better would be an API designed specifically for this sort of thing, but that's only if you can get all the companies to follow along [but dreams are free, right?]
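To give a feel for what this would look like, here's a minimal sketch of offloading a floating-point-heavy step to the GPU with the CUDA toolkit linked above. The kernel computes the power spectrum (|z|²) of complex FFT output, the kind of data-parallel work SETI-style signal processing involves; all names and sizes here are illustrative assumptions, not taken from the actual SETI@home client.

```cuda
// Hypothetical sketch: GPU power-spectrum computation with CUDA.
// Each GPU thread handles one complex bin independently, which is
// exactly the kind of embarrassingly parallel FP work GPUs excel at.
#include <cuda_runtime.h>

__global__ void power_spectrum(const float2 *fft_out, float *power, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // guard against threads past the end of the array
        power[i] = fft_out[i].x * fft_out[i].x
                 + fft_out[i].y * fft_out[i].y;
}

int main(void)
{
    const int n = 1 << 20;          // 1M bins, illustrative size
    float2 *d_in;
    float  *d_out;
    cudaMalloc(&d_in,  n * sizeof(float2));
    cudaMalloc(&d_out, n * sizeof(float));

    // ... fill d_in with FFT output (e.g. produced on-GPU by cuFFT) ...

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;  // round up to cover all n
    power_spectrum<<<blocks, threads>>>(d_in, d_out, n);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

With a million independent bins, the GPU can run thousands of these multiply-adds at once, which is where the hoped-for speedup over a CPU would come from.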
Joined: 9 Apr 02
See this thread for such discussion.
©2018 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. Astropulse is funded in part by the NSF through grant AST-0307956.