Message boards : Number crunching : Post your BOINC Startup 'CUDA' Info
Pappa Joined: 9 Jan 00 Posts: 2562 Credit: 12,301,681 RAC: 0
19.11.2009 03:57:09 NVIDIA GPU 0: GeForce GTS 250 (driver version 19555, CUDA version 3000, compute capability 1.1, 512MB, 470 GFLOPS peak)
Actually, it is a change in the BOINC 6.10.xx series that altered how the benchmark is performed. The 6.6.xx BOINC versions would have shown it as something like this.
Regards. Please consider a donation to the SETI project.
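A quick back-of-envelope check: the 6.10-style "GFLOPS peak" figures posted in this thread appear consistent with a simple rule of shader cores x shader clock x 2 flops per clock (one multiply-add). That rule is an assumption, not BOINC's published formula, but it reproduces the GTS 250 line above and the 9600 GT line quoted later; the 1.836 GHz and 1.625 GHz shader clocks and the shader counts are those cards' reference specifications.

```cpp
// Minimal sketch (not BOINC's actual code): reproduce the "GFLOPS peak" figure
// assuming peak = shader cores x shader clock x 2 flops/clock (one multiply-add).
#include <cstdio>

int main() {
    struct Card { const char* name; int shaders; double shader_ghz; };
    const Card cards[] = {
        {"GeForce GTS 250", 128, 1.836},  // quoted above as 470 GFLOPS peak
        {"GeForce 9600 GT",  64, 1.625},  // quoted later as 208 GFLOPS peak
    };
    for (const Card& c : cards) {
        std::printf("%-16s -> %.0f GFLOPS peak\n",
                    c.name, c.shaders * c.shader_ghz * 2.0);
    }
    return 0;
}
```

Compiled with any C++11 compiler this prints 470 and 208, matching the startup lines, so the change Pappa mentions looks like a switch to a theoretical peak of this kind rather than a measured benchmark.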
Crun-chi Joined: 3 Apr 99 Posts: 174 Credit: 3,037,232 RAC: 0
19.11.2009 03:57:09 NVIDIA GPU 0: GeForce GTS 250 (driver version 19555, CUDA version 3000, compute capability 1.1, 512MB, 470 GFLOPS peak)
As always: why don't those guys who have CUDA 3.0 distribute the DLLs? I suppose there are only two DLLs, like the old CUDA 2.3 ones. But no, they will always say they cannot give them to anyone because it is pre-beta (or something like that). It looks like those two DLLs are more secret than the whole Pentagon :)
I am a cruncher :) I LOVE SETI BOINC :)
Richard Haselgrove Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874
The 3.0 DLLs have been tested at Lunatics, and showed at best a 2% performance increase over 2.3 - nothing like the 30%+ achieved by 2.3 over 2.2, or 2.2 over 2.1. So nothing worth losing your licence over. And that licence matters: if the developers deliberately flouted the NDAs (Non-Disclosure Agreements), they could lose their access to pre-release development tools, or to the closed technical message boards where problems can be reported and overcome.
j mercer Joined: 3 Jun 99 Posts: 2422 Credit: 12,323,733 RAC: 1
Apply for NVIDIA's GPU Computing registered developer program, as I replied when you asked for copies. Here are the links:
http://www.nvidia.com/object/cuda_get.html
http://nvdeveloper.nvidia.com/content/GPUComputingDeveloperApplication/frmDeveloperRegistration.asp
...
Crun-chi Joined: 3 Apr 99 Posts: 174 Credit: 3,037,232 RAC: 0
The 3.0 DLLs have been tested at Lunatics, and showed at best a 2% performance increase over 2.3 - nothing like the 30%+ achieved by 2.3 over 2.2, or 2.2 over 2.1. So nothing worth losing your licence over.
Many of them are on that forum only for the CUDA toolkit, and even by losing the licence they would not lose anything important, because those people are not developers or anything like that: they are ordinary people who just got lucky and got access to the NVIDIA forum :)
I am a cruncher :) I LOVE SETI BOINC :)
Richard Haselgrove Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874
Many of them are on that forum only for the CUDA toolkit, and even by losing the licence they would not lose anything important, because those people are not developers or anything like that: they are ordinary people who just got lucky and got access to the NVIDIA forum :)
If you can write code like Jason or Raistmer, then get stuck in - we still need to solve the VLAR problem. Otherwise, there are clearly two grades of 'ordinary' people.
Crun-chi Joined: 3 Apr 99 Posts: 174 Credit: 3,037,232 RAC: 0
The 3.0 DLLs have been tested at Lunatics, and showed at best a 2% performance increase over 2.3 - nothing like the 30%+ achieved by 2.3 over 2.2, or 2.2 over 2.1. So nothing worth losing your licence over.
So when they test the 3.0 DLLs at Lunatics, that is OK, but if they are put on the Internet, that is not OK :) How interesting :)
I am a cruncher :) I LOVE SETI BOINC :)
Richard Haselgrove Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874
So when they test the 3.0 DLLs at Lunatics, that is OK, but if they are put on the Internet, that is not OK :)
Testing is an important part of development (as you must know, from your 'Volunteer tester' tag). And believe me, if those tests had revealed another 30% increase, they would have been falling over themselves to develop a safe, reliable, tested solution to use the 3.0 DLLs for SETI (ensuring that the tasks validate, and hence generate credit, when compared with older versions) - while still keeping their licences to develop with 3.1, 3.2, 4.0, ...
hiamps Joined: 23 May 99 Posts: 4292 Credit: 72,971,319 RAC: 0
"CUDA device: GeForce GTX 275 (driver version 18585, compute capability 1.3, 896MB, est. 123GFLOPS)
CUDA device: GeForce GTX 285 (driver version 18171, CUDA version 1.3, 1024MB, est. 127GFLOPS)"
I was wondering if this is a good comparison? For the money, the GTX 275 seems almost as fast as the GTX 285. Is there a big difference between GDDR3 and DDR3 memory? They both seem to have 240 processors. Anyone have both?
Official Abuser of Boinc Buttons... And no good credit hound!
Dirk Sadowski Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5
http://www.nvidia.com/object/cuda_learn_products.html
http://www.nvidia.com/object/product_geforce_gtx_275_us.html
http://www.nvidia.com/object/product_geforce_gtx_285_us.html
GTX275 - 633/1404/1134 - (123 GFLOPS ?)
GTX285 - 648/1476/1242 - (127 GFLOPS ?)
[GPU/shader/RAM clocks in MHz]
Are the GPUs mentioned above at stock speed? My OCed GTX260-216 cards have:
EVGA GTX260 Core216 SSC - 675/1458/1152 -> 112 GFLOPS
GIGABYTE GTX260(-216) SOC - 680/1500/1250 -> 117 GFLOPS
The GIGABYTE has higher clocks than a GTX285! But OK, that is 30 multiprocessors (240 shader cores) versus 27 (216). Because of CUDA: more shader cores, more shader speed, more RAM speed, more GPU core speed (maybe) -> more performance.
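To put some numbers on hiamps' question, here is a small comparison using the same assumed peak rule as above (shader cores x shader clock x 2 flops per clock). The clocks are the figures Dirk quotes, so treat the output as an illustration of shader throughput, not measured performance.

```cpp
// Shader-throughput comparison (assumption: peak = shader cores x shader clock x 2).
// Clocks are the stock GTX275/285 figures and Dirk's OCed GTX260-216 cards.
#include <cstdio>

int main() {
    struct Card { const char* name; int shaders; double shader_ghz; };
    const Card cards[] = {
        {"GTX275 (stock)",          240, 1.404},
        {"GTX285 (stock)",          240, 1.476},
        {"EVGA GTX260-216 SSC",     216, 1.458},
        {"GIGABYTE GTX260-216 SOC", 216, 1.500},
    };
    for (const Card& c : cards) {
        std::printf("%-24s -> %.0f GFLOPS peak\n",
                    c.name, c.shaders * c.shader_ghz * 2.0);
    }
    return 0;
}
```

On that accounting the GTX275 comes out within about 5% of the GTX285, and the overclocked GTX260-216 cards land just below them, roughly matching the 630 and 653 GFLOPS peak shown for the GTX260s later in the thread.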
Dirk Sadowski Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5
BTW, all GTX2xx GPUs have GDDR3 RAM. The GTX285 has a 512-bit 'Memory Interface Width' and the GTX275 / GTX260-216 have 448-bit. But does this wider memory bus really give that much additional performance?
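For a rough answer to that question, memory bandwidth can be estimated as effective data rate x bus width, with GDDR3 transferring twice per memory clock. The memory clocks and bus widths below are the ones quoted in this thread, so this is only an approximation.

```cpp
// Rough GDDR3 bandwidth estimate: bandwidth = 2 x memory clock x (bus width / 8).
// Memory clocks (MHz) and bus widths are the figures quoted in this thread.
#include <cstdio>

int main() {
    struct Card { const char* name; double mem_mhz; int bus_bits; };
    const Card cards[] = {
        {"GTX285 (512-bit)",          1242.0, 512},
        {"GTX275 (448-bit)",          1134.0, 448},
        {"GTX260-216 OCed (448-bit)", 1250.0, 448},
    };
    for (const Card& c : cards) {
        double gb_per_s = 2.0 * c.mem_mhz * 1e6 * (c.bus_bits / 8.0) / 1e9;
        std::printf("%-27s -> ~%.0f GB/s\n", c.name, gb_per_s);
    }
    return 0;
}
```

That gives roughly 159, 127 and 140 GB/s respectively, so the GTX285's wider bus plus faster memory buys it about 25% more bandwidth than a stock GTX275; whether the SETI CUDA application is actually limited by memory bandwidth is a separate question.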
[AF>france>pas-de-calais]symaski62 Joined: 12 Aug 05 Posts: 258 Credit: 100,548 RAC: 0
31/10/2009 09:07:21 NVIDIA GPU 0: GeForce 8400 GS (driver version 19062, CUDA version 2030, compute capability 1.1, 256MB, 43 GFLOPS peak)
to
27/11/2009 23:45:24 NVIDIA GPU 0: GeForce 8400 GS (driver version 19562, CUDA version 3000, compute capability 1.1, 256MB, 43 GFLOPS peak)
@+
Dirk Sadowski Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5
BOINC V6.6.x
After update to BOINC V6.10.18:
The EVGA brothers:
NVIDIA GPU 0: GeForce GTX 260 (driver version 19038, CUDA version 2030, compute capability 1.3, 896MB, 630 GFLOPS peak)
NVIDIA GPU 1: GeForce GTX 260 (driver version 19038, CUDA version 2030, compute capability 1.3, 896MB, 630 GFLOPS peak)
NVIDIA GPU 2: GeForce GTX 260 (driver version 19038, CUDA version 2030, compute capability 1.3, 896MB, 630 GFLOPS peak)
NVIDIA GPU 3: GeForce GTX 260 (driver version 19038, CUDA version 2030, compute capability 1.3, 896MB, 630 GFLOPS peak)
The GIGABYTE:
NVIDIA GPU 0: GeForce GTX 260 (driver version 19038, CUDA version 2030, compute capability 1.3, 896MB, 653 GFLOPS peak)
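For anyone curious where the fields in these 6.10-style lines come from, here is a minimal sketch that queries them through the CUDA runtime API. It is not BOINC's actual detection code: the GFLOPS rule is the same assumption used above, the 8 cores per multiprocessor holds only for compute capability 1.x devices, and BOINC reports the display-driver build number (e.g. 19038), which this API does not expose, so the sketch prints the CUDA driver and runtime versions instead.

```cpp
// Minimal sketch of a BOINC 6.10-style GPU startup line built from CUDA device
// properties. Build with e.g.: nvcc list_gpus.cpp -o list_gpus (or g++ ... -lcudart).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA devices found\n");
        return 1;
    }
    int driver = 0, runtime = 0;
    cudaDriverGetVersion(&driver);    // CUDA driver API version, e.g. 3000
    cudaRuntimeGetVersion(&runtime);  // CUDA runtime version, e.g. 2030
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        int cores_per_mp = 8;  // valid for compute capability 1.x only (assumption)
        int shaders = p.multiProcessorCount * cores_per_mp;
        double ghz = p.clockRate / 1.0e6;     // clockRate is reported in kHz
        double gflops = shaders * ghz * 2.0;  // assumed rule: 1 MAD = 2 flops/clock
        std::printf("NVIDIA GPU %d: %s (CUDA driver %d, CUDA runtime %d, "
                    "compute capability %d.%d, %luMB, %.0f GFLOPS peak)\n",
                    i, p.name, driver, runtime, p.major, p.minor,
                    (unsigned long)(p.totalGlobalMem / (1024 * 1024)), gflops);
    }
    return 0;
}
```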
Stef Joined: 21 Dec 08 Posts: 1 Credit: 176,496 RAC: 0
NVIDIA GPU 0: GeForce 8800 GTS 512 (driver version 19562, CUDA version 3000, compute capability 1.1, 512MB, 442 GFLOPS peak)
Luke Joined: 31 Dec 06 Posts: 2546 Credit: 817,560 RAC: 0
Not that I use my graphics card for crunching, but I'll post here anyway.
28/11/2009 3:17:45 p.m. NVIDIA GPU 0: GeForce 8600M GT (driver version 18681, CUDA version 2020, compute capability 1.1, 256MB, 61 GFLOPS peak)
Hopefully the GTX260 C216 Superclock in my gaming build will do a bit better!
- Luke.
Mahoujin Tsukai Joined: 21 Jul 07 Posts: 147 Credit: 2,204,402 RAC: 0
I'm curious: how many GFLOPS can an Intel Core 2 Quad Q6600 do at stock speed?
Richard Haselgrove Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874
I'm curious: how many GFLOPS can an Intel Core 2 Quad Q6600 do at stock speed?
About 2.4 per core, or 9.6 in total. Floating-point speed is very close to the stock clock for the Core 2s.
Josef W. Segur Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0
I'm curious: how many GFLOPS can an Intel Core 2 Quad Q6600 do at stock speed?
For the Core 2 architecture, each core has multiple execution units, one of which can do 4 SIMD floating point operations per clock, while another can do one floating point operation per clock. So 2.4 * 5 = 12 peak GFLOPS per core, in terms comparable to the way the GPUs are rated.
Joe
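Joe's counting as one small worked example (the 4-wide SIMD unit plus the scalar FP unit per clock is his accounting; whether real code ever reaches this peak is another matter):

```cpp
// Peak (not benchmarked) GFLOPS for a stock Q6600, following Joe's counting:
// per core, one 4-wide SIMD FP operation plus one scalar FP operation per clock.
#include <cstdio>

int main() {
    const double clock_ghz = 2.4;       // Q6600 stock clock
    const int flops_per_clock = 4 + 1;  // 4-wide SIMD unit + 1 scalar FP unit
    const int cores = 4;
    double per_core = clock_ghz * flops_per_clock;
    std::printf("Per core: %.0f GFLOPS peak, whole CPU: %.0f GFLOPS peak\n",
                per_core, per_core * cores);
    return 0;
}
```

That gives 12 GFLOPS peak per core and 48 for the whole quad, against Richard's benchmarked figure of about 2.4 GFLOPS per core; as Richard notes below, the gap is largely a matter of how you count.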
angler Joined: 19 Oct 00 Posts: 33 Credit: 880,214 RAC: 0
NVIDIA GPU 0: GeForce 9600 GT (driver version 19107, CUDA version 2030, compute capability 1.1, 512MB, 208 GFLOPS peak)
Richard Haselgrove Joined: 4 Jul 99 Posts: 14676 Credit: 200,643,578 RAC: 874
For the Core 2 architecture, each core has multiple execution units, one of which can do 4 SIMD floating point operations per clock, while another can do one floating point operation per clock. So 2.4 * 5 = 12 peak GFLOPS per core, in terms comparable to the way the GPUs are rated. Joe
Looking for a direct comparison between the old and new BOINC estimates, I found Martin's GTS 250:
CUDA device: GeForce GTS 250 (driver version 0, CUDA version 1.1, 512MB, est. 84GFLOPS)
I reckon my 'est. 2.4GFLOPS' (benchmark per core) is pretty close to your '12 GFLOPS peak': it's just the way we count 'em.