Message boards :
Number crunching :
Special App and Kepler Architecture
Tom M · Joined: 28 Nov 02 · Posts: 5124 · Credit: 276,046,078 · RAC: 462
For $120 you could have bought a GTX 1060 3GB, which would use ~100W on SETI and still be faster than the 690, since it can use the latest special app. Those prices or lower are on eBay. :) I am also seeing some pretty good prices on GTX 1070 Ti's. Tom. A proud member of the OFA (Old Farts Association).
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
I looked at the 1060's, but they have only 1280 CUDA cores, and the GTX 690 is a dual GPU with 3072 CUDA cores and a 512-bit memory bus. Thanks; at first glance the 690 looked promising to me. The 256 bits per GPU was higher than the 192 bits for the 1060, so on the surface I thought that, with the 3072 CUDA cores, it would do a lot better. I didn't realize the older architecture would be that much of a hindrance. I will have to reevaluate and devise a new plan. :) HA!
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
> For $120, you could have bought a GTX 1060 3GB, which would use ~100W on SETI, and still be faster than the 690 since it can use the latest special app.
I looked on eBay, but I also had it in my mind that I wanted dual-GPU cards, lol. Maybe that was a bad plan, since the 690 is so old.
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
> I looked at the 1060's, but they have only 1280 CUDA cores, and the GTX 690 is a dual GPU with 3072 CUDA cores and a 512-bit memory bus.
I noticed the GTX 690 isn't even on the chart, nor the Titan Z. Is it because they count WUs on each GPU separately instead of combined?
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
> I noticed the GTX 690 isn't even on the chart, nor the Titan Z. Is it because they count WUs on each GPU separately instead of combined?
More likely there just aren't enough of them around (there has to be a minimum number of individual cards returning valid work for that model to make the charts). Grant, Darwin NT
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
No. That system has 7 GPUs (6x 1080 Ti, 1x 1080). Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Wiggo · Joined: 24 Jan 00 · Posts: 34754 · Credit: 261,360,520 · RAC: 489
The biggest problem with dual-GPU cards here is that they have a much shorter life span than single-GPU cards do, due to the amount of heat they have to dissipate. Cheers.
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
Weird, under your account it is stating {63} NVIDIA for coprocessors.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
> Weird, under your account it is stating {63} NVIDIA for coprocessors.
Some people have worked out how to get around the 100-WU server-side limits. Grant, Darwin NT
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
> Weird, under your account it is stating {63} NVIDIA for coprocessors.
Oh, meaning that they run more than one WU per GPU? Where would one go to figure this out? :)
Wiggo · Joined: 24 Jan 00 · Posts: 34754 · Credit: 261,360,520 · RAC: 489
> Oh, meaning that they run more than one WU per GPU? Where would one go to figure this out? :)
No, that is not the reason; the reason is to make sure that enough work is on hand to get through the server outages without running out of GPU work. ;-) Cheers.
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
> No, that is not the reason; the reason is to make sure that enough work is on hand to get through the server outages without running out of GPU work. ;-)
Ah, gotcha. On Tuesdays I run out of work on my 1080; I can't imagine how fast having 6 or 7 1080s would run out of work. So tricking the app into reporting more GPUs than you really have allows you to download more WUs to run?
Wiggo · Joined: 24 Jan 00 · Posts: 34754 · Credit: 261,360,520 · RAC: 489
> So tricking the app into reporting more GPUs than you really have allows you to download more WUs to run?
Exactly. :-) Cheers.
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
> Ah, gotcha. On Tuesdays I run out of work on my 1080; I can't imagine how fast having 6 or 7 1080s would run out of work.
Actually, the time to run out of work is approximately the same with 1 or 7 GPUs, since each GPU you add gets you 100 more WUs: a 1-GPU host can download 100 WUs, and a 7-GPU host 7x100. A 1-GPU host crunches 1 WU at a time, a 7-GPU host 7, so they empty their caches at approximately the same rate. The real problem is that the Linux special sauce builds are so fast and optimized that they can crunch a WU on a 1080 Ti in less than 60 seconds. So 100 WUs last around 100 minutes, and the outages normally take 4-6 hrs; just do the math. Just to clarify.
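Juan's arithmetic can be sketched as a quick check (the 100-WU-per-GPU server limit and the ~60 s/WU figure come from his post; the function name is just illustrative):

```python
# Cache-drain arithmetic from the post above: each GPU adds 100 WUs to the
# cache but also drains one WU at a time, so the time until the cache is
# empty does not depend on the number of GPUs.

def cache_minutes(num_gpus, wu_per_gpu=100, secs_per_wu=60):
    """Minutes until a host's GPU work cache empties."""
    total_wu = num_gpus * wu_per_gpu             # e.g. 7 GPUs -> 700 WUs
    wu_per_minute = num_gpus * (60 / secs_per_wu)  # WUs finished per minute
    return total_wu / wu_per_minute

print(cache_minutes(1))   # 100.0 minutes
print(cache_minutes(7))   # 100.0 minutes -- same as one GPU
```

Either way the host holds roughly 100 minutes of work, well short of a 4-6 hour outage.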
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
> Ah, gotcha. On Tuesdays I run out of work on my 1080; I can't imagine how fast having 6 or 7 1080s would run out of work.
Yup, all things being equal (using the same apps), more GPUs won't drain the cache faster, since the cache gets proportionally larger. Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
About the 690: a few years ago I used to run a fleet (about 8 hosts), with 2, 3, or even 4 690s per host. At the time they were some of the top SETI crunchers, but things have changed; the Linux special sauce builds changed everything. If you want to squeeze all you can from your hosts, this is what you could do:

Move the 690s to your Windows hosts, where they can run the OpenCL builds only. Be sure to leave 1 CPU core free for each GPU (2 per 690). Search for the optimized parameters for that GPU (I don't have them anymore), but you could ask Mike for some help. If you can't find them, PM me and I will try to look through my old messages for what I used at the time.

On your Linux boxes, buy the best GPU you can afford with a minimum compute capability of 5.0. If you can, something like the 1060 or up is the best choice. There are good bargains on the top 10x0 series on eBay; look especially at the 1070 (Ti or not), as they have some of the best cost x power x production performance. Obviously the RTX 20x0 cards are superior crunchers, but their cost is superior too. If your host is powering a 690 now (which is power hungry), I'm sure it could power any top GPU available on the market, so don't worry about that. Some could suggest the 750 Ti, but those are relatively old GPUs now, and some of the newer builds may not work on them in the coming years.

Install the Linux special sauce builds and enjoy their amazing crunching speeds. My 0.02
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Based on Vyper's 2080 Ti host and the current mix of work, the tasks finish in around 40 seconds without -nobs. So around an hour for 100 tasks on that GPU. Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours. A proud member of the OFA (Old Farts Association)
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
So if I have 7 GPUs (7x100 WUs), but I am able to trick the app into thinking I have 65 GPUs (65x100 WUs) while really only having 7, I can download 65x100 instead of 7x100, thus having a bigger cache of WUs to work from... correct? Please tell me how to enable this magic sorcery. :)
J3P-0 · Joined: 1 Dec 11 · Posts: 45 · Credit: 25,258,781 · RAC: 180
> About the 690.
Thanks. Unfortunately, I am going to give up on the 690s and switch to something others referenced, like the 1060s, since they support the special app and perform way better. I have really liked the concept of dual-GPU cards ever since I saw an old ATI quad-GPU demo card in the early 2000s, even before SLI and Crossfire came out.
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
Short answer: yes, but you need to trick the project servers at Berkeley, not the app. Also, there is a maximum GPU count of 64, due to memory-allocation issues. You have to edit the BOINC source code and compile a custom version of the BOINC client to do it. Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
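The spoofing arithmetic discussed in this thread can be sketched the same way (numbers are illustrative; the 100-WU-per-GPU server limit and the 64-device cap come from the posts above):

```python
# Cache size when the server sees more GPUs than actually exist. The server
# allocates work per *reported* GPU, but only the real GPUs drain the queue.

def cache_minutes(reported_gpus, real_gpus, wu_per_gpu=100, secs_per_wu=60):
    """Minutes of GPU work on hand for a host reporting `reported_gpus`
    devices while only `real_gpus` actually crunch."""
    if reported_gpus > 64:
        # Cap mentioned above: BOINC's memory allocation limits coprocessors to 64.
        raise ValueError("BOINC caps the coprocessor count at 64")
    total_wu = reported_gpus * wu_per_gpu            # server-side allowance
    drained_per_minute = real_gpus * (60 / secs_per_wu)
    return total_wu / drained_per_minute

print(cache_minutes(7, 7))    # 100.0 -- honest 7-GPU host
print(cache_minutes(63, 7))   # 900.0 -- spoofed host: 15 hours of work
```

An honest 7-GPU host holds about 100 minutes of work; reporting 63 GPUs stretches that to 900 minutes, enough to ride out a 4-6 hour outage.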
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.