Message boards : Number crunching : New app?
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0

Ah right, not CUDA, so not in my experience :)

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
qbit · Joined: 19 Sep 04 · Posts: 630 · Credit: 6,868,528 · RAC: 0

Hey Jason, I just discovered that a V8 CUDA version is now live on beta as well. I'll stop here for the moment and do a bit of testing over there. I'm curious whether the CUDA version puts less stress on the system and how it compares in performance to the OpenCL version.
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0

> Hey Jason, I just discovered that a V8 cuda version is now live on beta also. I stop here for the moment and do a bit of testing over there. I'm curious if the cuda version puts less stress on the system and how it compares in performance to the open cl version.

I have no experience with the OpenCL one, but, if I understand correctly, running 2 or 3 up with the CUDA app to load the bigger GPUs is a bit less demanding of the CPU. Single instance, I think the OpenCL one should go faster in specific situations (until the next CUDA optimisation round, anyway).
Shawn Rothermund · Joined: 13 Feb 03 · Posts: 132 · Credit: 79,997,445 · RAC: 123

Good evening. I am currently running beta (GPU only) alongside SETI main on my daily driver, and when I was watching last night, the v8 CUDA 50s, 42s and a couple of 32s were finishing in far less than the estimated time; the CUDA 50s in particular were running half an hour or more under the estimate. I'm not sure about the OpenCL ones, but I think they were finishing early as well. I will pay more attention when I get more beta work; I have beta suspended right now because I have some AP 7.10s and some v7 CUDA 42 resends to finish that were being skipped over by beta.

ME AND MY BOY LOOKING FOR ET
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874

You'll find that the estimates are wrong for the first batch of tasks, but settle down pretty quickly. Pay attention to the actual times, not the estimates.
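The way estimates "settle down" can be sketched numerically. This is an illustrative model only, not BOINC's actual correction code; `update_dcf` and its step sizes are invented for the example:

```python
def update_dcf(dcf, estimated, actual):
    """Nudge a duration-correction factor after one finished task.

    Illustrative only -- BOINC's real estimate correction differs in
    detail, but the idea is the same: react quickly when tasks run
    longer than estimated, drift down slowly when they run shorter,
    so estimates settle after the first batch of results.
    """
    ratio = actual / estimated
    step = 0.5 if ratio > dcf else 0.1   # fast up, slow down
    return dcf + step * (ratio - dcf)

# Four tasks estimated at 3600 s each actually finish in ~1800 s.
dcf = 1.0
for actual in [1800, 1750, 1900, 1820]:
    dcf = update_dcf(dcf, 3600, actual)
# dcf drifts down toward the true ratio (~0.5), so the next
# estimate (3600 * dcf) is already much closer to reality.
```

This is why the first batch after a new app release looks so far off: the correction starts from the old app's behaviour and needs a few results to converge.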
Shawn Rothermund · Joined: 13 Feb 03 · Posts: 132 · Credit: 79,997,445 · RAC: 123

Thank you, I will do that. Since the first run was off by that much, it will give me a sort of benchmark for everything when it goes to main. Also, one question for whoever might have an answer: after everything settles down, I would like to try running the Lunatics app to optimise the performance of my systems, but I don't think my current skill level is up to it without some learning on my part. Is there any way to learn what I need to make this happen? I'm willing to learn. Thanks to everyone for any help on this matter.
William · Joined: 14 Feb 13 · Posts: 2037 · Credit: 17,689,662 · RAC: 0

> ...after everything settles down I would like to try running the lunatics app to optimize the performance of my systems but I do not think that my current skill level is up to it without some learning on my part...

'The Lunatics app' is usually supplied as an installer that requires only minimal expertise and knowledge from the user. Specifically, you need to know what your CPU is capable of (though these days we do recognition and only present the available options) and what type of GPU you have (Intel, NV, ATI), along with the 'class' it belongs to (e.g. pre-Fermi, Kepler, Maxwell). If in doubt, there is always a question thread open, and we stand by to help.

NB: at this point all the GPU apps you get via the installer are the same ones you get directly from the project, since the project has been relying on volunteers to supply those for a few years. You will get optimised CPU apps though, so overall performance increases. Only when we've switched from version updates back to optimisation does the installer usually carry newer apps than the stock project. And even then, GPU apps get handed to the project and eventually turn up as stock. That is a slow process though, so with the installer you're often a generation or two ahead. [We can simply update faster when an app gets cleared for release, as there are several people working on it, as opposed to one.]

So to answer your question: first the installer needs to be out, and we need information from the project [for compatibility reasons] that only becomes apparent when the GPU apps have been released. After that we should be able to deliver quite quickly. Richard has been working flat out on the installer, to get it to the point where it's ready to ship as soon as the information from the project is in.

Second, as it is an installer, you should be fine. It only gets complicated and requires skills when you do manual installs (i.e. write your own app_info.xml). It's not too difficult, as there are templates to work from, but mistakes usually wipe your cache. With regard to learning: if you want to give a manual install a try, check the sticky thread and then start a dedicated thread with your questions. (Don't be afraid to ask basic ones. I'm sure there are enough people who wonder how to get app_info.xml to work but don't want to ask.) We may not be able to devote much time to it until after the dust has settled, though, so maybe you'll want to wait. :D You'd only be installing stock as anonymous platform at this point anyway, just to get familiar with how it's done.

A person who won't read has no advantage over one who can't read. (Mark Twain)
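For a sense of what a manual (anonymous platform) install involves, here is a minimal app_info.xml skeleton in the usual BOINC shape. The app name, file name, and version number below are placeholders for illustration, not the real v8 release names:

```xml
<app_info>
    <app>
        <name>setiathome_v8</name>
    </app>
    <file_info>
        <!-- placeholder executable name; use the actual app file you have -->
        <name>setiathome_8.00_example.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <version_num>800</version_num>
        <file_ref>
            <file_name>setiathome_8.00_example.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

As noted above, a malformed app_info.xml usually costs you your cached tasks, so it pays to check the XML is well-formed before restarting BOINC.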
Shawn Rothermund · Joined: 13 Feb 03 · Posts: 132 · Credit: 79,997,445 · RAC: 123

Thank you for the info. I was planning on waiting for everything to settle down before trying, maybe not until February, after I do some planned upgrades to my main computer (new mobo, CPU, GPU, etc.). Thanks again.
qbit · Joined: 19 Sep 04 · Posts: 630 · Credit: 6,868,528 · RAC: 0

> I'm curious if the cuda version puts less stress on the system and how it compares in performance to the open cl version.

You may be right there, Jason. I think I can already tell by the noise from the fans whether the OpenCL app or one of the CUDA apps is running *gg* Does anybody know if there are newer CUDA apps planned? The "newest" one seems to be 5.0 atm, but the current CUDA version is 7.5.
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0

> You may be right there Jason, I think I can already tell by the noise from the fans if the OpenCl app or one of the Cuda apps is running *gg*

Certainly there are some planned. For the moment we're covering the 'standard set', so as to prove v8 operation without sacrificing any generation. Part of the reason stock has paused at CUDA 5 is that, without using newer features, the generic/old-school code in use tends to run slower with each version beyond that. Petri provided a number of key optimisations that leverage some of the new features (some of which I've also tested), and we'll be preparing to put the best proven ones in, in a way that won't sacrifice the older-gen cards. That will allow us to eventually have our cake and eat it too, by adding a bit more flexibility.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874

> Does anybody know if there are newer Cuda apps planned? "Newest" one seems to be 5.0 atm, but current cuda version is 7.5

The current plan is to concentrate on refreshing the current range (23, 32, 42, 50) to v8 standards, and get them out of the door first. After that, there are two possible considerations: one is to import SETI-specific optimisations, e.g. to parallelise autocorrelations on Maxwell-class cards, which are a bottleneck at the moment; the other is to progress to newer CUDA versions. I don't know which would produce the greater increase in scientific throughput at this stage.
mr.mac52 · Joined: 18 Mar 03 · Posts: 67 · Credit: 245,882,461 · RAC: 0

Thanks, guys, for laying out the short-term goals as well as the longer-term targets. The info the two of you provide is really helpful, and I appreciate the time and talent you support the project with. John
qbit · Joined: 19 Sep 04 · Posts: 630 · Credit: 6,868,528 · RAC: 0

Yeah, thanks for all the info, folks! Newer/optimised CUDA apps would be really nice. I remember somebody once said that the current V7 apps use only about 20% of the potential of CUDA. If that info is correct, and if my understanding is correct too, that would mean we could process tasks up to five times as fast as now. But I guess you'd need a team of professional coders to come even close to that. Still, every little bit helps.
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0

> Yeah, thx for all the infos, folks!

Current stock CUDA application compute efficiency is about 5% on larger GPUs, single instance. NV's cuFFT library achieves ~10% efficiency (with their millions of dollars of investment and narrow focus, lol), and Petri's is also ~10-15%. I predict that by combining Petri's optimisations with some other special sauce we will *eventually* get to around 20-30% compute efficiency (so between 4-6x throughput on later-gen GPUs for the same power, or alternatively lighter loading, or some configurable balance).

[Edit:] --> compute_efficiency = computeOnlyOperations / (elapsed * theoretical_peak)
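Jason's definition turns into a quick back-of-the-envelope calculation. The numbers below are invented for illustration, not measurements from any real card:

```python
def compute_efficiency(compute_ops, elapsed_s, peak_flops):
    """Fraction of a GPU's theoretical peak actually spent on compute.

    compute_ops : floating-point operations the task really needed
    elapsed_s   : wall-clock seconds the task took
    peak_flops  : the card's theoretical peak, in FLOP/s
    """
    return compute_ops / (elapsed_s * peak_flops)

# Hypothetical example: a task needing 1e15 FLOP on a 4 TFLOP/s card.
peak = 4e12
eff_stock = compute_efficiency(1e15, 5000, peak)   # 0.05 -> 5% efficiency
eff_tuned = compute_efficiency(1e15, 1000, peak)   # 0.25 -> 25% efficiency
speedup = 5000 / 1000                              # 5x throughput, same work
```

This is why a jump from 5% to 20-30% efficiency translates directly into the 4-6x throughput figure: the work per task is fixed, so efficiency gains show up as shorter elapsed times.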
SongBird · Joined: 23 Oct 01 · Posts: 104 · Credit: 164,826,157 · RAC: 297

Could I use the CUDA app from beta on main? If so, can someone link me to it?
Zalster · Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242

It is highly recommended that Beta apps stay on beta. I think Richard posted something about consequences if you are caught, so it's best to leave testing apps on the testing site.
SongBird · Joined: 23 Oct 01 · Posts: 104 · Credit: 164,826,157 · RAC: 297

> It is highly recommended that Beta apps stay on beta.

I think that was about the V7 app being used for V8 workunits. I've already used beta apps openly (the OpenCL one) and nobody said I shouldn't.
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0

> It is highly recommended that Beta apps stay on beta.

We've still been finding gotchas with v8 up until yesterday (a new version was being prepared for testing on beta just last night), which suggests Eric's pushing the precision hard. We suspect that the GBT/Guppi tasks v8 needs to support may well change the picture some more, so we're taking extra care. From past experience, it's very difficult to recall something broken. [Much easier to do everything we can to make sure it's right.]
Brent Norman · Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835

> It is highly recommended that Beta apps stay on beta.

I don't think you should be running ANY beta app on main; there is a reason they are beta apps: they are still being tested! If they want them on main, they will release them, or at least announce it!
Zalster · Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242

> Yeah, thx for all the infos, folks!

Jason makes a good point, but I think you are overlooking something: just because you might be able to run 5 tasks at a time (assuming the driver doesn't fail, the computer doesn't freeze, etc.) doesn't necessarily mean you should. You need to look at how long it takes to run those work units. You might be able to run 5 at a time, but at 2.5 times the time it would take to run 3 at a time, in which case you are actually being counterproductive. It all comes back to measuring how run times grow as you increase the number of work units per graphics card.
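The point about counterproductive concurrency is simple arithmetic. Using the hypothetical numbers from the post above (the batch times are invented for illustration):

```python
def throughput(tasks_at_once, seconds_per_batch):
    """Tasks completed per hour when running batches of a fixed size."""
    return tasks_at_once * 3600 / seconds_per_batch

# Suppose a 3-at-a-time batch takes 1 hour, while 5 at a time takes
# 2.5x as long, as in the example above.
three_up = throughput(3, 3600)          # 3.0 tasks/hour
five_up = throughput(5, 3600 * 2.5)     # 2.0 tasks/hour: slower overall
```

So even though 5 instances "fit" on the card, the host completes fewer tasks per hour than it would running 3; the only way to know your card's sweet spot is to time batches at each instance count.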
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. Astropulse is funded in part by the NSF through grant AST-0307956.