Lunatics Windows Installer v0.38 release notes
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 66330 · Credit: 55,293,173 · RAC: 49
Earlier in this thread there was a question about whether mixing non-Fermi (1 WU) and Fermi (2+ WUs) GPUs in one machine would be possible. After a bit of searching I found the proper blog entry, and here's the link to the blog entry in question. I do wish this procedure weren't necessary: the current way of running more than one WU at a time is a global setting for all GPUs, not one setting per card (card 0=0.25, card 1=1, card 2=1, card 3=1, card 4=1, card 5=1, card 6=1), which seems like a kludge to me. Getting that changed would probably require some work on BOINC, and I gather that isn't going to happen. But that's just my opinion.
Savoir-Faire is everywhere! The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST
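For anyone curious what that global setting looks like in practice, here is a minimal sketch (not from the installer, purely an illustration) of changing the GPU count in app_info.xml with Python. It assumes the usual BOINC layout where each app_version has a coproc block with a CUDA type and a count element; the file path and the 0.5 value are hypothetical, and the same count ends up applying to every card in the host, which is exactly the limitation described above.

```python
# Sketch only: set the per-task GPU fraction in app_info.xml.
# Assumes the conventional <app_version>/<coproc>/<count> layout;
# back up the file first and adjust the path for your own install.
import xml.etree.ElementTree as ET

APP_INFO = r"C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\app_info.xml"  # hypothetical path

tree = ET.parse(APP_INFO)
root = tree.getroot()

for app_version in root.iter("app_version"):
    coproc = app_version.find("coproc")
    if coproc is not None and coproc.findtext("type", "").upper() == "CUDA":
        count = coproc.find("count")
        if count is not None:
            count.text = "0.5"  # 0.5 GPU per task -> two tasks per card, on EVERY card

tree.write(APP_INFO)
```

The point of the sketch is the limitation, not the workaround: there is one count per app version, so there is nowhere to say "0.25 on card 0 but 1 on the rest".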
Lint trap · Joined: 30 May 03 · Posts: 871 · Credit: 28,092,319 · RAC: 0
I thought the 460 was working a little harder than before x38g was running, but I had no idea... The pic is a portion of OpenHardwareMonitor's display. I have not reset the readings since installing x38g yesterday. The columns are "Sensor", "Value" (the current value) and "Max". I've seen the memory controller readings hit the 90s before, but never 100 or more! Martin
/edit/ OHM version 0.3.2 Beta
Cliff Harding · Joined: 18 Aug 99 · Posts: 1432 · Credit: 110,967,840 · RAC: 67
Installed v0.38 on both systems on day 1. Also installed NVIDIA driver 275.33, but had to drop back to 270.61 because of compatibility issues with other software.
A-SYS (i7/950, GTX460SE) -- Loaded CUDA 6.10 (Fermi) for the first time; run times are approx 8-9 min, while 6.08 has tripled in average time from approx 15 min to 45 min.
B-SYS (Q6600, GTX275/GTS250) -- Loaded CUDA 6.10 (Fermi) for the first time; run times on the GTX275 are approx 8-9 min, with the 6.08 times tripled as well. The GTS250 was getting times of 15 min for 6.10 and average times for 6.08 of 45-60 min. Regressed to v0.37 on the B-SYS because of those running times, after exhausting all the CUDA 6.10 WUs. Keeping the new version on the A-SYS because the run times for 6.10 are much better than for 6.08.
Is there any way I can exclude the 6.08 WUs from the mix?
perryjay · Joined: 20 Aug 02 · Posts: 3377 · Credit: 20,676,751 · RAC: 0
Okay, I see a couple of questions I might be able to answer. I was one of the Beta testers at Lunatics, so I got a bit of a head start running the new installer. I also noticed the number of inconclusives and mentioned them to Jason G. His reply was that the new app is much more accurate than either stock or the old opt_apps, so the numbers don't quite match. So far all my inconclusives have validated against every type of app I've been paired with, except, of course, people trying to run Fermi cards with the old V12 or people with problems with their cards. As I said, all of mine have validated.
As to the question about 6.08 versus 6.10: they are the same; the name only reflects which app you were running before you got the new installer. Once you run out of all your old work marked as 6.08 you will get only 6.10 work units. I have noticed I got a bunch of work that takes quite a bit longer than the rest. That's down to the work being done, not the app running it. I haven't looked to make sure, but they are probably very close to being VLARs. As the angle range decreases, the time it takes to complete them increases.
As to running more than one work unit at a time, it depends on your system. Guido.man, I don't know why your 560Ti has a problem running more than one; you should be able to run at least four with very little loss of time. Did you remember to divide the time by the number of work units you are doing at a time? Yes, running more than one will slow them down, but you are running more than one in that same amount of time. With the old v0.32 app I was running two at a time on my little GTS 450 because three at a time took too long. With the new app and the 275.33 driver I have increased to three at a time in about the same time it took me to run two at a time with the old app. You should see at least that much of an improvement. You may want to take a look at this thread http://setiathome.berkeley.edu/forum_thread.php?id=63429 and see if it might help you. I have also pointed JasonG to it and he is working on something to help out with that problem.
I hope I was able to help out with these questions, and I'm sure if I missed something one of the others will correct me or answer what I didn't.
PROUD MEMBER OF Team Starfire World BOINC
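To make the "divide the time by the number of work units" point above concrete, here is a tiny sketch. The run times used are made-up placeholders, not measurements from any host in this thread.

```python
# Effective throughput when running N tasks concurrently on one GPU:
# what matters is elapsed time divided by tasks completed, not the
# per-task wall-clock time on its own.
def effective_minutes_per_wu(elapsed_minutes: float, concurrent_tasks: int) -> float:
    return elapsed_minutes / concurrent_tasks

# Hypothetical numbers: 9 min for one task alone vs 21 min for three at once.
print(effective_minutes_per_wu(9.0, 1))   # 9.0 min/WU
print(effective_minutes_per_wu(21.0, 3))  # 7.0 min/WU -> 3-up wins despite longer per-task times
```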
Allan Taylor · Joined: 31 Jan 00 · Posts: 32 · Credit: 270,259 · RAC: 0
I decided to try the new installer to see if Astropulse would work any better on my ATI card (I was having a lot of restarts before). After running the installer I see that the app_info has two app version entries for the ATI Astropulse. One is 505 and the other is 506. Everything is the same between them; just the version number is different. Why are there two entries?
Thanks Richard and Josef. I just wanted to be sure it wasn't a mistake somewhere. I'll leave it alone and see how the Astropulse work goes now.
Careface · Joined: 6 Jun 03 · Posts: 128 · Credit: 16,561,684 · RAC: 0
Ah! I never bothered to check priority, as I just assumed it had always been at below normal :) Turns out the system-wide hanging I get happens 10 seconds into loading the WU, when the thread gets bumped up from below normal to normal :) So I'll try out an automatic priority changer and see if that helps. Cheers :)
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 66330 · Credit: 55,293,173 · RAC: 49
I had to turn my fans up to the max; somehow I forgot about the blasted things after the 275.33 driver upgrade. It's a good thing I have a fully armed and equipped swamp cooler a foot from the PC. The problem is I hate running the cooler at night if it's cool enough out, and 77F (25C) is almost cool enough at present and was good enough before the upgrade: my GPUs were at 82C with the fans at 100%, and the GPU temps had been at 84C and climbing before that, so it's a good and hot driver. It's a good thing I have 3 radiators for the water-cooled cruncher, as I'll most likely need all 3 for all 6 GTX 295 cards, the i5 750 CPU and the motherboard itself. The GPU temps are falling now; so far it's 76C and maybe still dropping.
Savoir-Faire is everywhere! The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
Yes, I see the application priority uplift from 'below normal' to 'normal' here too. I wasn't aware that was part of the plan - or it may have slipped in at an earlier stage of the x37/x38 development cycle, and I just missed it. I'll ask the developer. |
Kevin Olley · Joined: 3 Aug 99 · Posts: 906 · Credit: 261,085,289 · RAC: 572
Just a quick test, using driver 275.33, 3 x GTX470 OC:
1 per card: 9 to 11 min per WU
2 per card: 15 to 17 min per WU
3 per card: 20 to 22 min per WU
The video driver now seems stable, no more driver restarts and no downclocking. Temps are running a bit higher than with v0.37 / 266.58; will have to remove the OC during warmer weather.
Kevin
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13854 · Credit: 208,696,464 · RAC: 304
Just ran the new installer, then modified the app_info for 2 Work Units at a time. On my GTX460 shorties used to take around 5:30, now they appear to be going through in around 4:30. A significant speedup. Well done. Grant Darwin NT |
perryjay · Joined: 20 Aug 02 · Posts: 3377 · Credit: 20,676,751 · RAC: 0
Sorry Guido.man, I saw this: "Not much to be gained by running more than 1 WU at a time." and I guess I read it wrong. I thought you were complaining about not gaining much by running more than one at a time. I see by your little math test that running more than one on your rig is actually faster than running one at a time. One at a time times three equals 273 seconds, as opposed to three at a time taking 261 seconds. 12 seconds doesn't sound like much at first glance, but when you add in the time to switch to the next work unit and get it loaded up and running, it really adds up.
PROUD MEMBER OF Team Starfire World BOINC
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
I suggested that it would be helpful for somebody to carry out that test in the context of Eroc's host with one GTX460 and one GTX260 GPU. The question was whether the productivity increase of moving from 1 task to 3 tasks on the GTX460 was worth sacrificing the contribution of the GTX260 for - since I don't think it would be wise even to attempt 3 tasks at once on that card. If the gain from running tasks 3-up is under 5%, then my gut instinct is that, for those running mixed cards from different generations, using all cards but restricting each to one instance per card is potentially the better option. Unless you have a second host you can transplant the card into, of course.
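A rough way to frame that comparison is to put combined tasks-per-hour of "both cards at 1-up" against "fast card alone at 3-up". The numbers below are purely illustrative placeholders, not measurements from Eroc's host; the only point is the shape of the calculation.

```python
# Illustrative comparison for a mixed GTX460 + GTX260 host.
# All times are hypothetical; plug in your own measured run times.
def tasks_per_hour(minutes_per_wu: float, concurrent: int) -> float:
    # minutes_per_wu is the wall-clock time of one task when `concurrent` run together
    return concurrent * 60.0 / minutes_per_wu

both_cards_1up = tasks_per_hour(9.0, 1) + tasks_per_hour(18.0, 1)  # GTX460 + GTX260, one task each
gtx460_3up = tasks_per_hour(21.0, 3)                               # GTX460 alone, three at once

print(both_cards_1up, gtx460_3up)
# With these made-up figures the 1-up pair still comes out ahead, which matches
# the gut instinct described above; real numbers may well say otherwise.
```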
JohnDK · Joined: 28 May 00 · Posts: 1222 · Credit: 451,243,443 · RAC: 1,127
One might also consider, if the gain is only minimal when using, for example, one 460 and one 260, whether it's worth the extra power that's needed.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
Yes, for somebody with an older graphics card lying around, say after an upgrade, there's always a three-way power choice to be made:
1) Put both cards in one host, to save CPU/motherboard/HDD/etc. power overheads.
2) Put the second card in a second host, to allow the new card to work at peak efficiency.
3) Retire the old card completely.
Each upgrader will have to find their own solution, bearing in mind the age and power (in both senses) of the retired card, the availability of a spare computer (one that would be powered up anyway), and so on. All we can do is provide some figures to help users make their own decision.
S@NL - eFMer - efmer.com/boinc · Joined: 7 Jun 99 · Posts: 512 · Credit: 148,746,305 · RAC: 0
I disabled 2 of the 8 HT threads, otherwise the system was too sluggish to work with. I use 2 GTX 295s on that machine, and the temps seem to have gone up a couple of degrees.
TThrottle - Control your temperatures. BoincTasks - The best way to view BOINC. Anza Borrego Desert hiking.
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 66330 · Credit: 55,293,173 · RAC: 49
Here's a pretty good comparison of the differences between x32f (266.58) and x38g (275.33) in processing time on my current pair of GTX 295 cards (BFG + EVGA). Both cards are stock air-cooled cards with their fans set @ 100%.
Savoir-Faire is everywhere! The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 66330 · Credit: 55,293,173 · RAC: 49
Following up on the comparison above: doing a quick and dirty calculation, it's a 55.8441558441558% increase in crunching power here. Mwahahahaha!!!
Savoir-Faire is everywhere! The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
Following up on my earlier post about the priority uplift: yes, I'd forgotten an issue which arose last month during testing. It was found that the new CUDA app for nVidia cards suffered a big loss in performance at 'below normal' priority when certain other applications were running on the same machine. We're looking into it to see whether the incompatibilities can be ironed out. In the meantime, if anyone finds that the sluggish screen behaviour is too much to live with, we can suggest:
a) Free up one or two CPU cores, depending on the number of tasks running, as Fred (efmer) has already posted.
b) Use Process Lasso (a free download) to restore the x38g application to 'below normal' priority (a scripted alternative is sketched after this post).
c) Re-install the x32f_preview application from the v0.37 installer. Although that's no longer available to download direct from the Lunatics website, I can make a copy available if anybody hasn't kept their copy.
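For anyone who would rather not install Process Lasso, a small script can do roughly the same job. This is only a sketch and not part of the installer or the Lunatics tooling: it assumes the CUDA app's process name contains "x38g" (check the actual executable name in Task Manager first), it needs the third-party psutil package (pip install psutil), and the below-normal priority class only exists on Windows.

```python
# Sketch: drop any running x38g CUDA task back to 'below normal' priority.
# Requires: pip install psutil. Windows only. The process-name match is an
# assumption; verify the real executable name on your own machine first.
import psutil

for proc in psutil.process_iter(["name"]):
    try:
        name = (proc.info["name"] or "").lower()
        if "x38g" in name:
            proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
            print(f"Set below-normal priority on PID {proc.pid} ({name})")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # the task may have finished, or we lack permission
```

Run it after the GPU tasks have started (or from a scheduled task) and the app should stay at below normal until BOINC next restarts it, much as the automatic priority changers mentioned earlier in the thread do.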
JohnDK · Joined: 28 May 00 · Posts: 1222 · Credit: 451,243,443 · RAC: 1,127
I for one would be interested in knowing which "certain other applications" we're talking about :) Is it apps that everyone runs, maybe built into Windows?
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
Answered by PM. |