Lunatics Windows Installer v0.38 release notes

Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65842
Credit: 55,293,173
RAC: 49
United States
Message 1118878 - Posted: 19 Jun 2011, 2:58:21 UTC - in response to Message 1118869.  
Last modified: 19 Jun 2011, 2:59:48 UTC

Earlier in this thread there was a question about whether mixing non-Fermi (1 WU) and Fermi (2+ WUs) GPUs in one machine would be possible.

It is possible; check out this thread:
NC subforum : Running mixed nvidia hardware most optimal on the same host


- Best regards! - Sutaru Tsureku, team seti.international founder. - Optimize your PC for higher RAC. - SETI@home needs your help. -

After a bit of searching I found the proper blog entry, and here's the link to the blog entry in question. I do wish this procedure didn't need to be done, as the current way of running more than one WU at a time is a global setting for all GPUs and not one setting per card (card 0=0.25, card 1=1, card 2=1, card 3=1, card 4=1, card 5=1, card 6=1). To me the current way seems like a kludge, but getting this done would require some work on BOINC, and I gather that isn't going to happen. That's just my opinion.
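For anyone wondering where that global setting lives: it is the <count> element of an <app_version> entry in app_info.xml, and BOINC applies it to every GPU in the host alike. A minimal sketch, with illustrative app name and version number rather than the exact v0.38 contents:

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <coproc>
            <type>CUDA</type>
            <!-- GPUs claimed per task: 0.5 = two tasks per card, 0.25 = four.
                 The value applies to every card alike; BOINC offers no
                 per-card equivalent of this setting. -->
            <count>0.5</count>
        </coproc>
    </app_version>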
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1118878
Profile Lint trap

Joined: 30 May 03
Posts: 871
Credit: 28,092,319
RAC: 0
United States
Message 1118906 - Posted: 19 Jun 2011, 4:22:51 UTC
Last modified: 19 Jun 2011, 4:25:13 UTC

I thought the 460 was working a little harder than it did before x38g was running, but I had no idea... The pic is a portion of OpenHardwareMonitor's display. I have not reset the readings since installing x38g yesterday. The columns are "Sensor", "Value" (the current value) and "Max".



I've seen the memory controller readings hit the 90's before, but never 100 or more!

Martin

/edit/ OHM version 0.3.2 Beta
ID: 1118906
Profile Cliff Harding
Volunteer tester
Joined: 18 Aug 99
Posts: 1432
Credit: 110,967,840
RAC: 67
United States
Message 1118910 - Posted: 19 Jun 2011, 4:50:20 UTC

Installed v0.38 on both systems on day 1. Also installed NVIDIA driver 275.33, but had to drop back to 270.61 because of compatibility issues with other software.

A-SYS I7/950, GTX460SE -- Loaded CUDA 6.10 (FERMI) for the first time; run times are approx 8-9 min, while 6.08 has tripled in average time from approx 15 min to 45 min.

B-SYS Q6600, GTX275/GTS250 -- Loaded CUDA 6.10 (FERMI) for the first time; run times for the GTX275 side were approx 8-9 min, with tripled times for CUDA 6.08. The GTS250 side was getting times of 15 min for 6.10 and average times for 6.08 of 45-60 min.

Regressed to v0.37 on the B-SYS because of the running times after exhausting all CUDA 6.10 WUs.

Keeping the new version on the A-SYS because the run times for the 6.10 are much better than the 6.08. Is there any way I can exclude the 6.08 WUs from the mix?
ID: 1118910
Profile perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 1118913 - Posted: 19 Jun 2011, 5:32:49 UTC

Okay, I see a couple of questions I might be able to answer. I was one of the beta testers at Lunatics, so I got a bit of a head start running the new installer. I also noticed the number of inconclusives and mentioned them to Jason G. His reply was that the new app is much more accurate than either stock or the old opt_apps, so the numbers don't quite match. So far all my inconclusives have validated against every type of app I've been paired with, except, of course, people trying to run Fermi cards with the old V12, or people with problems with their cards. As I said, all of mine have validated.

As to the question about 6.08 versus 6.10: they are the same app; the name only differs according to what app you were running before you got the new installer. Once you run out of all your old work marked as 6.08, you will get only 6.10 work units. I have noticed I got a bunch of work that takes quite a bit longer to do than the rest. That is down to the work being done, not the app running it. I haven't looked to make sure, but they are probably very close to being VLARs. As the angle range decreases, the time it takes to complete a task increases.

As to running more than one work unit at a time, it depends on your system. Guido.man, I don't know why your 560Ti has a problem running more than one; you should be able to run at least four with very little loss of time. Did you remember to divide the time by the number of work units you are doing at a time? Yes, running more than one will slow them down, but you are getting more than one done in that same amount of time. With the old v0.32 app I was running two at a time on my little GTS 450, because three at a time took too long. With the new app and the 275.33 driver I have increased to three at a time, in about the same time it took me to run two at a time with the old app. You should see at least that much of an improvement. You may want to take a look at this thread http://setiathome.berkeley.edu/forum_thread.php?id=63429 and see if it might help you. I have also pointed JasonG to it and he is working on something to help out with that problem.

I hope I was able to help out with these questions, and I'm sure that if I missed something, one of the others will correct me or answer what I didn't.


PROUD MEMBER OF Team Starfire World BOINC
ID: 1118913
Allan Taylor

Joined: 31 Jan 00
Posts: 32
Credit: 270,259
RAC: 0
Canada
Message 1118924 - Posted: 19 Jun 2011, 6:42:12 UTC - in response to Message 1118806.  

I decided to try the new installer to see if Astropulse would work any better on my ATI card (I was having a lot of restarts before). After running the installer I see that the app_info has two app version entries for the ATI Astropulse. One is 505 and the other is 506. Everything is the same between them; just the version number is different. Why are there two entries?

Just to ensure as smooth an installation process as possible. People running the stock (Windows) application will have any work already cached marked as version 505; people running some previous optimised application might have set their app_info.xml files to version 506 to match the (later) stock Linux build.

Having both numbers in the installer files merely ensures that neither group of users loses any work during the installation.
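
Illustratively, the duplication amounts to two otherwise-identical <app_version> blocks in app_info.xml; the app name below is assumed for the sketch, and the real entries also carry the full set of <file_ref> elements:

    <app_version>
        <app_name>astropulse_v5</app_name>
        <version_num>505</version_num>  <!-- matches work cached under the stock Windows app -->
        <!-- remaining elements identical to the 506 entry below -->
    </app_version>
    <app_version>
        <app_name>astropulse_v5</app_name>
        <version_num>506</version_num>  <!-- matches the later stock Linux numbering -->
        <!-- remaining elements identical to the 505 entry above -->
    </app_version>

BOINC matches each cached task to an entry by app name and version number, so listing both numbers keeps both groups' cached work valid.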


Thanks Richard and Josef. I just wanted to be sure it wasn't a mistake somewhere. I'll leave it alone and see how Astropulse works now.
ID: 1118924
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1118950 - Posted: 19 Jun 2011, 8:30:50 UTC - in response to Message 1118854.  


If the GPU is calculating CUDA and you also use the GPU for other things, it's normal that they influence each other.

It looks like the new x38g_cuda32 app uses 'normal' priority, while x32f_cuda30 had 'below normal' (as seen in the Windows Task Manager).

Just an idea: if you have problems, maybe use Fred's nice eFMer Priority tool to reduce the priority of the new CUDA app.

On my E7600 & GTX260 OC machine x38g_cuda32 runs very well (as you can see in my test thread).
But the GTX260 OC is only used for crunching; no screen is connected, as the screen runs off the onboard (Intel) GPU.
I have set 'high' priority, and right at the start of each CUDA WU I see about a 1-second delayed reaction of the system for other things. But that's OK, max CUDA performance! ;-)


Ah! I never bothered to check priority, as I just assumed it had always been below normal :) Turns out the system-wide hanging I get starts about 10 seconds into loading the WU, when the process gets bumped up from below normal to normal :)

So I'll try out an automatic priority changer and see if that helps. Cheers :)
ID: 1118950
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65842
Credit: 55,293,173
RAC: 49
United States
Message 1118953 - Posted: 19 Jun 2011, 8:53:27 UTC
Last modified: 19 Jun 2011, 8:53:47 UTC

I had to turn my fans up to the max; somehow I forgot about the blasted things after the 275.33 driver upgrade. It's a good thing I have a fully armed and equipped swamp cooler a foot from the PC. The problem is I hate running the cooler at night if it's cool enough out, and 77F (25C) is almost cool enough at present and was good enough before the upgrade: my GPUs were at 82C with the fans at 100%, and the GPU temps had been at 84C and climbing before that, so it's a good and hot driver. It's a good thing I have 3 radiators for the water-cooled cruncher, as I'll most likely need all 3 for all 6 GTX295 cards, the i5 750 CPU and the motherboard itself. The GPU temps are falling now; so far it's 76C and maybe still falling.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1118953
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14655
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1118960 - Posted: 19 Jun 2011, 9:41:21 UTC

Yes, I see the application priority uplift from 'below normal' to 'normal' here too. I wasn't aware that was part of the plan - or it may have slipped in at an earlier stage of the x37/x38 development cycle, and I just missed it. I'll ask the developer.
ID: 1118960
Kevin Olley

Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 1118964 - Posted: 19 Jun 2011, 10:17:14 UTC - in response to Message 1118775.  


I may be wrong, but I don't think anyone has done the straightforward 1/2/3 task per card comparisons yet for x38g - if anyone has, please post them here.


Just a quick test, using driver 275.33, 3 x GTX470 OC.

1 per card 9 to 11 min per WU

2 per card 15 to 17 min per WU

3 per card 20 to 22 min per WU
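
Taking the midpoints of those ranges, the effective time per WU works out roughly as

\[
\frac{10}{1} = 10, \qquad \frac{16}{2} = 8, \qquad \frac{21}{3} = 7 \quad \text{min/WU,}
\]

i.e. about 43% more throughput per card at 3-up than at 1-up (midpoint estimates only).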

The video driver now seems stable: no more driver restarts and no downclocking. Temps are running a bit higher than with v0.37 / 266.58; I will have to remove the OC during warmer weather.


Kevin


ID: 1118964
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13765
Credit: 208,696,464
RAC: 304
Australia
Message 1118965 - Posted: 19 Jun 2011, 10:17:41 UTC - in response to Message 1118960.  


Just ran the new installer, then modified the app_info for 2 Work Units at a time.
On my GTX460, shorties used to take around 5:30; now they appear to be going through in around 4:30. A significant speedup.

Well done.
Grant
Darwin NT
ID: 1118965
Profile perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 1118992 - Posted: 19 Jun 2011, 13:33:59 UTC - in response to Message 1118982.  

Sorry Guido.man, I saw this..
Not much to be gained by running more than 1 WU at a time.
and I guess I read it wrong. I thought you were complaining about not gaining much by running more than one at a time. I see from your little math test that running more than one on your rig is actually faster than running one at a time: one at a time times three equals 273 seconds, as opposed to three at a time taking 261 seconds. 12 seconds doesn't sound like much at first glance, but when you add in the time to switch to the next work unit and get it loaded up and running, it really adds up.
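
Put as throughput, using the totals quoted (273 s for three tasks run one at a time, 261 s for three run together):

\[
\frac{273 - 261}{261} \approx 4.6\%,
\]

so three-up is worth a little under 5% on that card, before counting the per-task start-up overhead.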


PROUD MEMBER OF Team Starfire World BOINC
ID: 1118992
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14655
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1118999 - Posted: 19 Jun 2011, 14:01:04 UTC - in response to Message 1118992.  

Sorry Guido.man, I saw this..
Not much to be gained by running more than 1 WU at a time.
and I guess I read it wrong. I thought you were complaining about not gaining much by running more than one at a time. I see from your little math test that running more than one on your rig is actually faster than running one at a time: one at a time times three equals 273 seconds, as opposed to three at a time taking 261 seconds. 12 seconds doesn't sound like much at first glance, but when you add in the time to switch to the next work unit and get it loaded up and running, it really adds up.

I suggested that it would be helpful for somebody to carry out that test in the context of Eroc's host, with one GTX460 and one GTX260 GPU. The question was whether the productivity increase of moving from 1 task to 3 tasks on the GTX460 was worth sacrificing the contribution of the GTX260, since I don't think it would be wise even to attempt 3 tasks at once on that card.

If the gain from running tasks 3-up is under 5%, then my gut instinct is that, for those running mixed cards from different generations, using all cards but restricting them to one task per card is potentially the better option.

Unless you have a second host you can transplant the card into, of course.
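
That instinct can be written as a simple inequality (a sketch, ignoring power costs for the moment). If \(R_{460}\) and \(R_{260}\) are the one-task-at-a-time throughputs of the two cards, and \(g\) is the fractional gain from running the GTX460 3-up, then keeping both cards at one task each wins whenever

\[
R_{460} + R_{260} > (1+g)\,R_{460} \iff R_{260} > g\,R_{460},
\]

so with \(g \approx 0.05\) the GTX260 only has to deliver more than 5% of the GTX460's throughput to earn its keep.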
ID: 1118999
JohnDK Crowdfunding Project Donor*Special Project $250 donor
Volunteer tester
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1119003 - Posted: 19 Jun 2011, 14:29:02 UTC

One might also consider, if the gain from using, for example, one 460 and one 260 is only minimal, whether it's worth the extra power that's needed.
ID: 1119003
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14655
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1119006 - Posted: 19 Jun 2011, 14:41:35 UTC - in response to Message 1119003.  

One might also consider, if the gain from using, for example, one 460 and one 260 is only minimal, whether it's worth the extra power that's needed.

Yes, for somebody with an older graphics card lying around, say after an upgrade, there's always a three-way power choice to be made.

1) Put both cards in one host, to save CPU/motherboard/HDD/etc power overheads.
2) Put the second card in a second host, to allow the new card to work at peak efficiency.
3) Retire the old card completely.

Each upgrader will have to find their own solution, bearing in mind the age and power (in both senses) of the retired card, the availability of a spare computer (that would be powered up anyway), and so on. All we can do is provide some figures to help users make their own decision.
ID: 1119006
Profile S@NL - eFMer - efmer.com/boinc
Volunteer tester
Joined: 7 Jun 99
Posts: 512
Credit: 148,746,305
RAC: 0
United States
Message 1119028 - Posted: 19 Jun 2011, 15:27:07 UTC - in response to Message 1119006.  
Last modified: 19 Jun 2011, 15:28:09 UTC

I disabled 2 of the 8 HT threads; otherwise the system was too sluggish to work with.
I use 2 GTX 295s on that machine, and the temps seem to have gone up a couple of degrees.
TThrottle Control your temperatures. BoincTasks The best way to view BOINC. Anza Borrego Desert hiking.
ID: 1119028
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65842
Credit: 55,293,173
RAC: 49
United States
Message 1119042 - Posted: 19 Jun 2011, 15:53:43 UTC

Here's a pretty good comparison of the difference in processing time between x32f (266.58) and x38g (275.33) on my current pair of GTX295 cards (BFG+EVGA). Oh, and both cards are stock air-cooled cards with their fans set @ 100%.

The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1119042
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65842
Credit: 55,293,173
RAC: 49
United States
Message 1119105 - Posted: 19 Jun 2011, 17:51:51 UTC - in response to Message 1119042.  

Here's a pretty good comparison of the difference in processing time between x32f (266.58) and x38g (275.33) on my current pair of GTX295 cards (BFG+EVGA). Oh, and both cards are stock air-cooled cards with their fans set @ 100%.

Doing a quick and dirty calculation, it's a 55.8441558441558% increase in crunching power here. Mwahahahaha!!!
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1119105
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14655
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1119122 - Posted: 19 Jun 2011, 18:39:53 UTC - in response to Message 1118960.  

Yes, I see the application priority uplift from 'below normal' to 'normal' here too. I wasn't aware that was part of the plan - or it may have slipped in at an earlier stage of the x37/x38 development cycle, and I just missed it. I'll ask the developer.

Yes, I'd forgotten an issue which arose last month during testing.

It was found that the new CUDA app for nVidia cards suffered a big loss in performance at 'below normal' priority when certain other applications were running on the same machine. We're looking into it to see if the incompatibilities can be ironed out.

In the meantime, if anyone finds that the sluggish screen behaviour is too much to live with, we can suggest:

a) Free up one or two CPU cores, depending on the number of tasks running, as Fred (efmer) has already posted.
b) Use Process Lasso (free download) to restore the x38g application to 'below normal' priority.
c) Re-install the x32f_preview application from the v0.37 installer. Although that's no longer available to download direct from the Lunatics website, I can make a copy available if anybody hasn't kept their copy.
ID: 1119122
JohnDK Crowdfunding Project Donor*Special Project $250 donor
Volunteer tester
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1119124 - Posted: 19 Jun 2011, 18:52:24 UTC

I for one would be interested in knowing which "certain other applications" we're talking about :) Is it apps that everyone has running, maybe built into Windows?
ID: 1119124
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14655
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1119134 - Posted: 19 Jun 2011, 19:40:03 UTC - in response to Message 1119124.  

Answered by PM.
ID: 1119134