Lunatics Windows Installer v0.40 release notes

LadyL
Volunteer tester
Avatar

Send message
Joined: 14 Sep 11
Posts: 1679
Credit: 5,230,097
RAC: 0
Message 1211478 - Posted: 29 Mar 2012, 9:42:06 UTC - in response to Message 1211285.  

Will those work to fix this on the ATI client as well? I don't understand why half of my WUs run normally and half of them crap out.



James - we just deliver the parcel. That's a question to put to Raistmer - ask him to look into it and possibly fix it.
I'm not the Pope. I don't speak Ex Cathedra!
ID: 1211478 · Report as offensive
LadyL
Volunteer tester
Avatar

Send message
Joined: 14 Sep 11
Posts: 1679
Credit: 5,230,097
RAC: 0
Message 1211482 - Posted: 29 Mar 2012, 9:48:09 UTC - in response to Message 1211365.  

A couple of comments on freeing one core, as a refinement but not as a recommendation.

For any host with up to 100 CPU cores, setting 'use at most 99% of the processors' frees exactly one core, because BOINC rounds the resulting core count down.

I wouldn't use that setting. Instead, when setting the <count> fields to control how many GPU tasks run at once, I'd set the <avg_ncpus> fields so that a CPU core is freed while all GPUs have work, but all cores go back to pure CPU tasks if GPU work runs out. For a single-GPU setup, <avg_ncpus> would be the same as or a tiny bit higher than <count>; for multiple GPUs it would scale down:

            1 GPU       2 GPUs       3 GPUs
<count>  <avg_ncpus>  <avg_ncpus>  <avg_ncpus>
  0.5        0.5          0.25         0.167
  0.33       0.34         0.167        0.112
  0.25       0.25         0.125        0.084
  0.2        0.2          0.1          0.067
                                                                   Joe


Joe, are you sure it's 1 and not >1? Besides, floating-point representation may lead to sums slightly smaller than 1 when adding up... I'd rather play it safe and add another percent or so to the fraction.
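
For anyone who wants to try it, here's a minimal sketch of the relevant part of an <app_version> entry in app_info.xml. The values are hypothetical - a 2-GPU host running 2 tasks per GPU, using 0.26 rather than Joe's 0.25 as the safety margin - and the app name, version number and plan class are just placeholders; the <file_ref> lines and any <cmdline> are omitted. Only <count> and <avg_ncpus> matter to the point:

<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>610</version_num>
    <plan_class>cuda_fermi</plan_class>
    <avg_ncpus>0.26</avg_ncpus>
    <max_ncpus>0.26</max_ncpus>
    <coproc>
        <type>CUDA</type>
        <count>0.5</count>
    </coproc>
</app_version>

With both GPUs busy that's 4 tasks x 0.26 = 1.04 CPUs reserved, so one core stays free with a small margin against rounding; if GPU work dries up, nothing is reserved and all cores go back to pure CPU tasks.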
I'm not the Pope. I don't speak Ex Cathedra!
ID: 1211482 · Report as offensive
LadyL
Volunteer tester
Avatar

Send message
Joined: 14 Sep 11
Posts: 1679
Credit: 5,230,097
RAC: 0
Message 1211489 - Posted: 29 Mar 2012, 10:37:19 UTC - in response to Message 1211428.  

Would it be possible to extend the CUDA app with cmdline settings, so members could increase/decrease the priority themselves?

Everything is possible. Whether it's practical is a different question, and whether I sanction it... IMO the fewer knobs the better. There is such a thing as too many tuning options. PnP - not endless fumbling to find another half % of speed.

Would it be possible to make a bench-test tool that's very easy for noobs like me - one click and the program says which app (CPU extension usage) is the best/fastest for the machine?

For S@h Enhanced (MultiBeam) and Astropulse apps?

This would be very helpful and nice..

Sutaru, you've been running benches for ages.
If somebody has too much time, they are very welcome to take our test WUs and our benching scripts and write a nice colourful program, that does the thinking for you. Anyway, as far as I know, we are on our way to get rid of that bit. Would certainly make my life easier.

I just wanted to run a bench test to see which AP 6.01 app (r555 vs. r557) is faster on my machine, but I failed..

http://lunatics.kwsn.net/index.php?module=Downloads;catd=44

Looks like I hadn't gotten around to putting the AP bench online. OK, it's there now.
Unzip preserving the folder structure, add apps, add WUs - available separately.
I'm not the Pope. I don't speak Ex Cathedra!
ID: 1211489 · Report as offensive
Profile Cliff Harding
Volunteer tester
Avatar

Send message
Joined: 18 Aug 99
Posts: 1420
Credit: 102,247,823
RAC: 38,138
United States
Message 1211503 - Posted: 29 Mar 2012, 12:53:03 UTC

In the last 24 hrs I have noticed that the CPU scheduler has been preempting the GPU tasks and running CPU tasks at high priority. The machine (A-SYS) is an i7-750 / 6GB RAM / Win7 Ultimate 64-bit / EVGA GTX460SE / EVGA GTS250, BOINC 7.0.22, Lunatics 0.40. All app_info.xml settings are default except the GPU count. The machine is running at 85% CPU (6 cores), leaving 2 cores for GPU processing at 2 tasks each.

29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 07jn11ad.2890.1299.14.10.112_1 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 15my11aa.14592.481.6.10.229_0 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 15my11aa.7272.1708.9.10.247_1 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 07jn11ad.2890.1299.14.10.113_0 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 17jn11ac.25044.322838.4.10.148_2 (removed from memory)


That was the last time preemption occurred, and it is still in effect. The CPU tasks started doing this with deadlines of 10 April and estimated run times of approx. 30 minutes each. It is now working on ones for 11 April.

I exited BOINC Manager, stopping all work, and restarted BOINC with no change. I also rebooted the machine, with the same results.

I also have 8 AP v6 (6.01) tasks (6 downloaded on 27 March, 2 on 28 March) with expected deadlines of 21/22 April. When the first of these tasks came in, they had an estimated running time of approx. 120 hours each. During the last two days I have noticed the estimated run time go from 120 to 71 to 172 hours.

Questions:
1) Why are the GPU tasks being preempted when this did not occur in Lunatics 0.39?

2) Is the scheduler just clearing out the machine as fast as possible to allow room for the AP6 tasks?

3) Do I have a major problem here? I don't want to regress to Lunatics 0.39 because of the new AP6 units.

4) Do I need to test the new _41x?


I don't buy computers, I build them!!
ID: 1211503 · Report as offensive
Profile red-ray
Avatar

Send message
Joined: 24 Jun 99
Posts: 308
Credit: 9,029,848
RAC: 0
United Kingdom
Message 1211508 - Posted: 29 Mar 2012, 13:01:50 UTC - in response to Message 1211503.  
Last modified: 29 Mar 2012, 13:02:11 UTC

I get the same all the time when my DCF jumps after a slow GPU finishes. What is your current DCF? You may wish to update cc_config.xml to show the DCF changes.

<cc_config>
    <log_flags>
        <dcf_debug>1</dcf_debug>
    </log_flags>
</cc_config>
ID: 1211508 · Report as offensive
tbret
Volunteer tester
Avatar

Send message
Joined: 28 May 99
Posts: 3378
Credit: 289,886,997
RAC: 30,772
United States
Message 1211510 - Posted: 29 Mar 2012, 13:06:16 UTC - in response to Message 1211503.  

In the last 24 hrs I have noticed that the CPU scheduler has been preempting the GPU tasks and running CPU tasks at high priority. The machine (A-SYS) is an i7-750 / 6GB RAM / Win7 Ultimate 64-bit / EVGA GTX460SE / EVGA GTS250, BOINC 7.0.22, Lunatics 0.40. All app_info.xml settings are default except the GPU count. The machine is running at 85% CPU (6 cores), leaving 2 cores for GPU processing at 2 tasks each.

29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 07jn11ad.2890.1299.14.10.112_1 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 15my11aa.14592.481.6.10.229_0 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 15my11aa.7272.1708.9.10.247_1 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 07jn11ad.2890.1299.14.10.113_0 (removed from memory)
29-Mar-2012 00:17:37 [SETI@home] [cpu_sched] Preempting 17jn11ac.25044.322838.4.10.148_2 (removed from memory)


That was the last time preemption occurred, and it is still in effect. The CPU tasks started doing this with deadlines of 10 April and estimated run times of approx. 30 minutes each. It is now working on ones for 11 April.

I exited BOINC Manager, stopping all work, and restarted BOINC with no change. I also rebooted the machine, with the same results.

I also have 8 AP v6 (6.01) tasks (6 downloaded on 27 March, 2 on 28 March) with expected deadlines of 21/22 April. When the first of these tasks came in, they had an estimated running time of approx. 120 hours each. During the last two days I have noticed the estimated run time go from 120 to 71 to 172 hours.

Questions:
1) Why are the GPU tasks being preempted when this did not occur in Lunatics 0.39?

2) Is the scheduler just clearing out the machine as fast as possible to allow room for the AP6 tasks?

3) Do I have a major problem here? I don't want to regress to Lunatics 0.39 because of the new AP6 units.

4) Do I need to test the new _41x?


Must be something to do with your specific situation.

My eight-banger AMD is running 4 cores only, but two nVidia cards two at a time with no interference. Like you, I'm looking forward to seeing what AVX does with AP v6 when I finally get to them.
ID: 1211510 · Report as offensive
Profile Cliff Harding
Volunteer tester
Avatar

Send message
Joined: 18 Aug 99
Posts: 1420
Credit: 102,247,823
RAC: 38,138
United States
Message 1211511 - Posted: 29 Mar 2012, 13:20:04 UTC - in response to Message 1211508.  

I get the same all the time when my DCF jumps after a slow GPU finishes. What is your current DCF? You may wish to update cc_config.xml to show the DCF changes.

<cc_config>
    <log_flags>
        <dcf_debug>1</dcf_debug>
    </log_flags>
</cc_config>


Modified the cc_config.xml and reread it.
03/29/2012 09:06:07 | | Re-reading cc_config.xml
03/29/2012 09:06:07 | | Config: use all coprocessors
03/29/2012 09:06:07 | Milkyway@Home | Config: excluded GPU. Type: all. App: milkyway. Device: 1
03/29/2012 09:06:07 | | log flags: file_xfer, sched_ops, task, cpu_sched, dcf_debug, sched_op_debug
03/29/2012 09:06:11 | SETI@home | Computation for task 06jn11ab.31414.9883.7.10.59_0 finished
03/29/2012 09:06:11 | SETI@home | [dcf] DCF: 1.006398->1.006192, raw_ratio 1.004339, adj_ratio 0.997954
03/29/2012 09:06:11 | SETI@home | Starting task 20my11ad.5048.16018.13.10.110_0 using setiathome_enhanced version 603 in slot 2
03/29/2012 09:06:13 | SETI@home | Started upload of 06jn11ab.31414.9883.7.10.59_0_0
03/29/2012 09:06:23 | SETI@home | Computation for task 06jn11aa.24762.11110.3.10.232_0 finished
03/29/2012 09:06:23 | SETI@home | [dcf] DCF: 1.006192->1.005995, raw_ratio 1.004220, adj_ratio 0.998039
03/29/2012 09:06:23 | SETI@home | Starting task 24my11af.14050.16427.3.10.118_0 using setiathome_enhanced version 603 in slot 7



I don't buy computers, I build them!!
ID: 1211511 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 13034
Credit: 143,618,867
RAC: 198,305
United Kingdom
Message 1211513 - Posted: 29 Mar 2012, 13:24:08 UTC - in response to Message 1211503.  

(removed from memory)

If you have a reasonably strong machine with a decent amount of memory and disk (swap file) space - i.e. almost anything built in the last ten years - you will find it more efficient to select the preference to

Leave tasks in memory while suspended?
Suspended tasks will consume swap space if 'yes'
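
For anyone who prefers editing files to the web preferences page, the same thing can be set locally - a minimal sketch of a global_prefs_override.xml placed in the BOINC data directory, assuming you aren't overriding anything else:

<global_preferences>
    <leave_apps_in_memory>1</leave_apps_in_memory>
</global_preferences>

Then use Advanced -> Read local prefs file in BOINC Manager (or restart BOINC) to pick it up.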
ID: 1211513 · Report as offensive
Profile Cliff Harding
Volunteer tester
Avatar

Send message
Joined: 18 Aug 99
Posts: 1420
Credit: 102,247,823
RAC: 38,138
United States
Message 1211515 - Posted: 29 Mar 2012, 13:32:26 UTC - in response to Message 1211513.  

(removed from memory)

If you have a reasonably strong machine with a decent amount of memory and disk (swap file) space - i.e. almost anything built in the last ten years - you will find it more efficient to select the preference to

Leave tasks in memory while suspended?
Suspended tasks will consume swap space if 'yes'


Changing the preference is not a problem, and I will try it. My question is: why the preemption in the first place? It was never an issue until after installing the new Lunatics.


I don't buy computers, I build them!!
ID: 1211515 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 13034
Credit: 143,618,867
RAC: 198,305
United Kingdom
Message 1211518 - Posted: 29 Mar 2012, 13:38:33 UTC - in response to Message 1211515.  

(removed from memory)

If you have a reasonably strong machine with a decent amount of memory and disk (swap file) space - i.e. almost anything built in the last ten years - you will find it more efficient to select the preference to

Leave tasks in memory while suspended?
Suspended tasks will consume swap space if 'yes'

Changing the preference is not a problem, and I will try it. My question is: why the preemption in the first place? It was never an issue until after installing the new Lunatics.

I would expect that your first guess was right - it's likely to be because you now have Astropulse v6 tasks in your cache. They will have two characteristics that you can see in BOINC Manager:

1) An estimated runtime which (falsely) thinks that the tasks will take 150-200 hours.

2) A deadline which is closer than all MB tasks except shorties.

Once you've processed 50 AP v6 tasks or so, the estimates will - quite suddenly - return to sanity, and normal service will be resumed. Remind me to reply to Eric's email thoughts on that subject.
ID: 1211518 · Report as offensive
Profile Cliff Harding
Volunteer tester
Avatar

Send message
Joined: 18 Aug 99
Posts: 1420
Credit: 102,247,823
RAC: 38,138
United States
Message 1211521 - Posted: 29 Mar 2012, 13:56:52 UTC - in response to Message 1211518.  

(removed from memory)

If you have a reasonably strong machine with a decent amount of memory and disk (swap file) space - i.e. almost anything built in the last ten years - you will find it more efficient to select the preference to

Leave tasks in memory while suspended?
Suspended tasks will consume swap space if 'yes'

Changing the preference is not a problem, and I will try it. My question is: why the preemption in the first place? It was never an issue until after installing the new Lunatics.

I would expect that your first guess was right - it's likely to be because you now have Astropulse v6 tasks in your cache. They will have two characteristics that you can see in BOINC Manager:

1) An estimated runtime which (falsely) thinks that the tasks will take 150-200 hours.

2) A deadline which is closer than all MB tasks except shorties.

Once you've processed 50 AP v6 tasks or so, the estimates will - quite suddenly - return to sanity, and normal service will be resumed. Remind me to reply to Eric's email thoughts on that subject.


Currently I have 53 MB tasks with a deadline before 21 April & 67 after 22 April, some of which have an approx running time of 1-2 hrs. So what happens to the 64 CUDA tasks that have a deadline of 11 April?

How much of this depends on BOINC 7.0.22 and how much on Lunatics 0.40?


I don't buy computers, I build them!!
ID: 1211521 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 13034
Credit: 143,618,867
RAC: 198,305
United Kingdom
Message 1211523 - Posted: 29 Mar 2012, 14:11:17 UTC - in response to Message 1211521.  

(removed from memory)

If you have a reasonably strong machine with a decent amount of memory and disk (swap file) space - i.e. almost anything built in the last ten years - you will find it more efficient to select the preference to

Leave tasks in memory while suspended?
Suspended tasks will consume swap space if 'yes'

Changing the preference is not a problem, and I will try it. My question is: why the preemption in the first place? It was never an issue until after installing the new Lunatics.

I would expect that your first guess was right - it's likely to be because you now have Astropulse v6 tasks in your cache. They will have two characteristics that you can see in BOINC Manager:

1) An estimated runtime which (falsely) thinks that the tasks will take 150-200 hours.

2) A deadline which is closer than all MB tasks except shorties.

Once you've processed 50 AP v6 tasks or so, the estimates will - quite suddenly - return to sanity, and normal service will be resumed. Remind me to reply to Eric's email thoughts on that subject.

Currently I have 53 MB tasks with a deadline before 21 April & 67 after 22 April, some of which have an approx running time of 1-2 hrs. So what happens to the 64 CUDA tasks that have a deadline of 11 April?

How much of this depends on BOINC 7.0.22 and how much on Lunatics 0.40?

CUDA tasks will run in their own queue, in their own order, while MB and AP fight over the CPUs.

Most of this is down to SETI@home, a little is due to BOINC (any/every version), and the only contribution by Lunatics v0.40 is that it allows you to process Astropulse v6 in the first place.

Arrrgh, no - BOINC v7.0.22 and BOINC v7.0.23 (only) have a specific bug which may stop CUDA processing under these circumstances. Better to use v7.0.20 (or .21 - though I haven't tested that one). v7.0.24 isn't available yet, but will - subject to testing - have the fix I've been working on with David.

Did I hear anybody mention the words 'high priority'?
ID: 1211523 · Report as offensive
Profile Sutaru Tsureku
Volunteer tester

Send message
Joined: 6 Apr 07
Posts: 7104
Credit: 147,313,424
RAC: 0
Germany
Message 1211524 - Posted: 29 Mar 2012, 14:15:14 UTC - in response to Message 1211489.  

Would it be possible to extend the CUDA app with cmdline settings, so members could increase/decrease the priority themselves?

Everything is possible. Whether it's practical is a different question, and whether I sanction it... IMO the fewer knobs the better. There is such a thing as too many tuning options. PnP - not endless fumbling to find another half % of speed.

It would be nice to have this possibility, so I wouldn't need to use a 3rd party tool to increase the priority..

Would it be possible to make a bench-test tool that's very easy for noobs like me - one click and the program says which app (CPU extension usage) is the best/fastest for the machine?

For S@h Enhanced (MultiBeam) and Astropulse apps?

This would be very helpful and nice..

Sutaru, you've been running benches for ages.
If somebody has too much time, they are very welcome to take our test WUs and our benching scripts and write a nice colourful program, that does the thinking for you. Anyway, as far as I know, we are on our way to get rid of that bit. Would certainly make my life easier.

Yes, but it didn't work. I tested it with two tools.
Now I know, after a few tests, that r555 needs a .DLL file - which I thought was only needed with a CUDA app.

I just wanted to run a bench test to see which AP 6.01 app (r555 vs. r557) is faster on my machine, but I failed..

http://lunatics.kwsn.net/index.php?module=Downloads;catd=44

Looks like I hadn't gotten around to putting the AP bench online. OK, it's there now.
Unzip preserving the folder structure, add apps, add WUs - available separately.

Thanks!

I used the short AP WU: http://lunatics.kwsn.net/index.php?module=Downloads;sa=dlview;id=232.

The result:

AP6_win_x86_SSE_CPU_r555.exe
Elapsed 290.766 secs
CPU 288.547 secs

ap_6.01r557_SSE2_331_AVX.exe
Elapsed 276.469 secs
CPU 274.297 secs

If I did everything right, does that mean the r557 app runs faster on my machine*?

[* Intel Core2 Duo E7600 @ 3.06 GHz, DDR2 800/5-5-5-18 (all stock, not OCed), WinXP 32bit]


- Best regards! - Sutaru Tsureku, team seti.international founder. - Optimize your PC for higher RAC. - SETI@home needs your help. -
ID: 1211524 · Report as offensive
Profile Cliff Harding
Volunteer tester
Avatar

Send message
Joined: 18 Aug 99
Posts: 1420
Credit: 102,247,823
RAC: 38,138
United States
Message 1211528 - Posted: 29 Mar 2012, 14:32:13 UTC - in response to Message 1211523.  

(removed from memory)

If you have a reasonably strong machine with a decent amount of memory and disk (swap file) space - i.e. almost anything built in the last ten years - you will find it more efficient to select the preference to

Leave tasks in memory while suspended?
Suspended tasks will consume swap space if 'yes'

Changing the preference is not a problem, and I will try it. My question is: why the preemption in the first place? It was never an issue until after installing the new Lunatics.

I would expect that your first guess was right - it's likely to be because you now have Astropulse v6 tasks in your cache. They will have two characteristics that you can see in BOINC Manager:

1) An estimated runtime which (falsely) thinks that the tasks will take 150-200 hours.

2) A deadline which is closer than all MB tasks except shorties.

Once you've processed 50 AP v6 tasks or so, the estimates will - quite suddenly - return to sanity, and normal service will be resumed. Remind me to reply to Eric's email thoughts on that subject.

Currently I have 53 MB tasks with a deadline before 21 April & 67 after 22 April, some of which have an approx running time of 1-2 hrs. So what happens to the 64 CUDA tasks that have a deadline of 11 April?

How much of this depends on BOINC 7.0.22 and how much on Lunatics 0.40?

CUDA tasks will run in their own queue, in their own order, while MB and AP fight over the CPUs.

Most of this is down to SETI@home, a little is due to BOINC (any/every version), and the only contribution by Lunatics v0.40 is that it allows you to process Astropulse v6 in the first place.

Arrrgh, no - BOINC v7.0.22 and BOINC v7.0.23 (only) have a specific bug which may stop CUDA processing under these circumstances. Better to use v7.0.20 (or .21 - though I haven't tested that one). v7.0.24 isn't available yet, but will - subject to testing - have the fix I've been working on with David.

Did I hear anybody mention the words 'high priority'?


High priority was stated in my first post here. It seems 7.0.22 is the culprit; I will immediately revert back to 7.0.20 and will let you know what happens.


I don't buy computers, I build them!!
ID: 1211528 · Report as offensive
Profile Cliff Harding
Volunteer tester
Avatar

Send message
Joined: 18 Aug 99
Posts: 1420
Credit: 102,247,823
RAC: 38,138
United States
Message 1211533 - Posted: 29 Mar 2012, 14:45:18 UTC

BINGO!! Problem solved. Many thanks to everyone and especially Richard Haselgrove who nailed the main problem! Reverted to 7.0.20 and GPU activity immediately started. Some went into High Priority to catch up as was expected. Back to my B-SYS rebuild which is almost completed.


I don't buy computers, I build them!!
ID: 1211533 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 13034
Credit: 143,618,867
RAC: 198,305
United Kingdom
Message 1211535 - Posted: 29 Mar 2012, 14:47:22 UTC - in response to Message 1211528.  

High priority was stated in my first post here. It seems 7.0.22 is the culprit; I will immediately revert back to 7.0.20 and will let you know what happens.

Ah, I see what happened. Yes, all the clues are there in your first post, but my tired old brain failed to line them up and join the dots, even though it's one of those problems which has a mental flag of "my bug" on it.

Then, when the penny eventually dropped, I did a search of the thread for 'high priority', and failed to find your 'high-priority'. D****d literal computer searches - why can't they ignore hyphens?

Anyway, we got there in the end. v7.0.20 should be better in the short term.
ID: 1211535 · Report as offensive
Profile Cliff Harding
Volunteer tester
Avatar

Send message
Joined: 18 Aug 99
Posts: 1420
Credit: 102,247,823
RAC: 38,138
United States
Message 1211539 - Posted: 29 Mar 2012, 14:55:46 UTC - in response to Message 1211535.  

High priority was stated in my first post here. It seems 7.0.22 is the culprit; I will immediately revert back to 7.0.20 and will let you know what happens.

Ah, I see what happened. Yes, all the clues are there in your first post, but my tired old brain failed to line them up and join the dots, even though it's one of those problems which has a mental flag of "my bug" on it.

Then, when the penny eventually dropped, I did a search of the thread for 'high priority', and failed to find your 'high-priority'. D****d literal computer searches - why can't they ignore hyphens?

Anyway, we got there in the end. v7.0.20 should be better in the short term.


From an old man, you are probably much younger than I. That said, knowing how hard you have been working on everything and taking out time to solve my little problem; TAKE 5-10 MINUTES FOR YOURSELF, believe me it will be worth it. We can't afford to have you burned out before your time.


I don't buy computers, I build them!!
ID: 1211539 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 13034
Credit: 143,618,867
RAC: 198,305
United Kingdom
Message 1211542 - Posted: 29 Mar 2012, 15:00:48 UTC - in response to Message 1211539.  

From an old man, you are probably much younger than I.

Aging by the minute. :-(

That said, knowing how hard you have been working on everything and taking out time to solve my little problem; TAKE 5-10 MINUTES FOR YOURSELF, believe me it will be worth it. We can't afford to have you burned out before your time.

Actually, I managed to get out for half-an-hour's stroll round the village in the spring sunshine, between a couple of the posts there. The benches outside the pub are looking particularly inviting for later this evening.
ID: 1211542 · Report as offensive
Profile shizaru
Volunteer tester
Avatar

Send message
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1211558 - Posted: 29 Mar 2012, 15:28:30 UTC - in response to Message 1211441.  

Think you should take the view that "no news is good news". People don't usually hesitate to complain.


Words of wisdom. The installer seems idiot-proof to me, and I should know since I'm (still) Boinc-stupid:) Thanx to all who have worked on the package and its insides!

I would like to take this opportunity to add a couple of bullet points to your never-ending wish list. They will make you gurus cringe, I know that, but hopefully they will make enough sense that you won't hold what I'm about to say against me. So in a philosophical tone, not a demanding one, here goes:

Dare I say "sticky"?:) Here's the thing... I need a way to know who needs my PC and when, for what and for how long. If I can volunteer my little laptop to run something for somebody (for Seti/Seti Beta or Lunatics) I'd be more than happy to. Please don't take this the wrong way but I've read this whole thread and still have no idea if there is some version of some app over at Lunatics that I could run for you guys and help you get just that little bit closer to eventually releasing it. And the other day over at Beta I had to ask Eric what he needed us to crunch. I'm sure most of you will agree that it's not a Project Scientist's job to answer my questions, and I'd rather not do it again. So if there was a place I could check in and see that Astro vX over at Beta needs crunching for a few weeks or which Lunatics app vX needs crunching and for how long, it would be great! In summary, I'd love to help but I have no way of knowing how to help.

This next part is almost completely philosophical. If I had a "pretty" benchmark app I would go OCD on the thing. I'd check every WHQL nVidia driver from late 250's to current 290's. I'd turn Windows eye-candy, services and processes on and off. I'd play with nVidia settings. And whenever Jason-G would come out of left-field with, "Oh, you know, it could be your Wi-Fi that's interfering with crunching" I'd check that too:) I say "pretty" because I don't know how to work with black & white windows. I need installers and progress bars and buttons I can click on and things to hold my hand:) Of course I'm sure you guys have next-to NO time for such a thing which is why I started this paragraph with, "This next part is almost completely philosophical".

OK, break's over! Back to the Lunatics Windows Installer v0.40:)
ID: 1211558 · Report as offensive
Cosmic_Ocean
Avatar

Send message
Joined: 23 Dec 00
Posts: 2988
Credit: 11,909,178
RAC: 7,512
United States
Message 1211568 - Posted: 29 Mar 2012, 15:56:31 UTC - in response to Message 1211510.  

My eight-banger AMD is running 4 cores only, but two nVidia cards two at a time with no interference. Like you, I'm looking forward to seeing what AVX does with AP v6 when I finally get to them.

From what I've seen in my handy spreadsheet so far.. same hardware and everything from r409 (v505) to r557 with AVX (v6), v6 runs about 3,500 seconds faster. Down from ~43,000 seconds median to ~39,500 seconds median. Some quick math says that's roughly 8-9% faster. Now does SSE2 have anything to do with it, or is that all AVX? Either way, faster is faster.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1211568 · Report as offensive