lunatics GPU x38g + 296.10 Driver

LadyL
Volunteer tester
Joined: 14 Sep 11
Posts: 1679
Credit: 5,230,097
RAC: 0
Message 1239494 - Posted: 1 Jun 2012, 16:07:54 UTC

Current public release history:

x41g - improved reliability
     - preliminary GPU cross-generation computation precision matching

x38g - improved cross-GPU-generation precision match, and
       preliminary optimisation experiments

x32f - initial release based on setiathome_enhanced 6.09 sources
     - Fermi GPU compatibility



Updating to x41g is certainly advisable.

But the most important reason for an upgrade is that you are running a CPU app that has a bug. To be precise, some processing might be missed in certain special cases, resulting in invalid results due to missed signals.
I'm not the Pope. I don't speak Ex Cathedra!
ID: 1239494
Mark Lybeck
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1239527 - Posted: 1 Jun 2012, 17:46:42 UTC - in response to Message 1239494.  

I upgraded to x41. Let's see how it works out. Did you mean GPU and not CPU? The CPU apps have never had any problems.


ID: 1239527
Mark Lybeck
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1239539 - Posted: 1 Jun 2012, 17:52:46 UTC - in response to Message 1239470.  


If you don't want to increase the voltage, then try to decrease the MHz (e.g. to stock NVIDIA values or lower).
The errors (false signals) and downclocking are symptoms of a bad voltage/MHz combination.



The ASUS TOP version has a stock clock of 900 MHz. It is quite difficult to change. Can you use some NVIDIA panel to downclock the GPU?

ID: 1239539
Horacio
Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1239548 - Posted: 1 Jun 2012, 18:08:24 UTC - in response to Message 1239539.  
Last modified: 1 Jun 2012, 18:08:50 UTC


If you don't want to increase the voltage, then try to decrease the MHz (e.g. to stock NVIDIA values or lower).
The errors (false signals) and downclocking are symptoms of a bad voltage/MHz combination.



The ASUS TOP version has a stock clock of 900 MHz. It is quite difficult to change. Can you use some NVIDIA panel to downclock the GPU?


The 900 MHz is not a "stock" clock; it is a "factory overclock". Here we often call "stock" the original values that NVIDIA proposed at design time for the chipset, not the default factory values that the vendors use.

There is a utility from NVIDIA, but it's not very reliable. (At least, I was never able to get it working at startup, and after upgrading the drivers it stopped working altogether...)

You can change the core clocks and/or voltages using the Afterburner app (which is from MSI, but it works with any NVIDIA card; I think the EVGA Precision app also works with any NVIDIA card). I don't know if ASUS has a proprietary app for these settings...
ID: 1239548
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1239571 - Posted: 1 Jun 2012, 18:28:12 UTC - in response to Message 1239527.  

I upgraded to x41. Let's see how it works out. Did you mean GPU and not CPU? The CPU apps have never had any problems.

She did mean exactly what she wrote - the CPU app was found to have problems. A small, unobtrusive bug that nobody noticed, but a bug nevertheless.

From the Lunatics Windows Installer v0.39 release notes (28 November 2011):

CPU apps - upgraded to AKv8b2.

This is to fix a bug in the triplet finding - in rare cases triplets were being missed. As it took over two years to notice this bug, we believe it only affects a very minor amount of tasks.
ID: 1239571
Mark Lybeck
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1239673 - Posted: 1 Jun 2012, 21:26:10 UTC - in response to Message 1239548.  

The update to x41 does not seem to be foolproof either:

Just look at the latest inconclusive result:

http://setiathome.berkeley.edu/result.php?resultid=2455409165


Stderr output
<core_client_version>6.10.60</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 2 CUDA device(s):
Device 1: GeForce GTX 560 Ti, 1023 MiB, regsPerBlock 32768
computeCap 2.1, multiProcs 8
clockRate = 1800000
Device 2: GeForce GTX 560 Ti, 1023 MiB, regsPerBlock 32768
computeCap 2.1, multiProcs 8
clockRate = 1800000
In cudaAcc_initializeDevice(): Boinc passed DevPref 2
setiathome_CUDA: CUDA Device 2 specified, checking...
Device 2: GeForce GTX 560 Ti is okay
SETI@home using CUDA accelerated device GeForce GTX 560 Ti
Priority of process raised successfully
Priority of worker thread raised successfully
Cuda Active: Plenty of total Global VRAM (>300MiB).
All early cuFft plans postponed, to parallel with first chirp.

) _ _ _)_ o _ _
(__ (_( ) ) (_( (_ ( (_ (
not bad for a human... _)

Multibeam x41g Preview, Cuda 3.20

Legacy setiathome_enhanced V6 mode.
Work Unit Info:
...............
WU true angle range is : 0.421534
VRAM: cudaMalloc((void**) &dev_cx_DataArray, 1048576x 8bytes = 8388608bytes, offs256=0, rtotal= 8388608bytes
VRAM: cudaMalloc((void**) &dev_cx_ChirpDataArray, 1179648x 8bytes = 9437184bytes, offs256=0, rtotal= 17825792bytes
VRAM: cudaMalloc((void**) &dev_flag, 1x 8bytes = 8bytes, offs256=0, rtotal= 17825800bytes
VRAM: cudaMalloc((void**) &dev_WorkData, 1179648x 8bytes = 9437184bytes, offs256=0, rtotal= 27262984bytes
VRAM: cudaMalloc((void**) &dev_PowerSpectrum, 1048576x 4bytes = 4194304bytes, offs256=0, rtotal= 31457288bytes
VRAM: cudaMalloc((void**) &dev_t_PowerSpectrum, 1048584x 4bytes = 1048608bytes, offs256=0, rtotal= 32505896bytes
VRAM: cudaMalloc((void**) &dev_GaussFitResults, 1048576x 16bytes = 16777216bytes, offs256=0, rtotal= 49283112bytes
VRAM: cudaMalloc((void**) &dev_PoT, 1572864x 4bytes = 6291456bytes, offs256=0, rtotal= 55574568bytes
VRAM: cudaMalloc((void**) &dev_PoTPrefixSum, 1572864x 4bytes = 6291456bytes, offs256=0, rtotal= 61866024bytes
VRAM: cudaMalloc((void**) &dev_NormMaxPower, 16384x 4bytes = 65536bytes, offs256=0, rtotal= 61931560bytes
VRAM: cudaMalloc((void**) &dev_flagged, 1048576x 4bytes = 4194304bytes, offs256=0, rtotal= 66125864bytes
VRAM: cudaMalloc((void**) &dev_outputposition, 1048576x 4bytes = 4194304bytes, offs256=0, rtotal= 70320168bytes
VRAM: cudaMalloc((void**) &dev_PowerSpectrumSumMax, 262144x 12bytes = 3145728bytes, offs256=0, rtotal= 73465896bytes
VRAM: cudaMallocArray( &dev_gauss_dof_lcgf_cache, 1x 8192bytes = 8192bytes, offs256=16, rtotal= 73474088bytes
VRAM: cudaMallocArray( &dev_null_dof_lcgf_cache, 1x 8192bytes = 8192bytes, offs256=168, rtotal= 73482280bytes
VRAM: cudaMalloc((void**) &dev_find_pulse_flag, 1x 8bytes = 8bytes, offs256=0, rtotal= 73482288bytes
VRAM: cudaMalloc((void**) &dev_t_funct_cache, 1966081x 4bytes = 7864324bytes, offs256=0, rtotal= 81346612bytes
Thread call stack limit is: 1k
cudaAcc_free() called...
cudaAcc_free() running...
cudaAcc_free() PulseFind freed...
cudaAcc_free() Gaussfit freed...
cudaAcc_free() AutoCorrelation freed...
cudaAcc_free() DONE.
Cuda sync'd & freed.
Preemptively acknowledging a safe Exit on error->
SETI@Home Informational message -9 result_overflow
NOTE: The number of results detected exceeds the storage space allocated.

Flopcounter: 22928380080152.254000

Spike count: 29
Pulse count: 0
Triplet count: 1
Gaussian count: 0
Worker preemptively acknowledging an overflow exit.->
called boinc_finish
boinc_exit(): requesting safe worker shutdown ->
boinc_exit(): received safe worker shutdown acknowledge ->

</stderr_txt>
]]>

ID: 1239673
Mark Lybeck
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1239678 - Posted: 1 Jun 2012, 21:30:00 UTC - in response to Message 1239673.  

There is, however, an interesting effect on power usage: the new CUDA client uses 40-50 W less power in total for the two GPUs.


ID: 1239678
Kevin Olley
Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 1239995 - Posted: 2 Jun 2012, 5:22:40 UTC - in response to Message 1239548.  

I don't know if ASUS has a proprietary app for these settings...


Yes, it's called SmartDoctor.


Kevin


ID: 1239995
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1240004 - Posted: 2 Jun 2012, 5:36:01 UTC - in response to Message 1239673.  
Last modified: 2 Jun 2012, 5:50:17 UTC

The update to x41 does not seem to be foolproof either:


Correct: the 560 Ti was the start of an advanced card line targeted at the performance/price market, which created a number of issues. Here's my best recollection of the 560 Ti related caveats/checklist items:

1) Factory-overclocked cards shipped with insufficient core voltage to sustain stable Cuda crunching. Most vendors shipped 900 MHz cards with 1.05-1.68 V core, which is 'fine'; lower than that is likely insufficient, so add core voltage or reduce clocks. 'Game stable' isn't 'Cuda stable', as different parts of the chip are used, and artefacts aren't tolerable in number-crunching situations the way they are in games.
2) If the 266.66 drivers that came with the card were ever used, they tended to corrupt the Cuda compute cache within Windows, which requires seeking out the hidden folder and cleaning it out to fix (more info will be needed if those drivers were used; the driver's advanced clean-install option doesn't clean this).
3) If you have solid state drives, particularly the Crucial M3(?) series, ensure their firmware is up to date; this may apply to others.
4) Keep temps at or below the mid-80s Celsius for these GPUs, using a higher fixed fan speed, and ensure good airflow.
[Edit:]
5) You need a good PSU with plenty of 12 V current. As these are factory overclocked, the reference minimal-specced system power does not provide enough 12 V current/overhead for 24x7 crunching over an extended period. Overkill is needed with these GPUs, particularly with multiples in factory OC.

HTH,
Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1240004
Mark Lybeck
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1240030 - Posted: 2 Jun 2012, 7:03:54 UTC - in response to Message 1240004.  

The update to x41 does not seem to be foolproof either:


Correct: the 560 Ti was the start of an advanced card line targeted at the performance/price market, which created a number of issues. Here's my best recollection of the 560 Ti related caveats/checklist items:

1) Factory-overclocked cards shipped with insufficient core voltage to sustain stable Cuda crunching. Most vendors shipped 900 MHz cards with 1.05-1.68 V core, which is 'fine'; lower than that is likely insufficient, so add core voltage or reduce clocks. 'Game stable' isn't 'Cuda stable', as different parts of the chip are used, and artefacts aren't tolerable in number-crunching situations the way they are in games.
2) If the 266.66 drivers that came with the card were ever used, they tended to corrupt the Cuda compute cache within Windows, which requires seeking out the hidden folder and cleaning it out to fix (more info will be needed if those drivers were used; the driver's advanced clean-install option doesn't clean this).
3) If you have solid state drives, particularly the Crucial M3(?) series, ensure their firmware is up to date; this may apply to others.
4) Keep temps at or below the mid-80s Celsius for these GPUs, using a higher fixed fan speed, and ensure good airflow.
[Edit:]
5) You need a good PSU with plenty of 12 V current. As these are factory overclocked, the reference minimal-specced system power does not provide enough 12 V current/overhead for 24x7 crunching over an extended period. Overkill is needed with these GPUs, particularly with multiples in factory OC.

HTH,
Jason


1) The problematic second card I have never connected to a monitor, so I do not know how many artefacts it has. Maybe I should swap it for the original card and see the results.

2) The 266 drivers may have been used for the first card before the upgrade, but the first 560 Ti card never had any issues with stability or wrong results. More info on how to clear out the CUDA cache would be appreciated. I did have some temporary stability problems with the first card when Windows first proposed upgrading the drivers from 285.62: the output was corrupt. A rollback got the first card back on track. My question is: is the cache GPU-specific? Could you have a situation in which only the cache for a particular card is corrupt?

3) I do not use SSDs yet.

4) The GPU temp is roughly 74 degrees for the primary 560 Ti, which does not have any problems, and around 60 degrees for the secondary 560 Ti, which does have the problem.

5) As mentioned earlier, I am using the Corsair HX650, which provides a single 12 V rail with 52 A of juice.
ID: 1240030
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1240074 - Posted: 2 Jun 2012, 9:27:39 UTC
Last modified: 2 Jun 2012, 9:40:37 UTC

Yes, #1 and the compute cache issue will probably need further attention. That driver cache is hashed by application/kernel and compute capability, so it will apply to all similar cards in the system from the start.

As an aside, I'm not entirely convinced that an HX650 is 'enough' for a factory-overclocked 560 Ti plus system, but by all means prove me wrong on that. For my superclocked 560 Ti (running the stock factory overclock of 900 MHz), I run an AX850 Gold.

[Edit:] 2 x superclocked 560 Tis on an HX650? If so, um, no, no chance. Not enough 'peak' current headroom.

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1240074
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1240076 - Posted: 2 Jun 2012, 9:47:35 UTC
Last modified: 2 Jun 2012, 9:48:19 UTC

On my system, the Cuda 'ComputeCache' folder is located at:
C:\Users\Jason\AppData\Roaming\NVIDIA

Assuming you have newer drivers than the bogus Day 0 ones:
1) Stop BOINC
2) Delete the 'ComputeCache' folder inside a similar path to above
3) Reboot (clears driver greeblies)
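For anyone who'd rather script it, here's a minimal Python sketch of step 2 (the default path is my assumption, matching the location quoted above; adjust the user name as needed, and still do steps 1 and 3 manually):

```python
# Sketch of step 2: delete the CUDA ComputeCache folder so the driver
# rebuilds it from scratch. Stop BOINC before running this (step 1) and
# reboot afterwards (step 3). The default path is an assumption matching
# the typical Windows location, e.g. C:\Users\<name>\AppData\Roaming\NVIDIA.
import os
import shutil
from pathlib import Path

def clear_compute_cache(cache_dir=None):
    """Remove the ComputeCache folder if present; return True if deleted."""
    if cache_dir is None:
        cache_dir = Path(os.environ.get("APPDATA", "")) / "NVIDIA" / "ComputeCache"
    cache_dir = Path(cache_dir)
    if cache_dir.is_dir():
        shutil.rmtree(cache_dir)   # the driver recreates it on the next CUDA run
        return True
    return False                   # nothing to clean
```

This is only a convenience wrapper around deleting the folder by hand; nothing here is specific to the driver beyond the folder location.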
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1240076
Horacio
Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1240172 - Posted: 2 Jun 2012, 15:24:51 UTC - in response to Message 1240030.  

1) The problematic second card I have not yet ever connected to a monitor. So I do not know how many artefacts it has. Maybe I should swap for the orignal Card and see the results.

2) The 266 drivers may have been used for the first card before upgrade, but the first 560Ti card had never any issues with stability or wrong results. More info on how to clear out the CUDA Cache would be appreciated. I did have some problems with stability on the first card temporarily when windows proposed the first time to uprage the drivers from 285.62. The output was corrupt. A rollback got the first card back on track. My question is that is the cache GPU specific? Could you have a situation in, which only the cache for a particular card has a corrupt cache?

3) I do not use SSDs yet.

4) The GPU temp is roughly 74 degrees for the primary 560Ti, which does not have any problems and around 60 degrees for the secondary 560Ti, which does have the problem.

5) As mentioned earlier I am using the Corsair HX650 which provides a single rail of 52A of juice for the 12V rail.


You know, I see a pattern here.
I have exactly the same issue with my second 560 Ti: it doesn't downclock, but 15% (for Einstein) to 30% (for optimized SETI) of the WUs crunched on the second card become invalid. None from the first card become invalid.

Anyway, I guess mine are not downclocking because I have a 1200 W PSU and I've raised the voltages.

I was thinking that the second card was faulty, but this thread makes me think that maybe there is something else going on...
Maybe the ComputeCache is different for each card... Inside the ComputeCache there are 3 folders ("2", "6" and "e"), and there are 2 folders inside the GLCache (each with a hash-like name)... I've cleaned it. Let's see what happens with the invalids now...
ID: 1240172
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1240196 - Posted: 2 Jun 2012, 16:14:04 UTC
Last modified: 2 Jun 2012, 16:16:34 UTC

That'll be a great thing to eliminate from suspicion. Even if it only puts ComputeCache corruption from old drivers in the clear, it still narrows the field.

[Edit:] Did you ever swap the cards between slots to see whether the problem follows the card, stays with the slot, or clears up thanks to a good old-fashioned re-seating?


Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1240196
Horacio
Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1240205 - Posted: 2 Jun 2012, 16:40:03 UTC - in response to Message 1240196.  

That'll be a great thing to eliminate from suspicion. Even if it only puts ComputeCache corruption from old drivers in the clear, it still narrows the field.

[Edit:] Did you ever swap the cards between slots to see whether the problem follows the card, stays with the slot, or clears up thanks to a good old-fashioned re-seating?


Jason


I made a little app that read the result pages to get the stderr data of every finished WU, to compile statistics for each card. I had to run it for more than a month to get all the results from a certain day classified as valid or invalid...

With that app I did several tests: re-seated the cards, changed PSU cables, changed the PSU, but I did not swap them... That was intended as the next step, but I've got a huge amount of work and haven't had the time...

My workaround was to devote each host to a different project, and as Einstein was failing less, I devoted the 560 Tis to it...
(By the way, one project per host not only made my RAC more stable; that trick raised my RAC far above what I expected...)

If cleaning the compute cache doesn't work, I'll try to find time to swap them, or even to put them in different hosts (which was not possible before due to having only one PSU ready for those cards).
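The per-card tally such an app needs could look something like this Python sketch (hypothetical: the `tally` function and the valid/invalid labels are illustrative, not Horacio's actual code; it keys each result on the "CUDA Device N specified" line visible in the stderr output quoted earlier in this thread):

```python
# Hypothetical sketch of a per-card valid/invalid tally built from
# stderr texts plus each result's validation state from the result page.
import re
from collections import defaultdict

# The device line appears in stderr as e.g.
# "setiathome_CUDA: CUDA Device 2 specified, checking..."
DEVICE_RE = re.compile(r"CUDA Device (\d+) specified")

def tally(results):
    """results: iterable of (stderr_text, state), state 'valid' or 'invalid'."""
    counts = defaultdict(lambda: {"valid": 0, "invalid": 0})
    for stderr_text, state in results:
        match = DEVICE_RE.search(stderr_text)
        device = match.group(1) if match else "unknown"
        counts[device][state] += 1
    return dict(counts)
```

Fetching the result pages and deciding valid/invalid is the hard (and slow) part Horacio describes; the tally itself is trivial once each stderr is paired with its state.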

ID: 1240205
Mark Lybeck
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1240229 - Posted: 2 Jun 2012, 17:14:03 UTC - in response to Message 1240205.  

My ComputeCache folder has subfolders from 0 to f:

Name Date modified

0 12.11.2011
1 12.11.2011
2 1.6.2012
3 23.5.2012
4 1.6.2012
5 16.5.2012
6 4.11.2011
7 10.11.2011
8 12.11.2011
9 16.5.2012
a 11.11.2011
b 12.11.2011
c 11.11.2011
d 6.11.2011
f 23.5.2012

I actually feel that x41 produces more wrong results than the previous x38 did.

Let's see how the compute cache clearing affects the system.

Is there a way to see completed WU results before they appear on the SETI web page? Isn't stderr.txt deleted from the slot folder on completion?

ID: 1240229
Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34258
Credit: 79,922,639
RAC: 80
Germany
Message 1240233 - Posted: 2 Jun 2012, 17:20:28 UTC


I actually feel that x41 produces more wrong results than the previous x38 did.


That's not the app, for sure.



With each crime and every kindness we birth our future.
ID: 1240233
Mark Lybeck
Joined: 9 Aug 99
Posts: 245
Credit: 216,677,290
RAC: 173
Finland
Message 1240236 - Posted: 2 Jun 2012, 17:23:35 UTC - in response to Message 1240233.  

OK here is the folder contents after clearing and restart:

C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache>dir *.* /s
Volume in drive C has no label.
Volume Serial Number is E444-1BF6

Directory of C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache

02.06.2012 20:19 <DIR> .
02.06.2012 20:19 <DIR> ..
02.06.2012 20:19 <DIR> 2
02.06.2012 20:19 <DIR> 3
02.06.2012 20:19 <DIR> f
02.06.2012 20:19 36 index
1 File(s) 36 bytes

Directory of C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache\2

02.06.2012 20:19 <DIR> .
02.06.2012 20:19 <DIR> ..
02.06.2012 20:19 <DIR> b
0 File(s) 0 bytes

Directory of C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache\2\b

02.06.2012 20:19 <DIR> .
02.06.2012 20:19 <DIR> ..
02.06.2012 20:19 9 740 f073e6
1 File(s) 9 740 bytes

Directory of C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache\3

02.06.2012 20:19 <DIR> .
02.06.2012 20:19 <DIR> ..
02.06.2012 20:19 <DIR> f
0 File(s) 0 bytes

Directory of C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache\3\f

02.06.2012 20:19 <DIR> .
02.06.2012 20:19 <DIR> ..
02.06.2012 20:19 3 583 f8a53e
1 File(s) 3 583 bytes

Directory of C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache\f

02.06.2012 20:19 <DIR> .
02.06.2012 20:19 <DIR> ..
02.06.2012 20:19 <DIR> f
0 File(s) 0 bytes

Directory of C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache\f\f

02.06.2012 20:19 <DIR> .
02.06.2012 20:19 <DIR> ..
02.06.2012 20:19 3 575 0e23c7
1 File(s) 3 575 bytes

Total Files Listed:
4 File(s) 16 934 bytes
20 Dir(s) 991 387 648 bytes free

C:\Users\Mark\AppData\Roaming\NVIDIA\ComputeCache>


ID: 1240236
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1240238 - Posted: 2 Jun 2012, 17:26:27 UTC - in response to Message 1240233.  
Last modified: 2 Jun 2012, 17:31:36 UTC


I actually feel that x41 produces more wrong results than the previous x38 did.


That's not the app, for sure.


LoL, it had better not be, because my 560 Ti has had essentially a zero error rate (beyond known issues[, including some machine CPU cooling quirks that need looking at]) from the start. It would mean that the machine, BOINC, NVIDIA and the x41-series applications are conspiring to paint a false picture of working 'better' on this end, which would drive me into the funny farm.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1240238
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1240240 - Posted: 2 Jun 2012, 17:32:21 UTC - in response to Message 1240236.  

OK here is the folder contents after clearing and restart:


Somewhat cleaner looking, so we'll see how it goes.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1240240
