Message boards :
Number crunching :
Test, 6.09 -> x41g, 190.38 -> 285.58
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
After my first test (6.09 vs. 6.10 vs. V11 vs. V12 vs. V12b), the winner was stock 6.09 with nVIDIA driver 190.38 (Message 1000430). Because there were no S@h WUs and I had time, I made a new test: among others, 6.09 (the winner of my last test) vs. x32f_preview (from Lunatics Installer V0.37). [fastest in green]

[nVIDIA driver 197.13]
Lunatics_x32f_win32_cuda30_preview.exe
PG030429731429144.wu - 857.594 secs Elapsed - 56.328 secs CPU time
PG039172342915667.wu - 649.484 secs Elapsed - 46.625 secs CPU time
PG043184314852926.wu - 554.141 secs Elapsed - 43.500 secs CPU time
PG23752934904573.wu - 129.297 secs Elapsed - 23.266 secs CPU time

[nVIDIA driver 260.99 (only the driver installed; no nView, no PhysX)]
Lunatics_x32f_win32_cuda30_preview.exe
PG030429731429144.wu - 883.078 secs Elapsed - 67.750 secs CPU time
PG039172342915667.wu - 670.281 secs Elapsed - 57.281 secs CPU time
PG043184314852926.wu - 571.297 secs Elapsed - 48.078 secs CPU time
PG23752934904573.wu - 134.750 secs Elapsed - 24.344 secs CPU time

[nVIDIA driver 260.99 (driver, nView, PhysX installed)]
Lunatics_x32f_win32_cuda30_preview.exe
PG030429731429144.wu - 882.344 secs Elapsed - 65.750 secs CPU time
PG039172342915667.wu - 669.875 secs Elapsed - 54.063 secs CPU time
PG043184314852926.wu - 571.016 secs Elapsed - 49.563 secs CPU time
PG23752934904573.wu - 134.625 secs Elapsed - 25.047 secs CPU time

[nVIDIA driver 260.99 (driver, nView, PhysX installed)]
setiathome_6.09_windows_intelx86__cuda23.exe
PG030429731429144.wu - 830.688 secs Elapsed - 78.922 secs CPU time
PG039172342915667.wu - 632.500 secs Elapsed - 68.953 secs CPU time
PG043184314852926.wu - 546.609 secs Elapsed - 61.359 secs CPU time
PG23752934904573.wu - 138.891 secs Elapsed - 29.125 secs CPU time

[nVIDIA driver 260.99 (driver, nView, PhysX installed)]
setiathome_6.10_windows_intelx86__cuda_fermi.exe
PG030429731429144.wu - 859.547 secs Elapsed - 78.891 secs CPU time
PG039172342915667.wu - 654.031 secs Elapsed - 66.531 secs CPU time
PG043184314852926.wu - 567.750 secs Elapsed - 59.219 secs CPU time
PG23752934904573.wu - 138.141 secs Elapsed - 29.156 secs CPU time

And finally, back again to 190.38 and 6.09:

[nVIDIA driver 190.38]
setiathome_6.09_windows_intelx86__cuda23.exe
PG030429731429144.wu - 798.891 secs Elapsed - 66.156 secs CPU time
PG039172342915667.wu - 606.000 secs Elapsed - 56.594 secs CPU time
PG043184314852926.wu - 524.359 secs Elapsed - 51.984 secs CPU time
PG23752934904573.wu - 130.953 secs Elapsed - 26.781 secs CPU time

On PG23752934904573.wu, Lunatics_x32f_win32_cuda30_preview.exe is ~2 secs faster than setiathome_6.09_windows_intelx86__cuda23.exe, but only because the CUDA WU preparation time on the CPU is shorter. And the winner is (again) 190.38 and stock 6.09.

I repeat the note from the first message in this thread: I made this test on my system*, and the result could be different on your system. This thread is not meant to recommend a CUDA application. Read it, think about it, and decide what you want to do.

[* Intel Core2 Duo E7600 @ 3.06 GHz, DDR2 800/5-5-5-18, GIGABYTE GTX260 SOC @ 680/1500/1250 MHz (manufacturer OCed), WinXP Home 32bit]

I guess the Lunatics_x32f_win32_cuda30_preview.exe app could be faster on Fermi (GTX4xx/5xx) GPUs.

[EDIT: changed the title of this thread from 'Test, 6.09 vs. 6.10 vs. V11 vs. V12 vs. V12b' to 'Test, 6.09 / 6.10 / V11 / V12 / V12b / x32f'.] |
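[Editor's note] The timing dumps in this thread are easier to compare with a few lines of code. A minimal sketch (Python): the task names and elapsed seconds are copied from two of the runs quoted above, and `percent_change` is a hypothetical helper written for illustration, not part of any SETI@home or Lunatics tooling.

```python
# Elapsed seconds for the same four test WUs under two of the
# configurations benchmarked above (values copied from the post).
baseline = {  # driver 190.38 + stock setiathome_6.09 cuda23
    "PG030429731429144.wu": 798.891,
    "PG039172342915667.wu": 606.000,
    "PG043184314852926.wu": 524.359,
    "PG23752934904573.wu": 130.953,
}
candidate = {  # driver 197.13 + Lunatics x32f cuda30 preview
    "PG030429731429144.wu": 857.594,
    "PG039172342915667.wu": 649.484,
    "PG043184314852926.wu": 554.141,
    "PG23752934904573.wu": 129.297,
}

def percent_change(base: dict, cand: dict) -> dict:
    """Per-task percent change in elapsed time vs. the baseline.
    Positive means the candidate run was slower."""
    return {t: 100.0 * (cand[t] - base[t]) / base[t] for t in base}

for task, delta in sorted(percent_change(baseline, candidate).items()):
    print(f"{task}: {delta:+.2f}%")
```

On these numbers the preview build is roughly 5.5-7.5% slower on the three long WUs and about 1.3% faster on the short one, matching the post's observation that only the shorter CPU-side preparation time saves it on the short task.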
SciManStev Send message Joined: 20 Jun 99 Posts: 6652 Credit: 121,090,076 RAC: 0 |
One thing I can say for sure: there are far fewer -12 errors with the Lunatics Installer. I was getting 2 to 4 a day, and that dropped to maybe 1 a week using it. My VLAR time on GPUs also dropped about 20%, but crunching a VLAR on a GPU will still make your screen quite sluggish. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
Thanks Sutaru for your efforts. As always, informative and accurate. What would be interesting to see is a comparison between a 200-series card running 190.xx/191.xx drivers and V6.09, and an equivalent 400-series Fermi card running under the latest drivers and using Lunatics V0.37. My own, strictly empirical, experience is that when running V258.96 drivers and V0.37 there was nothing between a GTX285 and a GTX470 in crunching time (the cards were paired in the same box). I have since been able to split them up and will run some tests when the new box is a "goer". I will also be able to test whether there is any real advantage in running multiple units on a Fermi card or not. Regards T.A. |
Miep Send message Joined: 23 Jul 99 Posts: 2412 Credit: 351,996 RAC: 0 |
A point to note besides speed is that (at least on my Vista system) 258.96/x32f_30 requires less GPU memory to run. [edit: I've only got 256MB and can't run the faster 2.3 dll on pre-256 drivers] Speed-wise, a beta test build with 3.1 dlls was faster than the current public x32f_30, but the 3.1 dlls are buggy and cannot be used in general. We are still waiting for the general release of 3.2... To reiterate: the main advantage of x32f_30 is that it runs on Fermi, as opposed to the old V12, and we saw quite a few cases of people going for optimized apps and ending up with the non-compatible V12 on their Fermis (which is one of the reasons x32f_30 is tagged as preview - it is an interim release to provide a Fermi-compatible app in the installer). And as Steve has pointed out, it produces fewer -12 errors than V12. Carola ------- I'm multilingual - I can misunderstand people in several languages! |
Joshua Send message Joined: 19 Mar 06 Posts: 13 Credit: 420,540 RAC: 0 |
Great work! This must have taken you some time to finish, but it was well worth it for individuals who have 200-series cards. I would like to see one of these benchmarks for 400-series cards.. Great job! and keep posting those benchmarks! P.S. This is my 1st post, yay!! Any program is only as good as it is useful. -Linus Torvalds |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
I repeat the note from the first message in this thread: I made this test on my system*, and the result could be different on your system. This thread is not meant to recommend a CUDA application. Read it, think about it, and decide what you want to do.

[* Intel Core2 Duo E7600 @ 3.06 GHz, DDR2 800/5-5-5-18, GIGABYTE GTX260 SOC @ 680/1500/1250 MHz (manufacturer OCed), WinXP Home 32bit]

After Lunatics Installer V0.37 with the x32f_cuda30 app, the stock S@h Enhanced 6.09 cuda23 app was still the fastest app on my system (Intel Core2 Duo E7600 with GTX260 OC) (2nd/last test here). After the release of Lunatics Installer V0.38 with the x38g_cuda32 app, it's time to make a test again - or not? ;-)

[fastest in green] [slowest in red]

The winning combo until now:

[nVIDIA driver 190.38]
setiathome_6.09_windows_intelx86__cuda23.exe
PG030429731429144.wu - 798.891 secs Elapsed - 66.156 secs CPU time
PG039172342915667.wu - 606.000 secs Elapsed - 56.594 secs CPU time
PG043184314852926.wu - 524.359 secs Elapsed - 51.984 secs CPU time
PG23752934904573.wu - 130.953 secs Elapsed - 26.781 secs CPU time
[compared with 275.33 & x38g_cuda32]

The newer nVIDIA drivers and the new CUDA app:

[nVIDIA driver 266.58]
Lunatics_x38g_win32_cuda32.exe
PG030429731429144.wu - 789.844 secs Elapsed - 73.344 secs CPU time
PG039172342915667.wu - 594.359 secs Elapsed - 61.016 secs CPU time
PG043184314852926.wu - 513.859 secs Elapsed - 57.297 secs CPU time
PG23752934904573.wu - 127.000 secs Elapsed - 27.109 secs CPU time

[nVIDIA driver 275.33]
Lunatics_x38g_win32_cuda32.exe
PG030429731429144.wu - 783.828 secs Elapsed - 78.906 secs CPU time
PG039172342915667.wu - 588.703 secs Elapsed - 68.266 secs CPU time
PG043184314852926.wu - 508.594 secs Elapsed - 61.484 secs CPU time
PG23752934904573.wu - 123.969 secs Elapsed - 29.953 secs CPU time
[compared with 190.38 & 6.09_cuda23]

One last test (outside of the comparison) with..

[nVIDIA driver 275.33]
setiathome_6.09_windows_intelx86__cuda23.exe
PG030429731429144.wu - 834.469 secs Elapsed - 89.906 secs CPU time
PG039172342915667.wu - 635.891 secs Elapsed - 77.984 secs CPU time
PG043184314852926.wu - 550.000 secs Elapsed - 70.047 secs CPU time
PG23752934904573.wu - 139.844 secs Elapsed - 33.469 secs CPU time

So if I calculated correctly (in the background), for '190.38 & 6.09_cuda23 vs. 275.33 & x38g_cuda32' the gain on the GPU and the loss on the CPU work out to roughly the same RAC on my/this (whole) machine. My machine stays with 275.33 and the x38g_cuda32 app. IMHO, all machines (also those with non-Fermi (up to GTX2xx) graphics cards) should do the same now. Thanks and well done, Lunatics crew! ;-)

[EDIT: changed the title of this thread from 'Test, 6.09 / 6.10 / V11 / V12 / V12b / x32f' to 'Test, 6.09/6.10/V11/V12/V12b/x32f/x38g'.]

- Best regards! - Sutaru Tsureku, team seti.international founder. - Optimize your PC for higher RAC. - SETI@home needs your help. - |
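[Editor's note] Sutaru's "gain on GPU vs. loss on CPU" estimate can be checked the same way. A quick sketch (Python) summing the quoted timings over the four test WUs; the variable names are mine, chosen for illustration, not from any project tooling.

```python
# (elapsed, cpu) seconds per test WU, copied from the two runs above.
old_runs = [  # 190.38 + setiathome_6.09 cuda23
    (798.891, 66.156), (606.000, 56.594), (524.359, 51.984), (130.953, 26.781),
]
new_runs = [  # 275.33 + Lunatics x38g cuda32
    (783.828, 78.906), (588.703, 68.266), (508.594, 61.484), (123.969, 29.953),
]

# Seconds saved in GPU elapsed time vs. extra seconds burned on the CPU.
gpu_saved = sum(o[0] - n[0] for o, n in zip(old_runs, new_runs))
cpu_extra = sum(n[1] - o[1] for o, n in zip(old_runs, new_runs))
print(f"GPU elapsed saved: {gpu_saved:.3f} s, extra CPU time: {cpu_extra:.3f} s")
```

Over these four WUs the new combo saves ~55 s of GPU elapsed time while costing ~37 s of extra CPU time, so on a dual-core host that also runs CPU tasks the net effect is indeed "roughly the same RAC", as the post concludes.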
skildude Send message Joined: 4 Oct 00 Posts: 9541 Credit: 50,759,529 RAC: 60 |
Or a clean install, do all tests with oldest set of drivers, update drivers run next set of tests, etc etc.

Actually this is very incorrect. My system running an ATI 6970 is very sensitive to the driver version. Having tried 11.3, 11.4, 11.5 and then dropping back to 11.2, it's necessary to run Driver Sweeper on every install. The ATI uninstaller does not remove all the drivers, as Driver Sweeper will demonstrate when you use it. Not removing the extra drivers can and will cause WUs to fail with computation errors. Perhaps the nVidia cards aren't as sensitive, but a few of us found that the leftover drivers had undesirable effects. In a rich man's house there is no place to spit but his face. Diogenes Of Sinope |
SciManStev Send message Joined: 20 Jun 99 Posts: 6652 Credit: 121,090,076 RAC: 0 |
Perhaps the nVidia cards aren't as sensitive, but a few of us found that the leftover drivers had undesirable effects.

I have bounced all over the place with Nvidia drivers by just installing over the top of the old (or newer) one, keeping my settings intact, without any issues. That's not to say others won't have any, but Nvidia does not seem to be as sensitive as ATI. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
Mike Send message Joined: 17 Feb 01 Posts: 34255 Credit: 79,922,639 RAC: 80 |
I also usually upgrade over the top, and only revert back with a removal and Driver Sweeper. But it depends on the SDK before version 11.3. I had no trouble between 11.3 and 11.6, but I'm waiting for SDK 2.5 (8/2011). With each crime and every kindness we birth our future. |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
Or a clean install, do all tests with oldest set of drivers, update drivers run next set of tests, etc etc.

You replied to a message which is over one year old. Maybe at that time it worked well for him. ;-) If I up-/downgrade my nVIDIA drivers, I always use the Windows Software (Add/Remove Programs) applet to uninstall the present driver first. No additional software. [EDIT: BTW, this is an nVIDIA-related thread. ;-D] - Best regards! - Sutaru Tsureku, team seti.international founder. - Optimize your PC for higher RAC. - SETI@home needs your help. - |
Fred J. Verster Send message Joined: 21 Apr 04 Posts: 3252 Credit: 31,903,643 RAC: 0 |
On the 400-series (Fermi) GPUs I had to update the driver to 275.33; this also gave some speedup, and I am now running 4 MB WUs on the GTX480, driven by an X9650 (@3510MHz).

Still having issues with the ATIs (2x EAH5870 on an i7-2600 and an INTEL DP67BG board: P67 chipset, 6x SATA2 (IDE/AHCI/RAID), USB 2.0 and 3.0, 32GByte DDR3-1333MHz, 8GiB installed), which strangely report completely different loads: a constant 100% load on card 0 and ~0%-21% (in spikes) on the second card. It also produced errors, some due to missing a deadline, others that couldn't be checked. But the i7-2600 doesn't crash or hang after the new v0.38 (x38g; 64bit), using BOINC 6.10.60 64bit. Also doing 1 WU per GPU, with the cmd-line parameter period_iterations=5 and 1 instance per device. For AstroPulse I use the cmd-line parameters unroll=12, ffa_block=6144 and block_fetch=2048, changed from ffa_block=4096 and block_fetch=4096, but I haven't many AstroPulse WUs atm. Also doing 1 per GPU, for test/timing purposes.

OFF TOPIC (And still beta testing rev.177's successor, rev.229 and/or rev.246.) ON topic again ;)

Also 'lost' Dwarf, my 2nd production host. Still establishing the damage and what caused it; not really "old age" (heat is probably involved). That's why my main production host, WinXP64 (X9650@3.51GHz & GTX480), has had no case for quite some time now, about 6 months, and works fine. It even makes less noise; some cases seem to amplify the fan noise, which is quite normal, it's all iron (resonance). (Ever seen a metal loudspeaker box? Maybe >500Hz.)

I have to apologize about Dwarf, which will not be able to meet its deadlines if I can't get it to work this weekend.................?! After a heavy thunderstorm right above where I live, garden furniture was found 2 stories higher: of 6 chairs, 2 were on a roof 1 story up and 1 smashed through the kitchen window; the table was recovered (in a thousand pieces) a block away! The other 4 are still missing. Power also went out, with burn marks on 2 windows where Dwarf used to reside. Damage to the electrical installation was so big that some cables had completely "disappeared" - burned! A 140 kW diesel motor-generator had to be used to make repairs. We have so-called 'smart meters', which keep a log about use, how high at what time, etc. (Those discharges are quite strong, but static electricity can't be converted to 'useable electricity' at all.) On the 3rd floor all electrical equipment was damaged, because the zero (NUL) wire burned and caused a phase shift, so ~400 Volt instead of 240 reached a washing machine, a dryer, 2 PCs and a TV, and all the lights were gone too! Oh boy, it looked like a war zone; it took 3 days to get everything working again and to clean up - what a mess. But 2 QUADs and an i7-2600 kept working; they all had a Kill-A-Watt attached and power-surge protection. Almost a miracle, because the lightning hadn't hit the top of the building: a "spot" on the 1st floor, where 4 air-pumps have an outlet, got a direct hit, and the electrical circuit got a direct hit too! Scary, I have to admit.

Sorry for completely derailing this thread, won't do it again. I'm pleased with the new apps. |
Fred J. Verster Send message Joined: 21 Apr 04 Posts: 3252 Credit: 31,903,643 RAC: 0 |
And I noticed more power draw on the 2 rigs I installed x38g on. The XP64 rig was using 380 Watt doing 4 MB WUs on the OC'ed CPU and 4 on the GTX480, also running @900MHz (i.e. 850MHz); now it uses 425 Watt. The i7-2600 + 2x EAH5870 used to draw 290-335 Watt; with x38g it uses 320-465 Watt, also running 2 WUs per GPU instead of one. |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
FYI, I had time and ran my famous bench-test again.. ;-)

For new readers of this thread: carefully read the first message of this thread (in short: if you have the same hard-/software, the result could be the same; if you have other hardware (especially the new Fermi series GTX4xx/5xx - IIRC/AFAIK, the Lunatics_x..._win32_cuda32.exe app series is more and more Fermi-optimized) and/or another OS, the result could be different).

[green = fastest/shortest]

Two results from my last tests:

[nVIDIA driver 190.38]
setiathome_6.09_windows_intelx86__cuda23.exe
PG030429731429144.wu - 798.891 secs Elapsed - 66.156 secs CPU time
PG039172342915667.wu - 606.000 secs Elapsed - 56.594 secs CPU time
PG043184314852926.wu - 524.359 secs Elapsed - 51.984 secs CPU time
PG23752934904573.wu - 130.953 secs Elapsed - 26.781 secs CPU time

[nVIDIA driver 275.33]
Lunatics_x38g_win32_cuda32.exe
PG030429731429144.wu - 783.828 secs Elapsed - 78.906 secs CPU time
PG039172342915667.wu - 588.703 secs Elapsed - 68.266 secs CPU time
PG043184314852926.wu - 508.594 secs Elapsed - 61.484 secs CPU time
PG23752934904573.wu - 123.969 secs Elapsed - 29.953 secs CPU time

Here is my new test:

[nVIDIA driver 275.33]
Lunatics_x41g_win32_cuda32.exe
PG030429731429144.wu - 811.281 secs Elapsed - 83.141 secs CPU time
PG039172342915667.wu - 600.453 secs Elapsed - 69.828 secs CPU time
PG043184314852926.wu - 518.109 secs Elapsed - 61.969 secs CPU time
PG23752934904573.wu - 123.969 secs Elapsed - 28.844 secs CPU time

[nVIDIA driver 285.58]
Lunatics_x38g_win32_cuda32.exe
PG030429731429144.wu - 793.922 secs Elapsed - 94.594 secs CPU time
PG039172342915667.wu - 596.281 secs Elapsed - 81.734 secs CPU time
PG043184314852926.wu - 514.781 secs Elapsed - 71.875 secs CPU time
PG23752934904573.wu - 125.391 secs Elapsed - 33.781 secs CPU time

[nVIDIA driver 285.58]
Lunatics_x41g_win32_cuda32.exe
PG030429731429144.wu - 821.078 secs Elapsed - 101.953 secs CPU time
PG039172342915667.wu - 607.938 secs Elapsed - 83.594 secs CPU time
PG043184314852926.wu - 524.063 secs Elapsed - 73.609 secs CPU time
PG23752934904573.wu - 125.563 secs Elapsed - 30.719 secs CPU time

[EDIT: changed the title of this thread from 'Test, 6.09/6.10/V11/V12/V12b/x32f/x38g' to 'Test, 6.09 -> x41g, 190.38 -> 285.58'.]

- Best regards! - Sutaru Tsureku, team seti.international founder. - Optimize your PC for higher RAC. - SETI@home needs your help. - |
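[Editor's note] With five driver+app combinations now measured on the same WUs, ranking them is a one-liner. A sketch (Python) using the elapsed seconds for the long PG030429731429144.wu task quoted above; the `results` mapping is assembled by hand from the posts, not machine-generated.

```python
# Elapsed seconds for PG030429731429144.wu per (driver, app) combo,
# copied from the benchmark runs quoted in this thread.
results = {
    ("190.38", "6.09_cuda23"): 798.891,
    ("275.33", "x38g_cuda32"): 783.828,
    ("275.33", "x41g_cuda32"): 811.281,
    ("285.58", "x38g_cuda32"): 793.922,
    ("285.58", "x41g_cuda32"): 821.078,
}

# Rank the combos from fastest to slowest for this task.
for (driver, app), secs in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{driver} + {app}: {secs:.3f} s")
```

On this GTX260, 275.33 + x38g stays fastest for the long task; both the 285.58 driver and the x41g build cost a percent or two on this card (Fermi cards may well rank differently).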
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
FYI,

So what if the latest Lunatics app is a gnat's cock slower (on your hardware), if the validation rate is improved: fewer -12's, fewer inconclusives, less wasted time, less time waiting for an extra wingman to validate an inconclusive. The science is returned sooner and credit is granted sooner. Claggy |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Good info. Looks like new driver = no good. I would agree with Claggy about the newer app. The older one might be a few % faster; however, the newer app will probably have more work done correctly, and that probably = higher RAC. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
Vielen dank, Sutaru! (Have you tried the 270.61 WHQL? I think it may be - only a few seconds - faster on my mobile GT218 chip than the 275.33) |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
So what if the latest Lunatics app is a gnat's cock slower (on your hardware), if the validation rate is improved: fewer -12's, fewer inconclusives, less wasted time, less time waiting for an extra wingman to validate an inconclusive.

I made only a speed comparison, nothing more - nothing less. Everybody can decide which app he would like to use.. I make these bench-tests for all members, and also (or especially) for the Lunatics crew, so they know how it works on my system (hardware, OS, driver and app).. - Best regards! - Sutaru Tsureku, team seti.international founder. - Optimize your PC for higher RAC. - SETI@home needs your help. - |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
Vielen Dank (many thanks), Sutaru!

You are welcome. I tested only 266.58 and 275.33. BTW, I found two old tests.. which were not as good as 275.33:

[nVIDIA driver 280.19 BETA]
Lunatics_x38g_win32_cuda32.exe
PG030429731429144.wu - 784.781 secs Elapsed - 80.234 secs CPU time
PG039172342915667.wu - 589.266 secs Elapsed - 69.969 secs CPU time
PG043184314852926.wu - 509.156 secs Elapsed - 62.922 secs CPU time
PG23752934904573.wu - 124.203 secs Elapsed - 29.047 secs CPU time

[nVIDIA driver 280.26]
Lunatics_x38g_win32_cuda32.exe
PG030429731429144.wu - 784.281 secs Elapsed - 81.938 secs CPU time
PG039172342915667.wu - 589.313 secs Elapsed - 70.156 secs CPU time
PG043184314852926.wu - 509.109 secs Elapsed - 62.031 secs CPU time
PG23752934904573.wu - 124.203 secs Elapsed - 30.000 secs CPU time

- Best regards! - Sutaru Tsureku, team seti.international founder. - Optimize your PC for higher RAC. - SETI@home needs your help. - |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
... I have no particular issues myself with this specific test, since it illustrates quite well the architectural changes both drivers and applications are undergoing. There are two main, perhaps not so obvious, issues to consider that are (& aren't) reflected in the numbers seen here.

We have two main issues at play:
- Later drivers, refining toward meeting Microsoft's WDDM specification, do by design function slower on these older cards. These cards simply do not have the same memory-related features, so an emulation layer is needed. This is in part frustrating for older card owners, even on XP, as driver changes are necessary for compatibility there as well. Microsoft's expanded design of the old XP model is a costly, forward-looking one devoted to reliability & security, both of which have been major problems under XP's now quite dated driver model (in its pure 190.xx driver form at least). Basically, IMO, those improvements are worth it, but they do have an associated performance cost.
- The second main question is whether comparing old applications is reasonable, considering that there are known stability issues, precision/correctness issues, compatibility issues, and outright bugs that can compromise uptime & validation rates with any application (old stock or new opt). These issues are reduced as the builds progress, as typically are driver bugs and performance issues... So the choice comes down to raw throughput vs. validation rate & subsequent scientific value.

From the standpoint of global reliability & validation concerns, IMO the stock builds should have been replaced a long time ago. While the x branch still suffers some obscure key limitations inherited from the 6.09 codebase, it (x41o) is submitted for Beta V7 consideration on the principle of being a solid, far more refined basis for the future, though several known long-standing issues inherited from 6.09 remain.

So as it stands, IMO, if any choices need to be made, it should be to stick with outdated stock or opt applications only if there is some pressing need, such as the hardware limitations experienced on older cards - like multiple GTX295's in one rig not 'liking' the newer driver model. Despite still being a distributed stock application, 6.09 should certainly no longer be considered a validation reference, or an example of efficiency (which is what controls general RAC trends, after all), but should instead be put in some sort of museum. Jason "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
LadyL Send message Joined: 14 Sep 11 Posts: 1679 Credit: 5,230,097 RAC: 0 |
I've tried this KWSN Knabench V1.81r yesterday ....

1.81?! Try 2.08. Joe added quite a few configuration options. There is a readme somewhere in the package with the basics. Once all the files are in place, it's dead simple to use; you just need to shuffle the testing files around as needed. If you want to try again and need more help, open a new thread and ask. I don't feel like writing readmes if there'll only be a handful of people reading them, and anyway I always get complaints that I write too terse and technical. I am quite prepared to guide you (or anybody else) through the setup process, though. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.