Posts by ausymark


21) Message boards : Number crunching : GTX680 + NVidia Beta 304.79 + BOINC 7.0.28 = "Error while computing" (Message 1260277)
Posted 641 days ago by Profile ausymark
Just on a side note, what have you found to be the optimal number of parallel SETI GPU tasks on the 680? (I'm guessing it's somewhere between 4 and 6 before adding more tasks stops gaining any extra throughput - i.e. the existing running tasks just run longer with no actual increase in overall throughput.)

Cheers

Mark
22) Message boards : Number crunching : Linux 32-bit CUDA Client? (Message 1250770)
Posted 661 days ago by Profile ausymark
You may want to put a request in to the Lunatics team, however my guess is that 32-bit CUDA on Linux has such a small user population that it may not be worth the effort (we could be talking a handful of people worldwide).

However it can't hurt to ask ;)

Cheers

Mark
23) Message boards : Number crunching : Question about feeding the GPU (Message 1250768)
Posted 661 days ago by Profile ausymark
My guesstimate was a worst-case scenario; it could be as high as 50K :)

Cheers

Mark
24) Message boards : Number crunching : Question about feeding the GPU (Message 1250406)
Posted 662 days ago by Profile ausymark
Hi Irok

My setup is similar to yours (i7 2600K overclocked to 4.5 GHz on air with an nVidia 580), on 64-bit Ubuntu 12.04 Linux.

The i7 is an interesting beast, as it has 8 virtual cores on 4 real ones. This allows i7 users to do something quite unique as far as CPU/GPU computing goes.

The core goal for CPU crunching is to use up to 100% of all cores to crunch (assuming the CPU has appropriate cooling), and ....

The core goal of GPU crunching is to crunch as many GPU work units as the GPU memory can handle, while still letting the user exploit the graphics ability of the machine without degrading the 'user experience'.

On the GPU front, for me that means running just 2 work units at a time. This is primarily because I play games on the system as well as using it for normal office/web tasks. (Running 3 work units caused runtime issues with games as graphics memory became contested, resulting in GPU SETI work unit errors.)

Now we come to the i7. The CPU must keep data flowing to the GPU as quickly as possible and in the quantities required. This is where the i7 works well ..... I have SETI configured to use 50% of my CPU - that is, 4 cores. But why just 4 when I have 8 virtual? Simply because increasing it past 4 results in longer work unit processing times with very little RAC advantage.

The operating system will schedule each CPU work unit on one of the 4 physical cores. Some will argue that these 4 cores may not be operating at 100% all the time. Well, great! Your operating system needs some headroom to do things besides SETI - so the system remains fluid and responsive to all tasks, including SETI.

Now this is also where the virtual cores come in. The operating system thinks there are 8 cores and will assign the 'GPU data feeding' tasks to one or more of the virtual cores not running a SETI CPU task. This keeps scheduling fluid on the 'quieter' SETI CPU cores and gives the GPU the workload communication it requires.

This ends up being the best of all 3 worlds:

1) The real CPU cores being worked very hard
2) The GPU running efficiently - with maximum communication to the CPU
3) The PC itself remaining fluid for user interaction (a core goal of 'running seti in the background')

If you look at my stats at the moment they don't look great at around just 6000 RAC, but that's with the PC running only 3 to 4 hours a day - if it ran 24/7 it would be around the 35K RAC mark.
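That 24/7 figure is a straight duty-cycle scaling of the observed RAC. A quick sketch of the arithmetic, using the numbers from this post (RAC also lags behind actual throughput, so a real machine would approach the projected figure only gradually):

```python
def projected_rac(current_rac: float, hours_per_day: float) -> float:
    """Scale an observed RAC up to a 24/7 duty cycle.

    RAC is roughly proportional to crunching time, so running 24 h a day
    instead of `hours_per_day` scales it by 24 / hours_per_day.
    """
    return current_rac * 24.0 / hours_per_day

# ~6000 RAC observed at 3-4 hours/day of uptime:
low = projected_rac(6000, 4)   # 36000.0
high = projected_rac(6000, 3)  # 48000.0
```

The 4 h/day end of that range lands near the 35K figure quoted above.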

Anyway, that's just my 2c worth. :)

Cheers

Mark
25) Message boards : Number crunching : Linux 32-bit CUDA Client? (Message 1250371)
Posted 662 days ago by Profile ausymark
Hi Justin

The 32-bit CUDA client was pulled because it was buggy and generated work unit errors. As most crunchers were running 64-bit distributions, I believe fixing it was deemed a waste of resources, so it was pulled.

Do what I did .... upgrade to the 64-bit version of your distribution (I did it for both Ubuntu and Mandriva). I haven't looked back since, and my SETI scores have increased ten-fold from running the CUDA client. (Actually it's more than ten-fold lol)

Cheers

Mark
26) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1216812)
Posted 735 days ago by Profile ausymark
Hi Chris

Simply set the number of CPUs to use to 50%; that way it will run on 4 of the 8 cores.

As far as the GPU side of things, I am using the optimised CUDA program from Lunatics ( http://lunatics.kwsn.net/ )

In the configuration file ( app_info.xml ) for that, change the setting that says:

---------
<type>CUDA</type>
<count>1.0</count>
--------

to

-------
<type>CUDA</type>
<count>0.5</count>

---------

This will cause 2 SETI instances to run on your GPU.
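For context, the <count> element sits inside the <coproc> block of an <app_version> entry in app_info.xml. A minimal sketch of the surrounding structure - the file name and version number here are placeholders, not the actual Lunatics release:

```xml
<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <file_info>
    <name>setiathome_cuda</name> <!-- placeholder executable name -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>603</version_num> <!-- placeholder version -->
    <coproc>
      <type>CUDA</type>
      <count>0.5</count> <!-- 0.5 GPUs per task, i.e. 2 tasks per GPU -->
    </coproc>
    <file_ref>
      <file_name>setiathome_cuda</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```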

Hope this helps :)
27) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1202253)
Posted 773 days ago by Profile ausymark
Hi Team

Well, my computer's online time is going to be sporadic over the next several months, so for the moment I am abandoning the experiment. I will be running 4 SETI processes on the CPU and 2 on the nVidia 580 graphics card, and for now will just crunch on.

The server and data supply issues have also made getting any form of consistent baseline pretty much impossible - I'm not sure whether those issues will improve over time or not.

The bottom line, however, is that this rig is way faster than anything I have used before. I have already crunched 15 times more work than in the preceding 11 years of doing SETI - that's just crazily amazing :-)

Cheers

Mark.
28) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1183664)
Posted 829 days ago by Profile ausymark
Update

Hi team

Yes I am actually running the latest Firestorm Viewer.

The crux of the issue, after some discussion, is that the user-selectable texture memory limit in the viewer does not account for how the SL viewer uses all the VRAM - texture memory is only a small part of the viewer's VRAM usage.

So basically the SL viewer will use as much VRAM as it wants regardless of the texture memory setting. I have to resign myself to the fact that Second Life is going to hog the VRAM and contend with the CUDA client for resources.

Back to the experiment .... wondering what else will pop up :p

Cheers
29) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1183370)
Posted 830 days ago by Profile ausymark
Update

OK, going back to the older CUDA client seems to crash Second Life more, so I've switched back to the x41g client. I have raised a bug with the Second Life viewer team and will see what gets resolved from that end.

Cheers

Mark
30) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1182935)
Posted 832 days ago by Profile ausymark
Update

OK, it seems the previous CUDA client was also being 'starved' of VRAM when Second Life was running - however it seemed to handle it better. Previous experience indicates I would get around 10 computational errors per week in that configuration, whereas x41g lets 30+ computational errors slip through - which I am guessing is a VRAM issue.

So for the moment I am going to run with the old CUDA client, as overall I will get better throughput. I will wait for an update to the Second Life viewer to see if its VRAM configuration setting works like it once did.

Cheers

Mark
31) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1182889)
Posted 832 days ago by Profile ausymark
OK, I have done some more testing.

It seems that Second Life competing with the x41g CUDA clients is causing the VRAM usage expansion in a way the previous CUDA clients did not.

I may temporarily go back to the previous version to see if it does the same thing.

Cheers

Mark
32) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1182547)
Posted 834 days ago by Profile ausymark
Hi aaronhaviland

OK, update time: running 2 instances of x41g is now failing - running out of VRAM when Second Life is running (it generated a whole bag of computation errors). I've gone back to just a single CUDA instance and am seeing around 340 MB of free VRAM. So it's no wonder a second instance generates errors (300 MB + 100 MB of 'dynamic expansion').
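The arithmetic behind that failure: with roughly 340 MB free, a second instance needing about 300 MB plus up to 100 MB of dynamic expansion simply doesn't fit. A sketch of the headroom check, using the rough figures observed in this post:

```python
def fits_in_vram(free_mb: float, base_mb: float, expansion_mb: float) -> bool:
    """True if a task's base footprint plus its worst-case dynamic
    expansion fits within the currently free VRAM."""
    return base_mb + expansion_mb <= free_mb

# ~340 MB free after one x41g instance + Second Life are loaded:
fits_in_vram(340, 300, 100)  # False -> a second instance will error out
```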

I will monitor this configuration and see how it goes.

I'm wondering if a more RAM-optimised client is possible - or conversely, a client that fully utilises the processing ability of the card, even at a slight memory increase, would probably be preferable.

Anyway the experiment continues ;)

Cheers

Mark
33) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1182491)
Posted 834 days ago by Profile ausymark
Hi Dave

Sorry, I'm not leaving my PC running 24/7 due to the environmental impact and the extra cost of electricity. I also don't agree with your logic. All computer parts have a Mean Time Between Failures (MTBF) rating; running them when I don't need the computer means that MTBF 'deadline' comes up sooner. Yes, I am aware of the cool-down/heat-up issues with shutting the thing down, but I purchased good quality parts when I built the rig, so it should be able to handle it.

Also, as a side point - having it run 24/7 increases the probability of a power spike (even with a surge protector), and a spike is more deadly than any powering up/down of the system.

And yes, I fully realise that with the system running 24/7 it would be churning through somewhere between 20K and 35K RAC.

Anyway, that's my 2c worth :)

Cheers :)
34) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1182372)
Posted 835 days ago by Profile ausymark
Hi aaronhaviland

Found the issue with that. I had previously worked out that the old CUDA client plus the OS used 650 MB of VRAM, which left a good 800 MB for games etc. (primarily Second Life).

However, along with the change to the new CUDA client I also changed to the updated Second Life viewer - both of which have higher memory requirements.

I have also noticed that the new x41g code will use up to 200 MB of VRAM above the 600 MB I am allowing for - so the active amount in use, including the operating system, can potentially be as much as 950+ MB!

With Second Life, which in the past I had configured to use 500 MB of VRAM, things used to run fine - but now we are hitting VRAM limits. I have reduced it to just 400 MB of VRAM, and free VRAM now seems to float between 260 MB and as little as 40 MB.

Note: I am also assuming the nVidia 580 is underutilised running just 1 CUDA task, and that it can run 2 CUDA tasks in almost the same time as it runs 1. (This assumes nothing fundamental has changed between the older CUDA client and the newer x41g client.)
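That assumption can be framed as a throughput calculation: if running two tasks at once stretches each task's runtime by some factor f (f near 1 means the card really was underutilised; f near 2 means it was already saturated), overall throughput changes by 2/f. A sketch with illustrative, not measured, inflation factors:

```python
def throughput_gain(n_tasks: int, runtime_inflation: float) -> float:
    """Relative throughput of n concurrent GPU tasks versus one,
    where each task's runtime is stretched by `runtime_inflation`."""
    return n_tasks / runtime_inflation

throughput_gain(2, 1.1)  # ~1.82x: near-ideal, the card was underutilised
throughput_gain(2, 2.0)  # 1.0x: no gain, the card was already saturated
```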

I will monitor the situation to see how it goes.

Cheers

Mark

35) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1181766)
Posted 837 days ago by Profile ausymark
Further notes

Several things have occurred over the last 4 weeks.

A drop in Astropulse work units (replaced by normal CPU work units) and a scarcity of GPU work units have both dramatically affected throughput, dropping it from a high of 18K credits to just below 10K.

Also, my computer's running time will shortly change from 14 hours a day back to an average of around 6 hours a day. Naturally that will affect throughput again.

I have also, as of today, upgraded to the latest x41g GPU app for 64-bit Linux. Initial impressions are that processing times have improved by 10% to 30%.

So for now, until I have some stability in running times I am going to run 4 CPU tasks (one for each core) and 2 GPU tasks.

Why this configuration? In the past it has been suggested that you run one more CPU task than the number of cores available, so the cores are maxed out all the time. However, as I am also running the GPU SETI client, the CPU must still feed data to the GPU, and I don't wish to constrain that. So with hyperthreading enabled, any CPU slack should be used for GPU data feeding - that's the theory, anyway.
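The scheduling idea above can be sketched as a simple heuristic. The function and its logic are my own illustration of the reasoning, not a BOINC setting:

```python
def cpu_task_count(physical_cores: int, logical_cores: int,
                   gpu_tasks: int) -> int:
    """Pick how many CPU work units to run: fill the physical cores,
    but leave the hyperthreaded slack (logical - physical) free so the
    OS can schedule GPU feeder threads there. Without that slack, give
    up physical cores to the feeders instead."""
    slack = logical_cores - physical_cores
    if slack >= gpu_tasks:
        return physical_cores  # HT siblings can absorb the GPU feeding
    return max(1, physical_cores - gpu_tasks + slack)

cpu_task_count(4, 8, 2)  # 4 -> the "50% of 8 cores" setting used above
```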

So it will take a while to get back to some form of stable credit throughput - and if that isn't achieved I will probably stay with the current configuration, as it seems to work best from brief testing.

Hopefully I will have a new update within 4 weeks.

Until then - Happy New Year!
36) Message boards : Number crunching : Linux x64 Cuda Multibeam (x41g) (Message 1181762)
Posted 837 days ago by Profile ausymark
Hi Team

Now running x41g under Ubuntu 11.10 - it seems to be working fine, and is between 10% and 30% faster than before.

Will see how we go, but so far so good.

Great work team! :)

Cheers
37) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1174485)
Posted 869 days ago by Profile ausymark
Latest Update:

This has been very interesting to say the least.

Firstly, to Blake: the computer system is not running 24/7; its usual duty cycle is between 12 and 16 hours. However, I am generally consistent about start and finish times on any particular day.

I have watched the RAC climb to 18,500; however it has since fallen and is fluctuating around the 16K to 16.5K mark.

I will monitor progress over the next two weeks to see if there is any change - my guess is that it's work unit/RAC variation, and the average RAC may once again end up around the 17K mark.

Also the rebooting problem was RAM related, now resolved.

On a team note, I will hit the lead position within my team/group in the next 48 hours. I am now crunching the same number of RAC credits in a fortnight that once took me 10 years to achieve ..... amazing CPU/GPU improvements in that time!

Anyway, let the experiment roll on.

Cheers

Mark
38) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1161377)
Posted 917 days ago by Profile ausymark
Update:

With the PC now moved over to Ubuntu 11.10, optimised binaries in place, overclocked reliably at 46x on my Asus Z68V-Pro motherboard, and 4 days of solid work units fed to it, the system is once again approaching the 10,000 RAC mark. Hopefully within the next week it should flatten out somewhere around 13,000 RAC, running 3 SETI CPU processes and 2 SETI GPU processes.

Fingers crossed no other major hiccups occur.

Cheers

Mark
39) Message boards : Number crunching : Interesting behavior (Message 1159707)
Posted 922 days ago by Profile ausymark
Actually.....

Under Linux I have watched the SETI loads get swapped between the cores (3 tasks running on a 4-real-core i7). They seemed to swap every 30 seconds or so, especially jumping to the core that was coolest and least active. Quite amusing to watch, which is why I am currently doing the following experiment:


http://setiathome.berkeley.edu/forum_thread.php?id=65352#1159580

My gut instinct is that on such a processor going beyond 6 SETI CPU tasks would bring minimal to no extra gain - but that's why I am doing the experiment :)

All because I saw the SETI tasks being juggled, and wondered how much spare CPU is needed to feed the GPU (an nVidia 580 in this case).

Now to get work units flowing so I can really see what it can do lol

Cheers

Mark
40) Message boards : Number crunching : i7 Hyperthreading + GPU + Most Efficient Seti Workload (Message 1159580)
Posted 922 days ago by Profile ausymark
Update

Finally got the system crunching as before.

Thanks to sunu over at the lunatics site and those here in the seti@home group.

See the other thread about this issue/fix here:

http://setiathome.berkeley.edu/forum_thread.php?id=65695

So now I can get back to the experiment.

Just hoping that getting hold of work units won't be an issue like it has been over the last week and a bit.

Cheers

Mark



Copyright © 2014 University of California