Lunatics Windows Installer v0.44 - new release for v8 (required upgrade)
John Neale Joined: 16 Mar 00 Posts: 634 Credit: 7,246,513 RAC: 9
I have been running the Lunatics Intel GPU app here and the stock Intel GPU app on Beta, on an HD 4600, using the most recent driver (version 10.18.14.4264, dated 4 August 2015), without encountering any problems at all. I certainly won't upgrade at the moment. Interestingly, if I do try and update the driver, Windows (I'm using v8.1) informs me that it is up to date, which is why I referred to it as "most recent". A more accurate description would have been "more recent".

While I have your attention, may I ask you two questions? You have some hardware that is similar to mine, so you may have some experience that I could benefit from.

1. I'm running one task at a time on my HD 4600 GPU, and the GPU utilisation typically runs at between 93 and 94%. Do you think there'd be any mileage to be gained in running two tasks (MultiBeam and/or AstroPulse) simultaneously on this card?

2. I have a four-core Intel Core i5-4210M CPU with a clock speed of 2.60 GHz. My machine is an HP ProBook laptop, so I'm throttling the CPU temperature to a maximum of 75 °C with the TThrottle utility. I am not reserving a core for the GPU at any time (whether running MultiBeam or AstroPulse). I haven't done any testing, but I do observe that CPU utilisation by the GPU app is fairly low for both types of task. Should I consider reserving a CPU core, especially when running AstroPulse tasks?
Dirk Sadowski Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5
I myself wrote:
> So an SSE4.2 app isn't available like last time for SAHv7?

Grant (SSSF) wrote:
> It was mentioned in the opening post of this thread & the installer readme file.

Hm, maybe the stock SAHv8 CPU app could run faster than the SSE3 opti CPU app (on a CPU which supports SSE4.2)?
Richard Haselgrove Joined: 4 Jul 99 Posts: 14674 Credit: 200,643,578 RAC: 874
> I certainly won't upgrade at the moment. Interestingly, if I do try and update the driver, Windows (I'm using v8.1) informs me that it is up to date, which is why I referred to it as "most recent". A more accurate description would have been "more recent".

I hope you're not using Windows Update to pick a driver for you? For scientific computing (i.e. crunching), it's always wiser to get drivers direct from the manufacturer's website - there have been occasions when the Microsoft drivers were fit for gaming only, and left out the scientific bits.

> While I have your attention, may I ask you two questions? You have some hardware that is similar to mine, so you may have some experience that I could benefit from.

To be honest, I don't like to get involved in the chase for the last few percentage points of performance - I'm more concerned about ensuring that the results are valid. I find a driver that works reliably, and leave it alone. And because there's always a mad scramble for AP tasks when they're around, I concentrate on the MB tasks that get left behind in the rush. But I do note what people say, although there tends to be less discussion of the Intel GPUs.

One thing to note is that CPU utilisation is a complex subject which can't be expressed in a single percentage. I'm typing on a machine which is running the Einstein application on an HD 4600, and is using all four CPU cores for other projects. The Einstein app uses practically no CPU - Task Manager fluctuates between 00 and 01 - but it only runs at normal speed if the process priority is held massively high, at Real Time. That's not recommended for any other application.

The general conversation tends to be conducted in terms of the technology used - OpenCL apps are more likely to need a free core, CUDA apps less likely. But it depends on how exactly each application has been programmed internally. Some of the early Einstein CUDA apps used a lot of CPU time as well, but as the programming was refined, more work was transferred to the GPUs.

In short, I don't think there's a hard-and-fast answer. Whatever works for you, in terms of satisfaction from your participation. I like to participate in multiple projects, and see how BOINC copes with scheduling them.
Richard Haselgrove Joined: 4 Jul 99 Posts: 14674 Credit: 200,643,578 RAC: 874
> Hm, maybe the stock SAHv8 CPU app could run faster than the SSE3 opti CPU app (on a CPU which supports SSE4.2)?

Unlikely, but you can try it and find out.
Jord Joined: 9 Jun 99 Posts: 15184 Credit: 4,362,181 RAC: 3
Run times have finally settled, and they now show that r3330 is slower than r3299 was on my HD 7870 with Crimson 15.11. All tasks had quicker times previously, after the earlier 100+ run on r3299. Tasks that used to have a run time of ~17m50s now show estimated times remaining of 19m17s, 19m24s and 19m32s. Tasks that used to have a run time of ~16m30s now show 18m29s and 18m30s. Minute differences maybe, and probably caused by stability-enhancing code, but slower nonetheless.
Raistmer Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121
https://setisvn.ssl.berkeley.edu/trac/log/branches?action=stop_on_copy&mode=stop_on_copy&rev=3330&stop_rev=3299&limit=100&verbose=on
https://setisvn.ssl.berkeley.edu/trac/changeset?reponame=&old=3330%40branches&new=3330%40branches

I don't see anything in the code that could account for such a slowdown. With the default checkpoint interval of 1 min and a task processing time of <18 min, the checkpoint code should be called only 18 times. 18 more "if"s are negligible. That is, as I said, most probably just code-placement differences introduced by the compiler.
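Raistmer's back-of-the-envelope argument can be checked in a couple of lines (an editor's sketch in Python; the 60-second figure is the default checkpoint interval mentioned in the post):

```python
# With a 1-minute checkpoint interval, a task that finishes in under
# 18 minutes reaches the checkpoint test at most ~18 times, so the
# extra "if" branches in r3330 are negligible next to the compute work.
CHECKPOINT_INTERVAL_S = 60        # default "checkpoint at most every" setting
MAX_TASK_RUNTIME_S = 18 * 60      # "<18min" upper bound from the post

checkpoint_calls = MAX_TASK_RUNTIME_S // CHECKPOINT_INTERVAL_S
print(checkpoint_calls)  # 18
```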
Tuna Ertemalp Joined: 15 Nov 99 Posts: 18 Credit: 74,971,084 RAC: 0
Maybe a dumb question, but this is the first time I've installed Lunatics on any of my machines. So... bear with me.

The machine has an EVGA Titan X Hybrid (and an old i7-970 CPU). I see the GPU task running: 0.04 CPU + 1 GPU, using Lunatics_x41zi_win32_cuda50.exe. Then I look at MSI Afterburner, and I see the GPU use is very low: Usage% is usually <10% with a few momentary peaks into the 30-60% range. Power% is at about 50%. Memory use is at just 700MB out of the available 12GB. This is at 41mins in, with 17mins estimated left. Is this normal? It feels like the GPU is extremely underused.

Also, during Setup, for CUDA, the installer defaulted to the third option, for the 2xx cards, not the first CUDA5 option for the Kepler/Maxwell cards. Doesn't the Setup detect the card(s) installed and suggest the right thing, which it seems to be doing for the CPUs?

And, reading stuff below: What is <count> for GPU and CPU? Am I supposed to set <count> to 0.3 to get three tasks run on one GPU? If so, in app_info or app_config? In general, is there a document somewhere explaining the format of app_info and app_config?

And, finally: Is there a main Lunatics announcement thread (as opposed to per-version threads) that I can subscribe to so that I can hear about future releases, instead of checking the whole Number Crunching forum continuously?

Tuna

EDIT: Per Grant's example, I tried throwing 3, 5, 10, 8, 9 tasks at the GPU, and at 9 tasks I get 75-90% usage (10 tasks made it stick at 100%), 65% power usage, 3230MB memory use. But this is just me blindly following Grant's template & the Afterburner graphs. Is there a downside to this? The GPU is at a nice 39C. I used this:

<app_config>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>0.11</gpu_usage>
            <cpu_usage>0.04</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
Grant (SSSF) Joined: 19 Aug 99 Posts: 13835 Credit: 208,696,464 RAC: 304
> Then I look at MSI Afterburner, and I see the GPU use to be very low: Usage% is usually <10% with a few peaks into 30-60% range, but just momentary peaks.

Not sure why GPU usage should be that low. The Titan X is a very powerful card, but I'd expect even a single Seti WU to result in higher utilisation.

> This is at 41mins in with 17mins estimated left. Is this normal?

IMHO no. Running only a single WU, depending on the type, on that video card should take 20min max - probably less than 7min for a shorty. Running 2 WUs at a time, my GTX 750Tis crunch WUs in 16-30min or so.

> Doesn't the Setup detect the card(s) installed and suggest the right thing

Nope. My understanding is that it's not as easy to implement as it should be, so the person doing the installation has to pick the most suitable version - hence the list of video card classes & suggested application versions in the readme file.

> And, reading stuff below: What is <count> for GPU and CPU? Am I supposed to set <count> to 0.3 to get three tasks run on one GPU? If so, in app_info or app_config?

Either app_info or app_config. app_config is best, as you can update the Lunatics installation as necessary and BOINC will continue to use the app_config settings. If you use app_info, every time you re-do the Lunatics installation you have to manually re-edit the app_info file. This is what I use; I only run MB, no AP work.

<app_config>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>0.50</gpu_usage>
            <cpu_usage>0.04</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>setiathome_v7</name>
        <gpu_versions>
            <gpu_usage>0.50</gpu_usage>
            <cpu_usage>0.04</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

Grant
Darwin NT
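To make the arithmetic behind <gpu_usage> explicit: BOINC treats it as the fraction of a GPU that each task reserves, so it runs roughly as many tasks as fit into 1.0. A minimal sketch of that relationship (an editor's illustration, not BOINC's actual code):

```python
import math

def tasks_per_gpu(gpu_usage: float) -> int:
    """Approximate number of tasks BOINC runs concurrently on one GPU
    when each task reserves `gpu_usage` of the device."""
    return math.floor(1.0 / gpu_usage)

print(tasks_per_gpu(0.50))  # 2 - Grant's setting above
print(tasks_per_gpu(0.11))  # 9 - Tuna's nine-at-a-time experiment
```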
Grant (SSSF) Joined: 19 Aug 99 Posts: 13835 Credit: 208,696,464 RAC: 304
> EDIT: Per Grant's example, I tried throwing 3, 5, 10, 8, 9 tasks to the GPU, and at 9 tasks, I get 75-90% usage (10 tasks made it stuck at 100%),

GPU usage isn't necessarily a good indicator of work being done. Often running at 85% utilisation means you will do more work per hour than when running at 95%. The best way to work out the optimum values is to compare run times of similar WUs, eg

1 WU in 15min = 4 per hour
2 WUs in 28min = 4.2 per hour
3 WUs in 40min = 4.5 per hour

You need to do it for shorties, then for VLARs. My GTX 750Tis, when running 3 WUs at a time, do more work per hour than when only running 1 or 2 WUs. However, when running shorties, the amount of work done per hour drops off massively. So for me, running 2 WUs at a time gives the best results. GPU utilisation is less than running 3 at a time, but I end up doing more work.

The fact that you have such a long run time when running only 1 WU indicates some sort of issue. I'd sort that out before attempting to find out how many at a time is the optimum number for your hardware.

Grant
Darwin NT
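Grant's comparison boils down to throughput = concurrent WUs x 60 / minutes per WU. A quick sketch of that calculation (editor's illustration):

```python
def wus_per_hour(concurrent: int, minutes_per_wu: float) -> float:
    """Work units completed per hour when `concurrent` WUs run side
    by side and each takes `minutes_per_wu` wall-clock minutes."""
    return concurrent * 60.0 / minutes_per_wu

print(wus_per_hour(1, 15))            # 4.0
print(round(wus_per_hour(2, 28), 2))  # 4.29 (Grant rounds to 4.2)
print(wus_per_hour(3, 40))            # 4.5
```

The same function makes it easy to see why shorties can invert the result: if going from 2-up to 3-up stretches a 7-minute shorty to 12 minutes each, throughput drops from ~17 to 15 per hour.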
Mike Joined: 17 Feb 01 Posts: 34348 Credit: 79,922,639 RAC: 80
Keep in mind that GBT tasks are different. Processing will take longer. I reduced from 3 instances to 2 instances because of this.

With each crime and every kindness we birth our future.
Tuna Ertemalp Joined: 15 Nov 99 Posts: 18 Credit: 74,971,084 RAC: 0
My results are coming in (http://setiathome.berkeley.edu/results.php?hostid=5551867); look at the validation pending or valid ones. In the past, with stock apps, it was about 600-750 secs runtime per task, i.e. within your expectation (that is about 5-6 WUs per hour). Now, with me pushing 9 tasks at a time with the Lunatics app, it went up to 2100-2600 secs per task (that is about 12-15 WUs per hour). So, overall a gain, but I wonder what it would have been if I were to push multiples with the stock app, or no multiples with the Lunatics app. I guess I'll have to do some tests in the morning after seeing what this does overnight.

As comparison:
- This is a non-Lunatics machine with Dual Titan X: http://setiathome.berkeley.edu/results.php?hostid=7916726
- This is a non-Lunatics machine with Quad Titan X: http://setiathome.berkeley.edu/results.php?hostid=7856578

Tuna

EDIT: Actually, the times jumped up to 4000+ secs per WU, reducing the per-hour rate to 7ish. Not good. The 2000+ ones must have started when I had less parallelism, thus were faster at their start. I'll reduce to 0.5 or 0.3 until morning.
Tuna Ertemalp Joined: 15 Nov 99 Posts: 18 Credit: 74,971,084 RAC: 0
> Keep in mind that GBT tasks are different.

GBT?
Tuna Ertemalp Joined: 15 Nov 99 Posts: 18 Credit: 74,971,084 RAC: 0
> Keep in mind that GBT tasks are different.

Nice. Can/will one be able to tell if a task is from GBT, like one can tell VLARs, since the task name has .vlar_n at the end?
Richard Haselgrove Joined: 4 Jul 99 Posts: 14674 Credit: 200,643,578 RAC: 874
> Keep in mind that GBT tasks are different.

I'm sure the tasks will be recognisable, but the server code to handle the conversion from raw telescope data to manageable tasks is still being worked on, so I wouldn't like to give a prescriptive answer at this stage. We've seen several adjustments since the first trial runs at Beta.
Jeff Buck Joined: 11 Feb 00 Posts: 1441 Credit: 148,764,870 RAC: 0
I just discovered that the Astropulse device-specific configuration files (for those with multiple, disparate GPUs) that Raistmer added back in the fall of 2014 apparently still haven't found their way into the app_info.xml file generated by the Lunatics installer. For Nvidia GPUs, that would be 'AstroPulse_NV_config.xml'. I didn't notice it until I started checking results for the first major batch of APs that my xw9400 just started returning following the v0.44 upgrade. Some of the timings were quite a bit out of line with my expectations.

Richard, is that a deliberate omission or just an oversight? I didn't see any sort of advisory in the release notes. I know there probably aren't many of us using that device-specific configuration, but a "heads-up" would be helpful if those files are going to be omitted from the installer.

For Astropulse, there are quite a few app_version sections (8, by my count) where I had to pull <file_ref> blocks from the "oldApp_backup" app_info file. Also, one <file_info> entry. Keeping my fingers crossed that I didn't mess up on my cut-n-pasting. ;^)
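For anyone attempting the same hand-merge, the blocks Jeff describes follow BOINC's standard app_info.xml conventions, roughly as below. This is an illustrative sketch only: the file name comes from his post, but the exact contents and placement depend on the installer-generated file.

```xml
<!-- one <file_info> entry at the top level of app_info.xml -->
<file_info>
    <name>AstroPulse_NV_config.xml</name>
</file_info>

<!-- plus a <file_ref> inside each Nvidia AstroPulse <app_version> -->
<file_ref>
    <file_name>AstroPulse_NV_config.xml</file_name>
</file_ref>
```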
Mike Joined: 17 Feb 01 Posts: 34348 Credit: 79,922,639 RAC: 80
I think a feature used by only a fraction of users is not worth implementing. Also, the app_info file is already very large.

With each crime and every kindness we birth our future.
Kevin Olley Joined: 3 Aug 99 Posts: 906 Credit: 261,085,289 RAC: 572
Over 1700 v8 tasks completed on CPU and GPU, not one invalid or error. Thanks to all those who made it possible.

Kevin
Tuna Ertemalp Joined: 15 Nov 99 Posts: 18 Credit: 74,971,084 RAC: 0
> Is there a main Lunatics announcement thread (as opposed to per-version threads) that I can subscribe to so that I can hear about future releases, instead of checking the whole Number Crunching forum continuously?

?
Juha Joined: 7 Mar 04 Posts: 388 Credit: 1,857,738 RAC: 0
There is a Subscribe button near the top of the page. Click that in the Optimised apps thread and you should get email notifications every time there's a new post.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.