Setting up Linux to crunch CUDA90 and above for Windows users
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Need those startup lines of the Event Log. I see you have OpenCL detected now. Seti@Home classic workunits: 20,676 CPU time: 74,226 hours A proud member of the OFA (Old Farts Association)
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Shut BOINC down, then restart it and you'll get those 30 required startup lines, but so far your rig with the GT710 is showing no signs of the OpenCL component being installed. No, that is the only place to get the AIO installer. We need to see the first 30 lines of the BOINC startup to see what BOINC is complaining about.
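If pulling those lines out of the Manager window is awkward, they can also be read straight from the client's own log file. A minimal sketch, assuming the Debian/Ubuntu boinc-client package layout (the /var/lib/boinc-client path is an assumption; adjust it for your install):

```python
import os
from itertools import islice

# Assumption: the Debian/Ubuntu boinc-client package writes the client
# log here; other installs keep stdoutdae.txt in their BOINC data dir.
LOG_PATH = "/var/lib/boinc-client/stdoutdae.txt"

def head(path, n=30):
    """Return the first n lines of a text file."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return list(islice(f, n))

if os.path.exists(LOG_PATH):
    print("".join(head(LOG_PATH)))
```

The log is appended across restarts, so after a fresh restart the newest startup block is at the end; searching for the last "Starting BOINC client" line first may be needed on long-running installs.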
ThePHX264 Send message Joined: 29 May 19 Posts: 86 Credit: 6,688,090 RAC: 32 |
Thu 04 Jul 2019 04:32:38 PM CDT | | Starting BOINC client version 7.14.2 for x86_64-pc-linux-gnu

I guess I grabbed the wrong MB version. And it can't locate libcudart.so.6.0 and libcufft.so.6.0
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I see the problem: I goofed in the app_info header and dropped the opening <app_info> tag. This is the corrected app_info. I also corrected the cpu app name for MB found in the AIO installer.

<app_info>
    <app>
        <name>setiathome_v8</name>
    </app>
    <file_info>
        <name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda60</name>
        <executable/>
    </file_info>
    <file_info>
        <name>libcudart.so.6.0</name>
    </file_info>
    <file_info>
        <name>libcufft.so.6.0</name>
    </file_info>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <platform>x86_64-pc-linux-gnu</platform>
        <version_num>802</version_num>
        <plan_class>cuda60</plan_class>
        <cmdline>-unroll 6 -nobs</cmdline>
        <coproc>
            <type>NVIDIA</type>
            <count>1</count>
        </coproc>
        <avg_ncpus>0.1</avg_ncpus>
        <max_ncpus>0.1</max_ncpus>
        <file_ref>
            <file_name>setiathome_x41p_zi3v_x86_64-pc-linux-gnu_cuda60</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>libcudart.so.6.0</file_name>
        </file_ref>
        <file_ref>
            <file_name>libcufft.so.6.0</file_name>
        </file_ref>
    </app_version>
    <app>
        <name>astropulse_v7</name>
    </app>
    <file_info>
        <name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</name>
        <executable/>
    </file_info>
    <file_info>
        <name>AstroPulse_Kernels_r2751.cl</name>
    </file_info>
    <file_info>
        <name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</name>
    </file_info>
    <app_version>
        <app_name>astropulse_v7</app_name>
        <platform>x86_64-pc-linux-gnu</platform>
        <version_num>708</version_num>
        <plan_class>opencl_nvidia_100</plan_class>
        <coproc>
            <type>NVIDIA</type>
            <count>1</count>
        </coproc>
        <avg_ncpus>0.1</avg_ncpus>
        <max_ncpus>0.1</max_ncpus>
        <file_ref>
            <file_name>astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>AstroPulse_Kernels_r2751.cl</file_name>
        </file_ref>
        <file_ref>
            <file_name>ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt</file_name>
            <open_name>ap_cmdline.txt</open_name>
        </file_ref>
    </app_version>
    <app>
        <name>setiathome_v8</name>
    </app>
    <file_info>
        <name>MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <platform>x86_64-pc-linux-gnu</platform>
        <version_num>800</version_num>
        <file_ref>
            <file_name>MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app>
        <name>astropulse_v7</name>
    </app>
    <file_info>
        <name>ap_7.05r2728_sse3_linux64</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>astropulse_v7</app_name>
        <version_num>704</version_num>
        <platform>x86_64-pc-linux-gnu</platform>
        <plan_class></plan_class>
        <file_ref>
            <file_name>ap_7.05r2728_sse3_linux64</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>

Please copy again and update the app_info.
ThePHX264 Send message Joined: 29 May 19 Posts: 86 Credit: 6,688,090 RAC: 32 |
Haha, I actually noticed that when I was copying and pasting before; I made sure to keep <app_info> at the beginning. The MB version that I grabbed says MBv8_8.22r3711_sse41_intel_x86_64-pc-linux-gnu vs. the MBv8_8.22r3711_sse41_x86_64-pc-linux-gnu that you have in your app_info. I assume I just need to replace it with the name that has "intel"? That appears to be the only difference. Also, where do I get libcudart and libcufft? They are not in the project folder like everything else.
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO. You need the CUDA60 gpu app from the CUDA60 package along with the cuda library files copied into the directory.
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Haha, I actually noticed that when I was copying and pasting before, I made sure to keep <app_info> at the beginning. Yes, that is correct. I forgot that I had switched my MB cpu app over to the AMD variety. Just match the name of the actual executable up to the filename referenced in the app_info. Make sure every file referenced in the app_info is actually in the directory. Make sure they all have the allow-execution flag set on them. Your previous Event Log post shows most of the apps are missing now.
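Checking that by eye is easy to get wrong; the app_info can also be walked mechanically. A minimal sketch (the helper names and report format here are my own, not part of any BOINC tooling) that lists every file an app_info.xml references and flags any that are missing or lack the execute bit:

```python
import os
import xml.etree.ElementTree as ET

def referenced_files(app_info_path):
    """Every <name> declared in a <file_info> block of app_info.xml."""
    root = ET.parse(app_info_path).getroot()
    return [fi.findtext("name") for fi in root.iter("file_info")]

def check_project_dir(project_dir):
    """Flag referenced files that are absent, or executables missing +x."""
    app_info = os.path.join(project_dir, "app_info.xml")
    problems = []
    for fi in ET.parse(app_info).getroot().iter("file_info"):
        name = fi.findtext("name")
        path = os.path.join(project_dir, name)
        if not os.path.exists(path):
            problems.append(f"MISSING: {name}")
        elif fi.find("executable") is not None and not os.access(path, os.X_OK):
            problems.append(f"NOT EXECUTABLE: {name}")
    return problems
```

Run check_project_dir() against the project directory (the one containing app_info.xml) after copying the files in and before restarting BOINC; an empty list means every referenced file is present and executable where required.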
ThePHX264 Send message Joined: 29 May 19 Posts: 86 Credit: 6,688,090 RAC: 32 |
Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO. Haha, that is a smart app! I noticed that earlier... "where did that go? Could have sworn I put that file in here..." I believe the only things I am missing now are libcudart and libcufft; they aren't in the project folder. Do they need to be?
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO. Yes, they need to be in the project folder along with the executables. Go back to where you unpacked the CUDA60 archive and grab them from there.
ThePHX264 Send message Joined: 29 May 19 Posts: 86 Credit: 6,688,090 RAC: 32 |
Ok, you need to stop BOINC and put the missing files back in the project directory again. Because of my goof in the app_info, BOINC throws all the apps out because it did not see a proper app_info. So you need to copy every file referenced in the app_info and put them back in the project directory. So you need the cpu apps for MB and AP from the AIO installer and the AP gpu app from the AIO. Hahaha, wow, I absolutely overlooked those two files. Let's try this once more... give me a minute. Need to swig this beer and hit save.
ThePHX264 Send message Joined: 29 May 19 Posts: 86 Credit: 6,688,090 RAC: 32 |
Thu 04 Jul 2019 04:55:09 PM CDT | | Starting BOINC client version 7.14.2 for x86_64-pc-linux-gnu |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
OK, that is finally looking correct. I am so sorry for missing something so simple in the copy and paste. Everything is normal and found in your Event Log startup. Now we just need to get some gpu work to prove out the CUDA60 app. Hopefully it won't be long before the scheduler decides to send you some. I see the gpu request for seconds of work already. It could just be the normal scheduler filling the cpu cache first before filling the gpu cache.
ThePHX264 Send message Joined: 29 May 19 Posts: 86 Credit: 6,688,090 RAC: 32 |
OK, that is finally looking correct. I am so sorry for missing something so simple in the copy and paste. Everything is normal and found in your Event Log startup. Now just need to get some gpu work to prove out the CUDA60 app. Thank you SO much for your help! I am going to make another computer an Ubuntu machine, though I will be using the AMD gpu that I replaced in this computer. Going to make https://setiathome.berkeley.edu/show_host_detail.php?hostid=8730309 "CHRIS" a Linux machine. I would NOT use the special app, correct? I still have the info you pasted from the other site regarding the gpu drivers; going to give it a shot. An actual gpu should help the little weak machine...
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I would NOT use the special app, correct? Yes, unfortunately all the "special" apps require relatively recent Nvidia hardware. The AMD gpu will have to suffice on the stock SoG gpu application.
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I'll check back in with you tomorrow. The 24 hour penalty box should be expired by then and you should start receiving CUDA60 gpu tasks. I am really curious how the old zi3v code works on the early Maxwell gpus. I can't see how the performance could be any worse than the SoG application. If you are using the cmdline I stuck into the app_info, you have an unroll value of 6. The docs say that is the likely maximum. You might have to play with that, and might find that reducing it to 4 or 2 processes faster. I also stuck in the -nobs parameter, which speeds up gpu tasks by using a full cpu core to support the gpu thread. You have 8 cpu cores in the FX. I would limit the max cpu usage to 80% or put in a max_concurrent statement in an app_config for the project. Or set up the app_config to reserve a cpu core for each gpu task:

<app_config>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>cuda60</plan_class>
        <avg_ncpus>1</avg_ncpus>
        <ngpus>1</ngpus>
        <cmdline></cmdline>
    </app_version>
</app_config>

Your GT 710 has 192 CUDA cores, and I have a Nvidia Jetson Nano SBC running a Maxwell gpu that only has 128 CUDA cores. It also is running the old zi3v code branch that CyborgSam and I have wrangled into working on the ARM64 platform. This is it here: https://setiathome.berkeley.edu/results.php?hostid=8707387&offset=0&show_names=0&state=4&appid= So I would think your GT 710 to be at least as fast, considering it has a much more powerful cpu shoveling data into and out of the gpu and is probably clocked much higher than the SoC gpu in my Nano.
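The reservation arithmetic behind that advice can be sketched as plain math (an illustration only, not BOINC's actual scheduler logic; the function name is made up):

```python
def cpu_task_slots(total_cores, gpu_tasks, cpus_per_gpu_task=1.0, usage_limit=1.0):
    """CPU tasks left after each GPU task reserves its support core.

    With the FX's 8 cores and one -nobs GPU task at <avg_ncpus>1</avg_ncpus>,
    7 cores remain for CPU work; capping CPU usage at 80% instead caps
    usable cores at int(8 * 0.8) = 6, leaving 5 after the GPU reservation.
    """
    usable = int(total_cores * usage_limit)
    return max(0, usable - int(gpu_tasks * cpus_per_gpu_task))

print(cpu_task_slots(8, 1))  # 7
```

Either knob (the 80% limit or the avg_ncpus reservation) keeps a core free for the GPU thread; using both at once over-reserves, which is why the post presents them as alternatives.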
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
I'm not seeing any gpu tasks yet. Just cpu tasks. You have two gpu tasks in your cache but haven't crunched them yet. Are you deliberately holding off on those for some reason? . . It ran quite well on my GT730, but that is CC=3.5. And a bit more powerful than the 710 :( Stephen
ThePHX264 Send message Joined: 29 May 19 Posts: 86 Credit: 6,688,090 RAC: 32 |
I'll check back in with you tomorrow. The 24 hour penalty box should be expired by then and you should start receiving CUDA60 gpu tasks. I am really curious how the old zi3v code works on the early Maxwell gpus. I can't see how the performance could be any worse than the SoG application. If you are using the cmdline I stuck into the app_info, you have an unroll value of 6. The docs say that is the likely maximum. You might have to play with that and might find reducing it to 4 or 2 processes faster. I also stuck in the -nobs parameter, which speeds up gpu tasks by using a full cpu core to support the gpu thread. You have 8 cpu cores in the FX. I would limit the max cpu usage to 80% or put in a max_concurrent statement in an app_config for the project. Or set up the app_config to reserve a cpu core for each gpu task. Just started the ribs for the 4th, but I will try to do the things that you mentioned. Haha, I was about to buy a 1060... but then I remembered the mobo only supports pcie 2.0. This was one of the "better" cards I could find at Micro Center: https://www.msi.com/Graphics-card/GT-710-2GD3H-LP.html Nice and cheap... and thankfully they still had some variety to choose from that worked with pcie 2.0! The only bad thing is this one is passively cooled. I made sure to upgrade the case fans. Hopefully that's all I need to do. Will find out when the temps rise due to GPU tasks.
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Right now I am running the beta SoG app on this host: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8702456. It has a GT 720 and a GT 730 (both CC = 3.5) and they don't seem very fast with the SoG app. As soon as I clear the cache I will run the CUDA60 zi3v app and see if there is any improvement. . . On my C2D machine I updated from a GT730 to the GTX1050ti and it is well worth the effort :) . . For the record, run times on the 730 with SoG r3557 were around 45 to 50 mins. With Cuda60 they were about 27 mins (with Cuda50 they were about 33 to 37 mins). . . Have fun ... Stephen :)
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Yes, the beta r3602 app doesn't seem much faster than the stock r3584 8.22 Linux SoG app. I have a hunch the zi3v app will be much faster even if you are forced to run with unroll 1 or 2 because of only 1GB of memory. . . Well, the GT730 only has 2 CUs, so unroll 2 is it ... I don't know about the 720. Stephen
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I'm not seeing any gpu tasks yet. Just cpu tasks. You have two gpu tasks in your cache but haven't crunched them yet. Are you deliberately holding off on those for some reason? Don't see it in your hosts. What kind of typical times did you get? He should be good to go on his GT 710 also, as it has the GK208b chip just like your GT 730. It is really just a GT 730 with some shaders disabled. From Wikipedia: "C.C. of 3.5 on GK110/GK208 GPUs"
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.