Posts by ralphw

1) Message boards : Number crunching : The last day of S@H (Message 2044677)
Posted 14 Apr 2020 by Profile ralphw
Post:
I've been crunching S@H since May 1999, happy to be doing it right at the end, as well.

Thanks!
2) Message boards : Number crunching : Mint 19.1 vs NVIDIA drivers (396, 410, etc.) (Message 1975517)
Posted 17 Jan 2019 by Profile ralphw
Post:
Conclusion: The run file method is working for me in 19.1, as it did in Mint 18.
I have installed 4.10 and 4.15 NVidia drivers this way.

I have other, older Ubuntu distributions that were LTS long-term support.

I'm taking a break from crunching WUs for a few days, I'm going to rearrange my GPUs so they are more similar across machines, and try an 8-core 16-thread Ryzen chip at the same time.
3) Message boards : Number crunching : Mint 19.1 vs NVIDIA drivers (396, 410, etc.) (Message 1975099)
Posted 13 Jan 2019 by Profile ralphw
Post:
I found the workarounds and accomplished the goal of nuking nouveau from my grub config AND getting the module blacklisted.

I did fell like I was fighting the system the entire way - it's frustrating when there is a "driver manager" script that purportedly lets you switch, but that doesn't work.

I've been running SAH on Linux Mint for > 2 years, and it's gotten more frustrating with each new release.
The new system is only going to run Mint 19.1 (until 2023).

Now I just have the question of where to unpack the .7z file contents.

Do the new binaries and .cl files go into /usr/lib/boinc-app-seti?


    root@asimov /usr/lib/boinc-app-seti # ls -l
    -rwxr-xr-x 1 root root 3405264 Oct 14 2014 ap_7.05r2728_sse3_linux64
    -rw-r--r-- 1 root root 82 Aug 21 01:24 ap_cmdline_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100.txt
    -rw-r--r-- 1 root root 2814 Aug 27 16:57 app_info.xml
    -rwxr-xr-x 1 root root 2508192 Sep 18 2015 astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100
    -rwxr-xr-x 1 root root 120992 Sep 18 2015 AstroPulse_Kernels_r2751.cl
    drwxr-xr-x 2 root root 4096 Aug 27 17:07 docs
    drwxr-xr-x 2 root root 4096 Nov 23 2017 For AMD CPUs
    drwxr-xr-x 2 root root 4096 Jan 12 23:21 Linux_Pascal+0.97b2_Special
    -rw-r--r-- 1 root root 73308207 Jan 12 03:38 Linux_Pascal+0.97b2_Special.7z
    -rwxr-xr-x 1 root root 5678784 Nov 14 2017 MBv8_8.22r3711_sse41_x86_64-pc-linux-gnu
    -rwxr-xr-x 1 root root 1027656 Feb 2 2016 setiathome_v8
    -rwxr-xr-x 1 root root 132283048 Aug 27 18:46 setiathome_x41p_V0.97b2_Linux-Pascal+_cuda92



The app_info.xml file seens to have only one place to go:

/usr/share/boinc-app-seti/app_info.xml


    root@asimov /usr/lib/boinc-app-seti # find / -name 'app_info.xml' -exec ls -l {} \;
    -rw-r--r-- 1 root root 2814 Jan 12 23:32 /usr/share/boinc-app-seti/app_info.xml
    -rw-r--r-- 1 root root 2814 Aug 27 16:57 /usr/lib/boinc-app-seti/app_info.xml
    -rw-r--r-- 1 ralphw3 ralphw3 2814 Aug 27 16:57 /var/lib/boinc-client/ralphw/Linux_Pascal+0.97b2_Special/app_info.xml
    lrwxrwxrwx 1 boinc boinc 38 Jan 12 03:54 /var/lib/boinc-client/projects/setiweb.ssl.berkeley.edu/app_info.xml -> /usr/share/boinc-app-seti/app_info.xml
    lrwxrwxrwx 1 boinc boinc 38 Jan 12 03:54 /var/lib/boinc-client/projects/setiathome.ssl.berkeley.edu/app_info.xml -> /usr/share/boinc-app-seti/app_info.xml
    lrwxrwxrwx 1 boinc boinc 38 Jan 12 03:54 /var/lib/boinc-client/projects/setiathome.berkeley.edu/app_info.xml -> /usr/share/boinc-app-seti/app_info.xml

4) Message boards : Number crunching : Mint 19.1 vs NVIDIA drivers (396, 410, etc.) (Message 1974964)
Posted 12 Jan 2019 by Profile ralphw
Post:
I went for it and ignored the warning about ncurses:

Here are the steps take to get the nouveau blacklist made permanent:

    As root:
    # NVIDIA-Linux-x86_64-410.78.run
    # echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.conf
    # sudo update-initramfs -u



Here's what the boinc manager event log startup looks like now:
Sat 12 Jan 2019 04:11:56 PM EST | | CUDA: NVIDIA GPU 0: GeForce GTX 1070 (driver version 410.78, CUDA version 10.0, compute capability 6.1, 4096MB, 3984MB available, 6463 GFLOPS peak)
Sat 12 Jan 2019 04:11:56 PM EST | | OpenCL: NVIDIA GPU 0: GeForce GTX 1070 (driver version 410.78, device version OpenCL 1.2 CUDA, 8118MB, 3984MB available, 6463 GFLOPS peak)
Sat 12 Jan 2019 04:11:56 PM EST | SETI@home | Found app_info.xml; using anonymous platform
Sat 12 Jan 2019 04:11:56 PM EST | | [libc detection] gathered: 2.27, Ubuntu GLIBC 2.27-3ubuntu1
Sat 12 Jan 2019 04:11:56 PM EST | | Host name: trigun
Sat 12 Jan 2019 04:11:56 PM EST | | Processor: 12 GenuineIntel Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz [Family 6 Model 158 Stepping 10]
Sat 12 Jan 2019 04:11:56 PM EST | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
Sat 12 Jan 2019 04:11:56 PM EST | | OS: Linux LinuxMint: Linux Mint 19.1 Tessa [4.15.0-43-generic|libc 2.27 (Ubuntu GLIBC 2.27-3ubuntu1)]

So now it sees the NVidia card , and just needs a proper client to use the GPU.


    app_info.xml needs work

    Need the optimized CUDA 9 client (I've downloaded it, I need to build an installer for it)

    5) Message boards : Number crunching : Mint 19.1 vs NVIDIA drivers (396, 410, etc.) (Message 1974958)
    Posted 12 Jan 2019 by Profile ralphw
    Post:
    Started with Linux Mint 19.1 on a new computer:
    Installed boinc, set clients, and the boinc manager from the default repos. Every startup process ignores the fact that there is an NVidia 1070 GPU installed. That's because nouveau drivers are still in charge, I suspect, but even CPU-only work units abort in computation errors.

    I'm ready to downgrade to 18.something else at this point. Nothing including the graphics ppa seems to work as expected, either - I can never switch to any NVidia driver version.

    The noveau drivers keep getting in the way: Now I have to edit my grub config and get the noveau driver blacklisted, boot into single-user mode and run the NVidia .run file again.

    I've downloaded and used the .run file from NVidia, but it complains about the lack of an ncurses6 library. (ncurses 5 is current on this Linux box) Can I ignore this error?

    If I give up on Ubuntu & Linux mint, what other Linux distro are folks with GPUs running? I'm perfectly happy to switch.

    Comment from a recent contributor here: "I've recently found Ubuntu is making it very difficult to use the driver downloaded from nVidia with the newer systems". I completely agree with this statement, and would rather run "Any other version of Linux" that will let me use a proper NVidia driver with minimal hassle.
    6) Message boards : Number crunching : is there a page that shows when Top Participants was last updated? (Message 1922900)
    Posted 5 Mar 2018 by Profile ralphw
    Post:
    I can't find my own rank on the "Top Participants" page, which does happen from time to time, but is typically resolved by the next run.

    How do I found out when the ranking were last calculated?

    Is there a section of server statistics that has this info?
    7) Message boards : Number crunching : Best way to get more processing from GTX 1050ti alongside GTX 750ti/ GTX 950 (Message 1873749)
    Posted 18 Jun 2017 by Profile ralphw
    Post:
    I ended up moving an MSI GTX 750 Ti back into this system.

    I was expecting to put four GPUs on this motherboard, but I apparently need all of my 750 Ti systems to be
    the shorter 5-6" long cards.

    Only the first slot of this Gigabyte motherboard really accommodates a full-length card such as MSI's dual-fan GTX 750Ti.
    The fan shroud and card length really don't fit well with the other heat sinks and other connectors

    I will have to limit myself to the smaller form factor GPUs to mechanically use all of my remaining motherboard slots

    I'll see how well the WU averages keep up.
    8) Message boards : Number crunching : Best way to get more processing from GTX 1050ti alongside GTX 750ti/ GTX 950 (Message 1872556)
    Posted 12 Jun 2017 by Profile ralphw
    Post:
    Thanks.

    That is my primary configuration (GTX 1050 Ti alongside two GTX 750 Ti systems).

    The third 750 Ti (from MSI) is currently in an inactive system.
    9) Message boards : Number crunching : Best way to get more processing from GTX 1050ti alongside GTX 750ti/ GTX 950 (Message 1872515)
    Posted 12 Jun 2017 by Profile ralphw
    Post:
    Hello,

    I have thee models of NVIDIA GPU,

    • NVIDIA 1050 (1) Ti (1)
    • NVIDIA GTX 950 Ti (2)
    • NVIDIA GTX 750 Ti (3)


    Spread across two systems

    What's the best strategy for having the fast cards (950,1050) process more data?
    There are some clients that have loop unrolling options, but is running multiple workunits on a GPU - by setting up an app_info.xml file - really taking advantage of the extra CUDA cores?

    10) Questions and Answers : GPU applications : If I were to run a GTX 1060 and a GTX 750 Ti on the same pc how could I max both? (Message 1872118)
    Posted 10 Jun 2017 by Profile ralphw
    Post:
    I have a similar "problem" - GTX 1050 Ti and two GTX 750 Ti systems.
    I assume there is an XML file I can tweak.
    11) Questions and Answers : GPU applications : Back with linux Mint 18.1 (Message 1871524)
    Posted 7 Jun 2017 by Profile ralphw
    Post:
    Found what was missing:

    The use_all_gpus setting in the cc_config.xml (/var/lib/boinc-client/cc_config.xml) was missing.

    Now I'm using all of my cude cores:
    <use_all_gpus>1</use_all_gpus>
    


    Wed 07 Jun 2017 02:56:35 AM EDT | | CUDA: NVIDIA GPU 0: GeForce GTX 1050 Ti (driver version 375.66, CUDA version 8.0, compute capability 6.1, 4039MB, 3994MB available, 2274 GFLOPS peak)
    Wed 07 Jun 2017 02:56:35 AM EDT | | CUDA: NVIDIA GPU 1: GeForce GTX 750 Ti (driver version 375.66, CUDA version 8.0, compute capability 5.0, 2001MB, 1973MB available, 1606 GFLOPS peak)
    Wed 07 Jun 2017 02:56:35 AM EDT | | OpenCL: NVIDIA GPU 0: GeForce GTX 1050 Ti (driver version 375.66, device version OpenCL 1.2 CUDA, 4039MB, 3994MB available, 2274 GFLOPS peak)
    Wed 07 Jun 2017 02:56:35 AM EDT | | OpenCL: NVIDIA GPU 1: GeForce GTX 750 Ti (driver version 375.66, device version OpenCL 1.2 CUDA, 2001MB, 1973MB available, 1606 GFLOPS peak)
    Wed 07 Jun 2017 02:56:35 AM EDT | | [coproc] NVIDIA library reports 2 GPUs
    [/code]
    12) Questions and Answers : GPU applications : Back with linux Mint 18.1 (Message 1871426)
    Posted 6 Jun 2017 by Profile ralphw
    Post:
    All three VGA devices seem to show appropriately.

    $ lspci | grep VGA
    00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
    01:00.0 VGA compatible controller: NVIDIA Corporation Device 1c82 (rev a1)
    02:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] (rev a2)
    
    13) Questions and Answers : GPU applications : Back with linux Mint 18.1 (Message 1871422)
    Posted 6 Jun 2017 by Profile ralphw
    Post:
    I'm working with Linux Mint 18.1, using the driver settings with Nvidia.

    I have a GTX 1050 Ti (Pascal) and GTX 750 Ti (Maxwell)

    The 375 drivers are supposed to recognize both, but don't seem to.


    Eventually, I'll want to put the third Nvidia card back in.
    nvidia-smi
    Tue Jun  6 00:37:27 2017       
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 105...  Off  | 0000:01:00.0     Off |                  N/A |
    | 35%   33C    P0    35W /  75W |      0MiB /  4038MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  GeForce GTX 750 Ti  Off  | 0000:02:00.0     Off |                  N/A |
    | 40%   26C    P0     1W /  38W |      0MiB /  2000MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    
    When SETI starts, this is in the event log:
    Tue 06 Jun 2017 12:30:42 AM EDT |  | CUDA: NVIDIA GPU 0: GeForce GTX 1050 Ti (driver version 375.66, CUDA version 8.0, compute capability 6.1, 4039MB, 3996MB available, 2274 GFLOPS peak)
    Tue 06 Jun 2017 12:30:42 AM EDT |  | CUDA: NVIDIA GPU 1 (not used): GeForce GTX 750 Ti (driver version 375.66, CUDA version 8.0, compute capability 5.0, 2001MB, 1973MB available, 1606 GFLOPS peak)
    Tue 06 Jun 2017 12:30:42 AM EDT |  | OpenCL: NVIDIA GPU 0: GeForce GTX 1050 Ti (driver version 375.66, device version OpenCL 1.2 CUDA, 4039MB, 3996MB available, 2274 GFLOPS peak)
    Tue 06 Jun 2017 12:30:42 AM EDT |  | OpenCL: NVIDIA GPU 1 (ignored by config): GeForce GTX 750 Ti (driver version 375.66, device version OpenCL 1.2 CUDA, 2001MB, 1973MB available, 1606 GFLOPS peak)
    
    


    Suggests for what to try next are welcome. Mint 18.1 seems a bit better behaved than 17.2 or 18.0,

    But I'm not sure what is missing.
    14) Questions and Answers : GPU applications : NVIDIA fan controls (Linux MINT 18, Driver Version 367.57, two NVidia GTX 950s) (Message 1831601)
    Posted 20 Nov 2016 by Profile ralphw
    Post:
    The problem seems to be that manual fan control is enabled by the "Coolbits" option in xorg.conf

    Problem 1 (solved): xorg.conf gets overwritten with new values upon reboot. Enabling nogpumanager in GRUB doesn't fix this in Linux MINT 18, so I had to use restore to using chattr +i on the /etc/X11/xorg.conf file.

    Problem 2 (working on): even though the line
    "Coolbits" "12" is now preserved for both device entries, only one NVidia device allows for manual control of fan speed. Working on a solution using this Folding@home page as a guide:

    https://foldingforum.org/viewtopic.php?p=267165
    It requires setting up multiple X Servers, one per GPU.

    I'd prefer to easier configuration options for multiple GPUs and multiple monitors, along with a more aggressive adaptive fan control system. I'm not overclocking, but SETI@home pushes GPUs hard.
    15) Questions and Answers : GPU applications : NVIDIA fan controls (Linux MINT 18, Driver Version 367.57, two NVidia GTX 950s) (Message 1831561)
    Posted 20 Nov 2016 by Profile ralphw
    Post:
    So I'm thinking that there is a driver compatibility issue with Linux Mint 18, that might be preventing things from working. This guide shows use of the PPA install method with Linux MINT 18.
    https://johners.tech/2016/07/installing-the-latest-nvidia-graphics-drivers-on-linux-mint-18/

    I was using the PPA for graphics drivers, but am going to try switching to the .run installation method (downloading the driver directly from NVidia).
    (This post talks about CUDA support, but recommends NOT using the PPA method - https://devtalk.nvidia.com/default/topic/955464/cuda-setup-and-installation/gtx-1080-cuda-8-0-on-linux-mint-18-problems-setting-up-/#reply)

    Not sure if this will work any better, but I need to try something to combat the frustration of (works with 14.04 / doesn't work with MINT 18) syndrome.

    Fan controls in NVIDIA seem important when I'm running GPU apps on SETI.
    90 degree GPU temps seem too high.
    16) Questions and Answers : GPU applications : NVIDIA fan controls (Linux MINT 18, Driver Version 367.57, two NVidia GTX 950s) (Message 1831536)
    Posted 20 Nov 2016 by Profile ralphw
    Post:
    Those look interesting, but the NVIDIA driver controls are supposed to be able to regulate this. Aside from googling (which I'm trying), how can I be sure the driver version is "proper" and supports the fan speed setting capability?
    17) Questions and Answers : GPU applications : NVIDIA fan controls (Linux MINT 18, Driver Version 367.57, two NVidia GTX 950s) (Message 1831446)
    Posted 19 Nov 2016 by Profile ralphw
    Post:
    I'm running SETI@home 8.0 applications on Linux Mint 18 (based on Ubuntu 16.04).
    GPU temps are extemely high, and I feel I need better speed controls for the GPU fans.

    >nvidia-smi
    >Sat Nov 19 12:00:51 2016
    >+-----------------------------------------------------------------------------+
    >| NVIDIA-SMI 367.57 Driver Version: 367.57 |
    >|-------------------------------+----------------------+----------------------+
    >| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
    >| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
    >|===============================+======================+======================|
    >| 0 GeForce GTX 950 Off | 0000:01:00.0 On | N/A |
    >| 0% 40C P0 23W / 125W | 214MiB / 1996MiB | 0% Default |
    >+-------------------------------+----------------------+----------------------+
    >| 1 GeForce GTX 950 Off | 0000:02:00.0 Off | N/A |
    >| 0% 25C P8 6W / 125W | 1MiB / 1996MiB | 0% Default |
    >+-------------------------------+----------------------+----------------------+

    ....


    Despite playing with the "coolbits" settings in the Xorg.conf file, no controls to set the fans are ever activated.
    It also seems like the settings should be under the "device" section, but the nvidia-settings CLI option puts them in the "screens" section.

    I have two NVidia GTX 950s in this system.

    When runnning SETI@home, GPU temps would peak briefly at 90 degrees C, which is about 40 degrees too hot.
    So I'm looking for a way to turn the fans on more or continuously when running GPU work units.
    18) Questions and Answers : Wish list : Request: Linux client use DMI info (base board manuf/product) to populate "Model" field for computer (Message 1758007)
    Posted 22 Jan 2016 by Profile ralphw
    Post:
    When you look at computer details, the "Product Name" field shows the same information as what's in the "Model" field in the summary list of computers:

    The detail page for this mac shows "MacBookPro11,3" in the Product Name field.

    On the summary page (listing all computers), the same field is visible under the column "Model". I don't know why the field name changes between the summary and detail page...

    So Boinc must support the transmission of this information somehow.
    19) Questions and Answers : Wish list : Request: Linux client use DMI info (base board manuf/product) to populate "Model" field for computer (Message 1757649)
    Posted 20 Jan 2016 by Profile ralphw
    Post:
    When I look at "My Computers", I see only one system that has populated the computer "Model field".

    I'd love for this to populate for an PC motherboard as well, what would I have to do to manually populate this?

    Can it be done at a workunit level?

    (Here is what "dmidecode -t baseboard" shows on the system I'm using right now)

    Handle 0x0002, DMI type 2, 15 bytes
    Base Board Information
    Manufacturer: Gigabyte Technology Co., Ltd.
    Product Name: Z97X-SOC Force
    20) Message boards : Number crunching : Intel / OpenCL binaries for Linux (2015) (Message 1756480)
    Posted 15 Jan 2016 by Profile ralphw
    Post:
    My experiment with Beignet doesn't appear to be successful with Astropulse 7.0.8.
    My machine has three GPUs (Nvidia GTX 750Ti), plus the Intel HD 4600 graphics.

    I think it's getting confused, thinking it sees two Nvidias, but trying to use the Intel. Would trying an older NVidia driver help?



    https://setiathome.berkeley.edu/result.php?resultid=4662135832
    ....

    <core_client_version>7.2.42</core_client_version>
    <![CDATA[
    <message>
    process exited with code 193 (0xc1, -63)
    </message>
    <stderr_txt>
    Running on device number: 0
    OpenCL platform detected: NVIDIA Corporation
    Number of OpenCL devices found : 2
    BOINC assigns slot on device #0.
    Info: BOINC provided OpenCL device ID used
    Used GPU device parameters are:
    Number of compute units: 5
    Single buffer allocation size: 256MB
    Total device global memory: 2047MB
    max WG size: 1024
    local mem type: Real
    FERMI path used: yes
    -unroll default value used: 5
    -ffa_block default value used: 1280
    -ffa_block_fetch default value used: 640
    AstroPulse v7.08
    Linux 64 bit, Rev 2751, OpenCL version by Raistmer, GPU mode
    V7, by Raistmer ported to Linux by Lunatics.kwsn.net team.
    by Urs Echternacht
    ffa threshold mods by Joe Segur
    SSE3 dechirping by JDWhale using SSE3 emulation

    Build features: Non-graphics OpenCL USE_OPENCL_NV OCL_ZERO_COPY OPENCL_WRITE COMBINED_DECHIRP_KERNEL SMALL_CHIRP_TABLE TWIN_FFA FFTW BLANKIT USE_INCREASED_PRECISION SSE2 64bit
    System: Linux x86_64 Kernel: 3.16.0-38-generic
    CPU : Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
    8 core(s), Speed : 4300.000 MHz
    L1 : 64 KB, Cache : 8192 KB

    Number of OpenCL platforms: 2


    OpenCL Platform Name: NVIDIA CUDA
    Number of devices: 2
    Max compute units: 5
    Max work group size: 1024
    Max clock frequency: 1254Mhz
    Max memory allocation: 536821760
    Cache type: Read/Write
    Cache line size: 128
    Cache size: 81920
    Global memory size: 2147287040
    Constant buffer size: 65536
    Max number of constant args: 9
    Local memory type: Scratchpad
    Local memory size: 49152
    Queue properties:
    Out-of-Order: Yes
    Name: GeForce GTX 750 Ti
    Vendor: NVIDIA Corporation
    Driver version: 352.63
    Version: OpenCL 1.2 CUDA
    Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64
    Max compute units: 5
    Max work group size: 1024
    Max clock frequency: 1254Mhz
    Max memory allocation: 536821760
    Cache type: Read/Write
    Cache line size: 128
    Cache size: 81920
    Global memory size: 2147287040
    Constant buffer size: 65536
    Max number of constant args: 9
    Local memory type: Scratchpad
    Local memory size: 49152
    Queue properties:
    Out-of-Order: Yes
    Name: GeForce GTX 750 Ti
    Vendor: NVIDIA Corporation
    Driver version: 352.63
    Version: OpenCL 1.2 CUDA
    Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64


    OpenCL Platform Name: Experiment Intel Gen OCL Driver
    DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
    Assuming 131072kB available aperture size.
    May lead to reduced performance or incorrect rendering.
    get chip id failed: -1 [22]
    param: 4, val: 0
    SIGSEGV: segmentation violation
    Stack trace (15 frames):
    ../../projects/setiathome.berkeley.edu/astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100(boinc_catch_signal+0x4d)[0x4c6fdd]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x7fc8c8194340]
    /usr/lib/x86_64-linux-gnu/libdrm_intel.so.1(drm_intel_bufmgr_gem_enable_reuse+0x0)[0x7fc8c2f4d0d0]
    /usr/lib/beignet/libcl.so(+0x13b58)[0x7fc8c4a5eb58]
    /usr/lib/beignet/libcl.so(+0x13d07)[0x7fc8c4a5ed07]
    /usr/lib/beignet/libcl.so(+0x13e21)[0x7fc8c4a5ee21]
    /usr/lib/beignet/libcl.so(+0x13f08)[0x7fc8c4a5ef08]
    /usr/lib/beignet/libcl.so(+0xf45d)[0x7fc8c4a5a45d]
    /usr/lib/beignet/libcl.so(+0xf521)[0x7fc8c4a5a521]
    ../../projects/setiathome.berkeley.edu/astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100[0x4885df]
    ../../projects/setiathome.berkeley.edu/astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100[0x488dfc]
    ../../projects/setiathome.berkeley.edu/astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100[0x461896]
    ../../projects/setiathome.berkeley.edu/astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100[0x46a205]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fc8c719dec5]
    ../../projects/setiathome.berkeley.edu/astropulse_7.08_x86_64-pc-linux-gnu__opencl_nvidia_100[0x40bd89]

    Exiting...

    </stderr_txt>
    ]]>


    Next 20


     
    ©2024 University of California
     
    SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.