Posts by ivan


21) Message boards : Number crunching : Show and tell your machine. Here's mine. (Message 1555563)
Posted 12 Aug 2014 by Profile ivan
Just took delivery of these boxes. Each box contains a 2U rack-mount chassis. Each chassis contains four independent servers. Each server contains two 10-core Xeons (with hyperthreading) and 128 GB of RAM. So, 4x4x2x10x2=640 threads...
Unfortunately I can't run s@h on them, they're for our particle-physics GRID-computing data centre.
But watch this space.
22) Message boards : Number crunching : Show and tell your machine. Here's mine. (Message 1555069)
Posted 11 Aug 2014 by Profile ivan
Anyway, to illustrate what I've been posting in the low power thread, here are a couple of shots of my Jetson development kit, before and after adding an external SSD.

How many of those pins are GPIO, and how many are i2c and SPI? I haven't found the docs on that yet.

You should be able to get to the Jetson TK1 Support page. The PM375 Module Specification document gives the names of all the pins; I haven't found a more-verbose description yet but the schematics are there too, if you can read them. Note there are several other interfaces brought out here too, such as LVDS for an LCD panel, touch input, etc.

The price is certainly right.

That sounds like a seriously cool board. Raspberry Pi has its uses, but this sounds like it's useful for more than just home-automation stuff done on the cheap (a Raspberry Pi B+ for $35 is really hard to beat).
23) Message boards : Number crunching : a lot of pending WUs...... (Message 1554215)
Posted 9 Aug 2014 by Profile ivan
wow, 1200! The ghost of Wiggo will be around long after he kicks the bucket, and there I thought I had heaps.

4,906 pending here, buddy.

I've only got 1,445. :-{(>
24) Message boards : Number crunching : Show and tell your machine. Here's mine. (Message 1553587)
Posted 8 Aug 2014 by Profile ivan
LOL...no, no nixies in the computer.
Although the color in the pic does evoke their glow. Those are just some LED illuminated push button switches on the edge of the mobo.

I do own one of these, though.....

I remember even earlier dekatrons, where the glowing cathodes were arranged in a circle, 0-9, and guide cathodes in between could "increment" the glowing cathode quite simply -- a counting circuit in a not-vacuum tube.
Anyway, to illustrate what I've been posting in the low power thread, here are a couple of shots of my Jetson development kit, before and after adding an external SSD.
(Sorry, embedding the images didn't work for some reason, so you'll have to click on the lynx.)
25) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1553356)
Posted 7 Aug 2014 by Profile ivan
The first WU has just finished; Run time 50 min 57 sec, CPU time 21 min 32 sec. Not validated yet. Run time is just about twice what I'm currently achieving with the 750 Ti, but that's running two at once.

I decided about midnight last night to test whether there's a memory-bandwidth problem within the Jetson by changing to run two WUs at a time. The interesting part is that for the next few WUs, from the same tape and for about the same credit, the run-time per WU increased by 50% (not 100%) while the reported CPU time per WU fell to 50%. I take the sublinear increase in real time to mean there's a bottleneck that's alleviated by crunching another thread while memory transfers(?) stall the first (remember this GPU doesn't have separate RAM; it uses part of system memory, AFAICT). The decrease in CPU time perhaps implies some busy-waiting -- the total CPU time for two instances is the same as it was for a single one.
The run-times are starting to fluctuate as more varied WUs arrive. I think I'll let it stabilise for a while and then explore three simultaneous WUs, but cynicism and experience suggest that there's little more to be gained there.
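For anyone wanting to try the two-at-a-time trick on a stock client, the usual mechanism is an app_config.xml in the project directory -- a minimal sketch, assuming the MB application name is setiathome_v7 (check what your client reports; with an anonymous-platform app_info.xml the count goes in there instead):

ubuntu@tegra-ubuntu:~/BOINC$ cat projects/setiathome.berkeley.edu/app_config.xml
<app_config>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>   <!-- half a GPU per task, i.e. two tasks share it -->
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Re-read config files from the manager (or just restart the client) for it to take effect.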
26) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1553101)
Posted 7 Aug 2014 by Profile ivan
This assumes you've installed the CUDA SDK and added the appropriate locations to your PATH and LD_LIBRARY_PATH environment variables, but that's well-covered in the Nvidia documentation.

As I've just been reminded, it's a good idea also to set up the system-wide pointer to the libs with ldconfig, in case you forget to set up LD_LIBRARY_PATH or start up boinc from an account/shell that doesn't set it automatically -- in these cases the libraries aren't found and the jobs all die...
ubuntu@tegra-ubuntu:~/BOINC$ ldd projects/setiathome.berkeley.edu/setiathome_x41zc_armv7l-unknown-linux-gnu_cuda60
        libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb66cd000)
        libcudart.so.6.0 => not found
        libcufft.so.6.0 => not found
        libstdc++.so.6 => /usr/lib/arm-linux-gnueabihf/libstdc++.so.6 (0xb6621000)
        libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0xb65b5000)
        libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0xb6594000)
        libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb64ac000)
        /lib/ld-linux-armhf.so.3 (0xb6704000)
ubuntu@tegra-ubuntu:~/BOINC$ sudo nano /etc/ld.so.conf.d/cuda.conf
... [edit file here]
ubuntu@tegra-ubuntu:~/BOINC$ cat /etc/ld.so.conf.d/cuda.conf
# cuda default configuration
/usr/local/cuda/lib
ubuntu@tegra-ubuntu:~/BOINC$ sudo ldconfig
ubuntu@tegra-ubuntu:~/BOINC$ ldd projects/setiathome.berkeley.edu/setiathome_x41zc_armv7l-unknown-linux-gnu_cuda60
        libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb6700000)
        libcudart.so.6.0 => /usr/local/cuda/lib/libcudart.so.6.0 (0xb66b6000)
        libcufft.so.6.0 => /usr/local/cuda/lib/libcufft.so.6.0 (0xb4b7f000)
        libstdc++.so.6 => /usr/lib/arm-linux-gnueabihf/libstdc++.so.6 (0xb4ad4000)
        libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0xb4a68000)
        libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0xb4a47000)
        libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb495f000)
        /lib/ld-linux-armhf.so.3 (0xb6738000)
        libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0xb4954000)
        librt.so.1 => /lib/arm-linux-gnueabihf/librt.so.1 (0xb4946000)
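A quick sanity check before restarting BOINC is to dump the loader cache:

ubuntu@tegra-ubuntu:~/BOINC$ ldconfig -p | grep -i cuda

which should now list libcudart.so.6.0 and libcufft.so.6.0 under /usr/local/cuda/lib.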
27) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1553089)
Posted 6 Aug 2014 by Profile ivan
I also like what you talked about with the Baytrail-D system. If you don't mind me asking, which ones do you have?

Mine is just something eBuyer had on special a couple of months ago for £130: Acer Aspire XC-603 Desktop PC. It took a bit of effort to find something that would boot on it -- I ended up with CentOS 7. I could have tried to put corporate Windows 7 on it, but several comments said that it was difficult to get the drivers right. I wouldn't mind putting the wattmeter on it too. As I mentioned earlier, I've not found out if it's possible to get the iGPU crunching too under Linux -- BOINC says there's no GPU.
28) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1553058)
Posted 6 Aug 2014 by Profile ivan
That is pretty awesome performance for a system that NVidia says is using 5 watts under real work loads.

The more I look at Nvidia's support page, the less sense this makes. I don't think this applies to pushing the CUDA cores to their limit. It would be interesting to get a power meter on it to see what its usage is.

The Jetson docs I was reading yesterday said that total at-the-wall consumption was (IIRC) 10.something W. Then it went through the chain describing the inefficiencies (20% loss in the power brick, etc.). It did stress that because it's a development board the peripherals hadn't been chosen for low power consumption. Next time I power down my home system I'll remove my power-meter and apply it to the Jetson instead.
Remember, though, that the Jetson runs its cores at a lower frequency than many PCI-e video cards and the memory bus is narrower, which drops power consumption.
/home/ubuntu/CUDA-SDK/NVIDIA_CUDA-6.0_Samples/bin/armv7l/linux/release/gnueabihf/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GK20A"
  CUDA Driver Version / Runtime Version          6.0 / 6.0
  CUDA Capability Major/Minor version number:    3.2
  Total amount of global memory:                 1746 MBytes (1831051264 bytes)
  ( 1) Multiprocessors, (192) CUDA Cores/MP:     192 CUDA Cores
  GPU Clock rate:                                852 MHz (0.85 GHz)
  Memory Clock rate:                             924 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 131072 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.0, CUDA Runtime Version = 6.0, NumDevs = 1, Device0 = GK20A
Result = PASS
29) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1552992)
Posted 6 Aug 2014 by Profile ivan

The first WU has just finished; Run time 50 min 57 sec, CPU time 21 min 32 sec. Not validated yet. Run time is just about twice what I'm currently achieving with the 750 Ti, but that's running two at once.

From your stderr out:

setiathome enhanced x41zc, Cuda 6.00


Where did you get this version from?

As mavrrick says, I compiled it myself. However, the hard part isn't s@h; the hard part is compiling BOINC. It has so many prerequisites. The basic instructions are here.
git clone git://boinc.berkeley.edu/boinc-v2.git boinc
cd boinc
git tag                          [note the version corresponding to the latest recommendation]
git checkout client_release/<required release>; git status
./_autosetup
./configure --disable-server --enable-manager
make -j n                        [where n is the number of cores/threads at your disposal]

The problems you will have are, first, finding the libraries and utilities that _autosetup wants; then ensuring that you have g++ installed; and then finding all the libraries and development packages that configure wants (you need the -dev packages for the header files). The final hurdle, if you want to use the boincmgr graphical interface, is getting wxWidgets. It tends not to be included in modern distributions' repositories now, so you have to try to compile it yourself -- which I haven't managed lately, as BOINC wants an old version which was (apparently) badly coded and gives lots of problems with the newest, strictest gcc/g++ compilers. You may need to just learn how to use the boinccmd command-line controller...
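For what it's worth, on an Ubuntu-flavoured box something like the following pulls in most of what _autosetup and configure complain about -- a sketch only, and package names drift between releases:

sudo apt-get install build-essential m4 automake autoconf libtool pkg-config
sudo apt-get install libcurl4-openssl-dev libssl-dev

build-essential brings in g++; libcurl and OpenSSL are the two -dev packages the client's configure most commonly trips over.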
The simplest way to then compile s@h was detailed back in January, in this thread.
cd <directory your boinc directory is in>
svn checkout -r1921 https://setisvn.ssl.berkeley.edu/svn/branches/sah_v7_opt/Xbranch
cd Xbranch
[edit client/analyzeFuncs.h and add the line '#include <unistd.h>']
sh ./_autosetup
sh ./configure BOINCDIR=../boinc --enable-sse2 --enable-fast-math
make -j n

This assumes you've installed the CUDA SDK and added the appropriate locations to your PATH and LD_LIBRARY_PATH environment variables, but that's well-covered in the Nvidia documentation. As I alluded to above, you will probably have to edit the configure file too, to make sure obsolete gencode entries are removed and appropriate ones for your kit are included. Oh, and drop the --enable-sse2 if you're compiling for other than Intel/AMD CPUs.
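By way of example, the GK20A in the Tegra K1 is compute capability 3.2, so after stripping the entries nvcc rejects, the surviving gencode flag should look something like this (illustrative; the exact line in the configure script will differ):

-gencode arch=compute_32,code=sm_32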
30) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1552955)
Posted 6 Aug 2014 by Profile ivan
Ah, there it is!
Well, I got both my hologram reconstruction and s@h compiled and running on the Jetson today. The holograms run about 10x slower than on my GTX 750 Ti (1.5 frames/sec for a 4Kx4K reconstruction). No real problems with the s@h, just the missing include I reported last January, and I had to edit the config file to remove the old compute capabilities that nvcc didn't like and put in 3.2 for the Tegra.
The first WU has just finished; Run time 50 min 57 sec, CPU time 21 min 32 sec. Not validated yet. Run time is just about twice what I'm currently achieving with the 750 Ti, but that's running two at once.
31) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1552687)
Posted 5 Aug 2014 by Profile ivan

05-Aug-2014 16:03:28 [---] This computer is not attached to any projects
05-Aug-2014 16:03:28 [---] Visit http://boinc.berkeley.edu for instructions
05-Aug-2014 16:03:29 Initialization completed
05-Aug-2014 16:03:29 [---] Suspending GPU computation - computer is in use
05-Aug-2014 16:04:00 [---] Received signal 2
05-Aug-2014 16:04:01 [---] Exit requested by user

Now to try to attach to S@H since the project is up again!

05-Aug-2014 20:31:55 [---] Suspending GPU computation - computer is in use
05-Aug-2014 20:39:33 [---] Running CPU benchmarks
05-Aug-2014 20:39:33 [---] Suspending computation - CPU benchmarks in progress
05-Aug-2014 20:39:33 [---] Running CPU benchmarks
05-Aug-2014 20:39:33 [---] Running CPU benchmarks
05-Aug-2014 20:39:33 [---] Running CPU benchmarks
05-Aug-2014 20:39:33 [---] Running CPU benchmarks
05-Aug-2014 20:40:05 [---] Benchmark results:
05-Aug-2014 20:40:05 [---] Number of CPUs: 4
05-Aug-2014 20:40:05 [---] 966 floating point MIPS (Whetstone) per CPU
05-Aug-2014 20:40:05 [---] 6829 integer MIPS (Dhrystone) per CPU
05-Aug-2014 20:40:06 [---] Resuming computation
05-Aug-2014 20:40:12 [http://setiathome.berkeley.edu/] Master file download succeeded
05-Aug-2014 20:40:17 [---] Number of usable CPUs has changed from 4 to 1.
05-Aug-2014 20:40:17 [http://setiathome.berkeley.edu/] Sending scheduler request: Project initialization.
05-Aug-2014 20:40:17 [http://setiathome.berkeley.edu/] Requesting new tasks for CPU and NVIDIA
05-Aug-2014 20:40:22 [SETI@home] Scheduler request completed: got 0 new tasks
05-Aug-2014 20:40:22 [SETI@home] This project doesn't support computers of type armv7l-unknown-linux-gnueabihf
05-Aug-2014 20:40:24 [SETI@home] Started download of arecibo_181.png
05-Aug-2014 20:40:24 [SETI@home] Started download of sah_40.png
05-Aug-2014 20:40:27 [SETI@home] Finished download of arecibo_181.png
05-Aug-2014 20:40:27 [SETI@home] Finished download of sah_40.png
05-Aug-2014 20:40:27 [SETI@home] Started download of sah_banner_290.png
05-Aug-2014 20:40:27 [SETI@home] Started download of sah_ss_290.png
05-Aug-2014 20:40:29 [SETI@home] Finished download of sah_banner_290.png
05-Aug-2014 20:40:29 [SETI@home] Finished download of sah_ss_290.png
05-Aug-2014 20:43:41 [---] Resuming GPU computation
05-Aug-2014 20:44:27 [---] Suspending GPU computation - computer is in use
:-)
Ah, there it is!
32) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1552676)
Posted 5 Aug 2014 by Profile ivan
I took delivery of an Nvidia Tegra TK1 "Jetson" SDK tonight and should have all the bits needed to run it (HDMI->DVI cable, USB hub, Keyboard+mouse) on next-day delivery tomorrow. First plan is to work out how it runs (it's an ARM version of Ubuntu) and install the latest CUDA libraries. Then, after I've got my hologram reconstructions running on the 192-core Kepler, I'll see if there are all the resources needed to compile BOINC & S@H on it. Watch, as they say, this space.

Well, I've got this far so far:
05-Aug-2014 16:03:28 [---] cc_config.xml not found - using defaults
05-Aug-2014 16:03:28 [---] Starting BOINC client version 7.2.42 for armv7l-unknown-linux-gnueabihf
05-Aug-2014 16:03:28 [---] log flags: file_xfer, sched_ops, task
05-Aug-2014 16:03:28 [---] Libraries: libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
05-Aug-2014 16:03:28 [---] Data directory: /home/ubuntu/BOINC
05-Aug-2014 16:03:28 [---] CUDA: NVIDIA GPU 0: GK20A (driver version unknown, CUDA version 6.0, compute capability 3.2, 1746MB, 141MB available, 327 GFLOPS peak)
05-Aug-2014 16:03:28 [---] Host name: tegra-ubuntu
05-Aug-2014 16:03:28 [---] Processor: 1 ARM ARMv7 Processor rev 3 (v7l)
05-Aug-2014 16:03:28 [---] Processor features: swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt
05-Aug-2014 16:03:28 [---] OS: Linux: 3.10.24-g6a2d13a
05-Aug-2014 16:03:28 [---] Memory: 1.71 GB physical, 0 bytes virtual
05-Aug-2014 16:03:28 [---] Disk: 11.69 GB total, 5.63 GB free
05-Aug-2014 16:03:28 [---] Local time is UTC +0 hours
05-Aug-2014 16:03:28 [---] No general preferences found - using defaults
05-Aug-2014 16:03:28 [---] Preferences:
05-Aug-2014 16:03:28 [---]    max memory usage when active: 873.11MB
05-Aug-2014 16:03:28 [---]    max memory usage when idle: 1571.60MB
05-Aug-2014 16:03:28 [---]    max disk usage: 5.55GB
05-Aug-2014 16:03:28 [---]    don't use GPU while active
05-Aug-2014 16:03:28 [---]    suspend work if non-BOINC CPU load exceeds 25%
05-Aug-2014 16:03:28 [---]    (to change preferences, visit a project web site or select Preferences in the Manager)
05-Aug-2014 16:03:28 [---] Not using a proxy
05-Aug-2014 16:03:28 [---] This computer is not attached to any projects
05-Aug-2014 16:03:28 [---] Visit http://boinc.berkeley.edu for instructions
05-Aug-2014 16:03:29 Initialization completed
05-Aug-2014 16:03:29 [---] Suspending GPU computation - computer is in use
05-Aug-2014 16:04:00 [---] Received signal 2
05-Aug-2014 16:04:01 [---] Exit requested by user

As with the Celeron I bought recently, I had a lot of trouble with the graphics, especially finding the GL, GLU and GLUT libraries -- compounded by the fact that neither install (the Celeron runs CentOS 7) had g++ by default, and ./configure doesn't really point that out to you. The big showstopper is wxWidgets. I need to compile it myself, and it looks like the BOINC code isn't compatible with anything past 2.8.3 -- but 2.8.3 won't compile with gcc 4.8.3, apparently. So I haven't got boincmgr running on either yet.
Now to try to attach to S@H since the project is up again!
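(If, like me, you don't have boincmgr built yet, the attach can be done from the command line -- a sketch, with the account key elided:

boinccmd --project_attach http://setiathome.berkeley.edu/ <account_key>
boinccmd --get_tasks

the second command being a quick way to see whether any work has arrived.)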
33) Message boards : Number crunching : Energy Efficiency of Arm Based systems over x86 or GPU based systems (Message 1552389)
Posted 4 Aug 2014 by Profile ivan
Compared to my Bay Trail-D system.
Application     GFLOPS   Cores   Total GFLOPS   System Watts   GFLOPS/Watt
SETI@home v7    10.25    4       41.00          25             2.050
AstroPulse v6   21.30    4       85.20          25             4.260


Hmm, my machine is achieving somewhat fewer GFLOPS than yours for both MB and AP. I haven't worked out how to enable the iGPU for crunching under Linux yet.

I took delivery of an Nvidia Tegra K1 "Jetson" SDK tonight and should have all the bits needed to run it (HDMI->DVI cable, USB hub, Keyboard+mouse) on next-day delivery tomorrow. First plan is to work out how it runs (it's an ARM version of Ubuntu) and install the latest CUDA libraries. Then, after I've got my hologram reconstructions running on the 192-core Kepler, I'll see if there are all the resources needed to compile BOINC & S@H on it. Watch, as they say, this space.

Must take my Wattmeter back into work next time I have to power down this rig (which is running 143 W ATM, it's usually around 250 W when the GPUs have APs to crunch).
34) Message boards : Number crunching : Linux 64, Mint 17, and Nvidia 340.24 driver (Message 1546268)
Posted 23 Jul 2014 by Profile ivan
I never managed to get BOINC to recognise NV GPUs with anything other than the standard Ubuntu nvidia drivers, which at this point stand at 331 for Trusty. I don't know how much extra customising Mint does to the nvidia drivers.

Mint doesn't provide Nvidia drivers out-of-the-box; they give you xorg's nouveau driver. You can use Driver Manager to install proprietary drivers -- currently mine is giving me the option of 331.38 or 304.117, or nouveau. I always download and install from the Nvidia site, however.
35) Message boards : Number crunching : Linux 64, Mint 17, and Nvidia 340.24 driver (Message 1545733)
Posted 22 Jul 2014 by Profile ivan
I upgraded my home cruncher to Linux Mint 17 at the weekend because there were no more upgrades for Mint 15. Unfortunately you can't really upgrade in place; you have to do a new install, but I learnt long ago to keep /home on a separate partition so you don't have to recreate all your data and many of your personalisations.
I had to do a lot of fighting to get boincmgr recompiled (boinc and boinccmd were OK) -- long story short, don't upgrade to wx3.0, stay with wx2.8.
I'm still using the stock applications, despite Petri having long ago sent me the recipes for updating the Nvidia versions of the MB and AP applications. So I can only run AP on my video cards (1x GTX 660 Ti + 1x GT 640), but I have been using libsleep.so to get around the busy-wait bug in the Nvidia drivers.
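(For anyone trying it, libsleep is hooked in through the dynamic loader when starting the client -- a rough sketch, with the library path purely illustrative:

LD_PRELOAD=/usr/local/lib/libsleep.so ./boinc

so the library gets a chance to intercept the offending calls before the driver busy-waits.)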
However, yesterday I got notification of several security updates, including the kernel, so as is my wont I downloaded the latest Nvidia driver too -- 340.24 (previously I was using 337.19). I'd had, as usual, the devil's own time banishing the stock nouveau driver in favour of the proprietary Nvidia one when I upgraded, but the step up to 340.24 went relatively easily. One tip I learnt over the weekend: to kill X so you can drop to runlevel 2 and install a new driver, use
sudo service mdm stop
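From the text console it's then the usual routine -- a sketch, with the installer file name illustrative for whichever version you've downloaded:

sudo sh ./NVIDIA-Linux-x86_64-337.19.run
sudo service mdm start

the second command bringing X back once the installer is done.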

Later, I realised that the gkrellm display was showing solid green CPU usage instead of the spiky look I'd been used to since I started using libsleep. I fired up top and sure enough, all four GPU AP jobs were using 100% CPU time and the processors were showing no idle time at all. I let it run overnight and this morning found that all recent AP results had CPU usage the same as overall time, whereas heretofore it was in the ~10-30% range. So I reinstalled the 337.19 driver and rebooted, and now the GPU AP jobs are back to 5-20% CPU and idle time is around 40% of a CPU.
So, beware if upgrading beyond 337.19 -- keep an eye on your CPU utilisation.
36) Message boards : Number crunching : Panic Mode On (88) Server Problems? (Message 1542532)
Posted 16 Jul 2014 by Profile ivan
Here's one of his machines, 7309756, that got into my database before he hid them. Looks like he ran S@H on it for about 4 weeks, then stopped cold on July 6. A lot of WUs successfully processed, which is terrific, but he might have left 100 in limbo if that machine doesn't connect again. Hope he doesn't do it that way for his whole data center.

Hmm, I just had a dual-node machine with those processors (tho' @ 2.5 GHz) ordered for me. I'll probably not run hyperthreading though, so 2x 20-core machines. :-)
37) Message boards : Number crunching : ASIC computers (Message 1540199)
Posted 12 Jul 2014 by Profile ivan
I'm not sure the ASICs could be repurposed in that way. See ASIC for seti.

Given the meaning -- Application-Specific Integrated Circuit -- I'm sure they can't be re-purposed. FPGAs (Field Programmable Gate Arrays) maybe, but they'd be devilishly hard to programme for SETI.
38) Message boards : Number crunching : Some questions about BOINC for Android... (Message 1539682)
Posted 11 Jul 2014 by Profile ivan
Twenty years from now, when your phone has more power than your desktop does now, maybe it'll be worthwhile, but not for a while.

See my recent post in the Milestones thread...
39) Message boards : Number crunching : CLOSED*SETI/BOINC Milestones [ v2.0 ] - XXVII*CLOSED (Message 1539660)
Posted 11 Jul 2014 by Profile ivan

And I think no one here sees things differently :)
I see it the same as you do. What's funny is that it really will be someone running SETI on an old cell phone that crunches the work unit with the alien contact in it.

You might be surprised. I recently found an app to compile gcc programmes, including Fortran, on my (original, 2012) Nexus 7 Android tablet (CCTools, if you're interested). For laffs I ran a 1975 Monte-Carlo programme that I used for my thesis. IIRC I got the run-time down to 5 hours on the University mainframe back then; on one core of the four in the Nexus 7 it took 78 seconds -- a speed-up of roughly 230x. (Admittedly, a 2 GHz Xeon with the Intel compiler takes 1.47 secs, another factor of ~50.)

100k is 100k more.

Indeed.
40) Message boards : Number crunching : Your thoughts on the upcoming Haswell E CPUs (Message 1538999)
Posted 10 Jul 2014 by Profile ivan
This question is slightly off topic but still to do with CPUs. Has anyone got a Devil's Canyon 4790K? These are stock-clocked at 4.0 GHz. I'm just interested to see what the performance is like.

http://www.theregister.co.uk/2014/06/30/review_intel_devils_canyon_cpu/


