Posts by petri33

1) Message boards : Number crunching : User achievements thread......... (Message 1857980)
Posted 3 hours ago by petri33
Post:
Total credit 16,451,341
Recent average credit 50,770.82
The GTX 1080 really crunches and does 28,000 to 30,000 WU on average with an EVGA Classified MB, Corsair Dominator Platinum DDR4 64 GB, and an Intel 6950X 10-core CPU.
I have 3 other machines crunching.
ASUS G73JW i7 laptop with an NVIDIA 460 GPU.
Desktop with a GTX 750 Ti (Intel 975X 4-core CPU).
Desktop with a GTX 1050 Ti (Intel 980X 6-core CPU).
All are overclocked.
I think my electric bill went up $100 a month!


That RAM was a waste of money IMHO; 16 GB is more than sufficient for a SETI cruncher. I have one with dual E5-2670s (32 threads), dual GTX 1080s, and 16 GB RAM, and it is averaging around 85K RAC with no stress. I have experimented with as low as 8 GB on both an X58-based and an X79-based consumer board with a GTX 1080 and a GTX 980 without running into RAM problems. Just look in Windows Task Manager and add up the memory needed by all your SETI threads. Less than a couple of gig even on the dualie, which in addition to 3 WUs/GPU is running 25 threads of MB.


The next generation of GPU apps is going to need 'some' CPU RAM for page-locked (pinned) host memory, so that the GPU can do its data transfers reliably in the background (asynchronously, on a need-to-know-in-advance basis). And on M$ platforms the company said decades ago that they leave that kind of optimization to the HW industry.
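
A minimal sketch of what that means in practice (not the app's actual code): cudaMemcpyAsync can only overlap with GPU work when the host buffer is page-locked with cudaMallocHost.

    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        const size_t n = 1 << 20;
        float *h_buf = nullptr, *d_buf = nullptr;
        cudaStream_t stream;
        cudaStreamCreate(&stream);
        cudaMallocHost((void**)&h_buf, n * sizeof(float)); // pinned host RAM
        cudaMalloc((void**)&d_buf, n * sizeof(float));
        // With a pageable host buffer this copy would silently fall back
        // to a synchronous staging copy; pinned memory keeps it async.
        cudaMemcpyAsync(d_buf, h_buf, n * sizeof(float),
                        cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);
        cudaFree(d_buf);
        cudaFreeHost(h_buf);
        cudaStreamDestroy(stream);
        printf("transfer done\n");
        return 0;
    }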

Petri
2) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857974)
Posted 3 hours ago by petri33
Post:
Hi all,

Regardless of the app (CUDA/OpenCL) you are running, what a GPU app needs from the CPU is mainly 1) integer/boolean work only, and 2) just checking whether the GPU has finished processing.

This is why I run HT on. 6 cores become 12 HT threads. I run 6 CPU tasks because the CPU has 6 FP/AVX units. The remaining 6 HT threads can handle the downloads from the GPUs and the postprocessing of the results that my 4 GPUs produce. The GPUs need an HT thread to be ready to notice that a result is ready -- to minimize latency. I have set my CUDA app to loop actively (and placed it on an HT thread) to wait for the GPU. An active NOP loop does not consume much power, nor does it hinder the performance of the real core. Linux seems to be able to differentiate 'real' work from an active NOP-wait loop.
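
A hypothetical sketch of such an active wait (not the app's actual loop): poll the stream instead of making a blocking cudaStreamSynchronize() call.

    #include <cuda_runtime.h>
    #include <immintrin.h> // _mm_pause()

    // cudaStreamQuery returns cudaErrorNotReady while work is still
    // pending, so this spins on its HT thread and notices completion
    // with minimal latency.
    void spin_wait(cudaStream_t stream)
    {
        while (cudaStreamQuery(stream) == cudaErrorNotReady) {
            _mm_pause(); // a cheap pause; easy on the real core's resources
        }
    }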

I tell the Linux scheduler to run the CPU apps on defined CPU IDs and the GPU apps on some other subset of CPU IDs. I nicely ask the system: "Please run the CPU app on a real core and the GPU app on the HT thread, and leave 2 of the HT threads to handle my network and system/everyday tasks."
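
For reference, a minimal sketch of that pinning on Linux (the CPU ID is hypothetical); the taskset command does the same from a shell.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(6, &set); // pin to CPU ID 6, e.g. an HT sibling
        if (sched_setaffinity(0, sizeof(set), &set) != 0) { // 0 = this process
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 6\n");
        return 0;
    }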

You may do it some other way. And that is OK.

Petri
3) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857972)
Posted 4 hours ago by petri33
Post:
I found several hundreds of teams and user profiles...


A BIG thank you!
4) Message boards : Number crunching : User achievements thread......... (Message 1857965)
Posted 4 hours ago by petri33
Post:
I think my electric bill went up $100 a month!

It does get addictive. Be careful, or you'll end up like me, burning 2000 kWh/mo. Yikes!


3 × GTX 1080 + 1 × GTX 1080 Ti plus CPU and cooling comes to about
3 × 140 W + 200 W + 140 W + MB + cooling ≈ 800 W.

For SETI: 800 W × 24 h/d × 30 d/mo ÷ 1000 = 576 kWh/mo for BOINC; approx. 7000 kWh/y.

The whole house needs 20,000 kWh/y (SETI included) for heating, warm water and appliances, at 63.5° North. There is no active cooling in the house; any escaping heat is circulated back into the input air. The computer room is in the basement, so summer heat is seldom a problem. The whole-house electric bill is about €200/mo.

I'm considering going partly Solar + batteries.

Petri
5) Message boards : Number crunching : User achievements thread......... (Message 1857962)
Posted 4 hours ago by petri33
Post:
And I have something to share here as well now.
The kitties have reached the 840 million credit milestone.

Meow.


I'd pour them something to drink! Milk, or their preferred beverage.

Petri
6) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1857957)
Posted 5 hours ago by petri33
Post:
Can you give me an exact line and location to paste it? I'll see how it goes. Right now I don't have much else to do; zi3k+ looked good back when there were mostly Arecibo tasks, running around 1000 pendings to fewer than 50 inconclusives. Now it doesn't look so good. Might as well try something different.


Yes,
you'll just have to wait. This is family night. Tomorrow is the outage (starting at 18:00 here), so then I'll have time to dust my computer, my GPUs, and the code.

However, if you are feeling impatient, you can try to find the first call to cudaAcc...dfts() in analyzeFuncs.cpp. Put it right before or after that.
The parameters to cudaMemsetAsync are in the CUDA documentation, and the size of the reserved memory buffer can be found in cudaAcceleration.cu, where the buffer is allocated with the CUDA device memory allocation function. The size is in bytes, and one float (a single-precision floating-point number) takes four bytes (4 chunks of 8-bit integers, totalling 32 bits, i.e. four bytes).
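
The call itself would look roughly like this (the buffer name and element count are placeholders, not the actual variables in cudaAcceleration.cu):

    #include <cuda_runtime.h>

    void clear_result_buffer(float *d_results, size_t num_floats,
                             cudaStream_t stream)
    {
        // The third parameter is a size in BYTES: one float is 4 bytes.
        cudaMemsetAsync(d_results, 0, num_floats * sizeof(float), stream);
    }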

There is a possibility of an error either in the allocation size, in the fetching of the result, or in another stage of signal finding overwriting the buffer. I know that one can go blind to one's own errors. That's why I need a third eye or a seventh sense.

Petri
7) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1857938)
Posted 6 hours ago by petri33
Post:
. . Since I cannot get any joy running the re-scheduler raw I think I may have to have a crack at migrating your front end. .

If you can figure out how to "run it raw" you'll have the code written for that segment, yeah?


. .I have been doing that manually from command line. I have not yet tried creating any scripts or shell files.

<shiver>


You can run shell scripts and commands from C.

Just google 'c run shell command' and you'll get a lot of information.
You do not have to reinvent the wheel, at least when trying out something that you know works. Afterwards you can refine your code (to 'HiFi').
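
The simplest route is the standard library's system() call; a minimal sketch (the script name is made up):

    #include <stdlib.h>
    #include <stdio.h>

    int main(void)
    {
        // Runs the command through /bin/sh and waits for it to finish.
        int status = system("./reschedule.sh");
        if (status == -1) {
            perror("system");
            return 1;
        }
        printf("shell command exited with status %d\n", status);
        return 0;
    }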

Petri
8) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1857934)
Posted 6 hours ago by petri33
Post:
Thank you guys,

You can read the code, and I definitely will over the next couple of days. I'll reserve a stack of paper and a pencil and do some calculations ... and test runs.
You could insert a cudaMemsetAsync(SomeParams_and, streamThatProcessesTheTask) call to zero out the result buffers way ahead of time (like before chirp) on your platforms, since I cannot reproduce the error.

Petri
9) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1857932)
Posted 6 hours ago by petri33
Post:
Yes, it seems all of the zi3t1e False overflows are at 0 chirp. The task mentioned earlier has them at differing chirps, whereas zi3t1e had them all at chirp=0: http://setiathome.berkeley.edu/result.php?resultid=5616173398
So, how do we fix that? It does appear zi3t1e produces fewer inconclusives otherwise.

Starting with zi3l I noticed the BLC tasks would run normally as long as they weren't the First task run at BOINC Startup. If the First task at Startup was an Arecibo task, the following BLC would run normally. If the First task at startup was a BLC it would immediately Overflow, and keep overflowing all the BLCs until it found an Arecibo task. After the Arecibo task ran, the following BLC tasks would run normally. Yes, Very strange. It would appear something wasn't starting correctly. That was when I built zi3k+, because, it didn't/doesn't have that problem.


Sounds like an uninitialized buffer or a buffer overflow.
10) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1857931)
Posted 6 hours ago by petri33
Post:
heh, spike overflows at 0 chirp. That's weird, since that's just a copy, fft and powerspectrum (normally). Perhaps there's a silent failure or incomplete kernel launch. Will give something to poke at early in the piece.


That is one probable cause. Another is a buffer overflow before the copy.
11) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857737)
Posted 1 day ago by petri33
Post:
How many hours does it take for a GTX 1080 to go through the 100 WU limit?


My GTX 1080 does a guppi vlar in 150 seconds and a shortie in 46 seconds, so 100 of them would take 4,600–15,000 seconds (1 hr 17 min – 4 hr 10 min).
My GTX 1080 Ti does a guppi vlar in 110 seconds and a shortie in 37 seconds, so 100 of them would take 3,700–11,000 seconds (1 hr 2 min – 3 hr 3 min).

Petri
12) Message boards : Number crunching : Anything relating to AstroPulse tasks (Message 1857459)
Posted 3 days ago by petri33
Post:
Hi,
AP work appears to be dead, but I got one: http://setiathome.berkeley.edu/workunit.php?wuid=2479033902
Since it has been so long since I checked the run times of APs, I do not know whether 500 seconds is a good or a bad time. -unroll was 20; it should (maybe) have been 28 for the 1080 Ti.
13) Message boards : Number crunching : SETI@home v8 v8.22 (opencl_nvidia_sah) x86_64-pc-linux-gnu Errors (Message 1857228)
Posted 4 days ago by petri33
Post:
Hi,
Is it possible that you are running on the default graphics driver?
Try installing a driver from www.nvidia.com, or use the driver manager on the desktop and select an NVIDIA driver instead of the free public one.

Just my thoughts.

Petri
14) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857224)
Posted 4 days ago by petri33
Post:
Max WUs should be tied to RAC: 300,000 RAC -> 3,000 WUs.
15) Message boards : Number crunching : You have to love early Xmas and B'day presents (even if you have to pay for them yourself). (Message 1857093)
Posted 4 days ago by petri33
Post:
My GPUs are out of work. One of them could do a guppi vlar in about 105 seconds and the rest of them do them in 140 seconds.
I love my newest present.

And yes,
I had to pay for it myself.
16) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857026)
Posted 5 days ago by petri33
Post:
73 WUs/sec, sure. All Arecibo vlars? No work for NVIDIA?
17) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1856967)
Posted 6 days ago by petri33
Post:
I already see a Petri classified: "For sale: 4 NVIDIA 1080 Founders Edition cards, lightly used in home gaming ..." (EDIT:) "Reason for selling: I won the game!"


LOL :)

However, with the special app the scaling seems quite nice. You can comb through my results to find 1080Ti work.
18) Message boards : Number crunching : User achievements thread......... (Message 1856876)
Posted 7 days ago by petri33
Post:
I achieved a new GPU that does guppi HIP vlars in just over 103 seconds per task. An old 1080 needs 143+ seconds for that.
The user achievement is the almost linear scaling of performance when going from 20 to 28 SMX units (28/20 = 1.40, while 143 s / 103 s ≈ 1.39).

Petri
19) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1856578)
Posted 8 days ago by petri33
Post:
For information about --cool-bits you could try the following ...
http://lmgtfy.com/?q=--cool-bits%3D28
20) Message boards : Number crunching : Linux CUDA 'Special' App finally available, featuring Low CPU use (Message 1856288)
Posted 9 days ago by petri33
Post:
Hi,
here is an error in the software: https://setiathome.berkeley.edu/result.php?resultid=5594593817. It happens, just not that often.

Pulse finding detects multiple 'pulses' at a certain time. Something gets overwritten, I guess. It's hard to debug since I do not have that WU on my computer.

