Posts by petri33

1) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1849944)
Posted 14 minutes ago by petri33
Post:
I'd recommend checking a vanilla zi vs. a vanilla CPU with Gaussians in extreme circumstances, because they have been problematic on and off at different times. They are relatively sensitive to platform variation (though that shouldn't impact the best-selection logic, of course).


Hi,

I PM'ed you, TBar, and Gianfranco. There's a fix now for the Gaussian finding.

Petri
2) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1849496)
Posted 1 day ago by petri33
Post:
Hi,
the Gaussian search is unaffected by unroll; unroll is used only in the pulse find.
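To give readers a rough picture of what 'unroll' refers to here, below is a minimal sketch of an unrolled pulse-find-style folding loop. It is an illustration only, not the actual app code: the kernel name, data layout and unroll factor are made up.

// Illustration only, not the actual application code: a pulse-find-style
// folding loop where the compiler is asked to unroll the inner accumulation.
__global__ void fold_sum(const float* __restrict__ in,
                         const int len, const int period,
                         float* __restrict__ out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= period) return;

    int folds = len / period;
    float acc = 0.0f;
    #pragma unroll 4   // arbitrary factor chosen for this sketch
    for (int k = 0; k < folds; ++k)
        acc += in[i + k * period];

    out[i] = acc;
}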
3) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1849472)
Posted 1 day ago by petri33
Post:
Thank you TBar. I'll test that. Hope I find something.
4) Message boards : Number crunching : Panic Mode On (104) Server Problems? (Message 1849149)
Posted 2 days ago by petri33
Post:
Hi,

During this kind of famine:
a) would it be wise to let all GPUs process all kinds of work? AMD GPUs have their caches much better filled; NVIDIA ones do not get Arecibo VLARs. I'd like to get some, and I think that those running the special 'sauce' would too. If plain vanilla CUDA apps got VLARs, that would keep them busy; they would not be asking for more for a long time.
b) would it be wise to limit the servers to sending only 25 work units to a host at a time (I can do 4*5*1.3 shorties and 4*5*0.33 guppi VLARs in 5 minutes)? The next WUs would be sent to a host only after some results have been returned.

This is an unfinished idea; it may well contain some errors in the calculations, and I do not know anything about the inner workings of the servers.

Petri
5) Message boards : Number crunching : I've Built a Couple OSX CUDA Apps... (Message 1848123)
Posted 7 days ago by petri33
Post:
@jason_gee
I'll drop some new honeycombed code to you and TBar to test with.
'To honeycomb' == look at it from at least 6 angles and choose the best.
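A minimal sketch of that 'choose the best of at least six' idea, purely to illustrate the definition above; the six candidates, the score() function and all the names here are hypothetical stand-ins, not the actual honeycombed code.

// Hypothetical host-side illustration of "look at it from at least 6 angles
// and choose the best": score each candidate configuration and keep the best.
#include <cstdio>

static float score(int candidate)
{
    // Placeholder metric; real code would measure something meaningful here.
    float d = static_cast<float>(candidate - 3);
    return 1.0f / (1.0f + d * d);
}

int main()
{
    const int candidates[6] = {0, 1, 2, 3, 4, 5};
    int best = candidates[0];
    float best_score = score(best);

    for (int i = 1; i < 6; ++i) {
        float s = score(candidates[i]);
        if (s > best_score) { best_score = s; best = candidates[i]; }
    }

    std::printf("best candidate: %d (score %.3f)\n", best, best_score);
    return 0;
}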

I'll be back here late on Sunday. So, till then.
6) Message boards : Number crunching : Panic Mode On (104) Server Problems? (Message 1847899)
Posted 8 days ago by petri33
Post:
Hi,
No aliens.

No, not any. At least none to be found.

But since the servers are back up and running online, we may have a chance.
7) Message boards : Number crunching : User achievements thread......... (Message 1847625)
Posted 9 days ago by petri33
Post:

. . But I like your choice of music ... aaahh Love Over Gold.

Stephen

:)

PS Maybe we will meet up over at Einstein's Bar and Grill

.


:) AAD, ADD or something from the days before DDD. And a nice balance swing from left to right at the beginning -- but The Dynamics of the Record! And the Music. Yess.

Bar and Grill. Sounds nice - I like the drill. Beef up and sizzle the software over there. Wearing a Top Hat: yeah, with a signal, Cover me, 'cause I'm changing lanes...

And here's one from the Hotel Berkeley, from the sideband receiver:
The warm smell of the GPUs
...
"We are programmed to receive.
You can check-out any time you like,
But you can never leave! "
8) Message boards : Number crunching : User achievements thread......... (Message 1847599)
Posted 9 days ago by petri33
Post:
Hi,

Just as no one can say anything meaningful about a single day's weather, a constant drop in RAC nevertheless raises some thoughts -- is the computing climate about to change?

My thoughts are like: bye bye RAC 0.3M. What's bugging me?

a) the weekly update (100 tasks per GPU: a 4 h 10 m cache at an average guppi runtime of 150 seconds, or a 1 h 17 m cache with shorties (à 47 s))
b) the days after the weekly maintenance: there are no seeds to be grown, so we have to reap what was already sown -- but there is none, nowhere, to be found (except from the beta). That's the Arecibo Road.
c) You name your own . . .

9) Message boards : Number crunching : User achievements thread......... (Message 1847595)
Posted 9 days ago by petri33
Post:
Just broke into the top 1% of data crunched for the project according to BOINC Combined.

The GTX 970 did most of the work, but I've enjoyed the computer archeology of maintaining some old rigs pushing old PCI slot GPUs. Another GS 8400 anyone?


Congrats!

The first 1% is the most important.
At the top it is windy nowadays. I must blame climate change or something.
10) Message boards : Number crunching : User achievements thread......... (Message 1846783)
Posted 14 days ago by petri33
Post:
Hi Stephen,

I've heard that Mint is easy, and I tried it once. Then I tried Ubuntu, and I did not find that hard to set up either.

P.
11) Message boards : Number crunching : User achievements thread......... (Message 1846748)
Posted 14 days ago by petri33
Post:

Stephen

:)


I see! Now You have a dream too! :)


YES!

Stephen


Here's a picture from reality:


... and after that I wish you all good night.

p.
12) Message boards : Number crunching : OpenCL kernel/call clGetEventProfilingInfo call failed (Message 1846711)
Posted 14 days ago by petri33
Post:
My SW is totally different from the OpenCL one, and I may have some dust bunnies hiding in the cooling system of my GPUs. I just happen to get an error or two every day, grinding my machine to a near halt with close to zero productivity and resulting in errors saying "Cuda error: 'Couldn't get cuda device count'". When I get back home from work, the "nvidia-smi -l" window says ERR on one or two GPUs.

It's quite an easy way to check whether the OpenCL and CUDA device disappearances have common roots.

When the errors start again on a host with the OpenCL app, it's worth running the same command (nvidia-smi -l) and seeing what GPU state it reports.
I suppose that if it reports an error no matter which runtime, CUDA or OpenCL, is used, the issue is at a deeper level than just the runtime API.


Affirmative. A terminal window running just nvidia-smi -l reports ERR on one or two GPUs when this happens. That is on an Ubuntu NVIDIA machine.

EDIT: The machine is ground to a halt. You have to have the window open before launching BOINC. Just let it run for a few days.
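For anyone who wants to reproduce that check outside BOINC, here is a minimal standalone probe of the same kind of query that produces the 'Couldn't get cuda device count' message. It is just a sketch of mine, not part of any app; compile with nvcc and run it whenever nvidia-smi shows ERR.

// Standalone sketch: ask the CUDA runtime how many devices it can see.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("Couldn't get CUDA device count: %s\n",
                    cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices visible: %d\n", count);
    return 0;
}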
13) Message boards : Number crunching : User achievements thread......... (Message 1846709)
Posted 14 days ago by petri33
Post:

Stephen

:)


I see! Now You have a dream too! :)
14) Message boards : Number crunching : "Ghost" WU - amount "in progress" WU and real amount on my machine (Message 1846703)
Posted 14 days ago by petri33
Post:
Thanks a lot, everyone. So, I will change nothing. No reset (I already did one, but nothing changed),
no reinstall. I'll leave my computer as it is.
Thanks for all your explanations, and my apologies for my English. I do my best.
Best regards from the French part of Belgium.


I have no ability in French and I cannot speak English either. But with a little help from Google Translate I can ask/say:

Merci. De quelle partie de la Belgique le meilleur chocolat vient-il? "The part?, of the B the best C is-from-it."

That is intended to mean something like: Thank you. Which part of Belgium does the most appreciated chocolate come from?

--
15) Message boards : Number crunching : "Ghost" WU - amount "in progress" WU and real amount on my machine (Message 1846672)
Posted 14 days ago by petri33
Post:
Erasing your data folders was counterproductive, as the ghosts are lost between the servers and your PC, not sitting in limbo on your PC. Doing what you did, you either got a pile of fresh tasks delivered or got the ones that were on your computer back (which one depends on server settings, not something you can control). But the ghosts will still be there; in the worst case you can actually increase the number of ghosts (as I did a few weeks back).
As Mark says, you either remove the computer from SETI, wait a few minutes, and re-connect the computer, or just let the ghosts time out.

The statistics are correct - your computer has ~960 tasks assigned to it; these are shown as "in progress". Of those, ~660 will be ghosts - lost somewhere between the servers and your computer.


.. but nothing to worry about. The system will handle all your current work and the lost work. Every work unit will be processed: if not by you, then by the rest of us.

-- Frankie went to Hollywood and said ...
16) Message boards : Number crunching : User achievements thread......... (Message 1846660)
Posted 14 days ago by petri33
Post:
Passed to 1st donor in Croatia in SETI@home... it took only 6 GPUs to do it!
;)

Maybe we should rethink again this topic:
https://setiathome.berkeley.edu/forum_thread.php?id=77363#1681725
:D


. . I can only say that my CPUs' crunching adds its share to my numbers. But then, I am only crunching for SETI@home. Only one rig does not use the CPU, because it is too low-powered and is flat out supporting the GPUs.

. . But I can understand why some people feel that way.

Stephen

.

I believe that CPU power can help out with WCG projects on cancer, Ebola, HIV & other diseases. Just recently the HFCC project returned results for brain tumors in children & the same university started a new SCC project! So that's why my CPUs are there...
Just hoping others will follow my advice & check out WCG via the GIF link in my signature. Thx!

BTW, my 1050 Ti hasn't even started a game after 3 months... I bought it within a day of selling 3 old, obsolete GPUs, with only $100 extra to put in!
Made all the difference... I would like to upgrade more, with more chipsets... but money is an issue in Croatia, as we get a bit under €1,000 to live on (mine slightly more)...


And as soon as You can get a computer running (Ubuntu or whatever) Linux with Maxwell or Pascal GPU(s), I'll donate the latest CUDA app. Just ask.

I'm a teacher, and I use my home computer and pay the bills myself/with my wife.

I've considered solar energy/a biofuel engine as an alternative, to:
a) heat the house and generate all the electricity
b) make all this SETI stuff affordable
c) serve as a future hobby
d) expand the solar farm/oil-crop field into an energy farm at our summer place (Winnie the Pooh has xxx m²; I have 5000 m² of unbuilt land), facing the morning sun on a low downhill slope near the polar circle.

So, not a chance with a)-d), but I can have a dream. Let's c).
17) Message boards : Number crunching : OpenCL kernel/call clGetEventProfilingInfo call failed (Message 1846011)
Posted 16 days ago by petri33
Post:
Hi,

I'm running Linux and NVIDIA CUDA, but... I've experienced random errors when using drivers 376-8.yyy (cannot get GPU count or similar).

OpenCL 'may' use the same libraries as CUDA during compilation and when executing code and allocating resources.


--
p.

So far it has only been reported on Ubuntu, so it should not be a general Linux problem.


Yeah, thanks. I'm running Ubuntu too. Not the latest though.

petri@Linux1:~$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=15.10
DISTRIB_CODENAME=wily
DISTRIB_DESCRIPTION="Ubuntu 15.10"
NAME="Ubuntu"
VERSION="15.10 (Wily Werewolf)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 15.10"
VERSION_ID="15.10"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"


And yes, I know. My SW is totally different from the OpenCL one, and I may have some dust bunnies hiding in the cooling system of my GPUs. I just happen to get an error or two every day, grinding my machine to a near halt with close to zero productivity and resulting in errors saying "Cuda error: 'Couldn't get cuda device count'". When I get back home from work, the "nvidia-smi -l" window says ERR on one or two GPUs.

I can revert a version or two with the drivers, just to test. I'll try that tomorrow.
18) Message boards : Number crunching : OpenCL kernel/call clGetEventProfilingInfo call failed (Message 1845996)
Posted 16 days ago by petri33
Post:
Hi,

I'm running Linux and NVIDIA CUDA, but... I've experienced random errors when using drivers 376-8.yyy (cannot get GPU count or similar).

OpenCL 'may' use the same libraries as CUDA during compilation and when executing code and allocating resources.


--
p.
19) Message boards : Number crunching : how to redyce CPU usage on GPU tasks? libsleep wont work anymore.... (Message 1844622)
Posted 23 days ago by petri33
Post:
Hello!
For some years I have used my NVIDIA GT 630 video card with NVIDIA drivers to crunch for SETI via BOINC.

I use GPU tasks only.
When I started, I also saw high CPU usage, but I found I could use libsleep, and since then my CPU usage has been at about the 2-3% level on one core only, for SETI.

All was good until I upgraded that machine to the latest stable Slackware version - slackware64 14.2, kernel 4.4.38.
Since then, I have found my CPU again runs at 100% (one core) all the time, even though these are all GPU tasks.

It looks like libsleep does not work anymore.
Are there any workarounds? I do not want to run the CPU at high load for nothing (the GPU does all the work anyway)... electricity is not cheap.


Hi,
I do not see any NVIDIA cards listed on any of your computers... You may need to load a driver for the GT 630 first.
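As background on why a shim like libsleep lowers CPU usage at all, here is a general sketch of the usual trick. This is not libsleep itself and not the SETI app's code, and the helper name is mine: instead of letting the runtime busy-wait in a synchronize call, the host polls the GPU and sleeps between polls.

// General illustration only (not libsleep, not the SETI app): wait for work
// queued on a stream by polling an event and sleeping, so the waiting CPU
// core is not pinned at 100% while the GPU crunches.
#include <unistd.h>          // usleep
#include <cuda_runtime.h>

// Hypothetical helper: a sleepy replacement for cudaStreamSynchronize().
static cudaError_t sleepy_sync(cudaStream_t stream)
{
    cudaEvent_t done;
    cudaError_t err = cudaEventCreateWithFlags(&done, cudaEventDisableTiming);
    if (err != cudaSuccess) return err;

    cudaEventRecord(done, stream);
    while ((err = cudaEventQuery(done)) == cudaErrorNotReady)
        usleep(1000);                    // yield the CPU between polls

    cudaEventDestroy(done);
    return err;
}

The CUDA runtime also has a built-in knob in the same spirit: calling cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync) before the device is first used asks the driver to block rather than spin while waiting.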
20) Message boards : Number crunching : User achievements thread......... (Message 1844278)
Posted 25 days ago by petri33
Post:
Hi,
One machine only, one task at a time, with GBT + Arecibo tasks, v8 and CUDA: the recent average credit has just broken 250,000. Oh yeah.

