Posts by J3P-0

1) Message boards : Number crunching : Intel Phi PCIE Cards - 56+ cores coprocessor+ CUDA - Day dreaming (Message 2042584)
Posted 2 Apr 2020 by Profile J3P-0
Post:
Thanks for the reply. I was thinking more about the one-CPU-core-per-GPU requirement, and then having 20+ GPUs on one system, like the cheap P106-90.

I was just daydreaming and curious. :)

Thanks
JP
2) Message boards : Number crunching : Intel Phi PCIE Cards - 56+ cores coprocessor+ CUDA - Day dreaming (Message 2042546)
Posted 2 Apr 2020 by Profile J3P-0
Post:
A little late to the party, but I just saw these cards are getting cheap. I understand graphics cards are the best option for running SETI, and that you need one CPU core per card. My question is: could you use an effectively unlimited number of graphics cards in conjunction with these Intel Phi coprocessors? (Power requirements aside.)

Just curious,
3) Message boards : Number crunching : New CUDA 10.2 Linux App Available (Message 2037008)
Posted 9 Mar 2020 by Profile J3P-0
Post:
Thanks for the app update, T-Bar, and to everyone who has helped me in the past. I'm way late; I let my best workstation sit idle hoping to get back to fixing the app (it stopped working a while ago, though my other workstations have kept running). I just fixed it with the new app and updated drivers/CUDA 10.2, for one last run until the end of the month.

Wish everyone well
James
4) Message boards : Cafe SETI : 20th Anniversary T-shirts - More in stock (Message 1987789)
Posted 29 Mar 2019 by Profile J3P-0
Post:
1 white 2XL
1 black 2XL

Do you want me to PM you or do you have a link to purchase?

Thanks
JP

Edit: Is shipping to the USA OK? Sorry, I just noticed where you are located. Also, just curious whether you have pics of the design?
5) Message boards : Number crunching : BOINC Client is disconnected (Message 1983446)
Posted 4 Mar 2019 by Profile J3P-0
Post:
I too started having issues last week and only just now started looking into it. When launching BOINC from the terminal I get the error shown below. I am running the special app; if you're running the standard flavor you can remove and reinstall:
sudo apt remove boinc-client boinc-manager
sudo apt install boinc-client boinc-manager

My launch from the terminal showed this error:
Failed to load module "canberra-gtk-module" (I then reinstalled it)

I assume some libraries were removed at some point by other software I installed on this system, because I also had to reinstall the following:
sudo apt-get install libcurl3
sudo apt-get install ocl-icd-libopencl1
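Before reinstalling blindly, a quick way to check whether the loader can even see the libraries mentioned above (just a rough sketch of my own, and only a presence check for libcurl and the OpenCL loader, not the GTK module or the specific libcurl3 version):

# Rough presence check for the libraries mentioned above; it won't tell you
# which libcurl version is installed, only whether the loader can find one.
from ctypes.util import find_library

for lib in ("curl", "OpenCL"):
    found = find_library(lib)
    print(f"lib{lib}: {found if found else 'not found, try the reinstall commands above'}")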
6) Message boards : News : Storage machine crash.... (Message 1979582)
Posted 9 Feb 2019 by Profile J3P-0
Post:
Thanks for the update. As of 11:25 AM CST I have a bunch of tasks waiting to report and nothing downloaded. Should I continue to wait, abort, or will it pick back up when storage is back online?

EDIT: It seems a reboot fixed my issue.
7) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979348)
Posted 8 Feb 2019 by Profile J3P-0
Post:
Short answer, yes.

But you need to trick the project servers at Berkeley, not the app. Also, there is a maximum GPU count of 64 due to memory allocation issues.

You have to edit the BOINC source code and compile a custom version of the BOINC client to do it.


Given all the recent issues with getting tasks and my main machines running dry, I'm willing to give it a try. Would you mind pointing me to, or providing, a getting-started guide? :)

Roger



As of this AM I'm stuck with no tasks and status = downloading as well. I would love to know too.
8) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979160)
Posted 7 Feb 2019 by Profile J3P-0
Post:
Short answer, yes.

But you need to trick the project servers at Berkeley, not the app. Also, there is a maximum GPU count of 64 due to memory allocation issues.

You have to edit the BOINC source code and compile a custom version of the BOINC client to do it.


Oh, I think until I need that much I'll live with 100 WUs, lol.
9) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979154)
Posted 7 Feb 2019 by Profile J3P-0
Post:
About the 690.

A few years ago I used to run a fleet (about 8 hosts), several of them with 2, 3, or even 4 690s per host.
At the time they were some of the top SETI crunchers, but things have changed since then.
The Linux special sauce builds change everything.

If you want to squeeze all you can from your hosts, this is what you could do:

Move the 690s to your Windows hosts, where they can still run the OpenCL builds.
Then be sure to leave one CPU core free for each GPU (2 per 690).
Search for the optimized parameters for that GPU (I don't have them anymore), or ask Mike for some help.
If you can't find them, PM me and I will try to dig through my old messages for what I used back then.

On your Linux boxes, buy the best GPU you can afford with a minimum compute capability of 5.0.
If you can, something like a 1060 or better is the best choice.
There are good bargains on the top 10x0-series cards on eBay. Look especially at the 1070 (Ti or not); they have one of the best cost/power/output ratios.
Obviously the RTX 20x0s are superior crunchers, but their cost is superior too.
If your host is powering a 690 now (which is power hungry), I'm sure it could power any top GPU on the market, so don't worry about that.
Some might suggest the 750 Ti, but those are relatively old GPUs now; some of the newer builds may not work on them in the coming years.
Install the Linux special sauce builds and enjoy their amazing crunching speed.

My 0.02


Thanks. Unfortunately I am going to give up on the 690s and switch to something others recommended, like the 1060s, since they support the special app and perform way better. I have really liked the concept of dual-GPU cards ever since I saw an old ATI quad-GPU demo card in the early 2000s, even before SLI and Crossfire came out.
10) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979148)
Posted 7 Feb 2019 by Profile J3P-0
Post:

Actually the time to run out of work is approximately the same with one or 7 GPUs, since each GPU adds 100 more WUs.
So a 1-GPU host can download 100 WUs and a 7-GPU host 7x100. A 1-GPU host crunches 1 WU at a time, a 7-GPU host 7.
So they empty the cache at approximately the same rate.
The main problem is that the Linux special sauce builds are so fast and optimized; they can crunch a WU on a 1080 Ti in less than 60 seconds.
So 100 WUs last around 100 minutes, and the outages normally take 4-6 hours; just do the math.
Just to clarify.





Yup, all things being equal (using the same apps), more GPUs won't drain the cache faster, since the cache gets proportionally larger.



So if I have 7 GPUs (7x100 WU), but I am able to trick the client into thinking I have 65 GPUs (65x100 WU) while really only having 7, I can download 65x100 instead of 7x100 and thus have a bigger cache of WUs to work from ... correct?

Please tell me how to enable this magic sorcery :)
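Just to check my own understanding of the math quoted earlier, here is a rough sketch using only the figures from this thread (100 WUs per reported GPU, ~60 seconds per WU on a 1080 Ti with the special app, and a 4-6 hour outage); the 65-GPU case is the hypothetical spoofed count:

# Cache-drain math from the quoted post, with figures taken from this thread.
WU_PER_GPU = 100        # server-side limit per reported GPU
SECONDS_PER_WU = 60     # ~1 minute per WU on a 1080 Ti with the special app

def hours_until_dry(real_gpus, reported_gpus):
    cache = WU_PER_GPU * reported_gpus      # WUs the client is allowed to hold
    rate = real_gpus / SECONDS_PER_WU       # WUs finished per second
    return cache / rate / 3600              # hours until the cache runs dry

print(hours_until_dry(7, 7))    # ~1.7 hours: runs dry during a 4-6 hour outage
print(hours_until_dry(7, 65))   # ~15.5 hours: easily rides out the outage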
11) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979032)
Posted 6 Feb 2019 by Profile J3P-0
Post:
Weird, under your account it is stating {63} Nvidia for coprocessors.
Some people have worked out how to get around the 100 WU server side limits.
oh, meaning that they run more than one WU per GPU? Where would one go to figure this out :)
No, that is not the reason; the reason is to make sure that enough work is on hand to get through server outages without running out of GPU work. ;-)

Cheers.


Ah, gotcha. On Tuesdays I run out of work on my 1080; I can't imagine how fast a host with 6 or 7 1080s would run out of work. So tricking the client into reporting more GPUs than you really have allows you to download more WUs to run?
12) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979027)
Posted 6 Feb 2019 by Profile J3P-0
Post:
Weird, under your account it is stating {63} Nvidia for coprocessors.

Some people have worked out how to get around the 100 WU server side limits.


oh, meaning that they run more than one WU per GPU? Where would one go to figure this out :)
13) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979020)
Posted 6 Feb 2019 by Profile J3P-0
Post:

I gotta ask, Ian, are you really running 63 GPUs?

[63] NVIDIA GeForce GTX 1080 Ti (4095MB) driver: 410.66


No. That system has 7 GPUs (6x 1080ti, 1x 1080)


Weird, under your account it is stating {63} Nvidia for coprocessors.
14) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979014)
Posted 6 Feb 2019 by Profile J3P-0
Post:
I looked at the 1060s but the CUDA cores were only 1280, and the GTX 690 is a dual GPU with 3072 CUDA cores and a 512-bit memory bus,

Some things to keep in mind for future purchases.

It's not just the number of cores, but the type of cores. There have been a lot of architectural improvements since the GTX 600 series. And even with a wider memory bus, dual GPUs on a single card tend to be at a disadvantage in memory bandwidth compared to a single GPU of the same type, even one with a narrower memory bus (the memory bus on the GTX 690 is really 256-bit: one 256-bit bus for each GPU = 512-bit in marketing speak). And once again, there have been considerable improvements over the years since the GTX 600 series came out.

The GTX 690 is 2*GTX 680s, but with GTX 670 clock speeds.

Looking at Shaggie's graphs, the GTX 1060 3GB puts out 400+ credits per hour (stock), at 120 W.
A GTX 680 does around 300, so the GTX 690 would be around 600, at 300 W.
This is with Windows on the SoG application. Running Linux and the special application, the GTX 1060 3GB would produce 3-4 times as much credit, for 120 W or less.
(The GTX 1050 Ti puts out the same amount of work as a GTX 680, but for less power. For the same power usage as one GTX 690, you could run four GTX 1050 Tis and put out more than double the work.)

Shaggie's graphs show the work produced for the power used; the GTX 680 is one of the poorest performers (the GTX 690 would rate even lower). The GTX 1060 3GB is #10 in the top ten for efficiency (although it will eventually get bumped lower now that the RTX 2060 has been released).
A more recent card might cost a lot more upfront to buy, but it will cost a lot, lot less to run.


I noticed the GTX 690 isn't even on the chart, nor is the Titan Z. Is that because they count WUs on each GPU separately instead of combined?
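Putting the quoted numbers side by side makes the efficiency gap concrete. A rough sketch using only the figures above (stock SoG credits per hour from Shaggie's charts; the 3x multiplier is the low end of the quoted special-app estimate, not a measurement of mine):

# Credits per watt-hour from the figures quoted above.
cards = {
    "GTX 1060 3GB (SoG, Windows)":       (400, 120),   # (credits/hour, watts)
    "GTX 690 (SoG, Windows)":            (600, 300),
    "GTX 1060 3GB (special app, Linux)": (400 * 3, 120),
}

for name, (credits_per_hour, watts) in cards.items():
    print(f"{name}: {credits_per_hour / watts:.1f} credits per watt-hour")
# roughly 3.3, 2.0, and 10.0 credits per watt-hour respectively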
15) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979011)
Posted 6 Feb 2019 by Profile J3P-0
Post:
For $120, you could have bought a GTX 1060 3GB, which would use ~100 W on SETI, and still be faster than the 690 since it can use the latest special app.


And those prices or lower are on eBay. :)

I am also seeing some pretty good prices on GTX 1070 Tis.

Tom


I looked on eBay but had it in my mind that I wanted dual-GPU cards, lol. Maybe that was a bad plan, given how old those cards are now.
16) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979010)
Posted 6 Feb 2019 by Profile J3P-0
Post:
I looked at the 1060s but the CUDA cores were only 1280, and the GTX 690 is a dual GPU with 3072 CUDA cores and a 512-bit memory bus,

Some things to keep in mind for future purchases.

It's not just the number of cores, but the type of cores. There have been a lot of architectural improvements since the GTX 600 series. And even with a wider memory bus, dual GPUs on a single card tend to be at a disadvantage in memory bandwidth compared to a single GPU of the same type, even one with a narrower memory bus (the memory bus on the GTX 690 is really 256-bit: one 256-bit bus for each GPU = 512-bit in marketing speak). And once again, there have been considerable improvements over the years since the GTX 600 series came out.

The GTX 690 is 2*GTX 680s, but with GTX 670 clock speeds.

Looking at Shaggie's graphs, the GTX 1060 3GB puts out 400+ credits per hour (stock), at 120 W.
A GTX 680 does around 300, so the GTX 690 would be around 600, at 300 W.
This is with Windows on the SoG application. Running Linux and the special application, the GTX 1060 3GB would produce 3-4 times as much credit, for 120 W or less.
(The GTX 1050 Ti puts out the same amount of work as a GTX 680, but for less power. For the same power usage as one GTX 690, you could run four GTX 1050 Tis and put out more than double the work.)

Shaggie's graphs show the work produced for the power used; the GTX 680 is one of the poorest performers (the GTX 690 would rate even lower). The GTX 1060 3GB is #10 in the top ten for efficiency (although it will eventually get bumped lower now that the RTX 2060 has been released).
A more recent card might cost a lot more upfront to buy, but it will cost a lot, lot less to run.


Thanks. At first glance the 690 looked promising to me: the 256-bit bus per GPU was wider than the 192-bit bus on the 1060, so on the surface I thought 256-bit with 3072 CUDA cores would do a lot better. I didn't realize the older architecture would be that much of a hindrance. I will have to reevaluate and devise a new plan :) HA!
17) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979004)
Posted 6 Feb 2019 by Profile J3P-0
Post:
I looked at the 1060s but the CUDA cores were only 1280, and the GTX 690 is a dual GPU with 3072 CUDA cores and a 512-bit memory bus. I was unaware of the special app's compute capability requirement and how it affects the older card.

CUDA Cores: 1280
Graphics Clock (MHz): 1506
Processor Clock (MHz): 1708
Graphics Performance: high-11048


You can't compare raw CUDA core counts:

1. across different architectures
2. when using different apps

The Pascal architecture of the 10-series cards is two generations newer than Kepler, and is leaps and bounds more power efficient.

The CUDA special app by petri is much more optimized than the OpenCL apps that you'd be limited to with the 690; the special app is about 3x faster. A single 1060 can yield about 40-50k RAC.

CUDA core counts aren't the whole story.


I gotta ask, Ian, are you really running 63 GPUs?

[63] NVIDIA GeForce GTX 1080 Ti (4095MB) driver: 410.66
18) Message boards : Number crunching : Special App and Kepler Architecture (Message 1979003)
Posted 6 Feb 2019 by Profile J3P-0
Post:
I don't know where you got 115 W for a GTX 690, as Nvidia rates it at 300 W max draw.

But then compare this rig with a 690 to this rig with 2x 1060's with both running Win7.

One thing I did notice is that there aren't as many GTX 690s around these days as there used to be.

[edit] Nvidia rates the 1060s at 120 W max draw, but mine rarely pull above 80 W while crunching.

Cheers.


I just did a quick search for the 690's power consumption and literally clicked the first link I saw. I was not concerned with power use, as I was only planning on running a couple of these 690 cards at most.



From the link he posted, which showed the system power draw (not GPU only), but that was also at idle, i.e., not doing anything but displaying a desktop.

A little further down they have a graph of the system under load, showing ~400 W.


Thanks, Ian. I do realize now that at max power draw I could run more 1060s per power supply than the 690s I was going to use. I'll certainly go another route now.
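For example, with the TDP figures mentioned in this thread (300 W per 690, 120 W per 1060), a rough budgeting sketch; the 1000 W GPU budget is purely illustrative, not my actual power supply:

# How many cards fit in the same GPU power budget, using the thread's TDP figures.
BUDGET_W = 1000   # illustrative GPU power budget, not my real PSU
for card, tdp in (("GTX 690", 300), ("GTX 1060", 120)):
    n = BUDGET_W // tdp
    print(f"{card}: {n} cards (~{n * tdp} W) within a {BUDGET_W} W budget")
# GTX 690: 3 cards (~900 W); GTX 1060: 8 cards (~960 W)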
19) Message boards : Number crunching : Special App and Kepler Architecture (Message 1978995)
Posted 6 Feb 2019 by Profile J3P-0
Post:
Thanks again, Keith, for updating me on the compute capability 3.0 of the older cards; I'll look for newer cards that are 5.0 compatible. I really like the dual-GPU cards, but the Titan Z is still pricey.

JP


Here also. https://en.wikipedia.org/wiki/CUDA#GPUs_supported


Thanks Viper, that will help with future purchases.
20) Message boards : Number crunching : Special App and Kepler Architecture (Message 1978994)
Posted 6 Feb 2019 by Profile J3P-0
Post:
I looked at the 1060s but the CUDA cores were only 1280, and the GTX 690 is a dual GPU with 3072 CUDA cores and a 512-bit memory bus. I was unaware of the special app's compute capability requirement and how it affects the older card.

CUDA Cores: 1280
Graphics Clock (MHz): 1506
Processor Clock (MHz): 1708
Graphics Performance: high-11048

