24/7 Boinc pc, what do i need?

DVDL

Joined: 5 Jul 12
Posts: 4
Credit: 1,893,682
RAC: 0
Netherlands
Message 1870344 - Posted: 31 May 2017, 18:32:44 UTC
Last modified: 31 May 2017, 18:37:32 UTC

Hi,

I am working on a 24/7 BOINC PC. I have a few questions:

- Do I need an x16 slot when using a GPU, or are x1/x4 slots with a riser also fine (like on Ethereum mining machines)?
- If I do need an x16 slot and buy a board with two x16 slots, does it need a CPU with 32 PCIe lanes to run both at full speed?

Can someone give me some advice? I really don't know: if I get something like a Celeron board with just a few risers, won't that fail to max out the GPU?

Extra info: I will also be running the new system on Einstein@Home and MilkyWay@home; that makes no difference here, right?
Cruncher-American

Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1870353 - Posted: 31 May 2017, 19:29:13 UTC - in response to Message 1870344.  

Since the GPU apps don't do a whole lot of memory bus access (or so I was told some time ago), you don't really need more than one PCIe lane to run them.
Zalster
Volunteer tester

Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1870356 - Posted: 31 May 2017, 19:33:37 UTC - in response to Message 1870344.  

Actually, the PCIe lane count matters differently for different projects.

Here on SETI, thanks to the developers' optimizations, PCIe speed isn't a big consideration. You can use anything from x1 to x16 and lose very little speed.

That is not the case for Einstein (I can't speak for MW); there you will see a difference in speed based on the PCIe link.

In general, I believe you can get away with x8 PCIe speeds for most projects. So suppose you get a board with two PCIe slots: most boards like that run the first slot at x16 when only one GPU is installed, but it drops to x8 when the second slot also holds a GPU, so both run at x8 (i.e., they share the 16 lanes and each gets half).

You can get better boards with a dedicated x16 for slot 1 and x8/x8 (or x4) for slots 2 and 3, but price starts to become an issue.

The next question, about the CPU, has to be decided by you. Which CPU are you planning to get? Intel divides its CPUs between 16-lane and 40-lane chips. You need to match your chip to whichever configuration you want to run. Most low-end chips have 16 lanes, most high-end have 40 (AMD had some 32-lane high-end parts; I am not talking about recent chips from either vendor, as they vary).

So if you end up with a 16-lane chip but go with a PCIe x16/x8/x4 design, you can see right away that there is a bottleneck, since the chip only has 16 lanes; you will be handicapping yourself.

As for risers with high-end GPUs, I never had much success with them, but I believe some others have. They will have to help you there.

This is a starting point.
DVDL

Joined: 5 Jul 12
Posts: 4
Credit: 1,893,682
RAC: 0
Netherlands
Message 1870363 - Posted: 31 May 2017, 19:50:54 UTC - in response to Message 1870356.  


You can get better boards with a dedicated x16 for slot 1 and x8/x8 (or x4) for slots 2 and 3, but price starts to become an issue.

The next question, about the CPU, has to be decided by you. Which CPU are you planning to get? Intel divides its CPUs between 16-lane and 40-lane chips. You need to match your chip to whichever configuration you want to run. Most low-end chips have 16 lanes, most high-end have 40 (AMD had some 32-lane high-end parts; I am not talking about recent chips from either vendor, as they vary).


Thanks for everything so far.
I want to buy this: an MSI B150 PC MATE with a G4560 CPU and 4 GB of RAM. This board has two PCIe x16 slots at a good price here. So because both motherboard and CPU only have 16 lanes, I will run both cards at x8, right? First I will buy one GTX 1060 3 GB and might add a second later. I think this setup can give both cards a good go.

If I want to go with a one-GPU setup, I thought of this: an ASRock QC5000M with 2 GB of RAM. This board only uses 15 W, but its x16 slot only runs at x4.

You say SETI has had some good development work, but how can I check which projects really need all the bandwidth?
Zalster
Volunteer tester

Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1870368 - Posted: 31 May 2017, 20:02:45 UTC - in response to Message 1870363.  
Last modified: 31 May 2017, 20:03:21 UTC

Thanks for everything so far.
I want to buy this: an MSI B150 PC MATE with a G4560 CPU and 4 GB of RAM. This board has two PCIe x16 slots at a good price here. So because both motherboard and CPU only have 16 lanes, I will run both cards at x8, right? First I will buy one GTX 1060 3 GB and might add a second later. I think this setup can give both cards a good go.

If I want to go with a one-GPU setup, I thought of this: an ASRock QC5000M with 2 GB of RAM. This board only uses 15 W, but its x16 slot only runs at x4.

You say SETI has had some good development work, but how can I check which projects really need all the bandwidth?


As far as checking with projects goes, I can't tell you; my experience is with SETI and Einstein. I think Keith does MW, so he might shed some light on what kind of bandwidth it needs for the GPU.

That board is good, now that I know which CPU goes with it. If this is going to be a dedicated cruncher, you should know that the OpenCL apps, both here and at Einstein, require one CPU core for each work unit. You can get around that here with command lines, but Einstein doesn't have such a system. So if you get one 1060 and run two work units on it, it will use two of your four cores, and the same goes at Einstein. If, as you say, you add a second GPU in the future, then all four of your cores will be in use and the system may become unresponsive. Something to keep in mind. But if you only run one work unit at a time, there will be no issue even with two GPUs.
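
For reference, the usual way to run two work units per GPU is an app_config.xml in the project's directory; a minimal sketch, assuming the stock setiathome_v8 application name (check the app names your client actually reports before copying this):

<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task = two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve a full CPU core per GPU task, per the advice above -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Save it as app_config.xml in the SETI@home project folder, then re-read config files (or restart BOINC) for it to take effect.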
DVDL

Joined: 5 Jul 12
Posts: 4
Credit: 1,893,682
RAC: 0
Netherlands
Message 1870369 - Posted: 31 May 2017, 20:10:55 UTC

Hi,

Thanks again. Given that I run several projects, and that as far as Google gets me most projects really want an x16 slot, with your info I have now made my decision and will go with something else:

An ASRock H110M-DGS R3.0 and a G3900 CPU. This board is cheap and can run the card at a full x16 no matter the project. Since the system will be stored away, I will go with a config file that runs two tasks at once and let it run.
Now I only have to choose whether I still want a GTX 1060 or another card. Some projects don't support NVIDIA, some don't support AMD... but then it won't be two cards; a 40-lane CPU and board would be too expensive. Brain-killers ;)
rob smith
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22189
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1870380 - Posted: 31 May 2017, 20:44:10 UTC

The best place to get information on a project is the project's own forum.
Given you are running a low-end, budget system, there is probably no need to worry about the lane count, so just about any socket 1151/H110 chipset board will do.
As for GPUs - it can be a BIG headache trying to run both AMD & nVidia on one PC, and AMD also have a reputation for being "fussy" about their drivers (not that nVidia are perfect on that front). I would go for the GTX 1050 Ti, as they appear to be a good price/performance balance and have a low power demand - for a 24/7 machine don't go for wild overclocking - even a stock GTX 1050 will do many times the work of the CPU!

For SETI you will almost certainly find that the bottleneck is the CPU - a two-core/two-thread CPU will struggle to "feed" two GPU tasks; there is a chunk of pre-processing that has to be done on the CPU, and one of the GPU applications has a fairly high, continuous CPU demand.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Keith Myers
Volunteer tester

Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1870382 - Posted: 31 May 2017, 20:45:59 UTC

Hi All,

@Zalster, you implied in another post that Einstein does in fact see a difference in task completion times when the card is in a slot slower than PCIe x16. Can you put some percentage numbers to that difference?

All my projects - SETI, Einstein and MilkyWay - use OpenCL apps, so each GPU task requires a full CPU core to support it. I have not seen any impact on task completion times across my 1070s for any project when comparing the FX systems, which can run each 1070 at x16, with the Ryzen system, which runs each 1070 at x8.

I see that MilkyWay only uses about 4% CPU time when running a task, and that each MW task bursts 2 MB to 6 MB of data across the PCIe bus to the CPU about every minute. That is the highest I/O delta I see of all the projects' tasks I run; none of the others comes close to that amount of data crossing the bus. But I don't notice a difference in MW task completion times between the cards running at x16 and those at x8. There might be a more noticeable difference at x4 or x1, though.

@Zalster, do you think the reason Einstein tasks see a difference between, for example, x16 and x8 is that each Einstein task finishes the last 10% of its run time on the CPU and has to push all the data it accumulated on the GPU across the bus to the CPU?
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
Zalster
Volunteer tester

Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1870393 - Posted: 31 May 2017, 21:51:03 UTC - in response to Message 1870382.  

Hi All,

@Zalster, you implied in another post that Einstein does in fact see a difference in task completion times when the card is in a slot slower than PCIe x16. Can you put some percentage numbers to that difference?

@Zalster, do you think the reason Einstein tasks see a difference between, for example, x16 and x8 is that each Einstein task finishes the last 10% of its run time on the CPU and has to push all the data it accumulated on the GPU across the bus to the CPU?


With the current OpenCL tasks at Einstein we don't see that as much, but with the previous data run of the CUDA work (BRP4G vs. BRP6) there was a very noticeable difference between PCIe slots. So much so that I was actually tailoring work based upon which GPU was in which slot, via cc_config.xml and GPU-exclusion entries.

Since we don't know how future Einstein applications will work (i.e., CUDA vs. OpenCL), to say flatly that PCIe won't make a difference might be the wrong statement.
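
For anyone who wants to do the same, the cc_config.xml mechanism for this is the <exclude_gpu> option; a minimal sketch, where the device number is a placeholder for whichever card sits in your slower slot:

<cc_config>
  <options>
    <!-- keep Einstein@Home work off device 1 (the card in the slow slot) -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>

cc_config.xml lives in the BOINC data directory; re-read config files or restart the client after editing it.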
HAL9000
Volunteer tester

Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1870423 - Posted: 31 May 2017, 23:55:05 UTC

Really, any computer you want to use will work for SETI@home. If you want it to run at 100%, 24/7, the only thing I feel really needs special attention is making the cooling better than stock.
For my Celeron J1900 system I added an 80mm fan and a GTX 750 Ti on a PCIe x16-to-x1 adapter. http://i.imgur.com/Hey7HV6.jpg
The CPU in it isn't that great for SETI@home tasks, so I run a few other projects on the CPU and SETI@home on the GPU.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
