Max # of PCI Express Lanes - 16/44 ; and RTX 2080 Ti's in Nvlink

George (Project Donor, Volunteer tester)
Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 1993309 - Posted: 10 May 2019, 2:14:13 UTC

Evening, Gents and Ladies,

I have two questions:

1) When running SETI@Home, is there any difference between using a CPU with 16 PCI Express lanes and one with 44? More importantly, will a 44-lane CPU run SETI tasks faster than a 16-lane CPU?

2) If I were to use 2x EVGA RTX 2080 Ti graphics cards in SLI/NVLink mode, would they perform as if they had double the CUDA cores (4352 x 2 = 8704)? And... once again... would that let the cards complete SETI tasks faster than a single card?

Why am I asking these two questions? I am going to build myself a new high-end computer (not the max, mind you), and I'm considering the Intel i9-9900K -vs- the i9-9900X for the CPU (I know, I know, but I just can't get into an AMD Threadripper). And the EVGA RTX 2080 Ti's, while overpriced, may just be the ticket.

The i9-9900K has the higher clock speed (5.0 GHz max turbo), a Thermal Solution Specification of 130 W (PCG 2015D), a T-junction limit of 100 degrees C, and a TDP of 95 W. It also has a max memory bandwidth of 41.6 GB/s, supports only 16 PCI Express lanes, and is an 8-core/16-thread CPU.

The i9-9900X has a lower max turbo (4.4 GHz), a Thermal Solution Specification of ???W (PCG 2017X), a T-junction limit of only 92 degrees C, but a TDP of 165 W. Its max memory bandwidth is ...????..., though it does support 44 PCI Express lanes and is a 10-core/20-thread CPU.

The above specs were from the Intel website:

https://www.intel.com/content/www/us/en/products/processors/core/i9-processors/i9-9900k.html

https://www.intel.com/content/www/us/en/products/processors/core/x-series/i9-9900x.html

I did confirm with both Nvidia and EVGA that SLI has been changed/upgraded to NVLink for the RTX graphics cards, and they both say it improves performance in the RTX series cards.

So... Any insight from the (IMO) pros out there would be very helpful in making my decision.

What my new PC will (hopefully) be:

Cooler Master Mastercase H500M
EVGA Dark Z370 (or X299) MB
i9-9900(K or X) CPU
CPU Cooler TBA (air or water)
2x EVGA 2080-Ti GPUs (11G-P4-2281-KR)
64GB of Corsair Vengeance LPX DDR4 3200 DRAM
1x 970 Evo Plus 1TB NVME SSD
2x 860 Evo 2TB SATA-III SSDs
EVGA 1000w G3 80+ Gold PSU
George

ID: 1993309
Keith Myers (Special Project $250 donor, Volunteer tester)
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1993310 - Posted: 10 May 2019, 2:19:25 UTC - in response to Message 1993309.  

Neither BOINC nor Seti does anything with Crossfire, SLI or NVLink; the software isn't written for it. Seti needs very little PCIe bandwidth and runs fine on as little as a PCIe x1 link. Other projects do benefit from higher bandwidth, but Seti is not one of them. So any reasonable CPU has plenty of PCIe lanes to run two cards. If you want to run 4 or more cards at x16 bandwidth you will need more PCIe lanes from the CPU, but it is not necessary, and the calculations on the GPUs won't be sped up in any significant manner.
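
If you want to check what link your cards actually negotiate once everything is installed, here is a minimal sketch in Python. It simply shells out to nvidia-smi, so it assumes the NVIDIA driver is installed and nvidia-smi is on the PATH; note the reported generation can drop while a card is idle, so check it while a task is running.

    # Minimal sketch: print each GPU's currently negotiated PCIe link.
    # Assumes the NVIDIA driver is installed and nvidia-smi is on the PATH.
    import subprocess

    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.strip().splitlines():
        name, gen, width = (field.strip() for field in line.split(","))
        print(f"{name}: PCIe gen {gen}, x{width}")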
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1993310
George (Project Donor, Volunteer tester)
Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 1993311 - Posted: 10 May 2019, 2:47:48 UTC

Thank you Keith for responding so fast. It gives me something to think about.

As for the other projects... I have also been using Milkyway and Einstein besides SETI. How about them?
George

ID: 1993311
Keith Myers (Special Project $250 donor, Volunteer tester)
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1993319 - Posted: 10 May 2019, 3:46:01 UTC - in response to Message 1993311.  
Last modified: 10 May 2019, 3:48:41 UTC

Einstein is one of the projects that does respond to higher bandwidth. If you search the Einstein forums, you will find several discussions about the minimum number of PCIe lanes below which Einstein production starts to fall off. I believe the consensus was x8. GPUGrid is another project I am familiar with that also responds to higher PCIe lane counts. I believe x4 is the level at which both projects start showing dramatic increases in task crunching times. Einstein does a lot of cpu<>gpu memory transfer at 89% completion of a task. GPUGrid uses so much memory in general, and such large datasets, that it does a lot of cpu<>gpu data transfers throughout the entire calculation time of the task. For those projects you should strive to run your GPUs at as high a PCIe lane width as possible.

I don't see any need for high PCIe lane counts at Milkyway. The amount of data pushed onto the GPU is minimal and never needs to cross back to the CPU until the task is finished. So MW is like Seti in being happy with an x4 or x1 lane width.
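
To put rough numbers on why lane width matters for those projects but not for Seti or MW, here is a back-of-the-envelope sketch. The ~985 MB/s per lane is the theoretical PCIe 3.0 figure, and the payload sizes are made-up illustrations, not measurements from any project.

    # Back-of-the-envelope sketch: time to move a payload between CPU and GPU
    # at different PCIe 3.0 link widths.  ~985 MB/s per lane is the theoretical
    # figure (8 GT/s with 128b/130b encoding); real throughput is a bit lower,
    # and the payload sizes below are purely illustrative.
    PER_LANE_MB_S = 985

    payloads_mb = {
        "one-shot upload per task (Seti/MW-like)": 100,
        "repeated transfers over a task (Einstein/GPUGrid-like)": 5000,
    }

    for label, size_mb in payloads_mb.items():
        print(label)
        for lanes in (1, 4, 8, 16):
            seconds = size_mb / (PER_LANE_MB_S * lanes)
            print(f"  x{lanes:<2}: {seconds:6.2f} s for {size_mb} MB")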
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1993319
George (Project Donor, Volunteer tester)
Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 1993320 - Posted: 10 May 2019, 4:06:12 UTC

Thanks again, Keith. Your explanation helps me a great deal. I will think on it once more and make a decision soon. If my new PC isn't going to be strangled by SETI workloads and the like, maybe I'll take up gaming. :^)
George

ID: 1993320
Jord (Volunteer tester)
Joined: 9 Jun 99 · Posts: 15184 · Credit: 4,362,181 · RAC: 3 · Netherlands
Message 1993332 - Posted: 10 May 2019, 13:44:51 UTC

Keep in mind that the 970 Evo Plus 1TB NVMe SSD will also use PCIe lanes, possibly as many as four.
Using two high-end GPUs will run them at x8 apiece, or probably lower, because the NVMe (non-volatile memory express) SSD will use some lanes as well.
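
Just to make the bookkeeping concrete, here is a toy lane-budget calculation in Python. The requested widths per device are assumptions for illustration only; actual routing depends on the motherboard, and the NVMe drive often hangs off the chipset rather than the CPU.

    # Toy lane-budget sketch for the two CPU options.  The requested widths
    # are illustrative assumptions; actual lane routing depends on the board,
    # and the NVMe drive may use chipset lanes instead of CPU lanes.
    cpu_options = {"i9-9900K": 16, "i9-9900X": 44}
    devices = {"GPU 1": 16, "GPU 2": 16, "NVMe SSD": 4}   # desired lanes each

    for cpu, available in cpu_options.items():
        requested = sum(devices.values())
        if requested <= available:
            verdict = "fits at full width"
        else:
            verdict = "over budget, so devices negotiate fewer lanes (e.g. GPUs drop to x8)"
        print(f"{cpu}: {requested} lanes requested, {available} available -> {verdict}")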

Want to see how much of an impact PCIe x16 vs x8 has on SLI in games? See this Linus Tech Tips video.
ID: 1993332
