NVLink Bridge across GPUs from two physical workstations? Possible? #1072
Replies: 4 comments 3 replies
-
These are only for use within the same device.
-
In the early days of using …
-
Hi there! I've got 16 x RTX 3090 and two kinds of CPUs -- an AMD Threadripper PRO 3995WX and a Xeon QYFS.
-
I think the easiest solution is to use bifurcation for the rest of the PCIe slots (three in my case), so I can add another four GPUs at PCIe 4.0 x8 speed. The question is where to put the NVLinks. With four NVLinks (2-slot bridges) there are three possibilities:
x16_gpu <-> x16_gpu
x16_gpu <-> x16_gpu
x16_gpu <-> x8_gpu
I was obviously thinking of the latter one.
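One way to sanity-check which pairing you actually ended up with is to read the connectivity matrix that `nvidia-smi topo -m` prints: bridged pairs show an `NV#` entry, everything else shows `SYS`/`PHB`/`PIX`. Below is a minimal sketch that parses such a matrix; the 4-GPU sample output is hypothetical, and the real format can vary slightly across driver versions.

```python
# Sketch: find NVLink-connected GPU pairs by parsing the connectivity
# matrix from `nvidia-smi topo -m`. Illustration only -- the SAMPLE
# matrix below is hypothetical, not captured from a real machine.

def nvlink_pairs(topo_text):
    """Return sorted (gpu, gpu) pairs whose matrix entry starts with 'NV'."""
    lines = topo_text.strip().splitlines()
    header = lines[0].split()
    gpu_cols = [i for i, name in enumerate(header) if name.startswith("GPU")]
    pairs = set()
    for line in lines[1:]:
        cells = line.split()
        if not cells or not cells[0].startswith("GPU"):
            continue
        for i in gpu_cols:
            # cells[0] is the row label, so matrix entries start at cells[1]
            if cells[i + 1].startswith("NV"):
                pairs.add(tuple(sorted((cells[0], header[i]))))
    return sorted(pairs)

# Hypothetical 4-GPU excerpt: GPU0<->GPU1 and GPU2<->GPU3 bridged,
# all other traffic crossing the PCIe/system fabric (SYS).
SAMPLE = (
    "\tGPU0\tGPU1\tGPU2\tGPU3\tCPU Affinity\n"
    "GPU0\t X \tNV4\tSYS\tSYS\t0-63\n"
    "GPU1\tNV4\t X \tSYS\tSYS\t0-63\n"
    "GPU2\tSYS\tSYS\t X \tNV4\t0-63\n"
    "GPU3\tSYS\tSYS\tNV4\t X \t0-63\n"
)

print(nvlink_pairs(SAMPLE))  # [('GPU0', 'GPU1'), ('GPU2', 'GPU3')]
```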
-
Has anyone tried this? I was thinking of connecting a two-slot NVLink bridge to RTX 3090s in two different machines, to eliminate any need for InfiniBand and RDMA.
I'm not sure whether that would work, but if it does, it would make it easy to connect two workstations with 8 GPUs each, for a total of 16 GPUs and 384 GB of VRAM.
If the P2P driver from @aikitoria worked as well, that would be a perfect solution for local inference:
16 GPUs and 8 NVLink bridges spread across two workstations.
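For the motivation behind skipping InfiniBand, some back-of-the-envelope arithmetic helps. The bandwidth figures below are assumptions taken from commonly cited specs (an RTX 3090 NVLink bridge at roughly 56.25 GB/s per direction, PCIe 4.0 x16 at roughly 31.5 GB/s per direction), not measurements; note also that, as the first reply says, a physical bridge only works within one chassis, so cross-machine traffic would still need some network fabric.

```python
# Back-of-the-envelope numbers for the proposed 2 x 8-GPU setup.
# Bandwidth constants are assumed spec values, not measured.

GPUS = 16
VRAM_PER_GPU_GB = 24          # RTX 3090

NVLINK_GBPS = 56.25           # per direction, per bridge (assumed spec)
PCIE4_X16_GBPS = 31.5         # per direction (assumed usable rate)

total_vram = GPUS * VRAM_PER_GPU_GB
print(total_vram)             # 384 GB, matching the post

# How much faster a bridged pair talks than over PCIe 4.0 x16:
print(round(NVLINK_GBPS / PCIE4_X16_GBPS, 2))  # 1.79
```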