Replies: 1 comment
How to set this up? Can you please give us the exact instructions? I have two M2 Mac minis. I just want to increase inference speed for a smaller model, not the 70b! I have a Thunderbolt 4 cable.
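For a two-node setup like this, the usual approach is: connect the machines with the Thunderbolt cable, enable the Thunderbolt Bridge interface on both Macs with static IPs, start a worker on one machine, and point the root node at it. A rough sketch follows; the exact flags may differ by Distributed Llama version, and the IP addresses, port, and model paths below are placeholders, not taken from this thread:

```shell
# On both Mac minis: System Settings → Network → Thunderbolt Bridge,
# then configure IPv4 manually, e.g. 10.0.0.1 (root) and 10.0.0.2 (worker).

# On the worker Mac (10.0.0.2): start a worker process listening on a port.
./dllama worker --port 9999 --nthreads 4

# On the root Mac (10.0.0.1): run a small model and list the worker.
# Model/tokenizer paths are examples; use the files the launch script downloaded.
./dllama chat \
  --model models/llama3_1_8b_instruct_q40/dllama_model_llama3_1_8b_instruct_q40.m \
  --tokenizer models/llama3_1_8b_instruct_q40/dllama_tokenizer_llama3_1_8b_instruct_q40.t \
  --buffer-float-type q80 \
  --nthreads 4 \
  --workers 10.0.0.2:9999
```

Note that, if I recall the project's constraints correctly, the total node count must be a power of two, so one root plus one worker (2 nodes) is a valid configuration.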
Distributed Llama version: 0.11.1 (CPU only)

Configurations tested:

llama3_3_70b_instruct_q40:
- 2 x Mac Mini M4 Pro via Thunderbolt 5
- 2 x Mac Mini M4 Pro via 10G Ethernet
- 4 x Mac Mini M4 Pro via Thunderbolt 5
- 4 x Mac Mini M4 Pro via 10G Ethernet

llama3_1_8b_instruct_q40:
- 1 x Mac Mini M4 Pro
- 2 x Mac Mini M4 Pro via Thunderbolt 5
- 2 x Mac Mini M4 Pro via 10G Ethernet
- 4 x Mac Mini M4 Pro via Thunderbolt 5
- 4 x Mac Mini M4 Pro via 10G Ethernet
This performance test was made possible thanks to MacWeb.com ❤️, which offers on-demand access to Macs in the cloud.