Commit e20cecb

Author: Sathwik Yanamaddi (committed)
Commit message: Fixed gpus_per_node access
1 parent c55bb50 commit e20cecb

File tree

1 file changed: +2 −2 lines changed


axonn/communication.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -95,8 +95,8 @@ def __init__(
             gpus_per_node if gpus_per_node is not None else torch.cuda.device_count()
         )

-        if config.device == "cuda" and gpus_per_node:
-            self.local_rank = self.world_rank % gpus_per_node
+        if config.device == "cuda":
+            self.local_rank = self.world_rank % self.gpus_per_node
             torch.cuda.set_device(self.local_rank)
         self.intra_layer_parallel_rank = self.world_rank % G_intra
         self.intra_layer_column_parallel_rank = (
```
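The bug the commit fixes: when the caller passes `gpus_per_node=None`, the old condition `config.device == "cuda" and gpus_per_node` is falsy, so `self.local_rank` is never set, even though `self.gpus_per_node` was already resolved with a `torch.cuda.device_count()` fallback a few lines earlier. The fix tests only the device and uses the resolved attribute. A minimal standalone sketch of that logic (hypothetical helper names, with the detected GPU count passed in rather than queried from `torch.cuda`):

```python
def resolve_gpus_per_node(gpus_per_node, detected_gpu_count):
    # Mirrors the constructor's fallback:
    # gpus_per_node if gpus_per_node is not None else torch.cuda.device_count()
    return gpus_per_node if gpus_per_node is not None else detected_gpu_count

def local_rank(world_rank, gpus_per_node, detected_gpu_count=8):
    # Post-fix behavior: always derive the local rank from the *resolved*
    # per-node GPU count, so gpus_per_node=None works correctly.
    resolved = resolve_gpus_per_node(gpus_per_node, detected_gpu_count)
    return world_rank % resolved

# With 8 GPUs per node, world rank 13 sits on the second node as local rank 5.
print(local_rank(13, None))  # 5  (fallback to detected count of 8)
print(local_rank(13, 4))     # 1  (explicit gpus_per_node=4)
```

Before the fix, the first call would have skipped the assignment entirely because `None` is falsy in the `and` condition.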
