2 files changed, +0 -61 lines
@@ -70,10 +70,5 @@
 <li><a href="#使用-Docker">使用 Docker</a></li>
 <li><a href="#社区">社区</a></li>
 <li><a href="#做出贡献">做出贡献</a></li>
-<li><a href="#快速预览">快速预览</a></li>
-<ul>
-  <li><a href="#几行代码开启分布式训练">几行代码开启分布式训练</a></li>
-  <li><a href="#构建一个简单的2维并行模型">构建一个简单的2维并行模型</a></li>
-</ul>
 <li><a href="#引用我们">引用我们</a></li>
 </ul>
@@ -306,31 +301,6 @@ docker run -ti --gpus all --rm --ipc=host colossalai bash
 
 <p align="right">(<a href="#top">返回顶端</a>)</p>
 
-## 快速预览
-
-### 几行代码开启分布式训练
-
-```python
-parallel = dict(
-    pipeline=2,
-    tensor=dict(mode='2.5d', depth=1, size=4)
-)
-```
-
-### 几行代码开启异构训练
-
-```python
-zero = dict(
-    model_config=dict(
-        tensor_placement_policy='auto',
-        shard_strategy=TensorShardStrategy(),
-        reuse_fp16_shard=True
-    ),
-    optimizer_config=dict(initial_scale=2**5, gpu_margin_mem_ratio=0.2)
-)
-```
-
-<p align="right">(<a href="#top">返回顶端</a>)</p>
 
 ## 引用我们

@@ -70,10 +70,5 @@
 <li><a href="#Use-Docker">Use Docker</a></li>
 <li><a href="#Community">Community</a></li>
 <li><a href="#contributing">Contributing</a></li>
-<li><a href="#Quick-View">Quick View</a></li>
-<ul>
-  <li><a href="#Start-Distributed-Training-in-Lines">Start Distributed Training in Lines</a></li>
-  <li><a href="#Write-a-Simple-2D-Parallel-Model">Write a Simple 2D Parallel Model</a></li>
-</ul>
 <li><a href="#Cite-Us">Cite Us</a></li>
 </ul>
@@ -311,32 +306,6 @@ Thanks so much to all of our amazing contributors!
 
 <p align="right">(<a href="#top">back to top</a>)</p>
 
-## Quick View
-
-### Start Distributed Training in Lines
-
-```python
-parallel = dict(
-    pipeline=2,
-    tensor=dict(mode='2.5d', depth=1, size=4)
-)
-```
-
-### Start Heterogeneous Training in Lines
-
-```python
-zero = dict(
-    model_config=dict(
-        tensor_placement_policy='auto',
-        shard_strategy=TensorShardStrategy(),
-        reuse_fp16_shard=True
-    ),
-    optimizer_config=dict(initial_scale=2**5, gpu_margin_mem_ratio=0.2)
-)
-
-```
-
-<p align="right">(<a href="#top">back to top</a>)</p>
 
 ## Cite Us
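For context, the removed snippets are not standalone scripts but entries from Colossal-AI's legacy Python config-file system: `parallel` requests 2 pipeline stages times a 4-way 2.5-D tensor grid (8 GPUs total), and `zero` configures ZeRO memory management. Below is a minimal sketch of how such a config file was consumed, assuming the pre-0.2 `colossalai.launch_from_torch` / `colossalai.initialize` API; the model, loss, and optimizer are hypothetical placeholders, not taken from this PR.

```python
# sketch.py -- illustrative only; assumes legacy Colossal-AI (~v0.1.x).
# config.py is assumed to hold the removed `parallel` / `zero` dicts,
# plus the import they rely on:
#   from colossalai.zero.shard_utils import TensorShardStrategy
import torch
import torch.nn as nn
import colossalai

# Reads config.py and builds the process groups described by `parallel`;
# expects to be launched via `torchrun --nproc_per_node=8 sketch.py`.
colossalai.launch_from_torch(config='config.py')

model = nn.Linear(16, 2)                                   # placeholder model
criterion = nn.CrossEntropyLoss()                          # placeholder loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # placeholder optimizer

# Wraps model/optimizer into a distributed engine, applying the ZeRO
# settings from the `zero` dict in config.py.
engine, *_ = colossalai.initialize(model, optimizer, criterion)
engine.train()
```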