
Commit 9d3124a

[doc] remove obsolete API demo (#1833)
1 parent fba34ef commit 9d3124a

File tree: README-zh-Hans.md, README.md

2 files changed: 0 additions, 61 deletions


README-zh-Hans.md

Lines changed: 0 additions & 30 deletions
@@ -70,11 +70,6 @@
 <li><a href="#使用-Docker">使用 Docker</a></li>
 <li><a href="#社区">社区</a></li>
 <li><a href="#做出贡献">做出贡献</a></li>
-<li><a href="#快速预览">快速预览</a></li>
-<ul>
-<li><a href="#几行代码开启分布式训练">几行代码开启分布式训练</a></li>
-<li><a href="#构建一个简单的2维并行模型">构建一个简单的2维并行模型</a></li>
-</ul>
 <li><a href="#引用我们">引用我们</a></li>
 </ul>

@@ -306,31 +301,6 @@ docker run -ti --gpus all --rm --ipc=host colossalai bash

 <p align="right">(<a href="#top">返回顶端</a>)</p>

-## 快速预览
-
-### 几行代码开启分布式训练
-
-```python
-parallel = dict(
-    pipeline=2,
-    tensor=dict(mode='2.5d', depth = 1, size=4)
-)
-```
-
-### 几行代码开启异构训练
-
-```python
-zero = dict(
-    model_config=dict(
-        tensor_placement_policy='auto',
-        shard_strategy=TensorShardStrategy(),
-        reuse_fp16_shard=True
-    ),
-    optimizer_config=dict(initial_scale=2**5, gpu_margin_mem_ratio=0.2)
-)
-```
-
-<p align="right">(<a href="#top">返回顶端</a>)</p>

 ## 引用我们

README.md

Lines changed: 0 additions & 31 deletions
@@ -70,11 +70,6 @@
 <li><a href="#Use-Docker">Use Docker</a></li>
 <li><a href="#Community">Community</a></li>
 <li><a href="#contributing">Contributing</a></li>
-<li><a href="#Quick-View">Quick View</a></li>
-<ul>
-<li><a href="#Start-Distributed-Training-in-Lines">Start Distributed Training in Lines</a></li>
-<li><a href="#Write-a-Simple-2D-Parallel-Model">Write a Simple 2D Parallel Model</a></li>
-</ul>
 <li><a href="#Cite-Us">Cite Us</a></li>
 </ul>

@@ -311,32 +306,6 @@ Thanks so much to all of our amazing contributors!

 <p align="right">(<a href="#top">back to top</a>)</p>

-## Quick View
-
-### Start Distributed Training in Lines
-
-```python
-parallel = dict(
-    pipeline=2,
-    tensor=dict(mode='2.5d', depth = 1, size=4)
-)
-```
-
-### Start Heterogeneous Training in Lines
-
-```python
-zero = dict(
-    model_config=dict(
-        tensor_placement_policy='auto',
-        shard_strategy=TensorShardStrategy(),
-        reuse_fp16_shard=True
-    ),
-    optimizer_config=dict(initial_scale=2**5, gpu_margin_mem_ratio=0.2)
-)
-
-```
-
-<p align="right">(<a href="#top">back to top</a>)</p>

 ## Cite Us

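Note on the removed content: both Quick View snippets were written against ColossalAI's old config-dict launch API, which is why the commit message calls them an obsolete API demo. The sketch below shows roughly how such dicts were consumed under that legacy workflow; the `colossalai.launch_from_torch` entry point and the `colossalai.zero.shard_utils.TensorShardStrategy` import path are assumptions about that legacy API, not something this commit adds or restores.

```python
# Illustrative sketch only (legacy API assumed): how config dicts like the two
# removed Quick View snippets were typically fed to ColossalAI's old
# config-driven launcher.
import colossalai
from colossalai.zero.shard_utils import TensorShardStrategy  # legacy import path (assumed)

# Values reproduced from the deleted "Start Distributed Training in Lines" snippet:
# 2 pipeline stages combined with 2.5D tensor parallelism of size 4.
parallel_config = dict(
    pipeline=2,
    tensor=dict(mode='2.5d', depth=1, size=4),
)

# Values reproduced from the deleted "Start Heterogeneous Training in Lines" snippet:
# ZeRO sharding with automatic CPU/GPU tensor placement.
zero_config = dict(
    model_config=dict(
        tensor_placement_policy='auto',
        shard_strategy=TensorShardStrategy(),
        reuse_fp16_shard=True,
    ),
    optimizer_config=dict(initial_scale=2**5, gpu_margin_mem_ratio=0.2),
)

if __name__ == '__main__':
    # Legacy entry point (assumed). The demo dicts went under top-level keys of
    # the launch config (shown here with the parallel settings only; the zero
    # dict was passed the same way, under a `zero` key).
    colossalai.launch_from_torch(config=dict(parallel=parallel_config))
```

Under that workflow the script was typically started with torchrun (for example, `torchrun --nproc_per_node 8 train.py`), with ColossalAI reading rank and world size from the environment variables torchrun sets.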