## Zero Bubble Schedules

The key to achieving zero bubble is breaking the backward pass into a B pass and a W pass, where B computes the gradients with respect to the layer inputs and W the gradients with respect to the weights. B on one stage depends only on the B of the next stage, whereas in 1F1B the backward pass depends on both the B and W of the next stage.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63510eea0b94548566dad923/8B9thyMiLgysNi_m_O3Qn.png)
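
The B/W split can be expressed with stock PyTorch autograd by requesting the input gradients and the weight gradients in two separate calls. Below is a minimal single-stage sketch, not the repo's actual implementation; the layer and tensor names are illustrative:

```python
import torch

# One pipeline stage, reduced to a single linear layer for illustration.
layer = torch.nn.Linear(16, 16)
x = torch.randn(4, 16, requires_grad=True)
loss = layer(x).sum()

# B pass: compute only the gradient w.r.t. the input. This is what the
# previous stage is waiting for, so it is done first.
(grad_x,) = torch.autograd.grad(loss, x, retain_graph=True)

# W pass: compute the weight/bias gradients later, in an otherwise idle
# slot of the schedule; it has no cross-stage dependency.
grad_w, grad_b = torch.autograd.grad(loss, (layer.weight, layer.bias))
```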

### Comparison of Schedules

* 1F1B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63510eea0b94548566dad923/Q3yxf4BQIESQ_M7lKKlhf.png)

* ZB1P

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63510eea0b94548566dad923/EcTFvbjfM7soUXDYyn1Xu.png)

* ZB2P

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63510eea0b94548566dad923/8jFI_rO69BREKqiSFHIOL.png)

* ZBV - Each device is assigned exactly 2 chunks (virtual stages), where white text marks the first chunk and black text marks the second chunk. The sequence of dependencies among model chunks follows a "V"-shaped pattern for both the forward and backward passes; a toy placement sketch follows this list.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63510eea0b94548566dad923/VRfjNVXakAU3MQK3h6OKa.png)
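
To make the "V" placement concrete, here is a toy sketch assuming 4 pipeline devices and 2 chunks per device (the numbers and names are assumptions for illustration, not taken from the repo): the first chunk's stages run down the devices and the second chunk's stages run back up.

```python
# Toy "V" placement: chunk 0's stages run down the devices (0..P-1),
# chunk 1's stages run back up (P-1..0), so device d holds virtual
# stages d and 2P-1-d. Assumes P = 4 devices for illustration.
P = 4
placement = {d: (d, 2 * P - 1 - d) for d in range(P)}
for device, stages in placement.items():
    print(f"device {device}: virtual stages {stages}")
# device 0: (0, 7), device 1: (1, 6), device 2: (2, 5), device 3: (3, 4)
```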

In most practices of PP there is an all-reduce across all pipeline stages for numerical robustness, e.g. computing the global gradient norm for gradient clipping, or the INF/NAN check for mixed-precision training. This all-reduce breaks the parallelogram and makes zero bubble impossible.
Under the observation that during stable training both gradient clipping and INF/NAN conditions rarely trigger, we replace the beforehand synchronizations with a post-update validation.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63510eea0b94548566dad923/hRPFqaFxJ20wm2omwyKmO.png)

We eagerly step the optimizer, assuming the gradient clipping and INF/NAN conditions are not triggered. In case an amendment to the gradient is required, a rollback is issued, and the optimizer step is then redone based on the fully reduced global state.
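
A minimal single-process sketch of the optimistic step with rollback, assuming plain SGD so that restoring the parameters is enough to undo the step (a stateful optimizer would also need its internal state restored); all names here are illustrative:

```python
import torch

def optimistic_step(optimizer, params, max_norm):
    # Eagerly step without waiting for the cross-stage all-reduce,
    # keeping a copy of the parameters so the step can be undone.
    saved = [p.detach().clone() for p in params]
    optimizer.step()

    # Post-update validation: only now compute the global grad norm.
    # In multi-stage PP this sum would be all-reduced across stages.
    norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))

    if not torch.isfinite(norm):
        # INF/NAN detected: roll back and skip this step entirely.
        for p, s in zip(params, saved):
            p.data.copy_(s)
    elif norm > max_norm:
        # Clipping would have triggered: roll back, clip the gradients,
        # and redo the step on the validated global state.
        for p, s in zip(params, saved):
            p.data.copy_(s)
        torch.nn.utils.clip_grad_norm_(params, max_norm)
        optimizer.step()
```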