AdamG012 committed on
Commit
75aa9a0
1 Parent(s): 19313bc

Update README.md

Files changed (1)
  1. README.md +0 -19
README.md CHANGED
@@ -96,25 +96,6 @@ git clone https://huggingface.co/FSALab/deepspeed-chatgpt-opt1.3b-sft
  GIT_LFS_SKIP_SMUDGE=1
  ```
 
- ## Why did we choose DeepSpeed?
-
- **DeepSpeed Training:**
-
- The `main.py` script takes the DeepSpeed config via the argument `--deepspeed_config ./ds_config.json`.
-
- We followed the DeepSpeed documentation and created a specific configuration based on their work. The JSON file `ds_config.json` here is set to use the [ZeRO-2](https://www.microsoft.com/en-us/research/blog/ZeRO-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/) stage and FP16, allowing much faster training and significant GPU memory savings. Note that ZeRO-2 is just one of the options; you may also use ZeRO-1, ZeRO-3, ZeRO-Offload, and ZeRO-Infinity. For more information on the DeepSpeed ZeRO family, please see this [tutorial](https://www.deepspeed.ai/tutorials/zero/) for ZeRO-1/2/3 and this [tutorial](https://www.deepspeed.ai/tutorials/zero-offload/) for ZeRO-Offload.
-
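- As a reference point, here is a minimal sketch of what a ZeRO-2 + FP16 `ds_config.json` could look like. The keys are standard DeepSpeed config fields; the values are illustrative, not necessarily the exact ones used in this repo:
-
- ```json
- {
-   "train_batch_size": 32,
-   "fp16": {
-     "enabled": true
-   },
-   "zero_optimization": {
-     "stage": 2
-   }
- }
- ```
-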
- To enable DeepSpeed ZeRO family training, we injected a few lines of code, e.g.:
-
- ```python
- model, optimizer, _, lr_scheduler = deepspeed.initialize(model=model,
-                                                           optimizer=optimizer,
-                                                           args=args,
-                                                           lr_scheduler=lr_scheduler,
-                                                           dist_init_required=True)
- ```
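- After this call, the returned engine object drives training. Assuming the usual DeepSpeed pattern (variable names are illustrative, not the exact code in `main.py`), a training step looks roughly like:
-
- ```python
- for step, batch in enumerate(train_dataloader):
-     outputs = model(**batch)   # batch includes labels, so outputs.loss is set
-     loss = outputs.loss
-     model.backward(loss)       # the DeepSpeed engine handles FP16 loss scaling
-     model.step()               # optimizer and lr_scheduler step under ZeRO
- ```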
-
-
 
  ## **Acknowledgements**