bzheng committed
Commit 5132be9
1 Parent(s): 41f74f6

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -29,7 +29,7 @@ Qwen1.5-MoE employs Mixture of Experts (MoE) architecture, where the models are
 We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. In the very near future, we will fix both problems.
 
 ## Requirements
-The code of Qwen1.5-MoE has been in the latest Hugging face transformers and we advise you to install `transformers>=4.39.0`, or you might encounter the following error:
+The code of Qwen1.5-MoE has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
 ```
 KeyError: 'qwen2_moe'.
 ```
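For context, a minimal sketch (not part of the commit) of how this requirement shows up in practice: with a transformers build that predates `qwen2_moe` support, the architecture lookup fails with the `KeyError` quoted above, while a source build from the command in the diff resolves it. The checkpoint name `Qwen/Qwen1.5-MoE-A2.7B` below is an assumption for illustration; substitute the repository this README belongs to.

```python
# Minimal sketch: verify that the installed transformers build recognizes the
# "qwen2_moe" architecture before loading a Qwen1.5-MoE checkpoint.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B"  # assumed repository name, for illustration

# On a transformers version without qwen2_moe support, this call raises
# KeyError: 'qwen2_moe'; building from source (or upgrading) fixes it.
config = AutoConfig.from_pretrained(model_id)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # requires accelerate; places layers across devices
)
```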