bzheng committed on
Commit
a9d1cf7
1 Parent(s): b59a11c

Update README.md

Files changed (1)
  README.md +1 -1
README.md CHANGED
@@ -25,7 +25,7 @@ Qwen1.5-MoE is based on the Transformer architecture with SwiGLU activation, att
  Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While achieving performance comparable to `Qwen1.5-7B`, it requires only 20% of the training resources. We also observed that its inference speed is 1.8 times that of `Qwen1.5-7B`.
 
  ## Requirements
- The code for Qwen1.5-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.39.0`, or you might encounter the following error:
+ The code for Qwen1.5-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
  ```
  KeyError: 'qwen2_moe'.
  ```
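
For reference, a minimal usage sketch under the updated requirement (a `transformers` build from source, which registers the `qwen2_moe` architecture and avoids the `KeyError`). The model ID `Qwen/Qwen1.5-MoE-A2.7B`, the prompt, and the generation settings below are illustrative assumptions, not part of this commit:

```python
# Assumes transformers was installed from source, per the updated README:
#   pip install git+https://github.com/huggingface/transformers
# Without a build that includes the qwen2_moe model type, loading the
# checkpoint fails with: KeyError: 'qwen2_moe'
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B"  # assumed Hub ID, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick the checkpoint dtype
    device_map="auto",    # requires `accelerate`; remove to load on CPU
)

# Simple generation to confirm the MoE checkpoint loads and runs.
inputs = tokenizer("Give me a short introduction to large language models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```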