---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B/blob/main/LICENSE
language:
  - en
pipeline_tag: text-generation
tags:
  - pretrained
---

# Qwen1.5-MoE-A2.7B

## Introduction

Qwen1.5-MoE is the beta version of Qwen2-MoE, a transformer-based decoder-only language model pretrained on a large amount of data.

For more details, please refer to our blog post and GitHub repo.

## Model Details

Qwen1.5-MoE is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily not included GQA or the mixture of SWA and full attention.

Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, Qwen1.5-MoE-A2.7B is upcycled from Qwen-1.8B. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While achieving performance comparable to Qwen1.5-7B, it requires only 20% of the training resources. We also observed that its inference speed is 1.8 times that of Qwen1.5-7B.
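
For reference, the MoE layout (number of routed experts, experts activated per token, etc.) can be inspected from the model configuration once the `transformers` version described in the Requirements section below is installed. This is a minimal sketch; the attribute names follow the `qwen2_moe` configuration in recent `transformers` releases and should be treated as assumptions if your installed version differs.

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the MoE layout.
# Attribute names follow the qwen2_moe config in recent transformers
# releases; treat them as assumptions if your version differs.
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")

print(config.num_experts)          # routed experts per MoE layer
print(config.num_experts_per_tok)  # experts activated for each token
print(config.num_hidden_layers)    # number of transformer layers
```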

## Requirements

The code for Qwen1.5-MoE has been merged into the latest Hugging Face `transformers`. We advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:

```
KeyError: 'qwen2_moe'
```
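
Once a suitable `transformers` build is installed, a quick way to confirm that the architecture is registered is to load the model. This is a minimal sketch; the dtype and device settings are illustrative choices, not requirements.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# If the installed transformers build lacks qwen2_moe support,
# the calls below raise the KeyError shown above.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B",
    torch_dtype="auto",  # illustrative; pick a dtype suited to your hardware
    device_map="auto",   # illustrative; requires accelerate to be installed
)
```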

## Usage

We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model.
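
As a rough illustration of the continued-pretraining / SFT route mentioned above, the sketch below runs a standard causal-LM fine-tuning loop with the Hugging Face `Trainer`. The data file, output directory, and hyperparameters are placeholders (assumptions for illustration), not recommendations; in practice you would substitute your own data pipeline and training setup.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "Qwen/Qwen1.5-MoE-A2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Placeholder corpus; substitute your own SFT or domain data.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen1.5-moe-a2.7b-finetuned",
        per_device_train_batch_size=1,   # illustrative; tune for your hardware
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```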