---
license: apache-2.0
---
|
|
|
|
|
|
|
# APUS-xDAN-4.0-MOE
|
|
|
## Introduction
|
APUS-xDAN-4.0-MOE is a transformer-based, decoder-only Mixture-of-Experts (MoE) language model aligned on a large amount of data.
|
|
|
For more details, please refer to our blog post and GitHub repo. |
|
|
|
## Model Details
|
APUS-xDAN-4.0-MOE employs a Mixture of Experts (MoE) architecture in which the model is upcycled from dense language models. Specifically, APUS-xDAN-4.0-MOE is upcycled from the xDAN-L2 series of high-performance aligned models. It has 136B parameters in total, of which 30B are activated during runtime.
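To illustrate why only about 30B of the 136B parameters are active per token, the sketch below shows a simplified top-k expert router. The layer sizes, expert count, and top-k value are illustrative assumptions, not the model's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Simplified MoE feed-forward layer with top-k routing (illustrative only)."""

    def __init__(self, d_model=1024, d_ff=4096, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        # Route each token to its top-k experts; the remaining experts stay idle,
        # so only a fraction of the total parameters is used per token.
        logits = self.router(x)
        weights, indices = torch.topk(F.softmax(logits, dim=-1), self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(4, 1024)          # 4 tokens
print(ToyMoELayer()(x).shape)     # torch.Size([4, 1024])
```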
|
Optimized with advanced quantization techniques, our open-source release is only 42 GB in size and runs well on consumer GPUs such as the RTX 4090 and RTX 3090.
|
|
|
|
|
## Requirements
|
The code for APUS-xDAN-4.0-MOE has been merged into the latest Hugging Face transformers, and we advise you to build from source with `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter errors when loading the model.
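Once a recent transformers build is installed, loading the model follows the standard AutoModel pattern. Below is a minimal sketch; the repository id `APUS-xDAN/APUS-xDAN-4.0-MOE` and the generation settings are placeholders, not confirmed values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id -- substitute the actual Hub path of the checkpoint.
model_id = "APUS-xDAN/APUS-xDAN-4.0-MOE"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; adjust to your hardware
    device_map="auto",           # shard across available GPUs
)

inputs = tokenizer(
    "Give me a short introduction to large language models.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```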
|
|
|
|
|
## Usage
|
### llama.cpp
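As one possible route, the quantized GGUF release can be loaded through the llama-cpp-python bindings for llama.cpp. The sketch below assumes an installed `llama-cpp-python` package and a hypothetical GGUF filename; replace it with the actual quantized checkpoint.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical GGUF filename -- replace with the actual quantized checkpoint.
llm = Llama(
    model_path="apus-xdan-4.0-moe-q4_k_m.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if memory allows
)

output = llm(
    "Question: What is a Mixture of Experts model?\nAnswer:",
    max_tokens=128,
    stop=["Question:"],
)
print(output["choices"][0]["text"])
```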
|
|
|
|
|
## License
|
APUS-xDAN-4.0-MOE is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. |
|
|