xverse committed
Commit 283b627
1 Parent(s): 7bd7fc3

update README

Files changed (1)
  1. README.md +34 -0
README.md CHANGED
@@ -7,6 +7,40 @@ inference: false
 
  # XVERSE-MoE-A4.2B-Chat
 
+
+ ## Model Introduction
+
+ **XVERSE-MoE-A4.2B-Chat** is the aligned version of the base model **XVERSE-MoE-A4.2B**.
+
+ **XVERSE-MoE-A4.2B** is a multilingual large language model independently developed by Shenzhen Yuanxiang Technology, built on a Mixture-of-Experts (MoE) architecture. The model has 25.8 billion total parameters, of which 4.2 billion are actually activated. The model released this time is the base model **XVERSE-MoE-A4.2B**. Its key features are as follows:
+
+ - **Model Structure**: XVERSE-MoE-A4.2B uses the mainstream decoder-only Transformer architecture and extends the FFN layer of a dense model into expert layers. Unlike traditional MoE models, in which each expert is the same size as a standard FFN (e.g. Mixtral 8x7B), it uses finer-grained experts, each 1/4 the size of a standard FFN. The experts come in two kinds, shared and non-shared: shared experts are always activated during computation, while non-shared experts are selectively activated by a router (see the sketch after the table below).
+ - **Training Data**: The model was thoroughly trained on a high-quality, diversified dataset of 2.7 trillion tokens covering more than 40 languages, including Chinese, English, Russian, and Spanish. The sampling ratios of the different data types were tuned carefully so that performance in Chinese and English is excellent while other languages are still covered. The model was trained with training samples of 8K length.
+ - **Training Framework**: We carried out deep, customized optimization of the expert routing and weight computation logic that is unique to MoE models and developed an efficient fused operator to improve computational efficiency. To address the high memory consumption and communication volume of MoE models, we also designed an overlapping scheme for computation, communication, and CPU offloading to increase overall throughput.
+
+ The model size, architecture, and learning rate of **XVERSE-MoE-A4.2B** are shown below:
+
+ | total params | activated params | n_layers | d_model | n_heads | d_ff | n_non_shared_experts | n_shared_experts | top_k | lr |
+ | :----------: | :--------------: | :------: | :-----: | :-----: | :--: | :------------------: | :--------------: | :---: | :----: |
+ | 25.8B | 4.2B | 28 | 2560 | 32 | 1728 | 64 | 2 | 6 | 3.5e-4 |
+
+
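The following minimal PyTorch sketch illustrates the expert layer described above, using the sizes from the table (64 non-shared experts, 2 shared experts, `d_ff = 1728`, top-6 routing). It is only an illustration, not the released modeling code; the SwiGLU expert form, the softmax-then-top-k routing order, and all names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """One fine-grained FFN expert (SwiGLU form assumed); d_ff is 1/4 of a standard FFN."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))


class MoELayer(nn.Module):
    """Shared experts always run; a router picks top_k of the non-shared experts per token."""

    def __init__(self, d_model=2560, d_ff=1728, n_non_shared=64, n_shared=2, top_k=6):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_non_shared, bias=False)
        self.non_shared = nn.ModuleList(Expert(d_model, d_ff) for _ in range(n_non_shared))
        self.shared = nn.ModuleList(Expert(d_model, d_ff) for _ in range(n_shared))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model). Shared experts are activated for every token.
        shared_out = sum(expert(x) for expert in self.shared)

        # Router scores, then keep the top_k non-shared experts per token.
        scores = F.softmax(self.router(x), dim=-1)              # (n_tokens, n_non_shared)
        weights, indices = torch.topk(scores, self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize over the chosen k

        routed_out = torch.zeros_like(x)
        for e, expert in enumerate(self.non_shared):
            tok, slot = (indices == e).nonzero(as_tuple=True)   # tokens routed to expert e
            if tok.numel() > 0:
                routed_out[tok] += weights[tok, slot, None] * expert(x[tok])
        return shared_out + routed_out
```

With these settings only 2 shared plus 6 routed experts out of 66 run for each token, which is why the 25.8B total parameters shrink to roughly 4.2B activated parameters.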
  ## Usage
 
  ### Loading with Transformers
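A minimal sketch of loading the chat model with the Hugging Face Transformers API is shown below. The repository id `xverse/XVERSE-MoE-A4.2B-Chat`, the bfloat16 dtype, and the use of `trust_remote_code=True` (XVERSE repos typically ship custom modeling code) are assumptions, not taken from this commit; adjust them to the actual model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id -- check the model card for the exact name.
model_path = "xverse/XVERSE-MoE-A4.2B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # assumed dtype; float16 also works on most GPUs
    trust_remote_code=True,      # assumed: custom modeling code in the repo
    device_map="auto",
)
model.eval()

inputs = tokenizer("An introduction to Shenzhen: ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```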