ChloeAuYeung committed on
Commit 7368735
1 Parent(s): 7f1b739

Update README.md

Files changed (1)
  1. README.md +60 -0
README.md CHANGED
@@ -7,6 +7,14 @@ inference: false
 
 # XVERSE-65B
 
+ ## 更新信息
+ **[2023/11/24]** 更新预训练数据的相关信息。
+ **[2023/11/06]** 发布 65B 尺寸的 XVERSE-65B 底座模型。
+
+ ## Update Information
+ **[2023/11/24]** Updated information about the pre-training data.
+ **[2023/11/06]** Released the XVERSE-65B base model.
+
 ## 模型介绍
 
 **XVERSE-65B** 是由深圳元象科技自主研发的支持多语言的大语言模型(Large Language Model),参数规模为 650 亿,本次开源的模型为底座模型 **XVERSE-65B**,主要特点如下:
@@ -16,6 +24,32 @@ inference: false
 - **分词**:基于 BPE(Byte-Pair Encoding)算法,使用上百 GB 语料训练了一个词表大小为 100,534 的分词器,能够同时支持多语言,而无需额外扩展词表。
 - **训练框架**:训练中采用 FlashAttention2 加速计算,3D 并行基础上采用虚拟流水线(virtual pipeline)技术,降低较长流水线和 16k 上下文窗口产生的过高气泡率,在千卡集群的峰值算力利用率达到业界前列。同时通过集群基础设施运营、资源调度、训练框架和调度平台协同等持续优化,打造出高稳定、低中断、强容错的训练系统,将每周有效训练率提升至 98.6%。
 
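The 训练框架 / Training Framework bullet above attributes the lower bubble rate to the interleaved virtual pipeline. As a rough illustration only, here is the standard Megatron-style back-of-the-envelope estimate, not XVERSE's own accounting; the pipeline depth `p`, microbatch count `m`, and virtual-chunk count `v` below are hypothetical values.

```python
# Back-of-the-envelope pipeline bubble estimate for a 1F1B schedule:
# bubble_fraction ≈ (p - 1) / (v * m), where p = pipeline stages, m = microbatches
# per step, and v = interleaved (virtual) pipeline chunks per device.
# Splitting each device's layers into v chunks shrinks the flush bubble by roughly v.
def bubble_fraction(p: int, m: int, v: int = 1) -> float:
    return (p - 1) / (v * m)

p, m = 8, 32  # hypothetical pipeline depth and microbatch count
print(f"plain 1F1B:       {bubble_fraction(p, m, v=1):.3f}")  # ~0.219
print(f"virtual pipeline: {bubble_fraction(p, m, v=4):.3f}")  # ~0.055
```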
+ 在预训练阶段,**XVERSE-65B** 主要使用了 7 类不同的数据类型。以下表格展示了 XVERSE-65B 与其他一些知名模型在预训练数据集方面的比较:
+
+ | 数据类别 | GPT3[^1] | Llama[^2] | BLOOM[^3] | PaLM[^4] | Chinchilla[^5] | Gopher[^6] | MT-NLG[^7] | XVERSE-65B |
+ |:-------:|:--------:|:---------:|:---------:|:--------:|:--------------:|:----------:|:----------:|:----------:|
+ | 网页类 | Y | Y | Y | Y | Y | Y | Y | Y |
+ | 代码类 | | Y | Y | Y | Y | Y | Y | Y |
+ | 百科类 | Y | Y | | Y | Y | Y | Y | Y |
+ | 书籍类 | Y | Y | | Y | Y | Y | Y | Y |
+ | 论文类 | | Y | | | | | Y | Y |
+ | 问答类 | Y | Y | | Y | | | Y | Y |
+
+ > 注:'Y' 表示使用了该类数据。
+
+ 在预训练阶段,不同类别数据的采样比例如下所示:
+ | | 网页类 | 代码类 | 百科类 | 书籍类 | 论文类 | 问答类 | 其他类 |
+ |:-------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
+ | 比例(%) | 72.91 | 7.09 | 4.81 | 5.62 | 6.55 | 1.15 | 1.87 |
+
+ [^1]: GPT3 Paper: [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
+ [^2]: LLaMA Paper: [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
+ [^3]: BLOOM Paper: [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100)
+ [^4]: PaLM Paper: [PaLM: Scaling Language Modeling with Pathways](https://arxiv.org/abs/2204.02311)
+ [^5]: Chinchilla Paper: [Training Compute-Optimal Large Language Models](https://arxiv.org/abs/2203.15556)
+ [^6]: Gopher Paper: [Scaling Language Models: Methods, Analysis & Insights from Training Gopher](https://arxiv.org/abs/2112.11446)
+ [^7]: MT-NLG Paper: [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model](https://arxiv.org/abs/2201.11990)
+
 ## Model Introduction
 
 **XVERSE-65B** is a multilingual large language model independently developed by Shenzhen Yuanxiang Technology, with a parameter scale of 65 billion. The model released this time is the base model **XVERSE-65B**. Its key features are as follows:
@@ -25,6 +59,32 @@ inference: false
 - **Tokenization**: Based on the BPE (Byte-Pair Encoding) algorithm, a tokenizer with a vocabulary size of 100,534 has been trained using hundreds of gigabytes of text data. This tokenizer supports multiple languages without the need for additional vocabulary expansion (see the tokenizer sketch after this list).
 - **Training Framework**: The training utilizes FlashAttention2 for accelerated computation, and on top of 3D parallelism, virtual pipeline technology is applied to reduce the excessive bubble rate caused by longer pipelines and 16k context windows, achieving peak compute utilization on a thousand-GPU cluster that ranks among the industry's best. Concurrently, through continuous optimization of cluster infrastructure operations, resource scheduling, training frameworks, and the scheduling platform, a highly stable, low-interruption, and fault-tolerant training system has been developed, raising the effective weekly training rate to 98.6%.
 
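To make the tokenizer description above concrete, here is a minimal usage sketch. It assumes the checkpoint is published on Hugging Face as `xverse/XVERSE-65B` and that the tokenizer loads through `transformers`' `AutoTokenizer` with `trust_remote_code=True`; adjust the repo id if the checkpoint is hosted elsewhere.

```python
from transformers import AutoTokenizer

# Assumed repo id for the released base model; change it if the checkpoint lives elsewhere.
tokenizer = AutoTokenizer.from_pretrained("xverse/XVERSE-65B", trust_remote_code=True)

print(tokenizer.vocab_size)  # the model card states a vocabulary of 100,534

# One shared BPE vocabulary handles multiple languages without extension.
for text in ["元象 XVERSE 发布了 650 亿参数的底座模型。",
             "XVERSE-65B is a 65B-parameter multilingual base model."]:
    ids = tokenizer.encode(text)
    print(len(ids), tokenizer.decode(ids))
```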
+ During the pre-training phase, **XVERSE-65B** primarily utilized 7 different types of data. The following table shows a comparison of the pre-training datasets of XVERSE-65B with some other well-known models:
+
+ | Data Type | GPT3[^1] | Llama[^2] | BLOOM[^3] | PaLM[^4] | Chinchilla[^5] | Gopher[^6] | MT-NLG[^7] | XVERSE-65B |
+ |:---------------:|:--------:|:---------:|:---------:|:--------:|:--------------:|:----------:|:----------:|:----------:|
+ | Web Pages | Y | Y | Y | Y | Y | Y | Y | Y |
+ | Code | | Y | Y | Y | Y | Y | Y | Y |
+ | Encyclopedia | Y | Y | | Y | Y | Y | Y | Y |
+ | Books | Y | Y | | Y | Y | Y | Y | Y |
+ | Academic Papers | | Y | | | | | Y | Y |
+ | QA | Y | Y | | Y | | | Y | Y |
+
+ > Note: 'Y' indicates that the data type was used.
+
+ The sampling ratios of different data types during the pre-training phase are as follows:
+ | | Web Pages | Code | Encyclopedia | Books | Academic Papers | QA | Other |
+ |:--------------:|:---------:|:----:|:------------:|:-----:|:---------------:|:----:|:-----:|
+ | Proportion (%) | 72.91 | 7.09 | 4.81 | 5.62 | 6.55 | 1.15 | 1.87 |
+
+ [^1]: GPT3 Paper: [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
+ [^2]: LLaMA Paper: [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
+ [^3]: BLOOM Paper: [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100)
+ [^4]: PaLM Paper: [PaLM: Scaling Language Modeling with Pathways](https://arxiv.org/abs/2204.02311)
+ [^5]: Chinchilla Paper: [Training Compute-Optimal Large Language Models](https://arxiv.org/abs/2203.15556)
+ [^6]: Gopher Paper: [Scaling Language Models: Methods, Analysis & Insights from Training Gopher](https://arxiv.org/abs/2112.11446)
+ [^7]: MT-NLG Paper: [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model](https://arxiv.org/abs/2201.11990)
+
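The sampling-ratio table above fully specifies the pre-training data mixture (the listed proportions sum to exactly 100%). Below is a small illustrative sketch of drawing source categories with those weights; the percentages are the published ones, but the sampling code itself is only a stand-in, not XVERSE's actual data pipeline.

```python
import random

# Published sampling proportions (%) from the table above.
mixture = {
    "Web Pages": 72.91, "Code": 7.09, "Encyclopedia": 4.81, "Books": 5.62,
    "Academic Papers": 6.55, "QA": 1.15, "Other": 1.87,
}
assert abs(sum(mixture.values()) - 100.0) < 1e-6  # the proportions sum to 100

categories, weights = zip(*mixture.items())
# Pick the source category for each of the next 5 documents in proportion to the mixture.
print(random.choices(categories, weights=weights, k=5))
```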
 ## 评测结果
 
 为了综合评估模型的性能,我们在一系列标准数据集上进行了全面测试,包括C-Eval、CMMLU、Gaokao-Bench、MMLU、GAOKAO-English、AGIEval、RACE-M、CommonSenseQA、PIQA、GSM8K和HumanEval。这些评估覆盖了模型在多个领域的能力,具体包括中文问答、英文问答、语言理解、常识问答、逻辑推理、数学问题解答以及编程能力。评估结果如下:
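As background on how a base (non-chat) model is typically scored on multiple-choice benchmarks like those listed above, here is a hedged sketch of per-option log-likelihood scoring. The repo id `xverse/XVERSE-65B`, the toy question, and the options are assumptions for illustration; this is not the evaluation harness actually used for the reported results.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xverse/XVERSE-65B"  # assumed repo id; any causal LM id works for the pattern
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = "Question: What is 7 * 8?\nAnswer:"
options = [" 54", " 56", " 63"]  # toy multiple-choice continuations

def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probabilities of the option tokens, conditioned on the prompt.

    Assumes the prompt's tokenization is a prefix of the prompt+option tokenization,
    which holds for most BPE tokenizers when options start with a space.
    """
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full = tok(prompt + option, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logprobs = model(full).logits.log_softmax(dim=-1)
    # Logits at position t predict the token at position t + 1.
    preds = logprobs[0, prompt_len - 1 : full.shape[1] - 1]
    option_ids = full[0, prompt_len:]
    return preds.gather(-1, option_ids.unsqueeze(-1)).sum().item()

print(max(options, key=lambda o: option_logprob(prompt, o)))  # highest-likelihood option wins
```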