willhe-xverse committed on
Commit
099838f
1 Parent(s): ffc7437

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ gptq_model-8bit-128g.safetensors.aa filter=lfs diff=lfs merge=lfs -text
+ gptq_model-8bit-128g.safetensors.ab filter=lfs diff=lfs merge=lfs -text
+ gptq_model-8bit-128g.safetensors.ac filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,149 @@
  ---
  license: apache-2.0
+
+ inference: false
+
  ---
+
+ # XVERSE-65B-Chat-GPTQ-Int4
+
+ ## 更新信息
+
+ **[2024/03/25]** 发布 XVERSE-65B-Chat-GPTQ-Int4 量化模型,支持 vLLM 推理 xverse-65b 量化模型。
+
+ **[2023/12/08]** 发布 **XVERSE-65B-2** 底座模型,该模型在前一版本的基础上进行了 **Continual Pre-Training**,训练总 token 量达到 **3.2** 万亿;模型各方面的能力均得到提升,尤其是数学和代码能力,在 GSM8K 上提升 **20**%,HumanEval 上提升 **41**%。
+
+ **[2023/11/29]** 更新模型架构及更多底座数据的相关信息。
+
+ **[2023/11/24]** 更新预训练数据的相关信息。
+
+ **[2023/11/06]** 发布 65B 尺寸的 XVERSE-65B 底座模型。
+
+ ## Update Information
+
+ **[2024/03/25]** Released the XVERSE-65B-Chat-GPTQ-Int4 quantized model, which supports inference of the quantized xverse-65b model with vLLM.
+
+ **[2023/12/08]** Released the **XVERSE-65B-2** base model. This model builds upon its predecessor through **Continual Pre-Training**, reaching a total training volume of **3.2** trillion tokens. It exhibits enhancements in all capabilities, particularly in mathematics and coding skills, with a **20%** improvement on the GSM8K benchmark and a **41%** increase on HumanEval.
+
+ **[2023/11/29]** Updated the model architecture and additional pre-training data information.
+
+ **[2023/11/24]** Updated information about the pre-training data.
+
+ **[2023/11/06]** Released the XVERSE-65B base model.
+
+ ## 模型介绍
+
+ **XVERSE-65B** 是由深圳元象科技自主研发的支持多语言的大语言模型(Large Language Model),参数规模为 650 亿,本次开源的模型为底座模型 **XVERSE-65B**,主要特点如下:
+
+ - **模型结构**:XVERSE-65B 使用主流 Decoder-only 的标准 Transformer 网络结构,支持 16K 的上下文长度(Context Length),能满足更长的多轮对话、知识问答与摘要等需求,模型应用场景更广泛。
+ - **训练数据**:构建了 2.6 万亿 token 的高质量、多样化的数据对模型进行充分训练,包含中、英、俄、西等 40 多种语言,通过精细化设置不同类型数据的采样比例,使得中英两种语言表现优异,也能兼顾其他语言效果。
+ - **分词**:基于 BPE(Byte-Pair Encoding)算法,使用上百 GB 语料训练了一个词表大小为 100,534 的分词器,能够同时支持多语言,而无需额外扩展词表。
+ - **训练框架**:训练中采用 FlashAttention2 加速计算,3D 并行基础上采用虚拟流水线(virtual pipeline)技术,降低较长流水线和 16k 上下文窗口产生的过高气泡率,在千卡集群的峰值算力利用率达到业界前列。同时通过集群基础设施运营、资源调度、训练框架和调度平台协同等持续优化,打造出高稳定、低中断、强容错的训练系统,将每周有效训练率提升至 98.6%。
+
+ **XVERSE-65B** 的模型大小、架构和学习率如下:
+
+ | params | d_model | n_heads | n_layers | d_ff | learning rate |
+ |:------:|:-------:|:-------:|:--------:|:-----:|:-------------:|
+ | 65B | 8192 | 64 | 80 | 22016 | 1.5e−4 |
+
+ ## Model Introduction
+
+ **XVERSE-65B** is a multilingual large language model independently developed by Shenzhen Yuanxiang Technology. The model released this time is the base model **XVERSE-65B**. Its key features are as follows:
+
+ - **Model Structure**: XVERSE-65B uses the mainstream Decoder-only Transformer architecture and supports a 16K context length, which can meet the needs of longer multi-round dialogues, knowledge question answering, and summarization. This makes the model more versatile in application scenarios.
+ - **Training Data**: The model has been thoroughly trained on a diversified, high-quality dataset of 2.6 trillion tokens covering more than 40 languages, including Chinese, English, Russian, and Spanish. The sampling ratios of the different data types are finely tuned, so that performance in Chinese and English is excellent while other languages are also taken into account.
+ - **Tokenization**: Based on the BPE (Byte-Pair Encoding) algorithm, a tokenizer with a vocabulary size of 100,534 has been trained on hundreds of gigabytes of text. This tokenizer supports multiple languages without the need for additional vocabulary expansion (see the sketch after the table below).
+ - **Training Framework**: Training uses FlashAttention2 for accelerated computation, and on top of 3D parallelism, virtual pipeline technology is applied to reduce the excessive bubble rate caused by longer pipelines and the 16K context window. This achieves industry-leading peak computational efficiency on a thousand-GPU cluster. Concurrently, through continuous optimization of cluster infrastructure operations, resource scheduling, the training framework, and the scheduling platform, a highly stable, low-interruption, and fault-tolerant training system has been built, raising the effective weekly training rate to 98.6%.
+
+ The model size, architecture, and learning rate of **XVERSE-65B** are as follows:
+
+ | params | d_model | n_heads | n_layers | d_ff | learning rate |
+ |:------:|:-------:|:-------:|:--------:|:-----:|:-------------:|
+ | 65B | 8192 | 64 | 80 | 22016 | 1.5e−4 |
+
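+ The tokenizer described above ships with this commit (`tokenizer.json` and `tokenizer_config.json` for a `PreTrainedTokenizerFast`), so it can be loaded on its own with `transformers`. A minimal sketch, assuming the repository files are available under the same `model_dir` used in the usage example below:
+
+ ```python
+ # Minimal sketch: load the fast tokenizer shipped in this repository and
+ # round-trip a mixed Chinese/English sentence. Assumes the repository files
+ # (tokenizer.json, tokenizer_config.json) are available at `model_dir`.
+ from transformers import AutoTokenizer
+
+ model_dir = "xverse/XVERSE-65B-Chat-GPTQ-Int4/"
+ tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
+
+ print(len(tokenizer))  # expected to match the 100,534-entry vocabulary
+ ids = tokenizer.encode("XVERSE 支持多语言。XVERSE supports multiple languages.")
+ print(tokenizer.decode(ids))
+ ```
+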
+ ## 环境准备
+
+ 我们建议您克隆[`vllm`](https://github.com/vllm-project/vllm.git)并按照官方指南进行安装。
+
+ ## Environment Setup
+
+ We advise you to clone [`vllm`](https://github.com/vllm-project/vllm.git) and install it following the official guide.
+
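+ Note that the quantized weights in this repository are uploaded as three split parts (`gptq_model-8bit-128g.safetensors.aa`, `.ab`, `.ac`). Before loading, they presumably need to be rejoined into a single `gptq_model-8bit-128g.safetensors` file. A minimal sketch, assuming the parts are a plain byte-level split (e.g. produced by the Unix `split` tool):
+
+ ```python
+ # Minimal sketch: rejoin the split safetensors parts by streaming byte-level
+ # concatenation. Assumption: the .aa/.ab/.ac files are a plain byte split of
+ # one safetensors file.
+ import shutil
+ from pathlib import Path
+
+ model_dir = Path("xverse/XVERSE-65B-Chat-GPTQ-Int4")
+ parts = sorted(model_dir.glob("gptq_model-8bit-128g.safetensors.*"))
+ with open(model_dir / "gptq_model-8bit-128g.safetensors", "wb") as merged:
+     for part in parts:
+         with open(part, "rb") as chunk:
+             shutil.copyfileobj(chunk, merged)
+ ```
+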
+ ## 使用方法
+
+ 我们演示了如何使用 `vllm` 来运行 XVERSE-65B-Chat-GPTQ-Int4 量化模型:
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ model_dir = "xverse/XVERSE-65B-Chat-GPTQ-Int4/"
+
+ # Create an LLM.
+ llm = LLM(model_dir,
+           trust_remote_code=True)
+
+ # Create a sampling params object.
+ sampling_params = SamplingParams(temperature=0.5, top_p=0.85, max_tokens=2048, repetition_penalty=1.1)
+
+ # Generate texts from the prompts. The output is a list of RequestOutput objects
+ # that contain the prompt, generated text, and other information.
+ prompts = ["Human: 请你写一篇关于环保的文章,题材是从个人做起。\n\nAssistant: ",]
+ outputs = llm.generate(prompts, sampling_params)
+
+ # Print the outputs.
+ for output in outputs:
+     prompt = output.prompt
+     generated_text = output.outputs[0].text
+     print(f"Generated text:\n{generated_text}")
+ ```
+
+ ## Usage
+
+ Below we demonstrate how to use `vllm` to run the XVERSE-65B-Chat-GPTQ-Int4 quantized model:
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ model_dir = "xverse/XVERSE-65B-Chat-GPTQ-Int4/"
+
+ # Create an LLM.
+ llm = LLM(model_dir,
+           trust_remote_code=True)
+
+ # Create a sampling params object.
+ sampling_params = SamplingParams(temperature=0.5, top_p=0.85, max_tokens=2048, repetition_penalty=1.1)
+
+ # Generate texts from the prompts. The output is a list of RequestOutput objects
+ # that contain the prompt, generated text, and other information.
+ prompts = ["Human: 请你写一篇关于环保的文章,题材是从个人做起。\n\nAssistant: ",]
+ outputs = llm.generate(prompts, sampling_params)
+
+ # Print the outputs.
+ for output in outputs:
+     prompt = output.prompt
+     generated_text = output.outputs[0].text
+     print(f"Generated text:\n{generated_text}")
+ ```
+
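+ The example above uses a single-turn `Human: ... Assistant:` prompt. For multi-turn conversations, a hypothetical helper along the same lines might look as follows; the concatenation format is an assumption extrapolated from the single-turn example rather than an official chat template, and `llm` and `sampling_params` are reused from the example above:
+
+ ```python
+ # Hypothetical helper: extend the "Human: ...\n\nAssistant: ..." convention from
+ # the single-turn example above to earlier turns. The exact multi-turn format is
+ # an assumption, not an official template.
+ def build_prompt(history, user_message):
+     """history: list of (user, assistant) pairs from earlier turns."""
+     prompt = ""
+     for user, assistant in history:
+         prompt += f"Human: {user}\n\nAssistant: {assistant}\n\n"
+     prompt += f"Human: {user_message}\n\nAssistant: "
+     return prompt
+
+ prompt = build_prompt(
+     history=[("你好!", "你好!有什么我可以帮你的吗?")],
+     user_message="请你写一篇关于环保的文章,题材是从个人做起。",
+ )
+ outputs = llm.generate([prompt], sampling_params)
+ print(outputs[0].outputs[0].text)
+ ```
+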
+ ## 局限性与免责申明
+
+ XVERSE-65B 与其他所有 LLM 一样,在某些情况下可能会产生不准确、有偏见或其他令人反感的内容。因此,请谨慎使用模型生成的内容,请勿将生成的有害内容进行传播,在部署任何 XVERSE-65B 的应用之前,开发人员应根据其具体应用对模型进行安全测试和调优。
+
+ 我们强烈警告不要将 XVERSE-65B 模型用于制造或传播有害信息,或进行任何可能损害公众、国家、社会安全或违反法规的活动。如果使用 XVERSE-65B 模型产生任何问题,无论是数据安全问题、公共舆论风险,还是模型被误解、滥用、传播或不合规使用所引发的任何风险和问题,我们将不承担任何责任。
+
+ ## Limitations and Disclaimer
+
+ Like all other large language models (LLMs), XVERSE-65B may produce inaccurate, biased, or otherwise offensive content under certain circumstances. Therefore, please use the content generated by the model with caution and refrain from disseminating harmful content. Before deploying any application of XVERSE-65B, developers should conduct safety tests and tune the model according to their specific application.
+
+ We strongly warn against using the XVERSE-65B model to produce or spread harmful information, or to conduct any activities that might harm the public, national, or social security, or violate regulations. We assume no responsibility for any problems arising from the use of the XVERSE-65B model, whether they be data security issues, public opinion risks, or any risks and issues caused by misunderstanding, misuse, dissemination, or non-compliant use of the model.
+
+ ## 模型开源协议
+
+ 使用本仓库的源码需要遵循 [Apache-2.0](https://github.com/xverse-ai/XVERSE-65B/blob/main/LICENSE) 开源协议,使用 XVERSE-65B 的模型权重则需要遵循[模型许可协议](https://github.com/xverse-ai/XVERSE-65B/blob/main/MODEL_LICENSE.pdf)。
+
+ XVERSE-65B 模型权重对学术研究**完全开放**,并且支持**免费商用**。如需申请商业许可证,请填写【[申请表](https://chat.xverse.cn/home/business.html)】,如有其他问题或合作,请联系 <opensource@xverse.cn>。
+
+ ## Open Source License
+
+ The use of the source code in this repository must follow the [Apache-2.0](https://github.com/xverse-ai/XVERSE-65B/blob/main/LICENSE) open-source license, while the use of the XVERSE-65B model weights must adhere to the [Model License Agreement](https://github.com/xverse-ai/XVERSE-65B/blob/main/MODEL_LICENSE.pdf).
+
+ The XVERSE-65B model weights are **fully open** for academic research and also support **free commercial use**. To apply for a commercial license, please fill in the [application form](https://chat.xverse.cn/home/business.html). For other questions or collaborations, please contact <opensource@xverse.cn>.
config.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "_name_or_path": "/mnt/llm_dataset/libinbin/xverse_65b/LLaMA-Megatron/checkpoint/65b-2.5t-stage2-turbo_40w_clean/pretrain_PP1_new/iter_0000639/mp_rank_00/",
+   "architectures": [
+     "XverseForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_xverse.XverseConfig",
+     "AutoModelForCausalLM": "modeling_xverse.XverseForCausalLM"
+   },
+   "bos_token_id": 2,
+   "eos_token_id": 3,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 22016,
+   "max_position_embeddings": 16384,
+   "max_tokenizer_truncation": 16384,
+   "model_type": "xverse",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 80,
+   "pad_token_id": 1,
+   "quantization_config": {
+     "bits": 8,
+     "damp_percent": 0.01,
+     "desc_act": true,
+     "group_size": 128,
+     "is_marlin_format": false,
+     "model_file_base_name": null,
+     "model_name_or_path": null,
+     "quant_method": "gptq",
+     "static_groups": true,
+     "sym": true,
+     "true_sequential": true
+   },
+   "rms_norm_eps": 1e-06,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.39.1",
+   "use_cache": true,
+   "vocab_size": 100534
+ }
configuration_xverse.py ADDED
@@ -0,0 +1,123 @@
+ # coding=utf-8
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ XVERSE model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+ XVERSE_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+
+
+ class XverseConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`XverseModel`]. It is used to instantiate an Xverse
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of the XVERSE-13B.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 100278):
+             Vocabulary size of the XVERSE model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`XverseModel`]
+         hidden_size (`int`, *optional*, defaults to 5120):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 13824):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 40):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 40):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 8192):
+             The maximum sequence length that this model might ever be used with. Typically set this to something large
+             just in case (e.g., 512 or 1024 or 2048).
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-6):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings(`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings
+
+     Example:
+
+     ```python
+     >>> from transformers import XverseModel, XverseConfig
+
+     >>> # Initializing a Xverse XVERSE-13B style configuration
+     >>> configuration = XverseConfig()
+
+     >>> # Initializing a model from the XVERSE-13B style configuration
+     >>> model = XverseModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+     model_type = "xverse"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=100278,
+         hidden_size=5120,
+         intermediate_size=13824,
+         num_hidden_layers=40,
+         num_attention_heads=40,
+         hidden_act="silu",
+         max_position_embeddings=8192,
+         max_tokenizer_truncation=8192,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         pad_token_id=None,
+         bos_token_id=1,
+         eos_token_id=2,
+         tie_word_embeddings=False,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.max_tokenizer_truncation = max_tokenizer_truncation
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
gptq_model-8bit-128g.safetensors.aa ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dec09ee26a3dc02d7b9640aacd7400d2aa96cf799bb8ec1998693d1c3125553f
+ size 23203847901
gptq_model-8bit-128g.safetensors.ab ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84e2e1f70f193abe6f726d3e501e0790be8e937c7122d3aaff9709fc01dc61b1
+ size 23203847901
gptq_model-8bit-128g.safetensors.ac ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94b9f65b79053005bac0a0af4f59fa1bf4d730ba43dd2a86ae89fa55eb5dc340
+ size 23203847902
quantize_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "bits": 8,
+   "group_size": 128,
+   "damp_percent": 0.01,
+   "desc_act": true,
+   "static_groups": true,
+   "sym": true,
+   "true_sequential": true,
+   "model_name_or_path": null,
+   "model_file_base_name": null,
+   "is_marlin_format": false,
+   "quant_method": "gptq"
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "tokenizer_class": "PreTrainedTokenizerFast"
+ }