ChloeAuYeung committed on
Commit
a0a34b8
1 Parent(s): 0eb9440

Update README.md

# XVERSE-13B-256K

## Update Information

**[2024/06/28]** Updated the tokenizers.

**[2024/01/16]** Released the long-sequence chat model **XVERSE-13B-256K**. This version supports a context window of up to 256K tokens, roughly 250,000 characters of input, and can assist with tasks such as literature summarization and report analysis.

**[2023/11/06]** Released new versions of the **XVERSE-13B-2** base model and the **XVERSE-13B-Chat-2** chat model. Compared to the originals, the new models were trained more thoroughly (from 1.4T to 3.2T tokens), with substantial improvements across all capabilities, along with newly added function-call abilities.

**[2023/09/26]** Released the 7B-size [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) base model and [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) instruct-finetuned model, which can be deployed and run on a single consumer-grade GPU while remaining high-performance, fully open source, and free for commercial use.

**[2023/08/22]** Released the aligned instruct-finetuned model XVERSE-13B-Chat.

**[2023/08/07]** Released the 13B-size XVERSE-13B base model.
 
## Tokenizer Version Notes

For versions of the `tokenizers` library below 0.19, the tokenizer.json and tokenizer_config.json files in this repository can be used directly. For versions 0.19 and above, use the tokenizer.json.update and tokenizer_config.json.update files instead: copy the entire contents of these two files and paste them over the existing tokenizer.json and tokenizer_config.json files.
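The file swap above can be sketched as a small script. This is a minimal illustration, not part of the official repository: the checkout path and helper names are hypothetical, while the `*.update` filenames and the 0.19 threshold come from this note.

```python
# Sketch: pick the right tokenizer files for the installed `tokenizers` version.
# The repo path and function names are illustrative assumptions.
import shutil
from pathlib import Path


def needs_update_files(tokenizers_version: str) -> bool:
    """True for `tokenizers` >= 0.19, where the *.update files apply."""
    major, minor = (int(p) for p in tokenizers_version.split(".")[:2])
    return (major, minor) >= (0, 19)


def install_tokenizer_files(repo: Path, tokenizers_version: str) -> None:
    """Overwrite the stock tokenizer files with the *.update variants if needed."""
    if needs_update_files(tokenizers_version):
        shutil.copyfile(repo / "tokenizer.json.update", repo / "tokenizer.json")
        shutil.copyfile(repo / "tokenizer_config.json.update",
                        repo / "tokenizer_config.json")
```

In practice you would call `install_tokenizer_files(Path("XVERSE-13B-256K"), tokenizers.__version__)` after cloning the repository; overwriting in place keeps the filenames that `transformers` expects.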
## Model Introduction

**XVERSE-13B-256K** is a version of the [**XVERSE-13B-2**](https://huggingface.co/xverse/XVERSE-13B) model obtained through continued pre-training with ABF, followed by NTK-based SFT fine-tuning.