wenge-research committed
Commit 4c3379d
Parent: eb3397f

Update README.md

Files changed (1):
  1. README.md +8 -2
README.md CHANGED
@@ -16,9 +16,15 @@ license: other


## Introduction

- YAYI 2 is a collection of open-source large language models launched by Wenge Technology, comprising Base and Chat versions at the 30B parameter scale. YAYI2-39B is a Transformer-based large language model pretrained on 2.65 trillion tokens of high-quality multilingual data. For both general and domain-specific application scenarios, the base model has been fine-tuned on millions of instructions and aligned with human values through reinforcement learning from human feedback (RLHF). The model open-sourced in this release is the pre-trained base model, **YAYI2-30B**. By open-sourcing the YAYI 2 models, we hope to advance, and actively contribute to, the Chinese open-source community for pre-trained large language models, and to build the YAYI model ecosystem together with every partner. Stay tuned for more technical details in our upcoming technical report! 🔥

+ YAYI 2 is a collection of open-source large language models launched by Wenge Technology, comprising Base and Chat versions at the 30B parameter scale. YAYI2-30B is a Transformer-based large language model pretrained on 2.65 trillion tokens of high-quality multilingual data. For both general and domain-specific application scenarios, the base model has been fine-tuned on millions of instructions and aligned with human values through reinforcement learning from human feedback (RLHF). The model open-sourced in this release is the pre-trained base model, **YAYI2-30B**. By open-sourcing the YAYI 2 models, we hope to advance, and actively contribute to, the Chinese open-source community for pre-trained large language models, and to build the YAYI model ecosystem together with every partner. Stay tuned for more technical details in our upcoming technical report! 🔥
+
+ For more details about YAYI 2, please refer to our [GitHub](https://github.com/wenge-research/YAYI2) repository.


## Model
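
Since the README above presents YAYI2-30B as a standard pretrained causal language model, a minimal loading sketch may help readers of this commit. It is illustrative rather than part of the commit: the Hugging Face Hub repo id `wenge-research/yayi2-30b`, the `trust_remote_code=True` requirement, and the sample prompt are assumptions, not details confirmed by the README excerpt.

```python
# Minimal usage sketch for the released base model.
# Assumptions (not stated in the commit): the Hub repo id is
# "wenge-research/yayi2-30b" and the checkpoint ships custom model
# code, hence trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wenge-research/yayi2-30b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; places layers across devices
    trust_remote_code=True,
)

# Base (non-chat) model: plain text continuation, no chat template.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that a 30B-parameter checkpoint generally needs multiple GPUs or quantization to load; `device_map="auto"` handles placement but not the memory budget.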