Du Chen committed
Commit a84426e
1 Parent(s): 7acc177

update readme and special_tokens_map.json
LICENSE CHANGED
@@ -0,0 +1,240 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright (C) 2023 ORION STAR Robotics
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+ Apache License Version 2.0(简体中文)
+ 1. 定义
+ “许可证”是指根据本文档第1到第9部分关于使用、重生成和分发的术语和条件。
+ “许可证颁发者”是指版权所有者或者由版权所有者批准的授权许可证的实体。
+ “法律实体”是指实施实体和进行控制的所有其它实体受该实体控制,或者受该实体集中控制。根据此定义,“控制”是指(i)让无论是否签订协议的上述实体,进行指导或管理的直接权利或间接权利,或者(ii)拥有百分之五十(50%)或以上已发行股票的所有者,或者(iii)上述实体的实权所有者。
+ “用户”(或“用户的”)是指行使本许可证所授予权限的个人或法律实体。
+ “源程序”形式是指对包含但不限制软件源代码、文档源程序和配置文件进行修改的首选形式。
+ “目标”形式是指对源程序形式进行机械转换或翻译的任何形式,包括但不限于对编译的目标代码,生成的文件以及转换为其它媒体类型。
+ “作品”是指根据本许可证所制作的源程序形式或目标形式的著作,在著作中包含的或附加的版权通知(在下面附录中提供了一个示例)。
+ “衍生作品”是指基于作品(或从作品衍生而来)的源程序形式或目标形式的任何作品,以及编辑修订、注释、详细描述或其它修订等构成原创著作作品的整体。根据本许可证,衍生作品不得包括与作品及其衍生作品分离之作品,或仅与作品及其衍生作品的接口相链接(或按名称结合)之作品。
+ “贡献”是指任何著作作品,包括作品的原始版本和对该作品或衍生作品所做的任何修订或补充,意在提交给许可证颁发者以让版权所有者或代表版权所有者的授权个人或法律实体包含在其作品中。根据此定义,“提交”一词表示发送给许可证颁发者或其代表人,任何电子的、口头的或书面的交流信息形式,包括但不限于在由许可证颁发者或者代表其管理的电子邮件清单、源代码控制系统、以及发布跟踪系统上为讨论和提高作品的交流,但不包括由版权所有者以书面形式明显标注或指定为“非贡献”的交流活动。
+ “贡献者”是指许可证颁发者和代表从许可证颁发者接受之贡献的并随后包含在作品之贡献中的任何个人或法律实体。
+ 2. 版权许可证的授予
+ 根据本许可证的条款,每个贡献者授予用户永久性的、全球性的、非专有性的、免费的、无版权费的、不可撤销的版权许可证以源程序形式或目标形式复制、准备衍生作品、公开显示、公开执行、授予分许可证、以及分发作品和这样的衍生作品。
+ 3. 专利许可证的授予
+ 根据本许可证的条款,每个贡献者授予用户永久性的、全球性的、非专有性的、免费的、无版权费的、不可撤销的(除在本部分进行说明)专利许可证对作品进行制作、让人制作、使用、提供销售、销售、进口和其它转让,且这样的许可证仅适用于在所递交作品的贡献中因可由单一的或多个这样的贡献者授予而必须侵犯的申请专利。如果用户对任何实体针对作品或作品中所涉及贡献提出因直接性或贡献性专利侵权而提起专利法律诉讼(包括交互诉讼请求或反索赔),那么根据本许可证,授予用户针对作品的任何专利许可证将在提起上述诉讼之日起终止。
+ 4. 重新分发
+ 用户可在任何媒介中复制和分发作品或衍生作品之副本,无论是否修订,还是以源程序形式或目标形式,条件是用户需满足下列条款:
+ 用户必须为作品或衍生作品的任何其他接收者提供本许可证的副本;
+ 并且用户必须让任何修改过的文件附带明显的通知,声明用户已更改文件;
+ 并且用户必须从作品的源程序形式中保留衍生作品源程序形式的用户所分发的所有版权、专利、商标和属性通知,但不包括不属于衍生作品任何部分的类似通知;
+ 并且如果作品将“通知”文本文件包括为其分发作品的一部分,那么用户分发的任何衍生作品中须至少在下列地方之一包括,在这样的通知文件中所包含的属性通知的可读副本,但不包括那些不属于衍生作品任何部分的通知:在作为衍生作品一部分而分发的通知文本文件中;如果与衍生作品一起提供则在源程序形式或文件中;或者通常作为第三方通知出现的时候和地方,在衍生作品中产生的画面中。通知文件的内容仅供信息提供,并未对许可证进行修改。用户可在其分发的衍生作品中在作品的通知文本后或作为附录添加自己的属性通知,条件是附加的属性通知不得构成修改本许可证。
+ 用户可以为自身所做出的修订添加自己的版权声明并可对自身所做出修订内容或为这样的衍生作品作为整体的使用、复制或分发提供附加或不同的条款,条件是用户对作品的使用、复制和分发必须符合本许可证中声明的条款。
+ 5. 贡献的提交
+ 除非用户明确声明,在作品中由用户向许可证颁发者的提交若要包含在贡献中,必须在无任何附加条款下符合本许可证的条款。尽管上面如此规定,执行许可证颁发者有关贡献的条款时,任何情况下均不得替代或修改任何单独许可证协议的条款。
+ 6. 商标
+ 本许可证并未授予用户使用许可证颁发者的商号、商标、服务标记或产品名称,除非将这些名称用于合理性和惯例性描述作品起源和复制通知文件的内容时。
+ 7. 保证否认条款
+ 除非因适用法律需要或书面同意,许可证颁发者以“按原样”基础提供作品(并且每个贡献者提供其贡献),无任何明示的或暗示的保证或条件,包括但不限于关于所有权、不侵权、商品适销性、或适用性的保证或条件。用户仅对使用或重新分发作品的正确性负责,并需承担根据本许可证行使权限时的任何风险。
+ 8. 责任限制条款
+ 在任何情况下并根据任何法律,无论是因侵权(包括过失)或根据合同,还是其它原因,除非根据适用法律需要(例如故意行为和重大过失行为)或经书面同意,即使贡献者事先已被告知发生损害的可能性,任何贡献者不就用户因使用本许可证或不能使用或无法使用作品(包括但不限于商誉损失、停工、计算机失效或故障,或任何商业损坏或损失)而造成的损失,包括直接的、非直接的、特殊的、意外的或间接的任何性质的损害而负责。
+ 9. 接受保证或附加责任
+ 重新分发作品或及其衍生作品时,用户可选择提供或为符合本许可证承担之支持、担保、赔偿或其它职责义务和/或权利而收取费用。但是,在承担上述义务时,用户只可代表用户本身和用户本身责任来执行,无需代表任何其它贡献者,并且用户仅可保证、防护并保持每个贡献者不受任何因此而产生的责任或对因用户自身承担这样的保证或附加责任而对这样的贡献者所提出的索赔。
+ 条款结束
ModelsCommunityLicenseAgreement ADDED
@@ -0,0 +1,167 @@
+ 【Orion-14B Series】 Models Community License Agreement
+ Version: 1.0
+ Date of Release:
+
+ 1. Definition
+
+ “Agreement” refers to the terms and conditions defined in this 【Orion-14B Series】 Models Community License Agreement for the use, reproduction, and distribution of 【Orion-14B Series】 Models.
+
+ “Model” refers to associated components (including checkpoints) developed based on machine learning, including learned weights and parameters (including the state of the optimizer).
+
+ “【Orion-14B Series】 Models” refers to open-source models with different specifications and capabilities provided by the Licensor, including:
+ 【Orion-14B-Base】Base model
+ 【Orion-14B-Chat】Chat model
+ 【Orion-14B-LongChat】Long context chat model
+ 【Orion-14B-Chat-RAG】Retrieval augmented generation chat model
+ 【Orion-14B-Chat-Plugin】Chat model with plugin capability
+ 【Orion-14B-Base-Int4】4-bit integer quantized base model
+ 【Orion-14B-Chat-Int4】4-bit integer quantized chat model
+
+ “Derivatives” refers to all modifications to 【Orion-14B Series】 Models, work based on 【Orion-14B Series】 Models, or any other models created or initialized by transferring the weights, parameters, activations, or output patterns of 【Orion-14B Series】 Models to other models to achieve similar performance, including but not limited to methods that require using intermediate data representations or generating synthetic data based on 【Orion-14B Series】 Models to train other models.
+
+ “Licensor” refers to Beijing Orionstar Technology Co., Ltd.
+
+ “you” refers to an individual or legal entity that exercises the license granted by this Agreement and/or uses the 【Orion-14B Series】 Models for any purpose and in any field of use.
+
+ “Third Party” refers to any individual, legal entity, or non-legal organization other than you.
+
+ “Distribute” refers to transmitting, copying, publishing, or otherwise sharing the 【Orion-14B Series】 Models with third parties, including providing the 【Orion-14B Series】 Models through electronic or other remote means (such as any SaaS software or PaaS software accessed via API or web access).
+
+ “Commercial Purposes” refers to the use of the 【Orion-14B Series】 Models, directly or indirectly, for the operation, promotion, revenue generation, or any other profit-making purposes of entities or individuals.
+
+ “Laws and Regulations” refers to the laws and administrative regulations of the mainland of the People's Republic of China (for the purposes of this Agreement only, excluding Hong Kong, Macau, and Taiwan).
+
+ “Personal Information” refers to various kinds of information related to identified or identifiable natural persons recorded electronically or by other means, excluding information that has been anonymized.
+
+ “Logo” refers to any trademark, service mark, trade name, domain name, website name, or other distinctive branding mark.
+
+ 2. License and License Restrictions
+ The Licensor hereby grants you a non-exclusive, global, non-transferable, non-sub-licensable, revocable, and royalty-free copyright license. You must adhere to the following license restrictions:
+
+ 1) Your use of the 【Orion-14B Series】 Models must comply with the Laws and Regulations as well as the applicable legal requirements of other countries/regions, and must respect social ethics and moral standards. This includes, but is not limited to, not using the 【Orion-14B Series】 Models for purposes prohibited by the Laws and Regulations or the applicable legal requirements of other countries/regions, such as harming national security, promoting terrorism or extremism, inciting ethnic or racial hatred, discrimination, violence, or pornography, or spreading false or harmful information.
+
+ 2) You shall not, for military or unlawful purposes or in ways not allowed by the Laws and Regulations or the applicable legal requirements of other countries/regions, a) use, copy, or Distribute the 【Orion-14B Series】 Models, or b) create complete or partial Derivatives of the 【Orion-14B Series】 Models.
+
+ 3) Your use of the 【Orion-14B Series】 Models (including using their output) and your creation of Derivatives must not infringe upon the legitimate rights of any Third Party, including but not limited to personal rights such as the rights to likeness, reputation, and privacy, as well as intellectual property rights such as copyrights, patents, trade secrets, and other property rights.
+
+ 4) You must clearly attribute the source of the 【Orion-14B Series】 Models to the Licensor and provide a copy of this Agreement to any Third-Party users of the 【Orion-14B Series】 Models and Derivatives.
+
+ 5) If you modify the 【Orion-14B Series】 Models to create Derivatives, you must clearly indicate the substantial modifications made, and these modifications shall not violate the license restrictions of this Agreement. You shall not enable, assist, or in any way facilitate Third Parties in violating the license restrictions of this Agreement.
+
+ If you plan to use the 【Orion-14B Series】 Models and Derivatives for Commercial Purposes, please refer to the Registration Form of 【Orion-14B Series】 Models for Commercial Purposes (“Registration Form”), available at 【https://test.orionstar.com/llm-license.html】, to complete the registration and obtain the license for Commercial Purposes. If you have obtained the license for Commercial Purposes and use the 【Orion-14B Series】 Models and Derivatives for Commercial Purposes, you must comply with the aforementioned license restrictions.
+
+
+ 3. Intellectual Property
+ The ownership of the 【Orion-14B Series】 Models and their related intellectual property rights is solely held by the Licensor.
+
+ In any circumstance, without the prior written consent of the Licensor, you are not allowed to use any Logo associated with the Licensor. If your use of the Licensor's Logo in violation of this Agreement causes any losses to the Licensor or others, you will bear full legal responsibility.
+
+ Within the scope of the granted license, you are authorized to modify the 【Orion-14B Series】 Models to create Derivatives. You may assert intellectual property rights over the portions of the Derivatives that are the product of your creative labor.
+
+
+ 4. Disclaimer and Limitation of Liability
+
+ The 【Orion-14B Series】 Models are provided "AS IS." The Licensor does not provide any express or implied warranties for the 【Orion-14B Series】 Models, including but not limited to stability, ownership, merchantability, non-infringement, or fitness for a specific purpose of the 【Orion-14B Series】 Models and their output results. You assume all responsibility for the risks and consequences arising from the use, reproduction, and distribution of the 【Orion-14B Series】 Models and the creation of Derivatives.
+
+ The Licensor complies with Laws and Regulations at all stages of model training, maintaining the legality, authenticity, accuracy, objectivity, and diversity of data and algorithms. The Licensor is not liable for any direct, indirect, or incidental consequences, or any other losses or damages, related to your use, reproduction, and distribution of the 【Orion-14B Series】 Models and your creation of Derivatives under this Agreement. This includes but is not limited to:
+
+ 1) The Licensor is not responsible for data security risks resulting from your use of the 【Orion-14B Series】 Models.
+
+ 2) The 【Orion-14B Series】 Models may contain Personal Information. When you use the 【Orion-14B Series】 Models, you acknowledge that you are the data processing entity as defined under the Laws and Regulations, responsible for determining the processing methods and purposes of Personal Information. You must comply with legal requirements for processing any Personal Information that may be contained in the 【Orion-14B Series】 Models and assume the associated legal responsibilities, as well as the risks and consequences of processing Personal Information.
+
+ 3) The Licensor is not liable for reputation risks arising from your use of the 【Orion-14B Series】 Models or of their output results.
+
+ 4) The Licensor is not liable for intellectual property risks associated with your use of the 【Orion-14B Series】 Models' output results.
+
+ If your use, reproduction, or distribution of the 【Orion-14B Series】 Models, or your creation of Derivatives, results in losses to the Licensor, the Licensor has the right to seek compensation from you. For any claims made by Third Parties against the Licensor related to your use, reproduction, and distribution of the 【Orion-14B Series】 Models, or your creation of Derivatives, the Licensor has the right to demand that you defend, indemnify, and hold the Licensor harmless.
+
+
+ 5. Dispute Resolution
+ The stipulation, effectiveness, interpretation, performance, modification, and termination of the Agreement, the use, copying, and distribution of the 【Orion-14B Series】 Models, and the resolution of disputes associated with such use, copying, and distribution shall be governed by the laws of the mainland of the People's Republic of China (for the purposes of this Agreement only, excluding Hong Kong, Macau, and Taiwan), and the application of conflict of laws is excluded.
+ Any disputes arising from the use, copying, or distribution of the 【Orion-14B Series】 Models should first be resolved through amicable negotiations. If negotiations fail, legal proceedings should be initiated in the People's Court at the location of the Licensor.
+
+
+ 6. Effectiveness and Termination of the Agreement
+
+ Your use of the 【Orion-14B Series】 Models signifies that you have read and agreed to be bound by the terms of the Agreement. The Agreement becomes effective from the date of your use of the 【Orion-14B Series】 Models and terminates from the date you cease using them. If you violate any terms or restrictions in the Agreement, the Licensor reserves the right to terminate the Agreement.
+
+ Upon termination of the Agreement, you must immediately cease using the 【Orion-14B Series】 Models. Section 4, "Disclaimer and Limitation of Liability," and Section 5, "Dispute Resolution," of this Agreement remain in effect after its termination.
+
+
+ 7. Updates to the Agreement and Contact Information
+
+ The Licensor reserves the right to update the Agreement from time to time.
+
+
+ 【Orion-14B系列】 模型社区许可协议
+ 版本:1.0
+ 发布日期:
+ 一、 定义
+ “许可”是指本协议中定义的使用、复制和分发的条款和条件。
+
+ “模型”是指任何附带的基于机器学习的组件(包括检查点),包括学习的权重、参数(包括优化器状态)。
+
+ “【Orion-14B系列】 模型”是指基于【Orion-14B-Base】模型构建的一系列具备领域特色的模型,包含:
+ 【Orion-14B-Base】基座模型
+ 【Orion-14B-Chat】对话模型
+ 【Orion-14B-LongChat】长上下文模型
+ 【Orion-14B-Chat-RAG】检索增强模型
+ 【Orion-14B-Chat-Plugin】插件模型
+ 【Orion-14B-Base-Int4】基座Int4量化模型
+ 【Orion-14B-Chat-Int4】对话Int4量化模型
+
+ “数据”是指从与模型一起使用的数据集中提取的信息和/或内容的集合,包括用于训练、预训练或以其他方式评估模型的数据。数据集中提取的信息和/或内容的集合,可能包含个人信息或非个人信息。
+
+ “个人信息”是指以电子或者其他方式记录的与已识别或者可识别的自然人有关的各种信息,不包括匿名化处理后的信息。个人信息的处理包括个人信息的收集、存储、使用、加工、传输、提供、公开、删除等。
+
+ “输出”是指运行模型的结果,体现在由此产生的信息内容中。
+
+ “训练”是指为模型提供训练数据,以增强模型的预测能力。
+
+ “模型衍生品”是指对【Orion-14B系列】模型的所有修改、基于【Orion-14B系列】模型的工作,或通过将【Orion-14B系列】模型的权重、参数、激活或输出模式转移到其他模型而创建或初始化的任何其他模型,以使其他模型的性能与【Orion-14B系列】模型类似,包括但不限于需要使用中间数据表示的提取方法或基于【Orion-14B系列】模型生成合成数据来训练其他模型的方法。
+
+ “分发”是指向第三方传输、复制、发布或以其他方式共享模型或模型衍生品,包括将模型作为通过电子或其他远程方式(例如基于 API 或 Web 访问的任何 SaaS 软件或 PaaS 软件)提供的托管服务。
+
+ “许可方”是指授予许可的版权所有者或版权所有者实体,包括可能对模型和/或被分发模型拥有权利的个人或实体。本协议下的许可方是:【北京猎户星空科技有限公司】,或其授权可对任何第三方进行许可的实体或个人。
+
+ “您”(或“您的”)是指行使本许可授予的权限和/或出于任何目的和在任何使用领域使用模型的个人或法人实体,属于本协议的被许可人。
+
+ “第三方”是指您之外的任何个人、法人实体或非法人组织。
+
+ “商业用途”是指使用【Orion-14B系列】模型,直接或间接为实体或个人进行运营、推广或产生收入,或用于任何其他盈利目的。
+
+ 二、 许可及许可限制
+ 根据本许可协议的条款和条件,许可方特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销、免版税的版权许可。您可以出于非商业用途使用此许可。许可方对您使用【Orion-14B系列】模型的输出或基于【Orion-14B系列】模型得到的模型衍生品不主张任何权利,但您必须满足如下许可限制条件:
+ 1. 您不得出于任何军事或非法目的使用、复制、修改、合并、发布、分发或创建【Orion-14B系列】模型的全部或部分衍生品。
+ 2. 如果您计划将【Orion-14B系列】模型及模型衍生品用作商业用途,应当按照本协议提供的联络方式,事先向许可方登记并获得许可方的书面授权。请点击以下链接进行登记:https://test.orionstar.com/llm-license.html
+ 3. 您对【Orion-14B系列】模型的使用和修改(包括使用【Orion-14B系列】模型的输出或者基于【Orion-14B系列】模型得到的模型衍生品)不得违反任何国家的法律法规,尤其是中华人民共和国的法律法规,不得侵犯任何第三方的合法权益,包括但不限于肖像权、名誉权、隐私权等人格权,著作权、专利权、商业秘密等知识产权,或者其他财产权益。
+ 4. 您必须向【Orion-14B系列】模型或其模型衍生品的任何第三方使用者提供【Orion-14B系列】模型的来源以及本协议的副本。
+ 5. 您修改【Orion-14B系列】模型得到模型衍生品,必须以显著的方式标明修改的内容,且上述修改不得违反本协议的许可限制条件,也不能允许、协助或以其他方式使得第三方违反本协议中的许可限制条件。
+ 三、 知识产权
+ 1. 【Orion-14B系列】模型的所有权及其相关知识产权,由许可方单独所有。
+ 2. 在任何情况下,未经许可方事先书面同意,您不得使用许可方任何商标、服务标记、商号、域名、网站名称或其他显著品牌特征(以下统称为“标识”),包括但不限于明示或暗示您自身为“许可方”。未经许可方事先书面同意,您不得将本条款前述标识以单独或结合的任何方式展示、使用或申请注册商标、进行域名注册等,也不得向他人明示或暗示有权展示、使用、或以其他方式处理这些标识的权利。由于您违反本协议使用许可方上述标识等给许可方或他人造成损失的,由您承担全部法律责任。
+ 3. 在许可范围内,您可以对【Orion-14B系列】模型进行修改以得到模型衍生品,对于模型衍生品中您付出创造性劳动的部分,您可以主张该部分的知识产权。
+ 四、 免责声明及责任限制
+ 1. 在任何情况下,许可方不对您根据本协议使用【Orion-14B系列】模型而产生或与之相关的任何直接、间接、附带的后果、以及其他损失或损害承担责任。若由此导致许可方遭受损失,您应当向许可方承担全部赔偿责任。
+ 2. 模型中的模型参数仅仅是一种示例,如果您需要满足其他要求,需自行训练,并遵守相应数据集的许可协议。您将对【Orion-14B系列】模型的输出及模型衍生品所涉及的知识产权风险或与之相关的任何直接、间接、附带的后果、以及其他损失或损害负责。
+ 3. 尽管许可方在【Orion-14B系列】模型训练的所有阶段,都坚持努力维护数据的合规性和准确性,但受限于【Orion-14B系列】模型的规模及其概率固有的随机性因素影响,其输出结果的准确性无法得到保证,模型存在被误导的可能。因此,许可方在此声明,许可方不承担您因使用【Orion-14B系列】模型及其源代码而导致的数据安全问题、声誉风险,或任何涉及【Orion-14B系列】模型被误导、误用、传播或不正当使用而产生的任何风险和责任。
+ 4. 本协议所称损失或损害包括但不限于下列任何损失或损害(无论此类损失或损害是不可预见的、可预见的、已知的或其他的):(i)收入损失;(ii)实际或预期利润损失;(iii)货币使用损失;(iv)预期节约的损失;(v)业务损失;(vi)机会损失;(vii)商誉、声誉损失;(viii)软件的使用损失;或(ix)任何间接、附带的特殊或间接损害损失。
+ 5. 除非适用的法律另有要求或经过许可方书面同意,否则许可方将按“现状”授予【Orion-14B系列】模型的许可。针对本协议中的【Orion-14B系列】模型,许可方不提供任何明示、暗示的保证,包括但不限于:关于所有权的任何保证或条件、关于适销性的保证或条件、适用于任何特定目的的保证或条件、过去、现在或未来关于【Orion-14B系列】模型不侵权的任何类型的保证、以及因任何交易过程、贸易使用(如建议书、规范或样品)而产生的任何保证。您将对其通过使用、复制或再分发等方式利用【Orion-14B系列】模型所产生的风险与后果,独自承担责任。
+ 6. 您充分知悉并理解同意,【Orion-14B系列】模型中可能包含个人信息。您承诺将遵守所有适用的法律法规进行个人信息的处理,特别是遵守《中华人民共和国个人信息保护法》的相关规定。请注意,许可方给予您使用【Orion-14B系列】模型的授权,并不意味着您已经获得处理相关个人信息的合法性基础。您作为独立的个人信息处理者,需要保证在处理【Orion-14B系列】模型中可能包含的个人信息时,完全符合相关法律法规的要求,包括但不限于获得个人信息主体的授权同意等,并愿意独自承担由此可能产生的任何风险和后果。
+ 7. 您充分理解并同意,许可方有权依合理判断对违反有关法律法规或本协议规定的行为进行处理,对您的违法违规行为采取适当的法律行动,并依据法律法规保存有关信息向有关部门报告等,您应独自承担由此而产生的一切法律责任。
+ 五、 研究、教育和学术目的
+ 1. 根据本许可协议的条款和条件,本着对学术界做出贡献的精神,许可方鼓励非营利性学术机构的师生将【Orion-14B系列】模型用于研究、教育和学术目的。
+ 2. 进一步的,如您以研究、教育和学术目的使用【Orion-14B系列】模型,您可以在开展相关研究、教育前,将您的机构名称、使用情况以及联系方式以邮件方式向我们进行提前告知,我们的联系邮箱为【ai@orionstar.com】,我们将可能基于您的联系方式,向您推送【Orion-14B系列】模型的相关更新资讯,以便您更好地开展研究、教育和学术工作。
+ 六、 品牌曝光与显著标识
+ 1. 您同意并理解,如您将您基于【Orion-14B系列】模型二次开发的模型衍生品在国内外的开源社区提供开源许可的,您需要在该开源社区以显著方式标注该模型衍生品系基于【Orion-14B系列】模型进行的二次开发,标注内容包括但不限于“【Orion-14B Series】 Inside”以及与【Orion-14B系列】模型相关的品牌的其他元素。
+ 2. 您同意并理解,如您将【Orion-14B系列】模型二次开发的模型衍生品参加国内外任何组织和个人举行的排名活动,包括但不限于针对模型性能、准确度、算法、算力等任何维度的排名活动,您均需在模型说明中以显著方式标注该模型衍生品系基于【Orion-14B系列】模型进行的二次开发,标注内容包括但不限于“【Orion-14B Series】Inside”以及与【Orion-14B系列】模型相关的品牌的其他元素。
+ 七、 其他
+ 1. 许可方在法律法规许可的范围内对协议条款享有最终解释权。
+ 2. 本协议的订立、效力、解释、履行、修改和终止,使用【Orion-14B系列】模型以及争议的解决均适用中华人民共和国大陆地区(仅为本协议之目的,不包括香港、澳门和台湾)法律,并排除冲突法的适用。
+ 3. 因使用【Orion-14B系列】模型而发生的任何争议,各方应首先通过友好协商的方式加以解决。协商不成时,向许可方所在地人民法院提起诉讼。
+
README.md CHANGED
@@ -1,5 +1,332 @@
 ---
 license: other
 license_name: orion
- license_link: LICENSE
+ license_link: https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin/blob/main/ModelsCommunityLicenseAgreement
+ widget:
+ - text: "Hi!"
+   output:
+     text: "Hello! How can I help you today?"
+ pipeline_tag: text-generation
 ---
+
+ <!-- markdownlint-disable first-line-h1 -->
+ <!-- markdownlint-disable html -->
+ ![](./assets/imgs/orion_start.PNG)
+
+ <div align="center">
+ <h1>
+ Orion-14B
+ </h1>
+ </div>
+
+ <div align="center">
+
+ <h4 align="center">
+ <p>
+ <b>🌐English</b> |
+ <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin/blob/main/README_cn.md">🇨🇳中文</a><br><br>
+ 🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a>
+ </p>
+ </h4>
+
+ </div>
+
+
+ # Table of Contents
+
+ - [📖 Model Introduction](#model-introduction)
+ - [🔗 Model Download](#model-download)
+ - [🔖 Model Benchmarks](#model-benchmarks)
+ - [📊 Model Inference](#model-inference)
+ - [🥇 Company Introduction](#company-introduction)
+ - [📜 Declarations & License](#declarations-license)
+
+ # Model Introduction
+
+ - The Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on a 2.5T-token multilingual corpus covering Chinese, English, Japanese, Korean, and other languages, and it exhibits superior performance in these languages.
+
+ - The Orion-14B series models exhibit the following features:
+   - Among models at the 20B-parameter scale, the Orion-14B-Base model shows outstanding performance in comprehensive evaluations.
+   - Strong multilingual capabilities, significantly outperforming comparable models on Japanese and Korean test sets.
+   - The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests.
+   - The long-context version supports extremely long texts of up to 200K tokens.
+   - The quantized versions reduce model size by 70% and improve inference speed by 30%, with a performance loss of less than 1%.
+ ![](./assets/imgs/model_cap_en.png)
+
+ - The Orion-14B series includes:
+   - **Orion-14B-Base:** A multilingual large language foundation model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.
+   - **Orion-14B-Chat:** A chat model fine-tuned on a high-quality corpus, aiming to provide an excellent interactive experience for users in the large model community.
+   - **Orion-14B-LongChat:** This model is optimized for context lengths of more than 200K tokens and demonstrates performance comparable to proprietary models on long-context evaluation sets.
+   - **Orion-14B-Chat-RAG:** A chat model fine-tuned on a custom retrieval-augmented generation dataset, achieving superior performance on retrieval-augmented generation tasks.
+   - **Orion-14B-Chat-Plugin:** A chat model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system.
+   - **Orion-14B-Base-Int4:** A quantized base model using 4-bit integer weights. It reduces model size by 70% and increases inference speed by 30% while incurring a performance loss of only about 1%.
+   - **Orion-14B-Chat-Int4:** A quantized chat model using 4-bit integer weights.
+
+ # Model Download
+
+ Model release and download links are provided in the table below:
+
+ | Model Name              | HuggingFace Download Links                                                        | ModelScope Download Links                                                                       |
+ |-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
+ | ⚾Orion-14B-Base        | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base)               | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary)               |
+ | 😛Orion-14B-Chat        | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat)               | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary)               |
+ | 📃Orion-14B-LongChat    | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat)       | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary)       |
+ | 🔎Orion-14B-Chat-RAG    | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG)       | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary)       |
+ | 🔌Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) |
+ | 💼Orion-14B-Base-Int4   | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4)     | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary)     |
+ | 📦Orion-14B-Chat-Int4   | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4)     | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary)     |
+
+ # Model Benchmarks
+
+ ## 1. Base Model Benchmarks
+ ### LLM evaluation results on examination and professional knowledge
+ | Model              | C-Eval   | CMMLU    | MMLU     | AGIEval  | Gaokao   | BBH      |
+ |--------------------|----------|----------|----------|----------|----------|----------|
+ | LLaMA2-13B         | 41.4     | 38.4     | 55.0     | 30.9     | 18.2     | 45.6     |
+ | Skywork-13B        | 59.1     | 61.4     | 62.7     | 43.6     | 56.1     | 48.3     |
+ | Baichuan2-13B      | 59.0     | 61.3     | 59.5     | 37.4     | 45.6     | 49.0     |
+ | QWEN-14B           | 71.7     | 70.2     | 67.9     | 51.9     | **62.5** | 53.7     |
+ | InternLM-20B       | 58.8     | 59.0     | 62.1     | 44.6     | 45.5     | 52.5     |
+ | **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1     | **56.5** |
+
+ ### LLM evaluation results on language understanding and common knowledge
+ | Model              |RACE-middle|RACE-high |HellaSwag | PIQA     | Lambada  | WSC      |
+ |--------------------|----------|----------|----------|----------|----------|----------|
+ | LLaMA 2-13B        | 63.0     | 58.9     | 77.5     | 79.8     | 76.5     | 66.3     |
+ | Skywork-13B        | 87.6     | 84.1     | 73.7     | 78.3     | 71.8     | 66.3     |
+ | Baichuan 2-13B     | 68.9     | 67.2     | 70.8     | 78.1     | 74.1     | 66.3     |
+ | QWEN-14B           | 93.0     | 90.3     | **80.2** | 79.8     | 71.4     | 66.3     |
+ | InternLM-20B       | 86.4     | 83.3     | 78.1     | **80.3** | 71.8     | 68.3     |
+ | **Orion-14B-Base** | **93.3** | **91.3** | 78.5     | 79.5     | **78.9** | **70.2** |
+
+ ### LLM evaluation results on OpenCompass test sets
+ | Model            | Average  | Examination | Language | Knowledge | Understanding | Reasoning |
+ |------------------|----------|----------|----------|----------|----------|----------|
+ | LLaMA 2-13B      | 47.3     | 45.2     | 47.0     | 58.3     | 50.9     | 43.6     |
+ | Skywork-13B      | 53.6     | 61.1     | 51.3     | 52.7     | 64.5     | 45.2     |
+ | Baichuan 2-13B   | 49.4     | 51.8     | 47.5     | 48.9     | 58.1     | 44.2     |
+ | QWEN-14B         | 62.4     | 71.3     | 52.67    | 56.1     | 68.8     | 60.1     |
+ | InternLM-20B     | 59.4     | 62.5     | 55.0     | **60.1** | 67.3     | 54.9     |
+ |**Orion-14B-Base**| **64.4** | **71.4** | **55.0** | 60.0     | **71.9** | **61.6** |
+
+ ### Comparison of LLM performances on Japanese test sets
+ | Model              |**Average**| JCQA     | JNLI     | MARC     | JSQD     | JQK      | XLS      | XWN      | MGSM     |
+ |--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
+ | PLaMo-13B          | 52.3     | 56.7     | 42.8     | 95.8     | 70.6     | 71.0     | 8.70     | 70.5     | 2.40     |
+ | WebLab-10B         | 50.7     | 66.6     | 53.7     | 82.1     | 62.9     | 56.2     | 10.0     | 72.0     | 2.40     |
+ | ELYZA-jp-7B        | 48.8     | 71.7     | 25.3     | 86.6     | 70.8     | 64.1     | 2.50     | 62.1     | 7.20     |
+ | StableLM-jp-7B     | 51.1     | 33.4     | 43.3     | **96.7** | 70.6     | 78.1     | 10.7     | 72.8     | 2.80     |
+ | LLaMA 2-13B        | 46.3     | 75.0     | 47.6     | 38.8     | 76.1     | 67.7     | 18.1     | 63.2     | 10.4     |
+ | Baichuan 2-13B     | 57.1     | 73.7     | 31.3     | 91.6     | 80.5     | 63.3     | 18.6     | 72.2     | 25.2     |
+ | QWEN-14B           | 65.8     | 85.9     | 60.7     | 97.0     | 83.3     | 71.8     | 18.8     | 70.6     | 38.0     |
+ | Yi-34B             | 67.1     | 83.8     | 61.2     | 95.2     | **86.1** | 78.5     | **27.2** | 69.2     | 35.2     |
+ | **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1     | 75.7     | **85.1** | 17.3     | **78.8** | **38.0** |
+
+ ### Comparison of LLM performances on Korean test sets (n = 0 and n = 5 denote the number of shots used in the evaluation)
+ |Model             | **Average**<br>n=0&nbsp;&nbsp;n=5 | HellaSwag<br>n=0&nbsp;&nbsp;n=5 | COPA<br> n=0&nbsp;&nbsp;n=5 | BoolQ<br>n=0&nbsp;&nbsp;n=5 | SentiNeg<br>n=0&nbsp;&nbsp;n=5|
+ |------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
+ | KoGPT            | 53.0 &nbsp;&nbsp; 70.1       | 55.9 &nbsp;&nbsp; 58.3       | 73.5 &nbsp;&nbsp; 72.9       | 45.1 &nbsp;&nbsp; 59.8       | 37.5 &nbsp;&nbsp; 89.4       |
+ | Polyglot-ko-13B  | 69.6 &nbsp;&nbsp; 73.7       |**59.5** &nbsp;&nbsp; **63.1**|**79.4** &nbsp;&nbsp; **81.1**| 48.2 &nbsp;&nbsp; 60.4       | 91.2 &nbsp;&nbsp; 90.2       |
+ | LLaMA 2-13B      | 46.7 &nbsp;&nbsp; 63.7       | 41.3 &nbsp;&nbsp; 44.0       | 59.3 &nbsp;&nbsp; 63.8       | 34.9 &nbsp;&nbsp; 73.8       | 51.5 &nbsp;&nbsp; 73.4       |
+ | Baichuan 2-13B   | 52.1 &nbsp;&nbsp; 58.7       | 39.2 &nbsp;&nbsp; 39.6       | 60.6 &nbsp;&nbsp; 60.6       | 58.4 &nbsp;&nbsp; 61.5       | 50.3 &nbsp;&nbsp; 72.9       |
+ | QWEN-14B         | 53.8 &nbsp;&nbsp; 73.7       | 45.3 &nbsp;&nbsp; 46.8       | 64.9 &nbsp;&nbsp; 68.9       | 33.4 &nbsp;&nbsp; 83.5       | 71.5 &nbsp;&nbsp; 95.7       |
+ | Yi-34B           | 54.2 &nbsp;&nbsp; 72.1       | 44.6 &nbsp;&nbsp; 44.7       | 58.0 &nbsp;&nbsp; 60.6       | 65.9 &nbsp;&nbsp; 90.2       | 48.3 &nbsp;&nbsp; 92.9       |
+ |**Orion-14B-Chat**|**74.5** &nbsp;&nbsp; **79.6**| 47.0 &nbsp;&nbsp; 49.6       | 77.7 &nbsp;&nbsp; 79.4       |**81.6** &nbsp;&nbsp; **90.7**|**92.4** &nbsp;&nbsp; **98.7**|
+
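The zero-shot and five-shot settings above differ only in how many worked examples precede the actual question. As an illustration of that prompting pattern (a generic sketch, not the evaluation harness actually used for these Korean test sets), an n-shot prompt can be assembled like this:

```python
def build_nshot_prompt(examples, question, n):
    """Prepend n solved Q/A examples before the real question.

    n=0 reproduces the zero-shot setting: the model sees only the question.
    """
    shots = examples[:n]
    lines = [f"Q: {q}\nA: {a}" for q, a in shots]
    lines.append(f"Q: {question}\nA:")   # answer left open for the model
    return "\n\n".join(lines)

# Hypothetical examples, purely for illustration.
examples = [("2+2?", "4"), ("Capital of Korea?", "Seoul")]
zero_shot = build_nshot_prompt(examples, "3+5?", n=0)
two_shot = build_nshot_prompt(examples, "3+5?", n=2)
```

Few-shot scores are usually higher because the examples pin down both the task and the expected answer format.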
+ ### Multilingual evaluation
+ | Model              | Train Lang | Japanese | Korean   | Chinese  | English  |
+ |--------------------|------------|----------|----------|----------|----------|
+ | PLaMo-13B          | En,Jp      | 52.3     | *        | *        | *        |
+ | Weblab-10B         | En,Jp      | 50.7     | *        | *        | *        |
+ | ELYZA-jp-7B        | En,Jp      | 48.8     | *        | *        | *        |
+ | StableLM-jp-7B     | En,Jp      | 51.1     | *        | *        | *        |
+ | KoGPT-6B           | En,Ko      | *        | 70.1     | *        | *        |
+ | Polyglot-ko-13B    | En,Ko      | *        | 70.7     | *        | *        |
+ | Baichuan2-13B      | Multi      | 57.1     | 58.7     | 50.8     | 57.1     |
+ | Qwen-14B           | Multi      | 65.8     | 73.7     | 64.5     | 65.4     |
+ | Llama2-13B         | Multi      | 46.3     | 63.7     | 41.4     | 55.3     |
+ | Yi-34B             | Multi      | 67.1     | 72.2     | 58.7     | **68.8** |
+ | **Orion-14B-Chat** | Multi      | **69.1** | **79.5** | **67.9** | 67.3     |
+
+
+ ## 2. Chat Model Benchmarks
+ ### Chat model subjective evaluation on MTBench
+ | Model                | First-Turn | Second-Turn | **Average** |
+ |----------------------|----------|----------|----------|
+ | Baichuan2-13B-Chat   | 7.05     | 6.47     | 6.76     |
+ | Qwen-14B-Chat        | 7.30     | 6.62     | 6.96     |
+ | Llama2-13B-Chat      | 7.10     | 6.20     | 6.65     |
+ | InternLM-20B-Chat    | 7.03     | 5.93     | 6.48     |
+ | **Orion-14B-Chat**   | **7.68** | **7.07** | **7.37** |
+ \* vLLM was used for inference
+
+ ### Chat model subjective evaluation on AlignBench
+ | Model              | Math.  | Logi.  | Basic. | Chi.   | Comp.  | Writ.  | Role.  | Prof.  |**Avg.**|
+ |--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
+ | Baichuan2-13B-Chat | 3.76   | 4.07   | 6.22   | 6.05   | 7.11   | 6.97   | 6.75   | 6.43   | 5.25   |
+ | Qwen-14B-Chat      |**4.91**|**4.71**|**6.90**| 6.36   | 6.74   | 6.64   | 6.59   | 6.56   |**5.72**|
+ | Llama2-13B-Chat    | 3.05   | 3.79   | 5.43   | 4.40   | 6.76   | 6.63   | 6.99   | 5.65   | 4.70   |
+ | InternLM-20B-Chat  | 3.39   | 3.92   | 5.96   | 5.50   |**7.18**| 6.19   | 6.49   | 6.22   | 4.96   |
+ | **Orion-14B-Chat** | 4.00   | 4.24   | 6.18   |**6.57**| 7.16   |**7.36**|**7.16**|**6.99**| 5.51   |
+ \* vLLM was used for inference
+
+ ## 3. LongChat Model Benchmarks
+ ### LongChat evaluation on LongBench
+ | Model                    | NarrativeQA|MultiFieldQA-en|MultiFieldQA-zh| DuReader  | QMSum     | VCSUM     | TREC      | TriviaQA  | LSHT      |RepoBench-P|
+ |--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
+ | GPT-3.5-Turbo-16k        | **23.60** | **52.30** | **61.20** | 28.70     | 23.40     | **16.00** | 68.00     | **91.40** | 29.20     | 53.60     |
+ | LongChat-v1.5-7B-32k     | 16.90     | 41.40     | 29.10     | 19.50     | 22.70     | 9.90      | 63.50     | 82.30     | 23.20     | 55.30     |
+ | Vicuna-v1.5-7B-16k       | 19.40     | 38.50     | 43.00     | 19.30     | 22.80     | 15.10     | 71.50     | 86.20     | 28.80     | 43.50     |
+ | Yi-6B-200K               | 14.11     | 36.74     | 22.68     | 14.01     | 20.44     | 8.08      | 72.00     | 86.61     | 38.00     | **63.29** |
+ | Orion-14B-LongChat       | 19.47     | 48.11     | 55.84     | **37.02** | **24.87** | 15.44     | **77.00** | 89.12     | **45.50** | 54.31     |
+
+
+ ## 4. Chat RAG Model Benchmarks
+ ### LLM evaluation results on a self-built RAG test set
+ |Model|Effectiveness of Response(Keyword)|*Effectiveness of Response(subjective evaluation)|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction|
+ |---------------------|------|------|------|------|------|------|
+ | Baichuan2-13B-Chat  | 85   | 76   | 1    | 0    | 69   | 51   |
+ | Qwen-14B-Chat       | 79   | 77   | 75   | 47   | 68   | 72   |
+ | Qwen-72B-Chat(Int4) | 87   | 89   | 90   | 32   | 67   | 76   |
+ | GPT-4               | 91   | 94   | 96   | 95   | 75   | 86   |
+ | Orion-14B-Chat-RAG  | 86   | 87   | 91   | 97   | 73   | 71   |
+ \* indicates manual assessment
+
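The quoting and fallback abilities scored above are properties of the retrieval-augmented prompting pattern: retrieved passages are injected into the prompt and the model is told to cite them or decline. The exact prompt format used by Orion-14B-Chat-RAG is not documented here; the following is a hypothetical sketch of that pattern only:

```python
def compose_rag_prompt(passages, question):
    """Hypothetical RAG prompt: number the retrieved passages so the model
    can quote them by index, and spell out the fallback behaviour."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the passages below. Cite passage numbers, and reply "
        "'Not found in the provided documents.' if they do not contain the "
        f"answer.\n\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = compose_rag_prompt(
    ["Orion-14B was released by OrionStarAI.", "It has 14B parameters."],
    "Who released Orion-14B?",
)
```

The explicit fallback instruction is what the "Fallback Ability" column measures: whether the model declines instead of hallucinating when the retrieved context lacks the answer.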
+ ## 5. Chat Plugin Model Benchmarks
+ ### LLM evaluation results on a self-built plugin test set
+ |Model                  |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition |
+ |-----------------------|--------|-----------|--------|
+ | Baichuan2-13B-Chat    | 25     | 0         | 0      |
+ | Qwen-14B-Chat         | 55     | 0         | 50     |
+ | GPT-4                 | **95** | 52.38     | 70     |
+ | Orion-14B-Chat-Plugin | 92.5   | **60.32** | **90** |
+
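The three columns correspond to three cases an agent runtime must distinguish once the model has produced its output: a plugin call with all required parameters, a call with parameters missing, and a plain answer with no call. Orion-14B-Chat-Plugin's actual calling convention is defined by its own prompt format and is not shown here; the sketch below uses a hypothetical JSON convention purely to illustrate the classification:

```python
import json

# Hypothetical plugin registry; the plugin name, argument schema, and the
# JSON output convention are all assumptions for illustration.
PLUGINS = {"get_weather": {"required": ["city"]}}

def dispatch(model_output: str) -> str:
    """Classify a model reply into the three cases measured above."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return "no_plugin"                 # plain chat answer, no call
    if not isinstance(call, dict) or call.get("plugin") not in PLUGINS:
        return "no_plugin"
    required = PLUGINS[call["plugin"]]["required"]
    missing = [p for p in required if p not in call.get("args", {})]
    return "missing_params" if missing else "full_params"

full = dispatch('{"plugin": "get_weather", "args": {"city": "Beijing"}}')
partial = dispatch('{"plugin": "get_weather", "args": {}}')
none = dispatch("It is sunny today.")
```

In the "missing params" case a well-behaved plugin model asks the user for the missing value rather than inventing one.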
+ ## 6. Quantized Model Benchmarks
+ ### Comparison before and after quantization
+ |Model                    |Size(GB)|Inference Speed(tokens/s)|C-Eval|CMMLU |MMLU  |RACE  |HellaSwag|
+ |-------------------------|-------|-----|------|------|------|------|------|
+ | OrionStar-14B-Base      | 28.0  | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
+ | OrionStar-14B-Base-Int4 | 8.3   | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |
+
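The headline figures for the Int4 model can be recomputed directly from the table:

```python
size_fp, size_int4 = 28.0, 8.3   # model size in GB, from the table
tps_fp, tps_int4 = 135, 178      # inference speed in tokens/s, from the table

size_reduction = (size_fp - size_int4) / size_fp   # fraction of size removed
speedup = (tps_int4 - tps_fp) / tps_fp             # relative speed gain

print(f"size reduction: {size_reduction:.0%}")  # prints "size reduction: 70%"
print(f"speedup: {speedup:.0%}")                # prints "speedup: 32%"
```

The computed speedup is roughly 32%, consistent with the "30%" claimed in the model introduction, and the benchmark columns show the accuracy drop stays within about one point.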
+ # Model Inference
+
+ The model weights, source code, and configuration needed for inference are published on Hugging Face, and the
+ download links are available in the table at the beginning of this document. Several inference methods are
+ demonstrated here; the program will automatically download the required resources from Hugging Face.
+
+ ## Python Code
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from transformers.generation.utils import GenerationConfig
+
+ # trust_remote_code is required because the Orion model definition ships
+ # with the checkpoint rather than with the transformers library.
+ tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
+                                              torch_dtype=torch.bfloat16, trust_remote_code=True)
+
+ model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")
+ messages = [{"role": "user", "content": "Hello, what is your name? "}]
+ response = model.chat(tokenizer, messages, streaming=False)
+ print(response)
+ ```
+
+
+ In the above Python code, the model is loaded with `device_map="auto"` to utilize all available GPUs. To restrict the
+ devices used, set something like `export CUDA_VISIBLE_DEVICES=0,1` (to use GPUs 0 and 1).
+
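Multi-turn conversations follow the same `messages` convention: append the assistant's reply and the next user turn before calling `model.chat` again. A minimal sketch of the bookkeeping (the `model.chat` call itself is replaced by a placeholder string here, so the snippet runs without the model):

```python
messages = [{"role": "user", "content": "Hello, what is your name?"}]
# response = model.chat(tokenizer, messages, streaming=False)
response = "I am Orion-14B."   # placeholder standing in for the model's reply

# Record the reply, add the next user turn, then call model.chat again
# with the full history so the model sees the whole conversation.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Tell me a joke."})
```

Passing the full history on every call is what gives the chat model its conversational context; truncate old turns if the history approaches the context limit.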
+ ## Command Line Tool
+
+ ```shell
+ CUDA_VISIBLE_DEVICES=0 python cli_demo.py
+ ```
+
+ This command-line tool is designed for chat scenarios and therefore does not support calling the base model.
+
+ ## Direct Script Inference
+
+ ```shell
+ # base model
+ CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello
+
+ # chat model
+ CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi
+ ```
+
+ # Example Output
+
+ ## Casual Chat
+
+ `````
+ User: Hello
+ Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
+ User: What functions do you have?
+ Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to:
+
+ Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more.
+ Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases.
+ Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants.
+ Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me.
+ Translating languages: I can translate text from one language to another.
+ Chatting: I can engage in conversation with users and respond to their queries in a conversational manner.
+ Summarizing long texts: I can summarize longer texts like articles, books, and research papers.
+ Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria.
+ Playing games: I can play simple games like Hangman, Trivia, and more.
+ Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences.
+ These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it.
+ User: Tell me a joke.
+ Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything.
+ `````
+
+ ## Japanese & Korean Chat
+
+ `````
+ User:自己を紹介してください
+ Orion-14B:こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。
+ User:자기소개를 해주세요.
+ Orion-14B:안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다.
+ `````
+
+ # Company Introduction
+
+ OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to
+ using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break
+ free from repetitive physical labor and making human work and life more intelligent and enjoyable. Through technology,
+ OrionStar aims to make society and the world a better place.
+
+ OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and
+ visual navigation. It integrates product development capabilities and technological application capabilities. Based on
+ the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini,
+ Lucki, Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for
+ Truly Useful Robots", OrionStar empowers more people through AI technology.
+
+ # Declarations, License
+
+ ## Declarations
+
+ We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law.
+ Additionally, we request users not to use the Orion-14B model for internet services without proper security review and filing.
+ We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment.
+ We have done our best to ensure the compliance of the data used in the model training process. However, despite our
+ significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any
+ problems arise due to the use of the Orion-14B open-source model, including but not limited to data security
+ issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or
+ improperly utilized, we will not assume any responsibility.
+
+ ## License
+
+ Community use of the Orion-14B series models:
+ - For code, please comply with the [Apache License Version 2.0](./LICENSE)<br>
+ - For the models, please comply with the [【Orion-14B Series】 Models Community License Agreement](./ModelsCommunityLicenseAgreement)
+
+ # Contact Us
+
+ Email: ai@orionstar.com
+
+ ![](./assets/imgs/wechat_group.jpg)
+
README_cn.md ADDED
@@ -0,0 +1,331 @@
+ ---
+ license: other
+ license_name: orion
+ license_link: https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin/blob/main/ModelsCommunityLicenseAgreement
+ widget:
+ - text: "你好,你叫什么名字?"
+   output:
+     text: "你好!我是一个人工智能助手,没有固定的名字。你可以随意称呼我。有什么我可以帮助你的吗?"
+ pipeline_tag: text-generation
+ ---
+
+ <!-- markdownlint-disable first-line-h1 -->
+ <!-- markdownlint-disable html -->
+ ![](./assets/imgs/orion_start.PNG)
+
+ <div align="center">
+ <h1>
+ Orion-14B
+ </h1>
+ </div>
+
+ <div align="center">
+
+ <h4 align="center">
+ <p>
+ <b>🇨🇳中文</b> |
+ <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin/blob/main/README.md">🌐English</a><br><br>
+ 🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace主页</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope主页</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace在线试用</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope在线试用</a>
+ </p>
+ </h4>
+
+ </div>
+
+
+ # 目录
+
+ - [📖 模型介绍](#模型介绍)
+ - [🔗 下载路径](#下载路径)
+ - [🔖 评估结果](#评估结果)
+ - [📊 模型推理](#模型推理)
+ - [🥇 企业介绍](#企业介绍)
+ - [📜 声明协议](#声明协议)
+
47
+
48
+ - Orion-14B-Base是一个具有140亿参数的多语种大模型,该模型在一个包含2.5万亿token的多样化数据集上进行了训练,涵盖了中文、英语、日语、韩语等多种语言。在多语言环境下的一系列任务中展现出卓越的性能。在主流的公开基准评测中,Orion-14B系列模型表现优异,多项指标显著超越同等参数基本的其他模型。
49
+
50
+ - Orion-14B系列大模型有以下几个特点:
51
+ - 基座20B参数级别大模型综合评测效果表现优异
52
+ - 多语言能力强,在日语、韩语测试集上显著领先
53
+ - 微调模型适应性强,在人类标注盲测中,表现突出
54
+ - 长上下文版本支持超长文本,长达200k token
55
+ - 量化版本模型大小缩小70%,推理速度提升30%,性能损失小于1%
56
+ ![](./assets/imgs/model_cap_zh.png)
57
+
58
+ - 具体而言,Orion-14B系列大语言模型包含:
59
+ - **Orion-14B-Base:** 基于2.5万亿令牌多样化数据集训练处的140亿参数量级的多语言基座模型。
60
+ - **Orion-14B-Chat:** 基于高质量语料库微调的对话类模型,旨在为大模型社区提供更好的用户交互体验。
61
+ - **Orion-14B-LongChat:** 支持长度超过200K令牌上下文的交互,在长文本评估集上性能比肩专有模型。
62
+ - **Orion-14B-Chat-RAG:** 在一个定制的检索增强生成数据集上进行微调的聊天模型,在检索增强生成任务中取得了卓越的性能。
63
+ - **Orion-14B-Chat-Plugin:** 专门针对插件和函数调用任务定制的聊天模型,非常适用于使用代理的相关场景,其中大语言模型充当插件和函数调用系统。
64
+ - **Orion-14B-Base-Int4:** 一个使用4位整数进行量化的基座模型。它将模型大小显著减小了70%,同时提高了推理速度30%,仅引入了1%的最小性能损失。
65
+ - **Orion-14B-Chat-Int4:** 一个使用4位整数进行量化的对话模型。
66
+
67
+ # 下载路径
68
+
69
+ 发布模型和下载链接见下表:
70
+
71
+ | 模型名称 | HuggingFace下载链接 | ModelScope下载链接 |
72
+ |---------------------|-----------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
73
+ | ⚾ 基座模型 | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) |
74
+ | 😛 对话模型 | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) |
75
+ | 📃 长上下文模型 | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) |
76
+ | 🔎 检索增强模型 | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) |
77
+ | 🔌 插件模型 | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary)|
78
+ | 💼 基座Int4量化模型 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) |
79
+ | 📦 对话Int4量化模型 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) |
80
+
81
+ # 评估结果
82
+
83
+ ## 1. 基座模型评估
84
+ ### 专业知识与试题评估结果
85
+ | 模型名称 | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH |
86
+ |--------------------|----------|----------|----------|----------|----------|----------|
87
+ | LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 |
88
+ | Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 |
89
+ | Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 |
90
+ | QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 |
91
+ | InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 |
92
+ | **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** |
93
+
94
+ ### 理解与通识评估结果
95
+ | 模型名称 |RACE-middle|RACE-high| HellaSwag| PIQA | Lambada | WSC |
96
+ |--------------------|----------|----------|----------|----------|----------|----------|
97
+ | LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 |
98
+ | Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 |
99
+ | Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 |
100
+ | QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 |
101
+ | InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 |
102
+ | **Orion-14B-Base** | **93.3** | **91.3** | 78.5 | 79.5 | **78.9** | **70.2** |
103
+
104
+ ### OpenCompass评测集评估结果
105
+ | 模型名称 | Average | Examination | Language | Knowledge | Understanding | Reasoning |
106
+ |------------------|----------|----------|----------|----------|----------|----------|
107
+ | LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 |
108
+ | Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 |
109
+ | Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 |
110
+ | QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 |
111
+ | InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 |
112
+ |**Orion-14B-Base**| **64.4** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** |
113
+
114
+ ### 日语测试集评估结果
115
+ | 模型名称 |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
116
+ |--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
117
+ | PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
118
+ | WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
119
+ | ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
120
+ | StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
121
+ | LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
122
+ | Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
123
+ | QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
124
+ | Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 |
125
+ | **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** |
126
+
127
+ ### 韩语测试集n-shot评估结果
128
+ | 模型名称 | **Average**<br>n=0&nbsp;&nbsp;n=5 | HellaSwag<br>n=0&nbsp;&nbsp;n=5 | COPA<br> n=0&nbsp;&nbsp;n=5 | BooIQ<br>n=0&nbsp;&nbsp;n=5 | SentiNeg<br>n=0&nbsp;&nbsp;n=5|
129
+ |------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
130
+ | KoGPT | 53.0 &nbsp;&nbsp; 70.1 | 55.9 &nbsp;&nbsp; 58.3 | 73.5 &nbsp;&nbsp; 72.9 | 45.1 &nbsp;&nbsp; 59.8 | 37.5 &nbsp;&nbsp; 89.4 |
131
+ | Polyglot-ko-13B | 69.6 &nbsp;&nbsp; 73.7 |**59.5** &nbsp;&nbsp; **63.1**|**79.4** &nbsp;&nbsp; **81.1**| 48.2 &nbsp;&nbsp; 60.4 | 91.2 &nbsp;&nbsp; 90.2 |
132
+ | LLaMA 2-13B | 46.7 &nbsp;&nbsp; 63.7 | 41.3 &nbsp;&nbsp; 44.0 | 59.3 &nbsp;&nbsp; 63.8 | 34.9 &nbsp;&nbsp; 73.8 | 51.5 &nbsp;&nbsp; 73.4 |
133
+ | Baichuan 2-13B | 52.1 &nbsp;&nbsp; 58.7 | 39.2 &nbsp;&nbsp; 39.6 | 60.6 &nbsp;&nbsp; 60.6 | 58.4 &nbsp;&nbsp; 61.5 | 50.3 &nbsp;&nbsp; 72.9 |
134
+ | QWEN-14B | 53.8 &nbsp;&nbsp; 73.7 | 45.3 &nbsp;&nbsp; 46.8 | 64.9 &nbsp;&nbsp; 68.9 | 33.4 &nbsp;&nbsp; 83.5 | 71.5 &nbsp;&nbsp; 95.7 |
135
+ | Yi-34B | 54.2 &nbsp;&nbsp; 72.1 | 44.6 &nbsp;&nbsp; 44.7 | 58.0 &nbsp;&nbsp; 60.6 | 65.9 &nbsp;&nbsp; 90.2 | 48.3 &nbsp;&nbsp; 92.9 |
136
+ |**Orion-14B-Base**|**74.5** &nbsp;&nbsp; **79.6**| 47.0 &nbsp;&nbsp; 49.6 | 77.7 &nbsp;&nbsp; 79.4 |**81.6** &nbsp;&nbsp; **90.7**|**92.4** &nbsp;&nbsp; **98.7**|
137
+
138
+ ### 多语言评估结果
139
+ | 模型名称 | Train Lang | Japanese | Korean | Chinese | English |
140
+ |--------------------|------------|----------|----------|----------|----------|
141
+ | PLaMo-13B | En,Jp | 52.3 | * | * | * |
142
+ | Weblab-10B | En,Jp | 50.7 | * | * | * |
143
+ | ELYZA-jp-7B | En,Jp | 48.8 | * | * | * |
144
+ | StableLM-jp-7B | En,Jp | 51.1 | * | * | * |
145
+ | KoGPT-6B | En,Ko | * | 70.1 | * | * |
146
+ | Polyglot-ko-13B | En,Ko | * | 70.7 | * | * |
147
+ | Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 |
148
+ | Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 |
149
+ | Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 |
150
+ | Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** |
151
+ | **Orion-14B-Base** | Multi | **69.1** | **79.5** | **67.9** | 67.3 |
152
+
+ ## 2. 对话模型评估
+ ### 对话模型MTBench主观评估
+ | 模型名称             | 第一轮   | 第二轮   | **平均** |
+ |----------------------|----------|----------|----------|
+ | Baichuan2-13B-Chat   | 7.05     | 6.47     | 6.76     |
+ | Qwen-14B-Chat        | 7.30     | 6.62     | 6.96     |
+ | Llama2-13B-Chat      | 7.10     | 6.20     | 6.65     |
+ | InternLM-20B-Chat    | 7.03     | 5.93     | 6.48     |
+ | **Orion-14B-Chat**   | **7.68** | **7.07** | **7.37** |
+ \*这里评测使用vllm进行推理
+
+ ### 对话模型AlignBench主观评估
+ | 模型名称           | 数学能力 | 逻辑推理 | 基本能力 | 中文理解 | 综合问答 | 写作能力 | 角色扮演 | 专业知识 | **平均** |
+ |--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
+ | Baichuan2-13B-Chat | 3.76     | 4.07     | 6.22     | 6.05     | 7.11     | 6.97     | 6.75     | 6.43     | 5.25     |
+ | Qwen-14B-Chat      | **4.91** | **4.71** | **6.90** | 6.36     | 6.74     | 6.64     | 6.59     | 6.56     | **5.72** |
+ | Llama2-13B-Chat    | 3.05     | 3.79     | 5.43     | 4.40     | 6.76     | 6.63     | 6.99     | 5.65     | 4.70     |
+ | InternLM-20B-Chat  | 3.39     | 3.92     | 5.96     | 5.50     | **7.18** | 6.19     | 6.49     | 6.22     | 4.96     |
+ | **Orion-14B-Chat** | 4.00     | 4.24     | 6.18     | **6.57** | 7.16     | **7.36** | **7.16** | **6.99** | 5.51     |
+ \*这里评测使用vllm进行推理
+
174
+ ## 3. 长上下文模型评估
175
+ ### 长上下文模型LongBench评估
176
+ | 模型名称 | NarrativeQA| MultiFieldQA-en| MultiFieldQA-zh | DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT | RepoBench-P |
177
+ |--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
178
+ | GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 |
179
+ | LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 |
180
+ | Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 |
181
+ | Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** |
182
+ | Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 |
183

## 4. Retrieval-Augmented Model Evaluation
### Results on a self-built RAG test set
| Model               | Response (keyword) | \*Response (subjective score) | Citation | Fallback | \*AutoQA | \*Data Extraction |
|---------------------|------|------|------|------|------|------|
| Baichuan2-13B-Chat  | 85   | 76   | 1    | 0    | 69   | 51   |
| Qwen-14B-Chat       | 79   | 77   | 75   | 47   | 68   | 72   |
| Qwen-72B-Chat(Int4) | 87   | 89   | 90   | 32   | 67   | 76   |
| GPT-4               | 91   | 94   | 96   | 95   | 75   | 86   |
| Orion-14B-Chat-RAG  | 86   | 87   | 91   | 97   | 73   | 71   |

\* Columns marked with an asterisk were judged by humans.

## 5. Plugin Model Evaluation
### Results on a self-built plugin test set
| Model                 | Intent Recognition (full params) | Intent Recognition (missing params) | Non-plugin Call Recognition |
|-----------------------|--------|-----------|--------|
| Baichuan2-13B-Chat    | 25     | 0         | 0      |
| Qwen-14B-Chat         | 55     | 0         | 50     |
| GPT-4                 | **95** | 52.38     | 70     |
| Orion-14B-Chat-Plugin | 92.5   | **60.32** | **90** |

## 6. Quantized Model Evaluation
### Overall comparison before and after quantization
| Model                   | Size (GB) | Inference Speed (tokens/s) | C-Eval | CMMLU | MMLU | RACE | HellaSwag |
|-------------------------|------|-----|------|------|------|------|------|
| OrionStar-14B-Base      | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
| OrionStar-14B-Base-Int4 | 8.3  | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |
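A quick derived calculation (ours, not stated in the source) makes the trade-off explicit: Int4 quantization cuts storage by roughly 70% and raises throughput by about 32%, at a cost of around one point on most benchmarks.

```python
# Figures taken from the quantization table above.
size_full_gb, size_int4_gb = 28.0, 8.3
speed_full, speed_int4 = 135, 178          # tokens per second

storage_saved = 1 - size_int4_gb / size_full_gb   # fraction of storage saved
throughput_gain = speed_int4 / speed_full - 1     # relative speedup

print(f"storage saved: {storage_saved:.0%}, throughput gain: {throughput_gain:.0%}")
# prints: storage saved: 70%, throughput gain: 32%
```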


# Model Inference

The model weights, source code, and configuration needed for inference are published on Hugging Face; download links are in the table at the beginning of this document. Below we demonstrate several inference methods. The programs automatically download the required resources from Hugging Face.

## Inference with Python Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
                                             torch_dtype=torch.bfloat16, trust_remote_code=True)

model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")
messages = [{"role": "user", "content": "你好! 你叫什么名字!"}]
response = model.chat(tokenizer, messages, streaming=False)
print(response)
```
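`model.chat` takes the conversation as a list of role/content dictionaries. For multi-turn use, earlier turns ride along in the same list; here is a minimal sketch, assuming the conventional `"assistant"` role name for prior replies (the role name is our assumption; the snippet above only shows a `"user"` turn):

```python
def build_messages(history, new_user_input):
    """Flatten prior (user, assistant) turn pairs plus a new user turn
    into the list-of-dicts format shown in the snippet above."""
    messages = []
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        # "assistant" is an assumed role name for prior model replies.
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": new_user_input})
    return messages

msgs = build_messages([("你好! 你叫什么名字!", "你好!我是一个人工智能助手。")], "讲个故事")
# msgs now holds three entries: user, assistant, user
```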

In the code above, the model is loaded with `device_map="auto"`, which uses all available GPUs. To pin the run to specific devices, set an environment variable such as `export CUDA_VISIBLE_DEVICES=0,1` (which uses only GPUs 0 and 1).

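For example, the following restricts a run to the first two GPUs; `device_map="auto"` then only distributes the model across the devices left visible:

```shell
# Expose only GPUs 0 and 1 to this shell and its child processes.
export CUDA_VISIBLE_DEVICES=0,1
echo "visible devices: $CUDA_VISIBLE_DEVICES"
```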
## Inference with the Command-Line Tool

```shell
CUDA_VISIBLE_DEVICES=0 python cli_demo.py
```

This command-line tool is designed for chat scenarios, so it does not support calling the Base model.

## Direct Script Inference

```shell
# base model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt 你好,你叫什么名字

# chat model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt 你好,你叫什么名字
```

## Example Outputs

## Casual Chat

`````
User: Hello, what is your name?
Orion-14B: Hello! I am an AI assistant without a fixed name. You may call me whatever you like. Is there anything I can help you with?
User: Tell me a story.
Orion-14B: Once upon a time there was a little boy named Jack who lived in a small village. Jack loved adventure and exploration, and always dreamed of discovering something magical.

One day, Jack heard that deep in the nearby forest there was a legendary enchanted castle, inhabited by an evil giant. The giant's name was Grim, and he was trapped in the castle, unable to leave.

Jack decided to rescue the giant, for he believed Grim must have a reason for being trapped there. So, with courage and determination, he set out on the journey to the enchanted castle.

After a long trek, Jack finally arrived at the castle gate. He carefully pushed it open and entered a room filled with an air of mystery.

A dark atmosphere pervaded the room, but Jack was not frightened. He bravely walked deeper into the castle and at last found Grim.

Grim told Jack that he had been cursed by an evil witch, and only by breaking the spell could he regain his freedom. Jack resolved to help Grim and began searching for clues and a way to undo the curse.

After much effort, Jack found a path leading to the witch's secret laboratory. There he discovered the spellbook the witch had left behind, which recorded the method for breaking the curse.

Jack followed the steps written in the book and successfully lifted the curse on Grim. The giant was deeply grateful and promised to help Jack fulfill his wishes.

From then on, Jack and Grim became good friends, shared many thrilling adventures, and overcame all kinds of difficulties with their courage and wisdom.

This story tells us that with courage and determination, we can overcome any difficulty and achieve our dreams.
`````

## Japanese & Korean

`````
User: 自己を紹介してください
Orion-14B: こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。
User: 자기소개를 해주세요.
Orion-14B: 안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다.
`````

# Company Introduction

OrionStar (猎户星空) is a leading global service-robot solutions company, founded in September 2016. OrionStar is dedicated to building the next generation of revolutionary robots based on artificial intelligence, freeing people from repetitive physical labor, making work and life smarter and more enjoyable, and using technology to make society and the world a better place.

OrionStar owns fully self-developed, full-chain AI technologies such as voice interaction and visual navigation, integrating product development capability with technology application capability. Based on the Orion robotic-arm platform, it has launched products including ORION STAR AI Robot Greeting, AI Robot Greeting Mini, Lucki, and Coffee Master, and has built OrionOS, an open platform for Orion robots. Practicing the philosophy of being **born for truly useful robots**, it empowers more people through AI technology.

Drawing on seven years of accumulated AI experience, OrionStar has launched "聚言" (Juyan), a deep application of large models, and continues to provide industry customers with customized large-model consulting and service solutions, genuinely helping customers achieve operational efficiency ahead of their peers.

**OrionStar's core advantage is its full-chain large-model application capability**: full-chain experience spanning massive data processing, large-model pretraining, continued pretraining, fine-tuning, prompt engineering, and agent development, plus complete end-to-end model training capability, including a systematic data-processing pipeline and parallel training on hundreds of GPUs. These capabilities have already been deployed in government services, cloud services, cross-border e-commerce, fast-moving consumer goods, and other industry scenarios.

***Enterprises with large-model application needs are welcome to contact us for business cooperation. Consultation hotline: 400-898-7779.***

# Declarations and License

## Declarations

We strongly urge all users not to use the Orion-14B models for any activity that endangers national or social security or violates the law. We also ask users not to use the Orion-14B models for internet services that have not undergone appropriate security review and filing.

We hope all users abide by this principle to ensure that technological development proceeds in a regulated and lawful environment.
We have done our best to ensure the compliance of the data used in the model-training process. However, despite our substantial efforts, unforeseen issues may still arise due to the complexity of models and data. Therefore, we assume no responsibility for any problems arising from the use of the Orion-14B open-source models, including but not limited to data security issues, public-opinion risks, or any risks and problems caused by the models being misled, misused, disseminated, or improperly exploited.

## License

Community use of the Orion-14B series models must follow:
- the [Apache License Version 2.0](./LICENSE) for code<br>
- the [Orion-14B Series Models Community License Agreement](./ModelsCommunityLicenseAgreement) for models

# Contact Us

Email: ai@orionstar.com

![](./assets/imgs/wechat_group.jpg)
assets/imgs/model_cap_en.png ADDED
assets/imgs/model_cap_zh.png ADDED
assets/imgs/orion_start.PNG ADDED
assets/imgs/wechat_group.jpg ADDED
special_tokens_map.json CHANGED
```diff
@@ -4,27 +4,27 @@
     "lstrip": false,
     "normalized": true,
     "rstrip": false,
-    "single_word": true
+    "single_word": false
   },
   "eos_token": {
     "content": "</s>",
     "lstrip": false,
     "normalized": true,
     "rstrip": false,
-    "single_word": true
+    "single_word": false
   },
   "unk_token": {
     "content": "<unk>",
     "lstrip": false,
     "normalized": true,
     "rstrip": false,
-    "single_word": true
+    "single_word": false
   },
   "pad_token": {
     "content": "<unk>",
     "lstrip": false,
     "normalized": true,
     "rstrip": false,
-    "single_word": true
+    "single_word": false
   }
 }
```