---
license: other
license_name: yi-license
license_link: LICENSE
widget:
- text: >-
    There's a place where time stands still. A place of breath taking wonder,
    but also
  output:
    text: >2-
       of great danger. A place where the very air you breathe could kill you. A place where the only way to survive is to be prepared.
      The place is called the Arctic.

      The Arctic is a vast, frozen wilderness. It is a place of extremes. The
      temperatures can drop to -40 degrees Celsius. The winds can reach speeds
      of 100 kilometers per hour. The sun can shine for 24 hours a day, or not
      at all for weeks on end.

      The Arctic is also a place of great beauty. The ice and snow are a
      pristine white. The sky is a deep blue. The sunsets are spectacular.

      But the Arctic is also a place of great danger. The ice can be
      treacherous. The winds can be deadly. The sun can be blinding.

      The Arctic is a place where the only way to survive is to be prepared.

      The Arctic is a place of extremes. The temperatures can drop to -40
      degrees Celsius. The winds can reach speeds of 100 kilometers per hour.
      The sun can shine for 24 hours a day, or not at all for weeks on end.

      The Arctic is a place of great beauty. The ice and snow are a
pipeline_tag: text-generation
datasets:
- togethercomputer/RedPajama-Data-V2
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
tags:
- finance
---
<div align="center">

<img src="./Yi.svg" width="200px">

</div>

## Introduction

The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/). The first public release contains two
bilingual (English/Chinese) base models with parameter sizes of 6B
([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B)) and 34B
([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both are trained with a 4K
sequence length, which can be extended to 32K at inference time.
[`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and
[`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models with
a 200K context length.
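
Extending the context window at inference time is typically done with RoPE position-interpolation scaling. Below is a minimal sketch of that idea, assuming the checkpoint accepts the Llama-style `rope_scaling` option in 🤗 `transformers`; this knob is an assumption, so check the GitHub repository for the officially supported mechanism.

```python
# Minimal sketch: extending a 4K-trained base model toward a 32K context at
# inference via linear RoPE scaling (32K / 4K = factor 8). The `rope_scaling`
# option is assumed, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
    rope_scaling={"type": "linear", "factor": 8.0},
)
```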

## News

- 🎯 **2023/11/06**: Released the base models [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length.
- 🎯 **2023/11/02**: Released the base models [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and
[`Yi-34B`](https://huggingface.co/01-ai/Yi-34B).


## Model Performance

| Model         |   MMLU   |  CMMLU   |  C-Eval  |  GAOKAO  |   BBH    | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
|               |  5-shot  |  5-shot  |  5-shot  |  0-shot  | 3-shot@1 |           -            |           -           |      -      |
| LLaMA2-34B    |   62.6   |    -     |    -     |    -     |   44.1   |          69.9          |         68.0          |    26.0     |
| LLaMA2-70B    |   68.9   |   53.3   |    -     |   49.8   |   51.2   |          71.9          |         69.4          |    36.8     |
| Baichuan2-13B |   59.2   |   62.0   |   58.1   |   54.3   |   48.8   |          64.3          |         62.4          |    23.0     |
| Qwen-14B      |   66.3   |   71.0   |   72.1   |   62.5   |   53.4   |          73.3          |         72.5          |  **39.8**   |
| Skywork-13B   |   62.1   |   61.8   |   60.6   |   68.1   |   41.7   |          72.4          |         61.4          |    24.9     |
| InternLM-20B  |   62.1   |   59.0   |   58.8   |   45.5   |   52.5   |          78.3          |           -           |    30.4     |
| Aquila-34B    |   67.8   |   71.4   |   63.1   |    -     |    -     |           -            |           -           |      -      |
| Falcon-180B   |   70.4   |   58.0   |   57.8   |   59.0   |   54.0   |          77.3          |         68.8          |    34.0     |
| Yi-6B         |   63.2   |   75.5   |   72.0   |   72.2   |   42.8   |          72.3          |         68.7          |    19.8     |
| Yi-6B-200K    |   64.0   |   75.3   |   73.5   |   73.9   |   42.0   |          72.0          |         69.1          |    19.0     |
| **Yi-34B**    | **76.3** | **83.7** |   81.4   |   82.8   | **54.3** |        **80.1**        |         76.4          |    37.1     |
| Yi-34B-200K   |   76.1   |   83.6   | **81.9** | **83.4** |   52.7   |          79.7          |       **76.6**        |    36.3     |

While benchmarking open-source models, we observed a disparity between the
results produced by our pipeline and those reported in public sources (e.g.,
OpenCompass). A closer investigation of this difference shows that models may
employ different prompts, post-processing strategies, and sampling techniques,
which can lead to significant variations in outcomes. Our prompts and
post-processing strategy remain consistent with the original benchmarks, and
greedy decoding is used during evaluation, with no post-processing applied to
the generated content. For scores that were not reported by the original
authors (including scores reported with different settings), we try to obtain
results with our own pipeline.
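
For concreteness, this decoding setup amounts to deterministic, sampling-free generation with the raw output kept as-is. The sketch below shows this with 🤗 `transformers`; the model ID and prompt are illustrative placeholders, not the actual evaluation harness.

```python
# Minimal sketch of greedy-decoding evaluation: deterministic generation,
# no sampling, no post-processing of the generated text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,  # greedy decoding, as used in the evaluation
)
# Keep the raw completion; scoring compares it against the reference answer.
completion = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(completion, skip_special_tokens=True))
```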

To evaluate the model's capabilities extensively, we adopted the methodology
outlined in Llama 2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common-sense reasoning. SQuAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was tested exclusively
in a 7-shot setup, while all other tests were conducted in a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.
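
Putting the shot settings from the table header and the paragraph above in one place, as a plain mapping (task names as written here, not identifiers from any particular evaluation harness):

```python
# Shot configuration per task, as described in this section.
EVAL_SHOTS = {
    # Knowledge / exam benchmarks (from the table header)
    "MMLU": 5, "CMMLU": 5, "C-Eval": 5, "GAOKAO": 0, "BBH": 3,
    # Common-sense reasoning: 0-shot except CSQA
    "PIQA": 0, "SIQA": 0, "HellaSwag": 0, "WinoGrande": 0,
    "ARC": 0, "OBQA": 0, "CSQA": 7,
    # Reading comprehension: 0-shot
    "SQuAD": 0, "QuAC": 0, "BoolQ": 0,
    # Math & Code
    "GSM8K": 8, "MATH": 4, "HumanEval": 0, "MBPP": 3,
}
```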

## Usage

Please visit our [GitHub repository](https://github.com/01-ai/Yi) for general
guidance on how to use this model.
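
As a quick start, a minimal sketch with the `transformers` pipeline API might look as follows; the `trust_remote_code` requirement is an assumption, so defer to the repository above for authoritative instructions.

```python
# Minimal sketch: load a Yi base model and generate a continuation.
# `trust_remote_code=True` is assumed to be required for the custom model code.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="01-ai/Yi-6B",  # smaller sibling; substitute 01-ai/Yi-34B as needed
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
out = generator("There's a place where time stands still.", max_new_tokens=64)
print(out[0]["generated_text"])
```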

## Disclaimer

We use data compliance checking algorithms during the training process to
ensure the compliance of the trained model to the best of our ability. However,
given the complexity of the data and the diversity of language model usage
scenarios, we cannot guarantee that the model will generate correct and
reasonable output in all scenarios. Please be aware that there is still a risk
of the model producing problematic outputs. We will not be responsible for any
risks or issues arising from misuse, misguidance, illegal usage, related
misinformation, or any associated data security concerns.

## License

The Yi series models are fully open for academic research and free for
commercial use, subject to permission obtained via application. All usage must
adhere to the [Model License Agreement
2.0](https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE). To apply for the
official commercial license, please contact us
([yi@01.ai](mailto:yi@01.ai)).