---
license: other
license_name: yi-license
license_link: LICENSE
widget:
- text: >-
    There's a place where time stands still. A place of breath taking wonder,
    but also
  output:
    text: >2-
       an eerie sense that something is just not right…
       Between the two worlds lies The Forgotten Kingdom - home to creatures
       long since thought extinct and ancient magic so strong it defies belief!
       Only here can you find what has been lost for centuries: An Elixir Of
       Life which will restore youth and vitality if only those who seek its
       power are brave enough to face up against all manner of dangers lurking
       in this mysterious land! But beware; some say there may even exist
       powerful entities beyond our comprehension whose intentions towards
       humanity remain unclear at best ---- they might want nothing more than
       destruction itself rather then anything else from their quest after
       immortality (and maybe someone should tell them about modern medicine)?
       In any event though – one thing remains true regardless : whether or not
       success comes easy depends entirely upon how much effort we put into
       conquering whatever challenges lie ahead along with having faith deep
       down inside ourselves too ;) So let’s get started now shall We?
pipeline_tag: text-generation
---
## Introduction

The Yi series models are large language models trained from scratch by
developers at 01.AI. The first public release contains two bilingual
(English/Chinese) base models with parameter sizes of 6B (Yi-6B) and 34B
(Yi-34B). Both are trained with a 4K sequence length, which can be extended
to 32K at inference time. Yi-6B-200K and Yi-34B-200K are base models with a
200K context length.
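This card does not specify the mechanism behind the 4K-to-32K extension. As an illustrative sketch only, the snippet below applies linear RoPE scaling, a common generic recipe for running a RoPE-based model beyond its training length with Hugging Face `transformers`; the scaling type and factor here are assumptions, not Yi's documented method.

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Hypothetical sketch: linear RoPE position interpolation is one generic way
# to run a 4K-trained model at a 32K window. Whether this matches Yi's own
# extension method is an assumption, not something stated on this card.
config = AutoConfig.from_pretrained("01-ai/Yi-6B")
config.rope_scaling = {"type": "linear", "factor": 8.0}  # 4K x 8 = 32K positions

model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-6B", config=config)
```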
## News

- 🔔 2023/11/15: The commercial licensing agreement for the Yi series models is set to be updated.
- 🎯 2023/11/06: The base models Yi-6B-200K and Yi-34B-200K, with 200K context length, were released.
- 🎯 2023/11/02: The base models Yi-6B and Yi-34B were released.
## Model Performance

| Model | MMLU (5-shot) | CMMLU (5-shot) | C-Eval (5-shot) | GAOKAO (0-shot) | BBH (3-shot@1) | Common-sense Reasoning | Reading Comprehension | Math & Code |
|---|---|---|---|---|---|---|---|---|
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | 39.8 |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| Yi-34B | 76.3 | 83.7 | 81.4 | 82.8 | 54.3 | 80.1 | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | 81.9 | 83.4 | 52.7 | 79.7 | 76.6 | 36.3 |
While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g., OpenCompass). Upon investigating this difference more closely, we found that different models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmarks, and greedy decoding is employed during evaluation, without any post-processing of the generated content. For scores that were not reported by the original authors (including scores reported under different settings), we attempt to obtain results with our own pipeline.
To evaluate the model's capabilities extensively, we adopted the methodology outlined in Llama 2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common-sense reasoning. SQuAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance is not underestimated.
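As a concrete illustration of the greedy-decoding setup described above, the sketch below disables sampling in Hugging Face `transformers` so that every step takes the highest-probability token. It is a minimal stand-in, not the actual evaluation harness, and the prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B")
model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-6B", device_map="auto")

prompt = "Question: What is 17 + 25?\nAnswer:"  # placeholder benchmark-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding: do_sample=False takes the argmax token at each step, so the
# output is deterministic and no temperature or top-p sampling is involved.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```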
## Usage

Please visit our GitHub repository for general guidance on how to use this model.
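For a quick start, here is a minimal loading-and-generation sketch using the Hugging Face `transformers` library. The Hub ID, precision, and device placement are assumptions to adjust for your setup; the GitHub repository remains the authoritative reference.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B"  # assumed Hub ID; substitute Yi-34B or a 200K variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # requires `accelerate`; shards across available GPUs
)

inputs = tokenizer("There's a place where time stands still.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```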
## Disclaimer

Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks or issues resulting from misuse, misguidance, illegal usage, and related misinformation, or from any associated data security concerns.
## License

The Yi series models are fully open for academic research and free for commercial use, subject to permission obtained via application. All usage must adhere to the Model License Agreement 2.0. To apply for the official commercial license, please contact us (yi@01.ai).
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 53.06 |
| ARC (25-shot) | 55.55 |
| HellaSwag (10-shot) | 76.42 |
| MMLU (5-shot) | 63.85 |
| TruthfulQA (0-shot) | 41.86 |
| Winogrande (5-shot) | 73.8 |
| GSM8K (5-shot) | 12.66 |
| DROP (3-shot) | 47.32 |