---
license: other
license_name: yi-license
license_link: LICENSE
language:
- en
- ko
pipeline_tag: text-generation
inference: false
base_model: beomi/Yi-Ko-34B
tags:
- pytorch
- Yi-Ko
- 01-ai
- Yi
library_name: transformers
---
# Yi Ko 34B Instruct

## Training Process

1. Further pretraining on a Korean corpus.
2. Supervised fine-tuning (SFT).
3. DPO on [this preference dataset](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) (a sketch of this step follows below).
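
As a rough illustration of step 3 only, the sketch below uses `trl`'s `DPOTrainer` with the linked dataset. It is not the original training script: the starting checkpoint, hyperparameters, and column handling are assumptions, and argument names vary across `trl` versions.

```python
# Hypothetical DPO sketch (not the original training script).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Assumption: in practice the SFT checkpoint from step 2 would be used here.
sft_checkpoint = "beomi/Yi-Ko-34B"

model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# May need to be mapped to the prompt/chosen/rejected format expected by your trl version.
train_dataset = load_dataset(
    "argilla/distilabel-capybara-dpo-7k-binarized", split="train"
)

args = DPOConfig(
    output_dir="yi-ko-34b-dpo",
    beta=0.1,  # assumed preference-loss temperature; the real value is not documented here
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```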

## Model Info

| Context Length | Parameters | Prompt Template | KMMLU (5-shot) |
| --- | --- | --- | --- |
| 4k (4096) | 34B | ChatML (partly) | 49.03 |
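
Per the Prompt Template column above, the model expects ChatML-formatted prompts. The snippet below is a minimal usage sketch with 🤗 Transformers; the repo id placeholder and generation settings are assumptions, not part of this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: replace with the actual repo id of this instruct checkpoint.
model_id = "<your-org>/Yi-Ko-34B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML prompt template, per the table above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n한국의 수도는 어디인가요?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```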

## Acknowledgement

The training is supported by [Sionic AI](https://sionic.ai).

# Original Model Card by [beomi](https://huggingface.co/beomi)

Yi-Ko series models are advanced iterations of the 01-ai/Yi models,
benefiting from an expanded vocabulary and the inclusion of a Korean/English corpus in their further pretraining.
Just like their predecessors, Yi-Ko series models span the range of generative text models from 6 billion to 34 billion parameters.
This repository focuses on the **34B** pretrained version,
which is tailored to fit the Hugging Face Transformers format.
For access to the other models, feel free to consult the index provided below.

## Model Details

**Model Developers** Junbum Lee (Beomi)

**Variations** The Yi-Ko series comes in a range of parameter sizes, 6B and 34B, each trained with a Ko (Korean+English) corpus.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** 

Yi-Ko series models are auto-regressive language models that use an optimized transformer architecture based on Llama-2*.

<small>*The Yi model architecture is based on Llama-2, so it can be loaded via the `LlamaForCausalLM` class in Hugging Face Transformers, as in the sketch below.</small>
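
A minimal loading sketch, assuming the base model id from this card:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Because the architecture is Llama-2 compatible, the plain Llama class works.
# Note: a 34B checkpoint needs substantial memory; consider device_map or quantization.
model = LlamaForCausalLM.from_pretrained("beomi/Yi-Ko-34B")
tokenizer = AutoTokenizer.from_pretrained("beomi/Yi-Ko-34B")
```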

|Model Name|Training Data|Params|Context Length|GQA|Trained Tokens|LR|Train tokens (per batch)|
|---|---|---|---|---|---|---|---|
|Yi-Ko-34B|*A mix of Korean + English online data*|34B|4k|O|40B+|5e-5|4M|

**Vocab Expansion**

| Model Name | Vocabulary Size | Description | 
| --- | --- | --- |
| Original Yi-Series | 64000 | Sentencepiece BPE |
| **Expanded Yi-Ko Series** | 78464 | Sentencepiece BPE. Added Korean vocab and merges |

**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요.ㅎㅎ"**

| Model | # of tokens | Tokens |
| --- | --- | --- |
| Original Yi-Series | 47 | `['<0xEC>', '<0x95>', '<0x88>', '<0xEB>', '<0x85>', '<0x95>', '하', '<0xEC>', '<0x84>', '<0xB8>', '<0xEC>', '<0x9A>', '<0x94>', ',', '▁', '<0xEC>', '<0x98>', '<0xA4>', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '<0xEC>', '<0x94>', '<0xA8>', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '<0xEC>', '<0x9A>', '<0x94>', '.', '<0xE3>', '<0x85>', '<0x8E>', '<0xE3>', '<0x85>', '<0x8E>']` |
| **Expanded Yi-Ko Series** | 10 | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.', 'ㅎ', 'ㅎ']` |
|<small>*Same Korean vocabulary as the Llama-2-Ko series</small>| | |

**Tokenizing "The Yi series models are large language models trained from scratch by developers at 01.AI."**

| Model | # of tokens | Tokens |
| --- | --- | --- |
| Original Yi-Series | 21 | `['The', '▁Y', 'i', '▁series', '▁models', '▁are', '▁large', '▁language', '▁models', '▁trained', '▁from', '▁scratch', '▁by', '▁developers', '▁at', '▁', '0', '1', '.', 'AI', '.']` |
| **Expanded Yi-Ko Series** | 21 | `['▁The', '▁Y', 'i', '▁series', '▁models', '▁are', '▁large', '▁language', '▁models', '▁trained', '▁from', '▁scratch', '▁by', '▁developers', '▁at', '▁', '0', '1', '.', 'AI', '.']` |
|<small>*Same Korean vocabulary as the Llama-2-Ko series</small>| | <small>*Since the **Expanded Yi-Ko Series** prepends `▁` at the beginning of the text (to ensure identical tokenization of Korean sentences), only the first token differs for English text, and the difference is negligible.</small>|
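
The vocabulary sizes and the tokenizations above can be reproduced along these lines; this is a sketch, and using `01-ai/Yi-34B` as the original tokenizer for comparison is an assumption.

```python
from transformers import AutoTokenizer

yi_ko = AutoTokenizer.from_pretrained("beomi/Yi-Ko-34B")
yi = AutoTokenizer.from_pretrained("01-ai/Yi-34B")  # assumed original Yi tokenizer

print(len(yi), len(yi_ko))  # expected roughly 64000 vs 78464, per the table above

text = "안녕하세요, 오늘은 날씨가 좋네요.ㅎㅎ"
print(len(yi.tokenize(text)), yi.tokenize(text))        # byte-fallback tokens (~47)
print(len(yi_ko.tokenize(text)), yi_ko.tokenize(text))  # Korean subword tokens (10)
```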

# **Model Benchmark**

## LM Eval Harness - Korean Benchmarks

|     Tasks      |Version|Filter|n-shot| Metric |Value |   |Stderr|
|----------------|------:|------|-----:|--------|-----:|---|------|
|**kmmlu_direct**|N/A    |none  |     5|exact_match|**0.5027**|±  |0.1019|
|kobest_boolq    |      1|none  |     5|acc     |0.9202|±  |0.0072|
|                |       |none  |     5|f1      |0.9202|±  |N/A   |
|kobest_copa     |      1|none  |     5|acc     |0.8480|±  |0.0114|
|                |       |none  |     5|f1      |0.8479|±  |N/A   |
|kobest_hellaswag|      1|none  |     5|acc     |0.5320|±  |0.0223|
|                |       |none  |     5|f1      |0.5281|±  |N/A   |
|                |       |none  |     5|acc_norm|0.6340|±  |0.0216|
|kobest_sentineg |      1|none  |     5|acc     |0.9874|±  |0.0056|
|                |       |none  |     5|f1      |0.9874|±  |N/A   |
|haerae                         |N/A    |none  |     5|acc     |0.7965|±  |0.0116|
|                               |       |none  |     5|acc_norm|0.7965|±  |0.0116|
| - haerae_general_knowledge    |      1|none  |     5|acc     |0.5114|±  |0.0378|
|                               |       |none  |     5|acc_norm|0.5114|±  |0.0378|
| - haerae_history              |      1|none  |     5|acc     |0.8511|±  |0.0260|
|                               |       |none  |     5|acc_norm|0.8511|±  |0.0260|
| - haerae_loan_word            |      1|none  |     5|acc     |0.8402|±  |0.0283|
|                               |       |none  |     5|acc_norm|0.8402|±  |0.0283|
| - haerae_rare_word            |      1|none  |     5|acc     |0.8642|±  |0.0170|
|                               |       |none  |     5|acc_norm|0.8642|±  |0.0170|
| - haerae_standard_nomenclature|      1|none  |     5|acc     |0.8301|±  |0.0305|
|                               |       |none  |     5|acc_norm|0.8301|±  |0.0305|
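
For reference, results of this kind can be reproduced with EleutherAI's lm-evaluation-harness. Below is a rough sketch of its Python API; exact task names and arguments depend on the harness version, and the model id is a placeholder.

```python
import lm_eval

# Sketch only: task names/arguments may differ between lm-eval versions.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<your-org>/Yi-Ko-34B-Instruct,dtype=bfloat16",
    tasks=["kmmlu_direct", "kobest_boolq", "kobest_copa",
           "kobest_hellaswag", "kobest_sentineg", "haerae"],
    num_fewshot=5,
)
print(results["results"])
```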

## LICENSE

This model follows the Yi License.

## Citation



## Acknowledgement

The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program.