tianyuz committed on
Commit
02f81ad
1 Parent(s): 54be623
Files changed (7) hide show
  1. README.md +152 -0
  2. config.json +27 -0
  3. pytorch_model.bin +3 -0
  4. rinna.png +0 -0
  5. spiece.model +3 -0
  6. spiece.vocab +0 -0
  7. tokenizer_config.json +1 -0
README.md CHANGED
@@ -1,6 +1,8 @@
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: mit
language:
- ja
- en
@@ -9,12 +11,162 @@ inference: false

# bilingual-gpt-neox-4b-instruction-sft

---
# Update

- **2023/07/31** In the previously released `rinna/bilingual-gpt-neox-4b-instruction-sft`, we found that part of the training data (i.e. Openchat ShareGPT4 and WizardLM) has a non-commercial license and thus does not comply with **the MIT license**. We decided to remove the previous version and build a new SFT model from datasets with less strict licenses. The new model will be uploaded in a few days. We sincerely apologize for our careless mistake.

---
# License
[The MIT license](https://opensource.org/licenses/MIT)
 
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: mit
+ datasets:
+ - Anthropic/hh-rlhf
language:
- ja
- en
 
# bilingual-gpt-neox-4b-instruction-sft

+ ![rinna-icon](./rinna.png)
+
---
# Update

+ - **2023/08/02** We uploaded the newly trained `rinna/bilingual-gpt-neox-4b-instruction-sft` under the MIT license.
+   - Please refrain from using the previous model released on 2023/07/31 for commercial purposes if you have already downloaded it.
+   - The new model released on 2023/08/02 has a more permissive license and better evaluation performance, so we suggest using it instead.
+   - For reference, here are the MD5 checksums of the `pytorch_model.bin` files of the previous and current models.
+     - 2023/07/31 model: `edf190a323c0ae63f71476700fb0b462`
+     - 2023/08/02 model: `de72aa5b66beee7b65783c96f687d186`
- **2023/07/31** In the previously released `rinna/bilingual-gpt-neox-4b-instruction-sft`, we found that part of the training data (i.e. Openchat ShareGPT4 and WizardLM) has a non-commercial license and thus does not comply with **the MIT license**. We decided to remove the previous version and build a new SFT model from datasets with less strict licenses. The new model will be uploaded in a few days. We sincerely apologize for our careless mistake.
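The checksums above can be verified locally before trusting a downloaded file; a minimal sketch using only the Python standard library:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in chunks
    so large checkpoints (e.g. pytorch_model.bin) fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: compare against the published checksum of the 2023/08/02 model.
# assert md5_of_file("pytorch_model.bin") == "de72aa5b66beee7b65783c96f687d186"
```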

---

+ # Overview
+ This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters.
+
+ The model is based on [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b) and has been fine-tuned to serve as an instruction-following conversational agent.
+
+ * **Model architecture**
+
+     A 36-layer, 2816-hidden-size transformer-based language model.
+
+ * **Fine-tuning**
+
+     The fine-tuning data is a subset of the following datasets.
+     * [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation
+     * [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation
+
+ * **Model Series**
+
+     | Variant | Link |
+     | :-- | :-- |
+     | Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 |
+     | Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft |
+     | Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k |
+     | Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b |
+     | Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo |
+     | Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 |
+     | Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft |
+     | Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b |
+
+ * **Authors**
+
+     [Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada)
+
+ ---
+
+ # Benchmarking
+
+ Our evaluation experiments suggest that the bilingual-gpt-neox-4b-instruction-sft model performs slightly better than the previous [Japanese GPT-NeoX 3.6B PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) on Japanese tasks.
+
+ - *The 4-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, and JSQuAD.*
+ - *The 6-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, JSQuAD, XWinograd, and JAQKET-v2.*
+
+ | Model | 4-task average accuracy | 6-task average accuracy |
+ | :-- | :-- | :-- |
+ | **bilingual-gpt-neox-4b-instruction-sft** | **61.02** | **61.69** |
+ | bilingual-gpt-neox-4b | 56.12 | 51.83 |
+ | japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 |
+ | japanese-gpt-neox-3.6b | 55.07 | 50.32 |
+
+ ---
+
+ # I/O Format
+ A special format has been adopted to construct inputs.
+ * An input prompt is formatted as a conversation between `ユーザー` and `システム`.
+ * Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`).
+ * The input prompt should end with `"システム: "` to signal the model to generate a response.
+ * All the utterances in the input prompt should be separated by a newline `\n`.
+
+ The following example constructs an input prompt from a conversation.
+ ~~~python
+ prompt = [
+     {
+         "speaker": "ユーザー",
+         "text": "Hello, you are an assistant that helps me learn Japanese."
+     },
+     {
+         "speaker": "システム",
+         "text": "Sure, what can I do for you?"
+     },
+     {
+         "speaker": "ユーザー",
+         "text": "VRはなんですか。"
+     }
+ ]
+ prompt = [
+     f"{uttr['speaker']}: {uttr['text']}"
+     for uttr in prompt
+ ]
+ prompt = "\n".join(prompt)
+ prompt = (
+     prompt
+     + "\n"
+     + "システム: "
+ )
+ print(prompt)
+ """
+ ユーザー: Hello, you are an assistant that helps me learn Japanese.
+ システム: Sure, what can I do for you?
+ ユーザー: VRはなんですか。
+ システム: 
+ """
+ ~~~
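The steps above can also be wrapped in a small helper; the function name `build_prompt` is ours, not part of the model card:

```python
def build_prompt(conversation):
    """Format a list of {"speaker", "text"} turns into the model's
    expected prompt, ending with "システム: " so the model replies."""
    lines = [f"{turn['speaker']}: {turn['text']}" for turn in conversation]
    return "\n".join(lines) + "\n" + "システム: "

prompt = build_prompt([
    {"speaker": "ユーザー", "text": "VRはなんですか。"},
])
print(prompt)
```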
+
+ ---
+
+ # How to use the model
+
+ **Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), we suggest exploring the best setting for your task.
+
+ ~~~~python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft", use_fast=False)
+ model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft")
+
+ if torch.cuda.is_available():
+     model = model.to("cuda")
+
+ token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
+
+ with torch.no_grad():
+     output_ids = model.generate(
+         token_ids.to(model.device),
+         max_new_tokens=512,
+         do_sample=True,
+         temperature=1.0,
+         top_p=0.85,
+         pad_token_id=tokenizer.pad_token_id,
+         bos_token_id=tokenizer.bos_token_id,
+         eos_token_id=tokenizer.eos_token_id
+     )
+
+ output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):])
+ print(output)
+ """VRとはVirtual Realityの略で、仮想現実とも呼ばれます。これは、コンピューターを使用して仮想世界を作り出し、仮想世界上でコンピューターのゲームや仮想世界を体験するための技術です。この技術は、コンピューターやモバイルデバイスの進歩によって、2015年以降、ますます普及しています。VRは、ゲームや仮想世界、その他のアプリケーションなどのさまざまな分野で、コンピューターと人間の相互作用の新しい方法を提供しています。</s>"""
+ ~~~~
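For intuition about the decoding hyper-parameters mentioned in the notice, here is a minimal pure-Python sketch of temperature scaling plus top-p (nucleus) filtering; it illustrates the sampling idea only and is not the `transformers` implementation:

```python
import math

def top_p_distribution(logits, temperature=1.0, top_p=0.85):
    """Temperature-scale logits, softmax them, then keep the smallest
    set of tokens whose cumulative probability reaches top_p and
    renormalize. Returns {token_index: probability}."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]

    # Sort descending and keep tokens until cumulative mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# With top_p=0.85 the low-probability tail is cut off entirely.
print(top_p_distribution([2.0, 1.0, 0.0], temperature=1.0, top_p=0.85))
```

Raising `temperature` flattens the distribution before the cutoff, while lowering `top_p` prunes more of the tail; both knobs interact, which is why the notice suggests exploring settings per task.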
+
+ ---
+
+ # Tokenization
+ The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
+ * The tokenizer has a vocabulary size of 65,536.
+ * It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces so that `<UNK>` tokens are never produced.
+ * It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured text better.
+ * We turned off the default behaviour of prepending a leading whitespace because it is not beneficial for processing Japanese.
+     * Specifically, a single whitespace is always processed as one token, so English words do not carry a preceding whitespace as they do in many other tokenizers (e.g. `_Hello`).
+     * This decision trades English processing efficiency for a unified way of treating whitespaces.
+     * It leads to a significantly lower loss of next-token prediction on English data because whitespaces are easy to predict.
+ * **Don't forget to set `use_fast=False` to make the above features function correctly.**
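To build intuition for byte fallback, here is a toy pure-Python sketch (not the actual sentencepiece implementation, and `byte_fallback_pieces` is our name) of how an out-of-vocabulary character can be decomposed into UTF-8 byte pieces instead of `<UNK>`:

```python
def byte_fallback_pieces(text, vocab):
    """Sketch of byte fallback: characters in the vocabulary become
    normal pieces; unknown characters are decomposed into UTF-8 byte
    pieces rather than a single <UNK> token."""
    pieces = []
    for ch in text:
        if ch in vocab:
            pieces.append(ch)
        else:
            pieces.extend(f"<0x{b:02X}>" for b in ch.encode("utf-8"))
    return pieces

# "⛄" (U+26C4) is assumed absent from the toy vocabulary here.
print(byte_fallback_pieces("a⛄", vocab={"a"}))
# → ['a', '<0xE2>', '<0x9B>', '<0x84>']
```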
+
+ ---
+
# License
[The MIT license](https://opensource.org/licenses/MIT)
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+     "architectures": [
+         "GPTNeoXForCausalLM"
+     ],
+     "attention_dropout": 0.1,
+     "bos_token_id": 2,
+     "classifier_dropout": 0.1,
+     "eos_token_id": 3,
+     "hidden_act": "gelu",
+     "hidden_dropout": 0.1,
+     "hidden_size": 2816,
+     "initializer_range": 0.02,
+     "intermediate_size": 11264,
+     "layer_norm_eps": 1e-05,
+     "max_position_embeddings": 2048,
+     "model_type": "gpt_neox",
+     "num_attention_heads": 22,
+     "num_hidden_layers": 36,
+     "rope_scaling": null,
+     "rotary_emb_base": 10000,
+     "rotary_pct": 1.0,
+     "tie_word_embeddings": false,
+     "torch_dtype": "float16",
+     "use_cache": true,
+     "use_parallel_residual": false,
+     "vocab_size": 65536
+ }
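As a sanity check, the config values above roughly reproduce the advertised 3.8-billion-parameter count; the breakdown below is our back-of-the-envelope simplification (it ignores biases and layer norms), not an official formula:

```python
# Values taken from config.json.
hidden = 2816
layers = 36
vocab = 65536
intermediate = 11264  # 4 * hidden

# Input embedding plus a separate output head (tie_word_embeddings is false).
embed = vocab * hidden
head = vocab * hidden

# Per layer: QKV + output projections (~4 h^2) and the MLP (~2 * h * 4h).
per_layer = 4 * hidden**2 + 2 * hidden * intermediate
total = embed + head + layers * per_layer

print(f"{total / 1e9:.2f}B parameters")  # prints 3.79B parameters
```

At two bytes per float16 parameter, this estimate also lands near the ~7.6 GB `pytorch_model.bin` size recorded below.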
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b686120467aae0df18f7c35e28f5ea19ad7f65bd8620e1f2b20cb77f31c06b9b
+ size 7592400645
rinna.png ADDED
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85a0205d37a98bb3b97cf4ca3f507c78873cf8f6cefa3b51d8d6a15006dc889d
+ size 1341798
spiece.vocab ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "[UNK]", "pad_token": "[PAD]", "extra_ids": 0, "additional_special_tokens": [], "sp_model_kwargs": {}, "bos_token": "<s>", "cls_token": "[CLS]", "sep_token": "[SEP]", "mask_token": "[MASK]", "do_lower_case": false, "tokenizer_class": "T5Tokenizer"}