tianyuz committed
Commit 6060312
1 Parent(s): 71878bb

init commit

Files changed (7):
  1. README.md +89 -0
  2. config.json +23 -0
  3. pytorch_model.bin +3 -0
  4. rinna.png +0 -0
  5. spiece.model +3 -0
  6. spiece.vocab +0 -0
  7. tokenizer_config.json +1 -0
README.md CHANGED
@@ -1,3 +1,92 @@
---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
- mc4
inference: false
---

# japanese-gpt-neox-3.6b

![rinna-icon](./rinna.png)

This repository provides a Japanese GPT-NeoX model of 3.6 billion parameters. The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b")

if torch.cuda.is_available():
    model = model.to("cuda")

text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=100,
        min_new_tokens=100,
        do_sample=True,
        temperature=0.8,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""西田幾多郎は、この「絶対矛盾的自己同一」を「世界の自己同一」と置きかえ、さらに西田哲学を出発点として「絶対無」を「世界の成立」に変え、世界と自己を一つの統一物とみなす哲学として展開する。この世界と自己は絶対矛盾的自己同一として同一の性質を有し、同じ働きをする。西田哲学においては、この世界と自己は矛盾しあうのではなく、同一の性質をもっている。この世界と自己は同一である。絶対"""
~~~~
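
The checkpoint added in this commit is stored in float16 (`"torch_dtype": "float16"` in config.json, with `pytorch_model.bin` at roughly 7.4 GB), so loading it in half precision avoids upcasting to float32 and roughly halves memory use. A minimal sketch of that optional loading variant (not part of the original card):

~~~~python
import torch
from transformers import AutoModelForCausalLM

# keep the released fp16 weights in half precision instead of the default
# float32 upcast; fp16 inference is best run on a GPU
model = AutoModelForCausalLM.from_pretrained(
    "rinna/japanese-gpt-neox-3.6b",
    torch_dtype=torch.float16,
)
if torch.cuda.is_available():
    model = model.to("cuda")
~~~~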

# Model architecture
A 36-layer, 2816-hidden-size transformer-based language model.
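
As a back-of-the-envelope check (illustrative only, using the hyperparameters from config.json in this commit), these sizes land near the advertised 3.6 billion parameters:

~~~~python
# rough parameter count from config.json: 36 layers, hidden size 2816,
# intermediate size 11264, vocab 32000, untied embeddings
vocab_size, hidden, layers, intermediate = 32000, 2816, 36, 11264

embeddings = 2 * vocab_size * hidden              # separate input and output embeddings
attention = 4 * hidden * hidden                   # QKV projections + output projection
mlp = 2 * hidden * intermediate                   # up projection + down projection
total = embeddings + layers * (attention + mlp)   # biases and LayerNorms omitted

print(f"{total / 1e9:.2f}B parameters")           # ~3.61B
~~~~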

# Training
The model was trained on around **312.5B** tokens from [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz), [Japanese C4](https://huggingface.co/datasets/mc4), and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective.

A final validation perplexity of **8.68** has been reached.
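
The validation corpus and evaluation script are not part of this commit, but perplexity is the exponential of the mean next-token cross-entropy, so it can be estimated on any held-out text along these lines (an illustrative sketch, not the original evaluation code):

~~~~python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b")
model.eval()

# any held-out Japanese text; this sentence is only a placeholder
text = "西田幾多郎は、京都学派を代表する哲学者である。"
input_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    # the model shifts the labels internally, so .loss is the mean
    # cross-entropy (in nats) over next-token predictions
    loss = model(input_ids, labels=input_ids).loss

print(math.exp(loss.item()))  # perplexity = exp(mean cross-entropy)
~~~~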

# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
* The tokenizer has a vocabulary size of 32,000.
* It uses sentencepiece's byte fallback feature to decompose unknown text pieces into UTF-8 byte pieces and to avoid producing `<UNK>` tokens.
* sentencepiece's `--add_dummy_prefix` option was turned off so that a leading whitespace will not be prepended automatically.
~~~
print(tokenizer.tokenize("吾輩は猫である"))
# ['吾', '輩', 'は', '猫', 'である']
# instead of ['▁', '吾', '輩', 'は', '猫', 'である'] as in rinna/japanese-gpt-1b
~~~
* sentencepiece's `--remove_extra_whitespaces` option was turned off so that leading, trailing, and duplicate whitespaces are preserved.
~~~
print(tokenizer.tokenize("  吾輩は  猫である   "))
# ['▁', '▁', '吾', '輩', 'は', '▁', '▁', '猫', 'である', '▁', '▁', '▁']
# instead of ['▁', '吾', '輩', 'は', '▁猫', 'である'] as in rinna/japanese-gpt-1b
~~~
* Don't forget to set `use_fast=False` to make the above features function correctly.
~~~
good_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)
bad_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b")

print(good_tokenizer.decode(good_tokenizer.encode("გამარჯობა 吾輩は 猫である ")))
# 'გამარჯობა 吾輩は 猫である </s>'
print(bad_tokenizer.decode(bad_tokenizer.encode("გამარჯობა 吾輩は 猫である ")))
# 'გამარ[UNK]ობა 吾輩は 猫である </s>'
~~~

# License
[The MIT license](https://opensource.org/licenses/MIT)
config.json ADDED
@@ -0,0 +1,23 @@
{
  "architectures": [
    "GPTNeoXForCausalLM"
  ],
  "bos_token_id": 2,
  "eos_token_id": 3,
  "hidden_act": "gelu",
  "hidden_size": 2816,
  "initializer_range": 0.02,
  "intermediate_size": 11264,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 2048,
  "model_type": "gpt_neox",
  "num_attention_heads": 22,
  "num_hidden_layers": 36,
  "rotary_emb_base": 10000,
  "rotary_pct": 1.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "use_cache": true,
  "use_parallel_residual": false,
  "vocab_size": 32000
}
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f97bb2681d00e08db1eb791c7c4f75c37ef0cc55591585442c680154d35d2609
size 7365670537
rinna.png ADDED
spiece.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d78ab344146700112cd41628ac7ce54b79c0868fe0c7c201750d8237b54dbb4
size 786216
spiece.vocab ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "[UNK]", "pad_token": "[PAD]", "extra_ids": 0, "additional_special_tokens": [], "sp_model_kwargs": {}, "bos_token": "<s>", "cls_token": "[CLS]", "sep_token": "[SEP]", "mask_token": "[MASK]", "do_lower_case": false, "tokenizer_class": "T5Tokenizer"}