---
license: cc-by-sa-4.0
---
# japanese-reversed-gpt2-medium-unidic
This is a medium-sized Japanese **reversed** GPT-2 model using a BERT-like tokenizer. Because it is trained on reversed token sequences, it generates text backwards, i.e., it predicts the text that precedes a given suffix.

The non-reversed version is published [here](https://huggingface.co/okazaki-lab/japanese-gpt2-medium-unidic/).

# How to use
The model depends on [PyTorch](https://pytorch.org/), [fugashi](https://github.com/polm/fugashi) with [unidic-lite](https://github.com/polm/unidic-lite), and [Hugging Face Transformers](https://github.com/huggingface/transformers).

```sh
pip install torch torchvision torchaudio
pip install fugashi[unidic-lite]
pip install transformers
```

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained('okazaki-lab/japanese-reversed-gpt2-medium-unidic')
model = AutoModelForCausalLM.from_pretrained('okazaki-lab/japanese-reversed-gpt2-medium-unidic')

text = 'ので、散歩に行きました。'

bos = tokenizer.convert_tokens_to_ids(['[BOS]'])  # [BOS] id: [32768]
input_ids = bos + tokenizer.encode(text)[1:-1][::-1]  # [CLS] and [SEP] added by the BERT tokenizer are removed, then the sequence is reversed
input_ids = torch.tensor(input_ids).unsqueeze(0)
output = model.generate(
    input_ids,
    do_sample=True,
    max_new_tokens=30,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.0,
    num_return_sequences=1,
    pad_token_id=0,
    eos_token_id=32769,
)[0].flip(0)  # flip the generated sequence back to left-to-right order

print(tokenizer.decode(output))
```
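
Since generation always operates on reversed token sequences, it can be convenient to wrap the recipe above in a small helper. The sketch below reuses the `tokenizer`, `model`, and `torch` already loaded in the previous snippet; the function name and its defaults are illustrative only, not part of the model's API.

```python
# A minimal sketch wrapping the reversed-generation steps shown above.
# Assumes `tokenizer`, `model`, and `torch` from the previous snippet are in scope.
def generate_preceding_text(suffix, max_new_tokens=30):
    bos = tokenizer.convert_tokens_to_ids(['[BOS]'])  # [BOS] id: 32768
    # Drop the [CLS]/[SEP] added by the BERT tokenizer, then reverse the suffix tokens.
    input_ids = bos + tokenizer.encode(suffix)[1:-1][::-1]
    output = model.generate(
        torch.tensor(input_ids).unsqueeze(0),
        do_sample=True,
        max_new_tokens=max_new_tokens,
        top_k=50,
        top_p=0.95,
        pad_token_id=0,
        eos_token_id=32769,  # [EOS] id
    )[0].flip(0)  # restore left-to-right order
    return tokenizer.decode(output)

print(generate_preceding_text('ので、散歩に行きました。'))
```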

# Model architecture
Transformer-based Language Model
- Layers: 24
- Heads: 16
- Dimensions of hidden states: 1024
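
These values can also be read from the published configuration. A minimal check, assuming the config uses the standard GPT-2 field names (`n_layer`, `n_head`, `n_embd`):

```python
from transformers import AutoConfig

# Fetch only the configuration, not the model weights.
config = AutoConfig.from_pretrained('okazaki-lab/japanese-reversed-gpt2-medium-unidic')
print(config.n_layer, config.n_head, config.n_embd)  # expected: 24 16 1024
```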

# Training
We used a [codebase](https://github.com/rinnakk/japanese-pretrained-models) provided by rinna Co., Ltd. for training.

The model was trained on Japanese CC-100 and Japanese Wikipedia (2022/01/31).
We employed 8 A100 GPUs for 17 days.
The perplexity on the validation set is 9.79.

# Tokenization
Our tokenizer is based on [the one](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) provided by Tohoku NLP Group.
The texts are tokenized by MeCab and then split into subwords by WordPiece.

The vocabulary size is 32771 (32768 original tokens + 2 special tokens + 1 unused token).
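
To see the MeCab + WordPiece segmentation, the tokenizer can be called directly. A small sketch; the example sentence is arbitrary, and whether `vocab_size` counts the special and unused tokens is an assumption worth verifying:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('okazaki-lab/japanese-reversed-gpt2-medium-unidic')

# MeCab (via fugashi + unidic-lite) splits the text into words,
# then WordPiece splits those words into subword units.
print(tokenizer.tokenize('今日は散歩に行きました。'))
print(tokenizer.vocab_size)  # expected to correspond to the 32771 figure above
```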

# License
[Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

Copyright (c) 2021, Tohoku University

Copyright (c) 2023, Tokyo Institute of Technology