---
language: ja
thumbnail: https://github.com/rinnakk/japanese-gpt2/blob/master/rinna.png
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
---

# japanese-gpt2-medium

![rinna-icon](./rinna.png)

This repository provides a medium-sized Japanese GPT-2 model. The model is provided by [rinna](https://corp.rinna.co.jp/).

# How to use the model

*NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.

~~~~
from transformers import T5Tokenizer, AutoModelForCausalLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True  # due to a bug in the tokenizer config loading, lowercasing must be enabled manually

model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-medium")
~~~~
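
Once the tokenizer and model are loaded as above, text can be generated with the standard `generate` API. The prompt and decoding parameters below are illustrative choices, not values recommended by the model authors:

~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True

model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-medium")
model.eval()

# Encode an arbitrary Japanese prompt
input_ids = tokenizer.encode("こんにちは、", return_tensors="pt")

# Sample a continuation (decoding parameters here are illustrative defaults)
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=50,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
~~~~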

# Model architecture
A 24-layer, 1024-hidden-size transformer-based language model.
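
As a quick sanity check, the layer count and hidden size can be read from the model config (a minimal sketch; the attribute names follow the standard GPT-2 config in `transformers`):

~~~~
from transformers import AutoConfig

config = AutoConfig.from_pretrained("rinna/japanese-gpt2-medium")
print(config.n_layer, config.n_embd)  # expected: 24 and 1024 for this medium-sized model
~~~~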

# Training
The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) with a traditional (causal) language modelling objective on 8\*V100 GPUs for around 30 days. It reaches a perplexity of around 18 on a validation set drawn from the same data.
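
Perplexity here is the exponential of the mean per-token cross-entropy. A minimal sketch of computing it for a single text with this model (the example text is arbitrary, and using the model's own loss output is an assumption about the evaluation setup, not the authors' exact procedure):

~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-medium")
model.eval()

text = "日本語のテキストの例です。"  # illustrative example text
input_ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss over predicted tokens
    loss = model(input_ids, labels=input_ids).loss

print(torch.exp(loss).item())  # perplexity = exp(mean negative log-likelihood)
~~~~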

# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer; the vocabulary is directly adopted from the pre-trained sentencepiece tokenizer at the [link](https://github.com/google/sentencepiece).
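
For illustration, the tokenizer behaves like any other sentencepiece-based tokenizer in `transformers` (the example sentence is arbitrary):

~~~~
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True

# Split an arbitrary Japanese sentence into sentencepiece subword tokens
tokens = tokenizer.tokenize("こんにちは、世界。")
print(tokens)
print(tokenizer.convert_tokens_to_ids(tokens))
~~~~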

# License
[The MIT license](https://opensource.org/licenses/MIT)