---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- roberta
- fill-mask
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "京都 大学 で [MASK] を 専攻 する 。"
- text: "東京 は 日本 の [MASK] だ 。"
- text: "カフェ で [MASK] を 注文 する 。"
- text: "[MASK] 名人 が タイトル の 防衛 に 成功 する 。"
---

# ku-accms/roberta-base-japanese-ssuw
## Model description
This is a pre-trained Japanese RoBERTa base model for super short unit words (SSUW).

## Pre-processing
The input text should be converted to full-width (zenkaku) characters and segmented into super short unit words in advance (e.g., by KyTea).
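
For illustration only, assuming the `zenhan` and `Mykytea` packages and a trained KyTea model (the model path below is a placeholder), the conversion could look like this; the exact segmentation depends on the KyTea model used:

```python
import zenhan    # half-width -> full-width (zenkaku) conversion
import Mykytea   # Python binding for the KyTea word segmenter

kytea = Mykytea.Mykytea("-model /path/to/kytea.mod -notags")  # hypothetical model path

raw = "京都大学で自然言語処理を専攻する。"
segmented = " ".join(kytea.getWS(zenhan.h2z(raw)))
# e.g. "京都 大学 で 自然 言語 処理 を 専攻 する 。" (depends on the KyTea model)
```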

## How to use
You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ku-accms/roberta-base-japanese-ssuw')
>>> unmasker("京都 大学 で [MASK] を 専攻 する 。")
[{'sequence': '京都 大学 で 文学 を 専攻 する 。',
  'score': 0.1479644924402237,
  'token': 17907,
  'token_str': '文学'},
 {'sequence': '京都 大学 で 哲学 を 専攻 する 。',
  'score': 0.07658644765615463,
  'token': 19302,
  'token_str': '哲学'},
 {'sequence': '京都 大学 で デザイン を 専攻 する 。',
  'score': 0.06302948296070099,
  'token': 14411,
  'token_str': 'デザイン'},
 {'sequence': '京都 大学 で 建築 を 専攻 する 。',
  'score': 0.060596249997615814,
  'token': 15478,
  'token_str': '建築'},
 {'sequence': '京都 大学 で 工学 を 専攻 する 。',
  'score': 0.0574776753783226,
  'token': 18632,
  'token_str': '工学'}]
```

Here is how to use this model to get the features of a given text in PyTorch:
```python
import zenhan
import Mykytea
from transformers import BertTokenizer, RobertaModel

# Segment the input with KyTea after zenkaku conversion (see Pre-processing above).
kytea_model_path = "somewhere"
kytea = Mykytea.Mykytea("-model {} -notags".format(kytea_model_path))

def preprocess(text):
    return " ".join(kytea.getWS(zenhan.h2z(text)))

tokenizer = BertTokenizer.from_pretrained('ku-accms/roberta-base-japanese-ssuw')
model = RobertaModel.from_pretrained("ku-accms/roberta-base-japanese-ssuw")
text = "京都大学で自然言語処理を専攻する。"
encoded_input = tokenizer(preprocess(text), return_tensors='pt')
output = model(**encoded_input)
```
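
`output.last_hidden_state` then holds the token-level features; for a base-sized RoBERTa the hidden dimension should be 768, so the tensor shape is `(1, sequence_length, 768)`, with the sequence length depending on the segmentation and subword tokenization.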

## Training data
We used a Japanese Wikipedia dump (as of 20230101, 3.3GB) and the Japanese portion of CC100 (70GB).

## Training procedure
We first segmented the texts into words with KyTea and then tokenized the words into subwords using WordPiece with a vocabulary size of 32,000. We pre-trained the RoBERTa model with the [transformers](https://github.com/huggingface/transformers) library. The training took about 7 days on 4 NVIDIA A100-SXM4-80GB GPUs.
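
The tokenizer-training script is not included here; purely as a rough sketch, building a 32,000-piece WordPiece vocabulary on KyTea-segmented text with the `tokenizers` library could look like the following (the corpus file name is a placeholder):

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Rough sketch (not the authors' script): learn a 32,000-subword WordPiece vocabulary
# over text that has already been segmented into SSUWs by KyTea.
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.WhitespaceSplit()  # words are already space-separated

trainer = trainers.WordPieceTrainer(
    vocab_size=32000,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.train(files=["kytea_segmented_corpus.txt"], trainer=trainer)  # placeholder file name
tokenizer.save("wordpiece.json")
```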

The following hyperparameters were used for the pre-training; a rough `TrainingArguments` sketch follows the list.

- learning_rate: 1e-4
- weight_decay: 1e-2
- per_device_train_batch_size: 80
- num_devices: 4
- gradient_accumulation_steps: 3
- total_train_batch_size: 960
- max_seq_length: 512
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 500,000
- warmup_steps: 10,000
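
The actual pre-training script is not part of this repository; as an illustration only, these values could map onto `transformers` `TrainingArguments` roughly as follows (the output directory is hypothetical, the dataset pipeline and `Trainer` call are omitted, and `max_seq_length` would be applied when preparing the examples rather than here):

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters (not the authors' actual configuration).
training_args = TrainingArguments(
    output_dir="roberta-base-japanese-ssuw-pretraining",  # hypothetical
    learning_rate=1e-4,
    weight_decay=1e-2,
    per_device_train_batch_size=80,   # 80 x 4 GPUs x 3 accumulation steps = 960 sequences per update
    gradient_accumulation_steps=3,
    max_steps=500_000,
    warmup_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
)
```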