hiroshi-matsuda-rit committed on
Commit c2ebddb
1 Parent(s): 6dcb82a

Utilize AutoTokenizer to load custom tokenizer by using trust_remote_code option

Files changed (1)
  1. README.md +68 -1
README.md CHANGED
@@ -1,3 +1,70 @@
  ---
- license: cc-by-sa-4.0
+ language: ja
+ license: mit
+ datasets:
+ - mC4 Japanese
  ---
+
+ # electra-base-japanese-discriminator (sudachitra-wordpiece, mC4 Japanese) - [MIYAGINO](https://www.ntj.jac.go.jp/assets/images/member/pertopics/image/per100510_3.jpg)
+
+ This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences.
+
+ The input text is tokenized by [SudachiTra](https://github.com/WorksApplications/SudachiTra) with the WordPiece subword tokenizer.
+ See `tokenizer_config.json` for the setting details.
+
+ ## How to use
+
+ Please install `SudachiTra` in advance.
+
+ ```console
+ $ pip install -U torch transformers sudachitra
+ ```
+
+ You can load the model and the tokenizer via `AutoModel` and `AutoTokenizer`, respectively.
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+ model = AutoModel.from_pretrained("megagonlabs/electra-base-japanese-discriminator")
+ tokenizer = AutoTokenizer.from_pretrained("megagonlabs/electra-base-japanese-discriminator", trust_remote_code=True)
+ model(**tokenizer("まさにオールマイティーな商品だ。", return_tensors="pt")).last_hidden_state
+ # tensor([[[-0.0498, -0.0285,  0.1042, ...,  0.0062, -0.1253,  0.0338],
+ #          [-0.0686,  0.0071,  0.0087, ..., -0.0210, -0.1042, -0.0320],
+ #          [-0.0636,  0.1465,  0.0263, ...,  0.0309, -0.1841,  0.0182],
+ #          ...,
+ #          [-0.1500, -0.0368, -0.0816, ..., -0.0303, -0.1653,  0.0650],
+ #          [-0.0457,  0.0770, -0.0183, ..., -0.0108, -0.1903,  0.0694],
+ #          [-0.0981, -0.0387,  0.1009, ..., -0.0150, -0.0702,  0.0455]]],
+ #        grad_fn=<NativeLayerNormBackward>)
+ ```
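+
+ As a quick check of the tokenization described above, a minimal sketch of inspecting the subword segmentation produced by the SudachiTra WordPiece tokenizer (the exact subword pieces depend on the released vocabulary):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Illustrative only: inspect the SudachiTra WordPiece segmentation.
+ # The exact subword pieces depend on the released vocabulary.
+ tokenizer = AutoTokenizer.from_pretrained("megagonlabs/electra-base-japanese-discriminator", trust_remote_code=True)
+ tokens = tokenizer.tokenize("まさにオールマイティーな商品だ。")
+ print(tokens)                                   # WordPiece subwords (continuations are prefixed with "##")
+ print(tokenizer.convert_tokens_to_ids(tokens))  # corresponding vocabulary ids
+ ```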
+
+ ## Model architecture
+
+ The model architecture is the same as the original ELECTRA base model: 12 layers, 768-dimensional hidden states, and 12 attention heads.
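+
+ These hyperparameters can be confirmed from the published configuration; a minimal sketch using the standard `transformers` config API:
+
+ ```python
+ from transformers import AutoConfig
+
+ # Minimal sketch: read the architecture hyperparameters from the model config.
+ config = AutoConfig.from_pretrained("megagonlabs/electra-base-japanese-discriminator")
+ print(config.num_hidden_layers)    # expected: 12
+ print(config.hidden_size)          # expected: 768
+ print(config.num_attention_heads)  # expected: 12
+ ```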
+
+ ## Training data and libraries
+
+ This model was trained on Japanese text extracted from [mC4](https://huggingface.co/datasets/mc4), Common Crawl's multilingual web-crawl corpus.
+ We used [Sudachi](https://github.com/WorksApplications/Sudachi) to split the texts into sentences and applied a simple rule-based filter to remove nonlinguistic segments of the mC4 multilingual corpus (an illustrative sketch of such a filter is shown below).
+ The extracted texts contain over 600M sentences in total, and we used approximately 200M sentences for pretraining.
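+
+ The exact filtering rules are not reproduced here; the following is only a toy sketch of a rule-based filter of this kind, keeping sentences that are reasonably long and consist mostly of Japanese characters (the `looks_linguistic` helper and its thresholds are hypothetical):
+
+ ```python
+ import re
+
+ # Purely illustrative: a toy rule-based filter for nonlinguistic segments.
+ # Keeps a sentence only if it is reasonably long and mostly Japanese
+ # (hiragana, katakana, or kanji). The helper and thresholds are hypothetical.
+ JAPANESE_CHARS = re.compile(r"[\u3040-\u309F\u30A0-\u30FF\u4E00-\u9FFF]")
+
+ def looks_linguistic(sentence: str, min_len: int = 10, min_ja_ratio: float = 0.5) -> bool:
+     if len(sentence) < min_len:
+         return False
+     return len(JAPANESE_CHARS.findall(sentence)) / len(sentence) >= min_ja_ratio
+
+ sentences = ["まさにオールマイティーな商品だ。", "12345 >>> http://example.com <<<"]
+ print([s for s in sentences if looks_linguistic(s)])  # keeps only the Japanese sentence
+ ```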
+
+ We used [NVIDIA's TensorFlow2-based ELECTRA implementation](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/ELECTRA) for pretraining. The pretraining took about 110 hours on a GCP DGX A100 8-GPU instance with Automatic Mixed Precision enabled.
+
+ ## Licenses
+
+ The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
+
+ ## Citations
+
+ - mC4
+
+ Contains information from `mC4`, which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
+ ```
+ @article{2019t5,
+     author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
+     title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
+     journal = {arXiv e-prints},
+     year = {2019},
+     archivePrefix = {arXiv},
+     eprint = {1910.10683},
+ }
+ ```