hiroshi-matsuda-rit committed on
Commit d9a6bcd
1 Parent(s): ca93574

model files

README.md ADDED
@@ -0,0 +1,51 @@
+ ---
+ language: ja
+ license: MIT
+ datasets:
+ - mC4 Japanese
+ ---
+
+ # transformers-ud-japanese-electra-ginza-510 (sudachitra-wordpiece, mC4 Japanese)
+
+ This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences extracted from [mC4](https://huggingface.co/datasets/mc4) and fine-tuned with [spaCy v3](https://spacy.io/usage/v3) on [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html).
+
+ The base pretrained model is [megagonlabs/transformers-ud-japanese-electra-base-discriminator](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-discriminator).
+
+ The entire spaCy v3 model is distributed as a Python package named [`ja_ginza_electra`](https://pypi.org/project/ja-ginza-electra/) on PyPI along with [`GiNZA v5`](https://github.com/megagonlabs/ginza), which provides custom pipeline components for recognizing Japanese bunsetu-phrase structures.
+ Try running it as below:
+ ```console
+ $ pip install ginza ja_ginza_electra
+ $ ginza
+ ```
+
+ ## Licenses
+
+ The models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
+
+ ## Acknowledgments
+
+ This model is permitted to be published under the `MIT License` under a joint research agreement between NINJAL (National Institute for Japanese Language and Linguistics) and Megagon Labs Tokyo.
+
+ ## Citations
+ - [mC4](https://huggingface.co/datasets/mc4)
+
+ Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
+ ```
+ @article{2019t5,
+ author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
+ title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
+ journal = {arXiv e-prints},
+ year = {2019},
+ archivePrefix = {arXiv},
+ eprint = {1910.10683},
+ }
+ ```
+
+ - [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html)
+
+ ```
+ Asahara, M., Kanayama, H., Tanaka, T., Miyao, Y., Uematsu, S., Mori, S.,
+ Matsumoto, Y., Omura, M., & Murawaki, Y. (2018).
+ Universal Dependencies Version 2 for Japanese.
+ In LREC-2018.
+ ```
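
The README above only shows the command-line check. Below is a minimal Python sketch of the same packaged pipeline, assuming `ginza` and `ja_ginza_electra` are installed as in the console snippet; the example sentence is arbitrary, and `ginza.bunsetu_spans` is used here for the bunsetu-phrase spans mentioned in the README.

```python
import spacy
import ginza  # GiNZA v5 layers bunsetu-phrase helpers on top of spaCy

# Load the packaged spaCy v3 pipeline installed via `pip install ginza ja_ginza_electra`.
nlp = spacy.load("ja_ginza_electra")

doc = nlp("銀座でランチをご一緒しましょう。")

# Standard spaCy token attributes produced by the fine-tuned model.
for token in doc:
    print(token.i, token.orth_, token.pos_, token.dep_, token.head.i)

# Bunsetu-phrase spans recognized by GiNZA's custom pipeline components.
print(ginza.bunsetu_spans(doc))
```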
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "_name_or_path": "megagonlabs/transformers-ud-japanese-electra-base-ginza-510",
+   "architectures": [
+     "ElectraModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "embedding_size": 768,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 0.0,
+   "max_position_embeddings": 512,
+   "model_name": "base",
+   "model_type": "electra",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "summary_activation": "gelu",
+   "summary_last_dropout": 0.1,
+   "summary_type": "first",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.11.3",
+   "type_vocab_size": 2,
+   "vocab_size": 30112
+ }
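
For reference, config.json describes a base-size ELECTRA encoder (12 layers, 12 attention heads, hidden size 768, 30,112-entry vocabulary). A hedged sketch of loading just this encoder with the transformers library, assuming the files from this commit sit in a local directory (the path is hypothetical); the Sudachi-based tokenizer is covered separately below.

```python
from transformers import ElectraConfig, ElectraModel

# Hypothetical local directory holding config.json and pytorch_model.bin from this commit.
model_dir = "./transformers-ud-japanese-electra-ginza-510"

config = ElectraConfig.from_pretrained(model_dir)
print(config.num_hidden_layers, config.hidden_size, config.vocab_size)  # 12 768 30112

# "architectures": ["ElectraModel"] above, so the plain encoder class matches the checkpoint.
model = ElectraModel.from_pretrained(model_dir)
print(model.config.model_type)  # "electra"
```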
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52ac9a8db9573b67ca572ed0e4da3f0ead34f6030a1247099f97049c15706cc7
+ size 434384949
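
pytorch_model.bin is stored as a Git LFS pointer, so the three lines above record only the SHA-256 digest and byte size of the actual weight file. A small sketch, assuming the binary has already been fetched (e.g. via `git lfs pull`) to a hypothetical local path, that checks the download against those two values:

```python
import hashlib
import os

# Hypothetical path to the fetched weight file.
path = "pytorch_model.bin"

# Digest and size recorded in the LFS pointer above.
expected_sha256 = "52ac9a8db9573b67ca572ed0e4da3f0ead34f6030a1247099f97049c15706cc7"
expected_size = 434384949

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

print(os.path.getsize(path) == expected_size)  # True if the full file was fetched
print(sha.hexdigest() == expected_sha256)      # True if the content is intact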
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "do_nfkc": false, "do_word_tokenize": true, "do_subword_tokenize": true, "word_tokenizer_type": "sudachipy", "subword_tokenizer_type": "wordpiece", "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "word_form_type": "dictionary_and_surface", "sudachipy_kwargs": {"split_mode": "A", "dict_type": "core"}, "use_fast": false, "tokenizer_class": "ElectraSudachipyTokenizer"}
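
tokenizer_config.json selects SudachiPy (split mode A, core dictionary) for word segmentation and WordPiece for subwords, with `tokenizer_class` set to `ElectraSudachipyTokenizer`. That class is not part of the transformers library; it ships with the SudachiTra package, so `AutoTokenizer` will not resolve it. A hedged sketch, assuming `sudachitra` is installed and that the class is importable from the package root (both the import path and the local directory are assumptions):

```python
# Requires `pip install sudachitra`; the import path below is an assumption based on
# the "tokenizer_class": "ElectraSudachipyTokenizer" entry above.
from sudachitra import ElectraSudachipyTokenizer

# Hypothetical local directory containing tokenizer_config.json, special_tokens_map.json and vocab.txt.
tokenizer = ElectraSudachipyTokenizer.from_pretrained("./transformers-ud-japanese-electra-ginza-510")

text = "銀座でランチをご一緒しましょう。"
print(tokenizer.tokenize(text))      # SudachiPy words split into WordPiece subwords
print(tokenizer(text)["input_ids"])  # ids wrapped with [CLS]/[SEP] from special_tokens_map.json
```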
vocab.txt ADDED
The diff for this file is too large to render. See raw diff