aapot committed
Commit 5e18a40
Parent: 23e64bf

Update README.md

---
language:
- fi
license: apache-2.0
tags:
- finnish
- electra
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
widget:
- text: "Moikka olen [MASK] kielimalli."

---

# ELECTRA for Finnish

Pretrained ELECTRA model on the Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in [this paper](https://openreview.net/pdf?id=r1xMH1BtvB) and first released at [this page](https://github.com/google-research/electra).

**Note**: this model is the ELECTRA generator model, intended for the fill-mask task. The ELECTRA discriminator model, intended for fine-tuning on downstream tasks like text classification, is released at [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish).

## Model description

Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained to predict whether each token in the corrupted input was replaced by a generator sample or not. This training approach thus resembles Generative Adversarial Networks (GANs).
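
To make the objective concrete, here is a minimal, self-contained sketch of the RTD setup. It is illustrative only, not the actual training code: the toy sentence, the hand-written list of plausible alternatives, and the chosen mask position all stand in for what the masking procedure and generator do over real WordPiece token ids.

```python
import random

# Toy "tokenized" sentence; in reality these are WordPiece token ids.
sentence = ["Moikka", "olen", "suomalainen", "kielimalli"]
# Stand-in for the generator: plausible replacements for a masked slot.
plausible_alternatives = {"suomalainen": ["uusi", "hyvä", "vanha"]}

# 1) Corrupt the input: replace masked positions with generator samples.
corrupted = list(sentence)
for i in [2]:  # positions selected for masking
    corrupted[i] = random.choice(plausible_alternatives[sentence[i]])

# 2) The discriminator is trained to label every token of the corrupted
#    input as original (0) or replaced (1).
labels = [int(orig != repl) for orig, repl in zip(sentence, corrupted)]
print(corrupted, labels)  # e.g. ['Moikka', 'olen', 'uusi', 'kielimalli'] [0, 0, 1, 0]
```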

This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.
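
As a concrete example of that feature-extraction use, here is a minimal sketch using the discriminator checkpoint mentioned above (the example sentence is made up, and taking the first token's vector as the sentence feature is just one common choice):

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "Finnish-NLP/electra-base-discriminator-finnish"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Tämä on esimerkkilause.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Use the first ([CLS]) token's vector as a sentence feature for a classifier.
sentence_features = hidden[:, 0]
print(sentence_features.shape)  # e.g. torch.Size([1, 768]) for a base-sized model
```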

## Intended uses & limitations

You can use this generator model mainly for the fill-mask task. For other tasks, check the [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish) model instead.

### How to use

Here is how to use this model directly with a pipeline for the fill-mask task:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/electra-base-generator-finnish')
>>> unmasker("Moikka olen [MASK] kielimalli.")
[{'score': 0.0708453431725502,
  'token': 4619,
  'token_str': 'suomalainen',
  'sequence': 'Moikka olen suomalainen kielimalli.'},
 {'score': 0.042563650757074356,
  'token': 1153,
  'token_str': 'uusi',
  'sequence': 'Moikka olen uusi kielimalli.'},
 {'score': 0.03219178691506386,
  'token': 591,
  'token_str': 'hyvä',
  'sequence': 'Moikka olen hyvä kielimalli.'},
 {'score': 0.03175133094191551,
  'token': 3134,
  'token_str': 'vanha',
  'sequence': 'Moikka olen vanha kielimalli.'},
 {'score': 0.019662367179989815,
  'token': 25583,
  'token_str': 'ranskalainen',
  'sequence': 'Moikka olen ranskalainen kielimalli.'}]
```
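
If you prefer to work below the pipeline abstraction, the tokenizer and model can also be loaded directly. A minimal sketch (assuming PyTorch and a recent `transformers` version):

```python
import torch
from transformers import AutoTokenizer, ElectraForMaskedLM

name = 'Finnish-NLP/electra-base-generator-finnish'
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForMaskedLM.from_pretrained(name)

inputs = tokenizer("Moikka olen [MASK] kielimalli.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and decode the highest-scoring token.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_token_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(top_token_id))  # e.g. 'suomalainen'
```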

### Limitations and bias

The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can make biased predictions. This bias will also affect all fine-tuned versions of this model.

## Training data

This Finnish ELECTRA model was pretrained on a combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned): mC4 is a multilingual, colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning code (see the dataset repo, and the loading sketch after this list).
- [wikipedia](https://huggingface.co/datasets/wikipedia): we used the Finnish subset of the Wikipedia (August 2021) dataset.
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)

Raw datasets were cleaned to filter out low-quality and non-Finnish examples. Together, these cleaned datasets were around 84GB of text.
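
For reference, the cleaned mC4 subset is hosted on the Hugging Face Hub and can be loaded with the `datasets` library. A minimal sketch (the `train` split name is an assumption; check the dataset repo for the exact configuration):

```python
from datasets import load_dataset

# Streaming avoids downloading tens of gigabytes up front.
mc4_fi = load_dataset("Finnish-NLP/mc4_fi_cleaned", split="train", streaming=True)
print(next(iter(mc4_fi)))  # one raw text example
```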

## Training procedure

### Preprocessing

The texts are tokenized using WordPiece with a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lowercased, so this model is case-sensitive: it makes a difference between finnish and Finnish.
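
A quick way to see the case sensitivity in practice, using the model's own tokenizer (a minimal sketch; the exact subword splits depend on the trained vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Finnish-NLP/electra-base-generator-finnish')
print(tokenizer.vocab_size)  # 50265
# A cased tokenizer yields different tokens for differently-cased inputs.
print(tokenizer.tokenize("finnish"))
print(tokenizer.tokenize("Finnish"))
```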

### Pretraining

The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps, and linear decay of the learning rate afterwards.
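
To make the schedule explicit, here is a small sketch of the learning rate as a function of the training step under the stated hyperparameters (that the rate decays linearly to zero at step 1M is an assumption):

```python
PEAK_LR = 2e-4
WARMUP_STEPS = 20_000
TOTAL_STEPS = 1_000_000

def learning_rate(step: int) -> float:
    """Linear warmup to the peak rate, then linear decay (assumed to reach 0)."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(learning_rate(10_000))   # 1e-4, halfway through warmup
print(learning_rate(20_000))   # 2e-4, the peak
print(learning_rate(510_000))  # 1e-4, halfway through the decay
```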

Training code was from the official [ELECTRA repository](https://github.com/google-research/electra), and some instructions were also taken from [here](https://github.com/stefan-it/turkish-bert/blob/master/electra/CHEATSHEET.md).

## Evaluation results

For evaluation results, check the [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish) model repository instead.

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/).

## Team Members

- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)

Feel free to contact us for more details 🤗