ondfa committed
Commit 94b7082
Parent: 95f185c

licence added

Files changed (1)
  1. README.md +34 -5
README.md CHANGED
@@ -1,9 +1,34 @@
# CZERT
- This repository keeps trained model Czert-B for the paper [Czert – Czech BERT-like Model for Language Representation
](https://arxiv.org/abs/2103.13031)
For more information, see the paper


## How to Use CZERT?

### Sentence Level Tasks
@@ -14,14 +39,14 @@ We evaluate our model on two sentence level tasks:


<!-- tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
- model = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1)

or

self.tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
self.model_encoder = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True)
-->
-
### Document Level Tasks
We evaluate our model on one document level task:
* Multi-label Document Classification.
@@ -77,8 +102,8 @@ Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlo

| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | dep-based | gold-dep |
|:------:|:----------:|:----------:|:-------------:|:----------:|:----------:|:---------:|:--------:|
- | span | 78.547 ± 0.110 | **79.333 ± 0.080** | 51.365 ± 0.423 | 72.254 ± 0.172 | **79.112 ± 0.141** | \- | \- |
- | syntax | 90.226 ± 0.224 | **90.492 ± 0.040** | 80.747 ± 0.131 | 80.319 ± 0.054 | **90.516 ± 0.047** | 85.19 | 89.52 |

SRL results – dep columns are evaluated with labelled F1 from the CoNLL 2009 evaluation script; other columns are evaluated with span F1 score, the same as was used for the NER evaluation. For more information, see [the paper](https://arxiv.org/abs/2103.13031).

@@ -94,6 +119,9 @@ SRL results – dep columns are evaluate with labelled F1 from CoNLL 2009 evalua
Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on the named entity recognition task. For more information, see [the paper](https://arxiv.org/abs/2103.13031).


## How should I cite CZERT?
For now, please cite [the Arxiv paper](https://arxiv.org/abs/2103.13031):
```
@@ -107,3 +135,4 @@ For now, please cite [the Arxiv paper](https://arxiv.org/abs/2103.13031):
journal={arXiv preprint arXiv:2103.13031},
}
```
 
 
# CZERT
+ This repository keeps the trained Czert-B model for the paper [Czert – Czech BERT-like Model for Language Representation
](https://arxiv.org/abs/2103.13031)
For more information, see the paper


+ ## Available Models
+ You can download the **MLM & NSP only** pretrained models:
+ ~~[CZERT-A-v1](https://air.kiv.zcu.cz/public/CZERT-A-czert-albert-base-uncased.zip)
+ [CZERT-B-v1](https://air.kiv.zcu.cz/public/CZERT-B-czert-bert-base-cased.zip)~~
+
+ After some additional experiments, we found that the tokenizer config was exported incorrectly. In Czert-B-v1, the tokenizer parameter "do_lower_case" was wrongly set to true; in Czert-A-v1, the parameter "strip_accents" was wrongly set to true.
+
+ Both mistakes are repaired in v2:
+ [CZERT-A-v2](https://air.kiv.zcu.cz/public/CZERT-A-v2-czert-albert-base-uncased.zip)
+ [CZERT-B-v2](https://air.kiv.zcu.cz/public/CZERT-B-v2-czert-bert-base-cased.zip)
+
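The v1 bugs described above can be sanity-checked against a downloaded checkpoint before use. A minimal sketch: the expected flag values below are inferred from the casing of each model (Czert-A is uncased, Czert-B is cased) and from the fix described above, and `tokenizer_config.json` is the file the transformers library exports alongside a tokenizer — treat the exact values as an assumption, not an official reference.

```python
import json
from pathlib import Path

# Expected tokenizer flags, inferred (assumption) from model casing and the v2 fix:
# Czert-A is uncased, so lowercasing is intended; accents are never stripped
# because they are meaningful characters in Czech.
EXPECTED_FLAGS = {
    "CZERT-A": {"do_lower_case": True, "strip_accents": False},
    "CZERT-B": {"do_lower_case": False, "strip_accents": False},
}

def config_mistakes(model_name: str, config: dict) -> dict:
    """Return every flag in `config` whose value differs from the expected one."""
    expected = EXPECTED_FLAGS[model_name]
    return {k: config.get(k) for k, v in expected.items() if config.get(k) != v}

def check_extracted_model(model_name: str, model_dir: str) -> dict:
    """Check the tokenizer_config.json inside an extracted model zip."""
    config = json.loads(Path(model_dir, "tokenizer_config.json").read_text())
    return config_mistakes(model_name, config)

# Illustrative v1 configs reflecting the bugs described in the text above:
czert_b_v1 = {"do_lower_case": True, "strip_accents": False}
czert_a_v1 = {"do_lower_case": True, "strip_accents": True}
print(config_mistakes("CZERT-B", czert_b_v1))  # -> {'do_lower_case': True}
print(config_mistakes("CZERT-A", czert_a_v1))  # -> {'strip_accents': True}
```

A v2 checkpoint should produce an empty dict, confirming both flags were repaired.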
+
+
+ or choose one of the **Finetuned Models**:
+ | | Models |
+ | - | - |
+ | Sentiment Classification<br> (Facebook or CSFD) | [CZERT-A-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-A_fb.zip) <br> [CZERT-B-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-B_fb.zip) <br> [CZERT-A-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-A_csfd.zip) <br> [CZERT-B-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-B_csfd.zip) |
+ | Semantic Text Similarity <br> (Czech News Agency) | [CZERT-A-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-A-sts-CNA.zip) <br> [CZERT-B-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-B-sts-CNA.zip) |
+ | Named Entity Recognition | [CZERT-A-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-A-ner-CNEC-cased.zip) <br> [CZERT-B-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-B-ner-CNEC-cased.zip) <br> [PAV-ner-CNEC](https://air.kiv.zcu.cz/public/PAV-ner-CNEC-cased.zip) <br> [CZERT-A-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-A-ner-BSNLP-cased.zip) <br> [CZERT-B-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-B-ner-BSNLP-cased.zip) <br> [PAV-ner-BSNLP](https://air.kiv.zcu.cz/public/PAV-ner-BSNLP-cased.zip) |
+ | Morphological Tagging | [CZERT-A-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-A-morphtag-126k-cased.zip) <br> [CZERT-B-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-B-morphtag-126k-cased.zip) |
+ | Semantic Role Labelling | [CZERT-A-srl](https://air.kiv.zcu.cz/public/CZERT-A-srl-cased.zip) <br> [CZERT-B-srl](https://air.kiv.zcu.cz/public/CZERT-B-srl-cased.zip) |
+
+
## How to Use CZERT?

### Sentence Level Tasks
 


<!-- tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
+ model = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1)

or

self.tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
self.model_encoder = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True)
-->
+
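The commented-out snippet above can be expanded into a runnable helper. This is a sketch under assumptions, not the authors' exact training code: `model_path` would be a hypothetical local directory holding an extracted checkpoint zip, and `from_tf=True` converts the released TensorFlow weights to PyTorch as in the commented `AutoModelForSequenceClassification` call.

```python
def load_czert_for_classification(model_path: str, from_tf: bool = True):
    """Load a CZERT checkpoint for sequence classification.

    Mirrors the commented example above: BertTokenizerFast with
    strip_accents=False (accents are meaningful in Czech), and
    from_tf=True to convert the released TensorFlow weights.
    """
    # Imported lazily so the sketch can be read/tested without transformers installed.
    from transformers import BertTokenizerFast, AutoModelForSequenceClassification

    tokenizer = BertTokenizerFast.from_pretrained(model_path, strip_accents=False)
    model = AutoModelForSequenceClassification.from_pretrained(model_path, from_tf=from_tf)
    return tokenizer, model

# Hypothetical usage with a locally extracted CZERT-B-v2 zip:
# tokenizer, model = load_czert_for_classification("./CZERT-B-v2")
```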
### Document Level Tasks
We evaluate our model on one document level task:
* Multi-label Document Classification.
 

| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | dep-based | gold-dep |
|:------:|:----------:|:----------:|:-------------:|:----------:|:----------:|:---------:|:--------:|
+ | span | 78.547 ± 0.110 | 79.333 ± 0.080 | 51.365 ± 0.423 | 72.254 ± 0.172 | **81.861 ± 0.102** | \- | \- |
+ | syntax | 90.226 ± 0.224 | 90.492 ± 0.040 | 80.747 ± 0.131 | 80.319 ± 0.054 | **91.462 ± 0.062** | 85.19 | 89.52 |

SRL results – dep columns are evaluated with labelled F1 from the CoNLL 2009 evaluation script; other columns are evaluated with span F1 score, the same as was used for the NER evaluation. For more information, see [the paper](https://arxiv.org/abs/2103.13031).
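Span F1, the metric shared by the span rows above and the NER evaluation, can be illustrated with a toy computation. This is a generic exact-match sketch, not the authors' evaluation script: a predicted span counts only if its boundaries and label both match a gold span.

```python
def span_f1(predicted, gold):
    """Exact-match span F1: a span counts only if boundaries and label all match."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # correctly predicted spans
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: spans as (start, end, label) triples; one label is wrong.
gold = [(0, 2, "ARG0"), (3, 5, "ARG1"), (6, 7, "ARGM-TMP")]
pred = [(0, 2, "ARG0"), (3, 5, "ARG0"), (6, 7, "ARGM-TMP")]
print(span_f1(pred, gold))  # 2 of 3 spans match -> precision = recall = F1 = 2/3
```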
 
 
Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on the named entity recognition task. For more information, see [the paper](https://arxiv.org/abs/2103.13031).


+ ## Licence
+ This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License: http://creativecommons.org/licenses/by-nc-sa/4.0/
+
## How should I cite CZERT?
For now, please cite [the Arxiv paper](https://arxiv.org/abs/2103.13031):
```

journal={arXiv preprint arXiv:2103.13031},
}
```
+