ligetinagy committed on
Commit 3d92e62
1 Parent(s): 7ae7393

Update README.md

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -137,7 +137,11 @@ The test data is distributed without the MASK fields. To evaluate your model, pl
 
 #### Initial Data Collection and Normalization
 
- -
+ To produce the Hungarian material, we used daily articles from Népszabadság Online that have both a title and a summary. From each article we selected 3-6 paragraphs, restricting the selection to articles that contain proper nouns both in the main text and in the summary. We trained a NER model for recognizing proper nouns based on huBERT (Nemeskey 2021); the model was fine-tuned on NerKor (Simon and Vadász 2021) using Hugging Face's token-level classification library, and achieved an F-score of 90.18 on the test material. As a final step, we identified pairs of proper names present both in the main article and in the summary. Many articles contained more than one such pair, so those articles were used more than once. This resulted in a database of 88,655 instances (from 49,782 articles).
+
+ The quantitative properties of our corpus are as follows: number of articles: 88,655; number of different articles (types): 49,782; tokens: 27,703,631; types: 1,115,260; average text length in tokens: 249.42 (median: 229); average question length in tokens: 63.07 (median: 56). We then refined the corpus by hand.
+
+ One annotator per 100 units checked and validated the dataset, using a demo interface we provided for this purpose. The automatic masking and the previous occurrence of the entity were checked. This resulted in a database of 80,614 validated entries.
 
 ## Additional Information
 
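For readers of the added paragraph above, here is a minimal sketch of the final pair-finding and masking step, assuming the NER model has already extracted the proper names from the article body and from the summary. This is not the authors' code: the function name, example texts, and name lists are purely illustrative.

```python
# Rough sketch of the masking step described in the README diff: for every
# proper name that occurs both in the article body and in its summary, emit
# one instance in which that name is masked in the summary.  The NER step
# that produces body_names and summary_names is assumed to have run already.
def build_masked_instances(body, summary, body_names, summary_names, mask="[MASK]"):
    shared = set(body_names) & set(summary_names)
    instances = []
    for name in sorted(shared):
        # The body keeps the earlier occurrence of the name, so the masked
        # summary can be resolved from the article text.
        instances.append({
            "text": body,
            "question": summary.replace(name, mask),
            "answer": name,
        })
    return instances


# Illustrative usage with hand-written NER output (texts and names are made up):
body = "Kovács Péter hétfőn Budapesten tárgyalt a minisztérium vezetőivel."
summary = "Kovács Péter a fővárosban tárgyalt."
print(build_masked_instances(body, summary,
                             ["Kovács Péter", "Budapest"], ["Kovács Péter"]))
```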