Jacobo committed
Commit 8a16c77
Parent: f7a8680

Update README.md

Files changed (1):
  1. README.md +10 -13
README.md CHANGED
@@ -1,6 +1,8 @@
  ---
  tags:
- - generated_from_trainer
+ - grc, Fill-Mask, PyTorch, bert, Token Classification
+ language:
+ - grc
  model-index:
  - name: aristoBERTo
    results: []
@@ -11,26 +13,21 @@ widget:

  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

  # aristoBERTo

- This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on the Ancient Greek dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.6323
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ aristoBERTo is a pre-trained model for ancient Greek, a low-resource language. We initialized the pre-training with weights from [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1), a Greek version of BERT pre-trained on a large corpus of modern Greek (~30 GB of texts). We continued the pre-training with an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed.
+
+ Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mDeBERTa in most downstream fine-tuning tasks, such as the labeling of POS, MORPH, DEP, and LEMMA. aristoBERTo is provided by the Diogenet project of the University of California, San Diego.
+
+ ## Intended uses
+
+ This model was created for fine-tuning with spaCy and the Universal Dependencies datasets for ancient Greek, as well as a NER-annotated corpus produced by the Diogenet project.
+
+ It achieves the following results on the evaluation set:
+ - Loss: 1.6323

  ## Training procedure
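
The updated card describes the recipe only in prose: start from the GreekBERT weights and continue masked-language-model pre-training on a ~900 MB ancient Greek corpus. Below is a minimal sketch of what such continued pre-training could look like with the Hugging Face `Trainer`; the corpus file name, sequence length, and training arguments are placeholders, not the project's actual settings.

```python
# Sketch of continued MLM pre-training from the GreekBERT checkpoint named in the card.
# "ancient_greek.txt" and all hyperparameters are placeholders for illustration only.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "nlpaueb/bert-base-greek-uncased-v1"  # GreekBERT, the initialization named in the card
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# A plain-text corpus, one passage per line; tokenize it for the MLM objective.
corpus = load_dataset("text", data_files={"train": "ancient_greek.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aristoberto-mlm", num_train_epochs=1),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```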
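
Since the new tags list Fill-Mask, a small usage sketch with the `transformers` pipeline may help. The hub id `Jacobo/aristoBERTo` is an assumption inferred from the committer and model name, and the example sentence is purely illustrative.

```python
# Fill-mask sketch; "Jacobo/aristoBERTo" is an assumed hub id, not stated in this commit.
from transformers import pipeline

fill = pipeline("fill-mask", model="Jacobo/aristoBERTo")

# Mask one token in an ancient Greek sentence and print the top candidates.
masked = f"Πλάτων ὁ {fill.tokenizer.mask_token} ἔγραψε τὴν Πολιτείαν."
for prediction in fill(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```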
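
The intended-use section points to fine-tuning through spaCy on Universal Dependencies and the Diogenet NER corpus. As an illustration with plain `transformers` rather than the spaCy pipeline the project actually uses, attaching a token-classification head to the encoder might look like the sketch below; the hub id and the label set are assumptions made for the example.

```python
# Sketch of reusing aristoBERTo as a token-classification backbone (e.g. POS or NER).
# The hub id "Jacobo/aristoBERTo" and the label set are hypothetical; the project itself
# fine-tunes via spaCy on Universal Dependencies and Diogenet NER data.
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["NOUN", "VERB", "ADJ", "ADP", "PRON", "OTHER"]  # hypothetical coarse tag set

tokenizer = AutoTokenizer.from_pretrained("Jacobo/aristoBERTo")
model = AutoModelForTokenClassification.from_pretrained(
    "Jacobo/aristoBERTo",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# The randomly initialized classification head is then trained with the usual
# transformers Trainer (or any PyTorch loop) on token-labelled ancient Greek data.
```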