dioBERTo

dioBERTo is a pre-trained language model for ancient Greek, a low-resource ancient language. We initialized pre-training with the weights of GreekBERT, a Greek version of BERT pre-trained on a large corpus of modern Greek (~28 GB of text). We then continued pre-training on an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed; duplicate texts and editorial punctuation were removed.

In downstream fine-tuning tasks for ancient Greek such as POS tagging, morphological analysis (MORPH), dependency parsing (DEP), and lemmatization (LEMMA), dioBERTo outperforms not only GreekBERT but also bert-base-multilingual-cased and xlm-roberta-base, both of which have previously been fine-tuned successfully on languages not seen during their pre-training. dioBERTo is provided by the Diogenet project at the University of California, San Diego.

Intended uses

This model was produced for further fine-tuning on the Universal Dependencies datasets for ancient Greek and on a NER-annotated corpus produced by the Diogenet project.
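
As a minimal sketch of such fine-tuning, the snippet below loads the model and tokenizer for token classification (e.g. POS tagging or NER). The hub identifier "Diogenet/dioBERTo" and the label list are placeholders for illustration only, not the actual repository name or tag inventory.

```python
# Token-classification fine-tuning sketch.
# Assumptions: the hub id "Diogenet/dioBERTo" is a placeholder, and the label
# list is illustrative, not the real UD or NER tag set.
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["NOUN", "VERB", "ADJ", "ADV", "ADP", "PUNCT"]  # illustrative tags only

tokenizer = AutoTokenizer.from_pretrained("Diogenet/dioBERTo")  # hypothetical hub id
model = AutoModelForTokenClassification.from_pretrained(
    "Diogenet/dioBERTo",  # hypothetical hub id
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# From here, a token-level dataset (e.g. the ancient Greek UD treebanks) can be
# tokenized with tokenizer(..., is_split_into_words=True) and passed to Trainer.
```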

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto the Hugging Face TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10.0
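
The sketch below shows how these hyperparameters map onto TrainingArguments; the output directory is a placeholder, and the model and dataset wiring are omitted.

```python
# Sketch mapping the hyperparameters above onto TrainingArguments.
# output_dir is a placeholder; model and dataset setup are not shown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./dioBERTo-pretraining",  # placeholder path
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```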

Training results

Framework versions

  • Transformers 4.13.0.dev0
  • Pytorch 1.10.0+cu102
  • Datasets 1.14.0
  • Tokenizers 0.10.3

Examples
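
The model can be queried directly with the fill-mask pipeline; its mask token is [MASK]. In the sketch below, the hub identifier "Diogenet/dioBERTo" and the example sentence are placeholders.

```python
# Fill-mask usage sketch (the hub id "Diogenet/dioBERTo" and the masked
# sentence are placeholders; substitute the actual repository name and text).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Diogenet/dioBERTo")  # hypothetical hub id
predictions = fill_mask("Πλάτων ὁ [MASK] ἔγραψε τὴν Πολιτείαν.")  # example sentence
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```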
