---
language:
  - en
  - ka
license: mit
tags:
  - flair
  - token-classification
  - sequence-tagger-model
base_model: xlm-roberta-large
widget:
  - text: >-
      ამით თავისი ქადაგება დაასრულა და დაბრუნდა იერუსალიმში . ერთ-ერთ გარე
      კედელზე არსებობს ერნესტო ჩე გევარას პორტრეტი . შაკოსკა“ ინახება ბრაზილიაში
      , სან-პაულუს ხელოვნების მუზეუმში .
---

# Fine-tuned English-Georgian NER Model with Flair

This Flair NER model was fine-tuned on the WikiANN dataset (using the Rahimi et al. splits) with XLM-RoBERTa Large as the backbone language model.
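
A minimal usage sketch with Flair is shown below; the hub model ID is a placeholder for this repository's actual ID:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Placeholder hub ID; replace with this repository's model ID.
tagger = SequenceTagger.load("stefan-it/<this-model>")

# Example sentence taken from the widget text above (pre-tokenized, as in WikiANN).
sentence = Sentence("ერთ-ერთ გარე კედელზე არსებობს ერნესტო ჩე გევარას პორტრეტი .")
tagger.predict(sentence)

# Print all predicted NER spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```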

Notice: this dataset is problematic, because it was constructed automatically.

We manually inspected the development split of the Georgian data and found many mislabeled examples, e.g. DVD ( 💿 ) tagged as ORG.

## Fine-Tuning

The latest Flair version is used for fine-tuning.

We use the English and Georgian training splits for fine-tuning and the Georgian development split for evaluation.
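
As a sketch, the corpus can be loaded with Flair's built-in WikiANN loader; note that this yields a dev set covering both languages, whereas the setup described here evaluates on the Georgian development split only:

```python
from flair.datasets import NER_MULTI_WIKIANN

# WikiANN (Rahimi et al. splits) for English and Georgian.
# Note: the resulting MultiCorpus mixes both languages in its dev set;
# the setup described above evaluates on the Georgian dev split only.
corpus = NER_MULTI_WIKIANN(languages=["en", "ka"])

print(corpus)
```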

A hyper-parameter search over the following parameters is performed, with 5 different seeds per configuration (see the fine-tuning sketch after the list):

* Batch Sizes: `[4]`
* Learning Rates: `[5e-06]`
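
A minimal fine-tuning sketch follows. The batch size, learning rate, and epoch count come from the configuration above (`bs4-e10-lr5e-06`); the remaining architecture flags are assumptions based on common Flair fine-tuning recipes, not the exact training script:

```python
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# XLM-RoBERTa Large as the backbone, fine-tuned end to end.
embeddings = TransformerWordEmbeddings(
    model="xlm-roberta-large",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

# `corpus` as loaded in the sketch above.
label_dict = corpus.make_label_dictionary(label_type="ner")

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)

# Hyper-parameters from the search grid: batch size 4, learning rate 5e-06,
# and 10 epochs (implied by the bs4-e10-lr5e-06 configuration name).
trainer.fine_tune(
    "resources/taggers/wikiann-en-ka",
    learning_rate=5e-06,
    mini_batch_size=4,
    max_epochs=10,
)
```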

More details can be found in this repository.

## Results

A hyper-parameter search with 5 different seeds per configuration is performed, and the micro F1-score on the development set is reported:

| Configuration     | Seed 1 | Seed 2 | Seed 3     | Seed 4 | Seed 5 | Average         |
| ----------------- | ------ | ------ | ---------- | ------ | ------ | --------------- |
| `bs4-e10-lr5e-06` | 0.9005 | 0.9012 | **0.9069** | 0.9050 | 0.9048 | 0.9037 ± 0.0027 |

The result in bold shows the performance of this model.
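
The reported average can be reproduced from the per-seed scores, e.g.:

```python
from statistics import mean, stdev

# Micro F1-scores for the 5 seeds of bs4-e10-lr5e-06 (from the table above).
scores = [0.9005, 0.9012, 0.9069, 0.9050, 0.9048]

# The sample standard deviation (ddof=1) matches the reported ± value.
print(f"{mean(scores):.4f} ± {stdev(scores):.4f}")  # -> 0.9037 ± 0.0027
```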

Additionally, the Flair training log and TensorBoard logs are uploaded to the model hub.