---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: token-classification
widget:
- text: "Do you think that looks like a cat? Answer: I don't think so."
  example_title: "cat"
---

### xlm-roberta-base for token classification, fine-tuned for question-answer extraction in English

This is the `xlm-roberta-base` model, fine-tuned on manually annotated Finnish data and ChatGPT-annotated data.

### Hyperparameters

```
batch_size = 8
epochs = 10 (trained for less)
base_LM_model = "xlm-roberta-base"
max_seq_len = 512
learning_rate = 5e-5
```

### Performance

```
Accuracy = 0.88
Question F1 = 0.77
Answer F1 = 0.81
```

### Usage

To get the best question-answer pairs, use the Hugging Face pipeline with no aggregation strategy and apply some post-processing, as in this [script](https://github.com/TurkuNLP/register-qa/blob/main/token-classification/scripts/extract_qa_en_no_entropy.py). A minimal example sketch is included at the end of this card.

## Citing

Citing information coming soon!
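
### Usage example

The snippet below is a minimal sketch of the pipeline call described in the Usage section. The model id is a hypothetical placeholder (substitute this repository's id), and grouping the token predictions into question-answer pairs is left to post-processing such as the linked script.

```python
from transformers import pipeline

# aggregation_strategy="none" keeps one prediction per token; the post-processing
# step is then responsible for grouping tokens into question and answer spans.
qa_tagger = pipeline(
    "token-classification",
    model="<this-repository-id>",  # hypothetical placeholder, replace with the actual repo id
    aggregation_strategy="none",
)

text = "Do you think that looks like a cat? Answer: I don't think so."
for pred in qa_tagger(text):
    # Each prediction carries the token text, its label, and a confidence score.
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```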