xlm-roberta-base for register labeling, specifically fine-tuned for question-answer document identification
This is xlm-roberta-base fine-tuned on register-annotated data in English (https://github.com/TurkuNLP/CORE-corpus) and Finnish (https://github.com/TurkuNLP/FinCORE_full), as well as on unpublished Swedish and French versions (https://github.com/TurkuNLP/multilingual-register-labeling). The model is trained to predict whether a text contains question-and-answer content or not.
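A minimal inference sketch with the Hugging Face transformers library is shown below. The repository name, example text, and label-to-index mapping are assumptions not stated on this card; check model.config.id2label for the actual labels.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder model ID -- replace with this model's actual repository name.
model_name = "path/to/this-model"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "Q: How do I reset my router? A: Hold the reset button for ten seconds."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1).squeeze()
pred_id = int(probs.argmax())
# Assumed binary head: one index for "QA", one for "not QA" (see model.config.id2label).
print(model.config.id2label[pred_id], float(probs[pred_id]))
```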
Hyperparameters
batch_size = 8
epochs = 10 (training stopped before the full 10 epochs)
base_LM_model = "xlm-roberta-base"
max_seq_len = 512
learning_rate = 4e-6
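As a rough illustration of how these hyperparameters could be plugged into a standard fine-tuning run, a sketch follows. The data files, column names, and the absence of early-stopping logic are assumptions for illustration, not the card's exact training script.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical CSV files with "text" and binary "label" columns; the actual
# register-annotated CORE/FinCORE data must be prepared separately.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    # max_seq_len = 512
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

args = TrainingArguments(
    output_dir="qa-register-xlmr",
    per_device_train_batch_size=8,  # batch_size = 8
    num_train_epochs=10,            # upper bound; the card notes training stopped earlier
    learning_rate=4e-6,             # learning_rate = 4e-6
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables default dynamic padding via DataCollatorWithPadding
)

trainer.train()
```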
Performance
F1-micro = 0.98
F1-macro = 0.79

| Label  | Precision | Recall | F1   |
|--------|-----------|--------|------|
| QA     | 0.82      | 0.47   | 0.60 |
| not QA | 0.99      | 1.00   | 0.99 |
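These per-label scores can be reproduced from model predictions with scikit-learn along the following lines; the label encoding and the toy predictions are illustrative assumptions only.

```python
from sklearn.metrics import f1_score, precision_recall_fscore_support

# Assumed encoding: 1 = QA, 0 = not QA; toy predictions for illustration only.
y_true = [0, 0, 0, 1, 1, 0, 1, 0]
y_pred = [0, 0, 0, 1, 0, 0, 1, 1]

f1_micro = f1_score(y_true, y_pred, average="micro")
f1_macro = f1_score(y_true, y_pred, average="macro")

# Per-label precision/recall/F1, reported QA first and not-QA second.
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[1, 0])

print(f"F1-micro = {f1_micro:.2f}, F1-macro = {f1_macro:.2f}")
print(f"QA:     precision = {precision[0]:.2f}, recall = {recall[0]:.2f}, F1 = {f1[0]:.2f}")
print(f"not QA: precision = {precision[1]:.2f}, recall = {recall[1]:.2f}, F1 = {f1[1]:.2f}")
```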
Citing
To cite this model, use the following BibTeX entry.
@inproceedings{eskelinen-etal-2024-building-question,
title = "Building Question-Answer Data Using Web Register Identification",
author = "Eskelinen, Anni and
Myntti, Amanda and
Henriksson, Erik and
Pyysalo, Sampo and
Laippala, Veronika",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.234",
pages = "2595--2611",
abstract = "This article introduces a resource-efficient method for developing question-answer (QA) datasets by extracting QA pairs from web-scale data using machine learning (ML). Our method benefits from recent advances in web register (genre) identification and consists of two ML steps with an additional post-processing step. First, using XLM-R and the multilingual CORE web register corpus series with categories such as QA Forum, we train a multilingual classifier to retrieve documents that are likely to contain QA pairs from web-scale data. Second, we develop a NER-style token classifier to identify the QA text spans within these documents. To this end, we experiment with training on a semi-synthetic dataset built on top of the English LFQA, a small set of manually cleaned web QA pairs in English and Finnish, and a Finnish web QA pair dataset cleaned using ChatGPT. The evaluation of our pipeline demonstrates its capability to efficiently retrieve a substantial volume of QA pairs. While the approach is adaptable to any language given the availability of language models and extensive web data, we showcase its efficiency in English and Finnish, developing the first open, non-synthetic and non-machine translated QA dataset for Finnish {--} Turku WebQA {--} comprising over 200,000 QA pairs.",
}