|
--- |
|
license: cc-by-nc-sa-4.0 |
|
library_name: transformers |
|
pipeline_tag: token-classification |
|
widget:

- text: "Do you think that looks like a cat? Answer: I don't think so."

  example_title: "cat"
|
--- |
|
### xlm-roberta-base for token classification, fine-tuned for question-answer extraction in English
|
|
|
This model is `xlm-roberta-base` fine-tuned on manually annotated Finnish data and on ChatGPT-annotated data.
|
### Hyperparameters |
|
``` |
|
batch_size = 8 |
|
epochs = 10 (training stopped early, so fewer in practice)
|
base_LM_model = "xlm-roberta-base" |
|
max_seq_len = 512 |
|
learning_rate = 5e-5 |
|
``` |
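For reference, the hyperparameters above can be collected into a plain config dict of the kind that maps onto `transformers.TrainingArguments`; this is a sketch, not the actual training configuration used.

```python
# Hedged sketch: the hyperparameters listed above as a config dict.
# Key names follow transformers.TrainingArguments conventions, but the
# original training script may have used different names or extra settings.
train_config = {
    "model_name_or_path": "xlm-roberta-base",
    "per_device_train_batch_size": 8,
    "num_train_epochs": 10,       # stopped early, so fewer in practice
    "max_seq_length": 512,
    "learning_rate": 5e-5,
}
```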
|
### Performance |
|
``` |
|
Accuracy = 0.88 |
|
Question F1 = 0.77 |
|
Answer F1 = 0.81 |
|
``` |
|
|
|
### Usage |
|
|
|
Use [this prediction script](https://github.com/TurkuNLP/register-qa/blob/main/token-classification/scripts/ner/ner_preds.py) to generate token-level predictions, then this [extraction script](https://github.com/TurkuNLP/register-qa/blob/main/token-classification/scripts/extract_qa.py) to convert them into a usable question-answer format. The extracted output can be read back into question and answer lists with [this script](https://github.com/TurkuNLP/register-qa/blob/main/token-classification/scripts/read_json_back.py).
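If you prefer a quick interactive check instead of the full scripts, a minimal sketch with the `transformers` token-classification pipeline looks like the following. The model id and the entity group names (`QUESTION`, `ANSWER`) are assumptions for illustration; substitute this repository's actual model id and the label names it was trained with.

```python
def merge_spans(predictions):
    """Group token-classification pipeline output (with
    aggregation_strategy="simple") into question/answer strings.
    Assumes entity groups named "QUESTION" and "ANSWER" -- adjust
    to the label set this model actually uses."""
    spans = {"QUESTION": [], "ANSWER": []}
    for p in predictions:
        group = p.get("entity_group")
        if group in spans:
            spans[group].append(p["word"].strip())
    return spans

if __name__ == "__main__":
    from transformers import pipeline

    # Hypothetical model id -- replace with this repository's name.
    ner = pipeline(
        "token-classification",
        model="your-org/your-qa-token-classifier",
        aggregation_strategy="simple",
    )
    text = "Do you think that looks like a cat? Answer: I don't think so."
    print(merge_spans(ner(text)))
```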