---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: it
datasets:
- lmqg/qag_itquad
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento."
  example_title: "Questions & Answers Generation Example 1" 
model-index:
- name: lmqg/mt5-base-itquad-qag
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qag_itquad
      type: default
      args: default
    metrics:
    - name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
      type: qa_aligned_f1_score_bertscore_question_answer_generation
      value: 79.93
    - name: QAAlignedRecall-BERTScore (Question & Answer Generation)
      type: qa_aligned_recall_bertscore_question_answer_generation
      value: 78.87
    - name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
      type: qa_aligned_precision_bertscore_question_answer_generation
      value: 81.06
    - name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
      type: qa_aligned_f1_score_moverscore_question_answer_generation
      value: 53.8
    - name: QAAlignedRecall-MoverScore (Question & Answer Generation)
      type: qa_aligned_recall_moverscore_question_answer_generation
      value: 53.02
    - name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
      type: qa_aligned_precision_moverscore_question_answer_generation
      value: 54.64
---

# Model Card of `lmqg/mt5-base-itquad-qag`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question & answer pair generation task on the [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) dataset (dataset name: default), trained via [`lmqg`](https://github.com/asahi417/lm-question-generation).


### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)   
- **Language:** it (Italian)
- **Training data:** [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize the model for Italian question & answer pair generation
model = TransformersQG(language="it", model="lmqg/mt5-base-itquad-qag")

# generate question & answer pairs from an input paragraph
question_answer_pairs = model.generate_qa("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
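The returned value can then be printed directly (a minimal sketch, assuming `generate_qa` returns a list of `(question, answer)` tuples):
```python
# print each generated pair (assumes a list of (question, answer) tuples)
for question, answer in question_answer_pairs:
    print(f"Q: {question}")
    print(f"A: {answer}")
```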

- With `transformers`
```python
from transformers import pipeline

# load the model through the text2text-generation pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-itquad-qag")

# generate question & answer pairs for an input paragraph
output = pipe("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
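Note that the pipeline returns the model's raw decoded text rather than parsed pairs. A minimal sketch of inspecting it, using the standard `generated_text` field of the `transformers` text2text pipeline output:
```python
# the pipeline returns a list of dicts; the raw decoded output is under "generated_text"
print(output[0]["generated_text"])
```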

## Evaluation


- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-itquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_itquad.default.json) 

|                                 |   Score | Type    | Dataset                                                            |
|:--------------------------------|--------:|:--------|:-------------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   79.93 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) |
| QAAlignedF1Score (MoverScore)   |   53.8  | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) |
| QAAlignedPrecision (BERTScore)  |   81.06 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) |
| QAAlignedPrecision (MoverScore) |   54.64 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) |
| QAAlignedRecall (BERTScore)     |   78.87 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) |
| QAAlignedRecall (MoverScore)    |   53.02 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) |



## Training hyperparameters

The following hyperparameters were used during fine-tuning:
 - dataset_path: lmqg/qag_itquad
 - dataset_name: default
 - input_types: ['paragraph']
 - output_types: ['questions_answers']
 - prefix_types: None
 - model: google/mt5-base
 - max_length: 512
 - max_length_output: 256
 - epoch: 4
 - batch: 2
 - lr: 0.0005
 - fp16: False
 - random_seed: 1
 - gradient_accumulation_steps: 32
 - label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-itquad-qag/raw/main/trainer_config.json).
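For orientation only, the values above map roughly onto standard `transformers` sequence-to-sequence training arguments. The sketch below is illustrative, not the original `lmqg` training script, and `output_dir` is a hypothetical placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# illustrative mapping of the hyperparameters listed above;
# output_dir is a hypothetical placeholder, not the original configuration
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-itquad-qag",  # hypothetical
    num_train_epochs=4,                # epoch: 4
    per_device_train_batch_size=2,     # batch: 2
    gradient_accumulation_steps=32,    # effective batch size 2 * 32 = 64
    learning_rate=5e-4,                # lr: 0.0005
    label_smoothing_factor=0.15,       # label_smoothing: 0.15
    fp16=False,                        # fp16: False
    seed=1,                            # random_seed: 1
)
```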

## Citation
```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
}

```