asahi417 committed on
Commit
b5aa653
1 Parent(s): 40392e8

model update

README.md ADDED
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: es
datasets:
- lmqg/qg_esquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India."
  example_title: "Question Generation Example 1"
- text: "generate question: a <hl> noviembre <hl> , que es también la estación lluviosa."
  example_title: "Question Generation Example 2"
- text: "generate question: como <hl> el gobierno de Abbott <hl> que asumió el cargo el 18 de septiembre de 2013."
  example_title: "Question Generation Example 3"
model-index:
- name: lmqg/mt5-base-esquad
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_esquad
      type: default
      args: default
    metrics:
    - name: BLEU4
      type: bleu4
      value: 0.10153670508318442
    - name: ROUGE-L
      type: rouge-l
      value: 0.25453014251607653
    - name: METEOR
      type: meteor
      value: 0.23431011857989445
    - name: BERTScore
      type: bertscore
      value: 0.8447369242462315
    - name: MoverScore
      type: moverscore
      value: 0.596184026986908
---

# Language Models Fine-tuning on Question Generation: `lmqg/mt5-base-esquad`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question generation task on [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default).

### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** es
- **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [TBA](TBA)

### Usage
```python
from transformers import pipeline

# Load the fine-tuned model as a text2text-generation pipeline
model_path = 'lmqg/mt5-base-esquad'
pipe = pipeline("text2text-generation", model_path)

# Question Generation: the answer span is highlighted with <hl> tokens
input_text = 'generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.'
question = pipe(input_text)
```
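As the widget examples show, the model expects the answer span inside the paragraph to be wrapped with `<hl>` tokens and the whole string to carry the `generate question: ` prefix. A minimal sketch of building such an input (the helper name is illustrative, not part of the lmqg repository):

```python
def make_qg_input(paragraph: str, answer: str) -> str:
    """Wrap the answer span with <hl> tokens and add the task prefix.

    Illustrative helper; lmqg may construct inputs slightly differently.
    """
    if answer not in paragraph:
        raise ValueError("answer must be a span of the paragraph")
    # Highlight only the first occurrence of the answer span
    highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

text = make_qg_input(
    "del Ministerio de Desarrollo Urbano , Gobierno de la India.",
    "Ministerio de Desarrollo Urbano",
)
# text matches the first widget example and can be passed to the pipeline above
```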

## Evaluation Metrics

### Metrics

| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
|:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
| [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | default | 0.10153670508318442 | 0.25453014251607653 | 0.23431011857989445 | 0.8447369242462315 | 0.596184026986908 | [link](https://huggingface.co/lmqg/mt5-base-esquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json) |

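The table reports raw metric fractions; scores such as BLEU4 are conventionally quoted as percentages (this conversion is a common convention, not something the card itself specifies). A small sketch, with the dict simply restating the table values:

```python
# Reported scores, restated from the evaluation table
scores = {
    "BLEU4": 0.10153670508318442,
    "ROUGE-L": 0.25453014251607653,
    "METEOR": 0.23431011857989445,
    "BERTScore": 0.8447369242462315,
    "MoverScore": 0.596184026986908,
}

# Convert each fraction to a percentage rounded to two decimals
for name, value in scores.items():
    print(f"{name}: {100 * value:.2f}")
# BLEU4: 10.15, ROUGE-L: 25.45, METEOR: 23.43, BERTScore: 84.47, MoverScore: 59.62
```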

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_esquad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 4
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-esquad/raw/main/trainer_config.json).
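Since the config is plain JSON, it is easy to inspect programmatically. One detail worth noting: with `batch: 4` and `gradient_accumulation_steps: 16`, the optimizer sees an effective batch of 64 examples per update. A sketch, inlining the trainer_config.json contents shown in this commit:

```python
import json

# The trainer_config.json contents from this commit, inlined for a
# self-contained example (normally you would read the file itself)
config = json.loads(
    '{"dataset_path": "lmqg/qg_esquad", "dataset_name": "default", '
    '"input_types": ["paragraph_answer"], "output_types": ["question"], '
    '"prefix_types": null, "model": "google/mt5-base", "max_length": 512, '
    '"max_length_output": 32, "epoch": 10, "batch": 4, "lr": 0.0005, '
    '"fp16": false, "random_seed": 1, "gradient_accumulation_steps": 16, '
    '"label_smoothing": 0.15}'
)

# Effective batch size per optimizer update
effective_batch = config["batch"] * config["gradient_accumulation_steps"]
print(effective_batch)  # 64
```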

## Citation
TBA
eval/{metric.first.answer.paragraph_answer.question.asahi417_qg_esquad.default.json → metric.first.answer.paragraph_answer.question.lmqg_qg_esquad.default.json} RENAMED
File without changes
eval/{metric.first.sentence.paragraph_answer.question.asahi417_qg_esquad.default.json → metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json} RENAMED
File without changes
eval/{samples.test.hyp.paragraph_answer.question.asahi417_qg_esquad.default.txt → samples.test.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt} RENAMED
File without changes
eval/{samples.validation.hyp.paragraph_answer.question.asahi417_qg_esquad.default.txt → samples.validation.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt} RENAMED
File without changes
trainer_config.json CHANGED
@@ -1 +1 @@
- {"dataset_path": "asahi417/qg_esquad", "dataset_name": "default", "input_types": ["paragraph_answer"], "output_types": ["question"], "prefix_types": null, "model": "google/mt5-base", "max_length": 512, "max_length_output": 32, "epoch": 10, "batch": 4, "lr": 0.0005, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 16, "label_smoothing": 0.15}
+ {"dataset_path": "lmqg/qg_esquad", "dataset_name": "default", "input_types": ["paragraph_answer"], "output_types": ["question"], "prefix_types": null, "model": "google/mt5-base", "max_length": 512, "max_length_output": 32, "epoch": 10, "batch": 4, "lr": 0.0005, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 16, "label_smoothing": 0.15}