asahi417 committed on
Commit 522248c
1 Parent(s): ec066d6

model update

README.md CHANGED
@@ -7,14 +7,14 @@ metrics:
 - rouge-l
 - bertscore
 - moverscore
-language: en
+language: fr
 datasets:
 - lmqg/qag_frquad
 pipeline_tag: text2text-generation
 tags:
 - questions and answers generation
 widget:
-- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
+- text: "Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc."
   example_title: "Questions & Answers Generation Example 1"
 model-index:
 - name: lmqg/mt5-small-frquad-qag
@@ -35,31 +35,31 @@ model-index:
   value: 15.33
 - name: METEOR (Question & Answer Generation)
   type: meteor_question_answer_generation
-  value: 16.16
+  value: 16.12
 - name: BERTScore (Question & Answer Generation)
   type: bertscore_question_answer_generation
-  value: 81.38
+  value: 64.81
 - name: MoverScore (Question & Answer Generation)
   type: moverscore_question_answer_generation
-  value: 54.76
+  value: 50.01
 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
   type: qa_aligned_f1_score_bertscore_question_answer_generation
-  value: 85.24
+  value: 77.23
 - name: QAAlignedRecall-BERTScore (Question & Answer Generation)
   type: qa_aligned_recall_bertscore_question_answer_generation
-  value: 85.53
+  value: 77.74
 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
   type: qa_aligned_precision_bertscore_question_answer_generation
-  value: 84.97
+  value: 76.76
 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
   type: qa_aligned_f1_score_moverscore_question_answer_generation
-  value: 61.36
+  value: 52.36
 - name: QAAlignedRecall-MoverScore (Question & Answer Generation)
   type: qa_aligned_recall_moverscore_question_answer_generation
-  value: 61.67
+  value: 52.54
 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
   type: qa_aligned_precision_moverscore_question_answer_generation
-  value: 61.08
+  value: 52.19
 ---
 
 # Model Card of `lmqg/mt5-small-frquad-qag`
@@ -68,7 +68,7 @@ This model is fine-tuned version of [google/mt5-small](https://huggingface.co/go
 
 ### Overview
 - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
-- **Language:** en
+- **Language:** fr
 - **Training data:** [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
 - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
@@ -80,10 +80,10 @@ This model is fine-tuned version of [google/mt5-small](https://huggingface.co/go
 from lmqg import TransformersQG
 
 # initialize model
-model = TransformersQG(language="en", model="lmqg/mt5-small-frquad-qag")
+model = TransformersQG(language="fr", model="lmqg/mt5-small-frquad-qag")
 
 # model prediction
-question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
+question_answer_pairs = model.generate_qa("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
 
 ```
@@ -92,7 +92,7 @@ question_answer_pairs = model.generate_qa("William Turner was an English painter
 from transformers import pipeline
 
 pipe = pipeline("text2text-generation", "lmqg/mt5-small-frquad-qag")
-output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
+output = pipe("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
 
 ```
@@ -103,19 +103,19 @@ output = pipe("Beyonce further expanded her acting career, starring as blues sin
 
 |                                 |   Score | Type    | Dataset                                                            |
 |:--------------------------------|--------:|:--------|:-------------------------------------------------------------------|
-| BERTScore                       |   81.38 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| Bleu_1                          |    9.69 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| Bleu_2                          |    4.74 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| BERTScore                       |   64.81 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| Bleu_1                          |    9.63 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| Bleu_2                          |    4.73 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
 | Bleu_3                          |    2.64 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
 | Bleu_4                          |    1.54 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| METEOR                          |   16.16 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| MoverScore                      |   54.76 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| QAAlignedF1Score (BERTScore)    |   85.24 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| QAAlignedF1Score (MoverScore)   |   61.36 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| QAAlignedPrecision (BERTScore)  |   84.97 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| QAAlignedPrecision (MoverScore) |   61.08 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| QAAlignedRecall (BERTScore)     |   85.53 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
-| QAAlignedRecall (MoverScore)    |   61.67 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| METEOR                          |   16.12 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| MoverScore                      |   50.01 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| QAAlignedF1Score (BERTScore)    |   77.23 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| QAAlignedF1Score (MoverScore)   |   52.36 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| QAAlignedPrecision (BERTScore)  |   76.76 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| QAAlignedPrecision (MoverScore) |   52.19 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| QAAlignedRecall (BERTScore)     |   77.74 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
+| QAAlignedRecall (MoverScore)    |   52.54 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
 | ROUGE_L                         |   15.33 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |

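The README diff above updates the `pipeline` usage example: the raw `text2text-generation` output is a single string that still has to be split into question–answer pairs. A minimal parsing sketch, assuming the model serialises pairs as `question: ..., answer: ...` segments joined by `" | "` (this format is an assumption about lmqg-style QAG models; verify it against the model's actual output):

```python
# Hypothetical parser for an lmqg-style QAG output string.
# ASSUMPTION: pairs look like "question: Q, answer: A" joined by " | ".
def parse_qa_pairs(raw: str):
    pairs = []
    for chunk in raw.split(" | "):
        if "question:" in chunk and ", answer:" in chunk:
            q_part, a_part = chunk.split(", answer:", 1)
            question = q_part.split("question:", 1)[1].strip()
            pairs.append((question, a_part.strip()))
    return pairs

# Illustrative (made-up) output string:
raw = "question: Qui est Jupiter?, answer: voix du tonnerre | question: Que signifie Maker?, answer: Créateur"
print(parse_qa_pairs(raw))
```

The `TransformersQG.generate_qa` helper shown in the README performs this splitting internally; the sketch is only needed when calling the model through a bare `pipeline`.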
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "lmqg_output/mt5-small-frquad-qag/best_model",
+  "_name_or_path": "lmqg_output/mt5-small-frquad-qag/model_mzgdpa/epoch_5",
   "add_prefix": false,
   "architectures": [
     "MT5ForConditionalGeneration"
eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_frquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.25859671539704127, "Bleu_2": 0.13949939339072215, "Bleu_3": 0.07485661291712845, "Bleu_4": 0.04638711935920634, "METEOR": 0.17697673583128798, "ROUGE_L": 0.22388752550998495, "BERTScore": 0.8470916992519051, "MoverScore": 0.577382649328321, "QAAlignedF1Score (BERTScore)": 0.8621119608089612, "QAAlignedRecall (BERTScore)": 0.8532999200520978, "QAAlignedPrecision (BERTScore)": 0.8713582639828221, "QAAlignedF1Score (MoverScore)": 0.6298209140722376, "QAAlignedRecall (MoverScore)": 0.6194297118727317, "QAAlignedPrecision (MoverScore)": 0.6410915896724291}, "test": {"Bleu_1": 0.09685450309508968, "Bleu_2": 0.04741444591435769, "Bleu_3": 0.026373005579639383, "Bleu_4": 0.015386846865122246, "METEOR": 0.16155892907631575, "ROUGE_L": 0.15327510544926737, "BERTScore": 0.8138035876975421, "MoverScore": 0.5476106864070216, "QAAlignedF1Score (BERTScore)": 0.8524109194415295, "QAAlignedRecall (BERTScore)": 0.855268605848854, "QAAlignedPrecision (BERTScore)": 0.8497056542294297, "QAAlignedF1Score (MoverScore)": 0.6136136229281918, "QAAlignedRecall (MoverScore)": 0.6167186901698161, "QAAlignedPrecision (MoverScore)": 0.6107795531618404}}
+{"validation": {"Bleu_1": 0.2564384685675875, "Bleu_2": 0.13902990103351057, "Bleu_3": 0.07604461713510292, "Bleu_4": 0.048106284604614094, "METEOR": 0.17653494749823867, "ROUGE_L": 0.2232438933303529, "BERTScore": 0.7172536270615334, "MoverScore": 0.5143960911054434, "QAAlignedF1Score (BERTScore)": 0.7857067185106722, "QAAlignedRecall (BERTScore)": 0.7718289041633918, "QAAlignedPrecision (BERTScore)": 0.8009370848859868, "QAAlignedF1Score (MoverScore)": 0.537190892301648, "QAAlignedRecall (MoverScore)": 0.5271966829181792, "QAAlignedPrecision (MoverScore)": 0.5480096039350669}, "test": {"Bleu_1": 0.09633246448107434, "Bleu_2": 0.04727655391960404, "Bleu_3": 0.026361268946164768, "Bleu_4": 0.015435301672827135, "METEOR": 0.16121154834362886, "ROUGE_L": 0.15333781495958926, "BERTScore": 0.6480726455069834, "MoverScore": 0.5000641898713922, "QAAlignedF1Score (BERTScore)": 0.7722652721437624, "QAAlignedRecall (BERTScore)": 0.7774018039561464, "QAAlignedPrecision (BERTScore)": 0.7676207746892701, "QAAlignedF1Score (MoverScore)": 0.5235572755796504, "QAAlignedRecall (MoverScore)": 0.5254281358359643, "QAAlignedPrecision (MoverScore)": 0.5218568654099053}}
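The model-index and metrics-table scores in the README are the `test`-split fractions from this metric file, scaled to percentages and rounded to two decimals. A quick sketch with values copied from the updated JSON:

```python
# Test-split fractions copied from the new eval/metric...json above.
test_scores = {
    "BERTScore": 0.6480726455069834,
    "MoverScore": 0.5000641898713922,
    "METEOR": 0.16121154834362886,
    "ROUGE_L": 0.15333781495958926,
}

def as_percent(fraction: float) -> float:
    """Scale a 0-1 metric fraction to the percentage shown in the README."""
    return round(100 * fraction, 2)

readme_values = {name: as_percent(v) for name, v in test_scores.items()}
print(readme_values)  # {'BERTScore': 64.81, 'MoverScore': 50.01, 'METEOR': 16.12, 'ROUGE_L': 15.33}
```

This confirms the README's new numbers (64.81, 50.01, 16.12, 15.33) were regenerated from this file rather than edited by hand.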
eval/samples.test.hyp.paragraph.questions_answers.lmqg_qag_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph.questions_answers.lmqg_qag_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6534fd8ae07d6a16b9af184cc028218cceebef32c003865b77797de4a5e1cf15
-size 1200724741
+oid sha256:d22f1d493c5ce9892df6a599cb43e073e551b25738ee087aba5a8ce4042819a8
+size 1200727429
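What the diff actually changes here is a Git LFS pointer file, not the weights themselves: the repository stores a small text stub with `version`, `oid`, and `size` key-value lines per the git-lfs spec. A minimal sketch for reading such a pointer:

```python
# Parse a Git LFS pointer file: one "key value" pair per line.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer content from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:d22f1d493c5ce9892df6a599cb43e073e551b25738ee087aba5a8ce4042819a8
size 1200727429
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 1200727429
```

The `oid` is the SHA-256 of the real `pytorch_model.bin`, which git-lfs fetches from the LFS store on checkout; the ~1.2 GB `size` matches an mt5-small checkpoint.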
tokenizer_config.json CHANGED
@@ -2,7 +2,7 @@
   "additional_special_tokens": null,
   "eos_token": "</s>",
   "extra_ids": 0,
-  "name_or_path": "lmqg_output/mt5-small-frquad-qag/best_model",
+  "name_or_path": "lmqg_output/mt5-small-frquad-qag/model_mzgdpa/epoch_5",
   "pad_token": "<pad>",
   "sp_model_kwargs": {},
   "special_tokens_map_file": "/home/c.c2042013/.cache/huggingface/transformers/685ac0ca8568ec593a48b61b0a3c272beee9bc194a3c7241d15dcadb5f875e53.f76030f3ec1b96a8199b2593390c610e76ca8028ef3d24680000619ffb646276",