asahi417 committed
Commit 7ec4c6f
1 Parent(s): 68df59c

model update

README.md CHANGED
@@ -14,7 +14,7 @@ pipeline_tag: text2text-generation
  tags:
  - questions and answers generation
  widget:
- - text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
+ - text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
  example_title: "Questions & Answers Generation Example 1"
  model-index:
  - name: lmqg/t5-base-tweetqa-qag
@@ -29,25 +29,25 @@ model-index:
  metrics:
  - name: BLEU4
    type: bleu4
-   value: 0.13263946554139405
+   value: 0.12931835496445465
  - name: ROUGE-L
    type: rouge-l
-   value: 0.36935780155247455
+   value: 0.36535644337488943
  - name: METEOR
    type: meteor
-   value: 0.3081166404528711
+   value: 0.30354623890919497
  - name: BERTScore
    type: bertscore
-   value: 0.9085398159959508
+   value: 0.9055079668504705
  - name: MoverScore
    type: moverscore
-   value: 0.6231917023300243
+   value: 0.6182219856624396
  - name: QAAlignedF1Score (BERTScore)
    type: qa_aligned_f1_score_bertscore
-   value: 0.9251122858041756
+   value: 0.9237100142709498
  - name: QAAlignedF1Score (MoverScore)
    type: qa_aligned_f1_score_moverscore
-   value: 0.6503876079996429
+   value: 0.646258531258488
  ---

  # Model Card of `lmqg/t5-base-tweetqa-qag`
@@ -89,7 +89,7 @@ from lmqg import TransformersQG
  # initialize model
  model = TransformersQG(language='en', model='lmqg/t5-base-tweetqa-qag')
  # model prediction
- question = model.generate_qa(list_context=["William Turner was an English painter who specialised in watercolour landscapes"], list_answer=["William Turner"])
+ question = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")

  ```

@@ -100,7 +100,7 @@ from transformers import pipeline
  # initialize model
  pipe = pipeline("text2text-generation", 'lmqg/t5-base-tweetqa-qag')
  # question generation
- question = pipe('generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.')
+ question = pipe('generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.')

  ```

@@ -111,14 +111,14 @@ question = pipe('generate question and answer: Beyonce further expanded her act

  | Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
  |:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
- | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | default | 0.133 | 0.369 | 0.308 | 0.909 | 0.623 | [link](https://huggingface.co/lmqg/t5-base-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) |
+ | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | default | 0.129 | 0.365 | 0.304 | 0.906 | 0.618 | [link](https://huggingface.co/lmqg/t5-base-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) |


  ### Metrics (QAG)

  | Dataset | Type | QA Aligned F1 Score (BERTScore) | QA Aligned F1 Score (MoverScore) | Link |
  |:--------|:-----|--------------------------------:|---------------------------------:|-----:|
- | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | default | 0.925 | 0.65 | [link](https://huggingface.co/lmqg/t5-base-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) |
+ | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | default | 0.924 | 0.646 | [link](https://huggingface.co/lmqg/t5-base-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) |


@@ -134,13 +134,13 @@ The following hyperparameters were used during fine-tuning:
  - model: t5-base
  - max_length: 256
  - max_length_output: 128
- - epoch: 14
+ - epoch: 15
  - batch: 32
  - lr: 0.0001
  - fp16: False
  - random_seed: 1
  - gradient_accumulation_steps: 2
- - label_smoothing: 0.0
+ - label_smoothing: 0.15

  The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-tweetqa-qag/raw/main/trainer_config.json).
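The updated README snippet calls `generate_qa` with a plain string instead of the old `list_context`/`list_answer` keywords. A minimal end-to-end sketch of that call, assuming `lmqg` is installed and that `generate_qa` returns a list of (question, answer) tuples as in the lmqg documentation:

```python
from lmqg import TransformersQG

# Load the question & answer generation model from the Hub.
model = TransformersQG(language='en', model='lmqg/t5-base-tweetqa-qag')

# Assumption: generate_qa returns a list of (question, answer) tuples
# for a single input paragraph.
qa_pairs = model.generate_qa(
    "William Turner was an English painter who specialised in watercolour landscapes"
)
for question, answer in qa_pairs:
    print(f"Q: {question}\nA: {answer}")
```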
 
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "lmqg_output/t5_base_tweetqa/best_model",
+ "_name_or_path": "lmqg_output/t5_base_tweetqa/model_eszyci/epoch_10",
  "add_prefix": true,
  "architectures": [
  "T5ForConditionalGeneration"
eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 0.3738463313336194, "Bleu_2": 0.25187710741309555, "Bleu_3": 0.17301367663064196, "Bleu_4": 0.12056853153991282, "METEOR": 0.3347661830653345, "ROUGE_L": 0.3823696390369866, "BERTScore": 0.9027718756456522, "MoverScore": 0.6215475528953677, "QAAlignedF1Score (BERTScore)": 0.9188373178327536, "QAAlignedF1Score (MoverScore)": 0.6489597272156573}, "test": {"Bleu_1": 0.4051769369789305, "Bleu_2": 0.2754558448773594, "Bleu_3": 0.18928799859905374, "Bleu_4": 0.13263946554139405, "METEOR": 0.3081166404528711, "ROUGE_L": 0.36935780155247455, "BERTScore": 0.9085398159959508, "MoverScore": 0.6231917023300243, "QAAlignedF1Score (BERTScore)": 0.9251122858041756, "QAAlignedF1Score (MoverScore)": 0.6503876079996429}}
+ {"validation": {"Bleu_1": 0.3745080763582746, "Bleu_2": 0.2533067107107663, "Bleu_3": 0.17606641193871814, "Bleu_4": 0.12466416914978967, "METEOR": 0.33022954796005266, "ROUGE_L": 0.3766922696498115, "BERTScore": 0.9021096628280691, "MoverScore": 0.6197868517703407, "QAAlignedF1Score (BERTScore)": 0.919022094072905, "QAAlignedF1Score (MoverScore)": 0.6496686673949033}, "test": {"Bleu_1": 0.3929101495604297, "Bleu_2": 0.2668844135148667, "Bleu_3": 0.1839940844567295, "Bleu_4": 0.12931835496445465, "METEOR": 0.30354623890919497, "ROUGE_L": 0.36535644337488943, "BERTScore": 0.9055079668504705, "MoverScore": 0.6182219856624396, "QAAlignedF1Score (BERTScore)": 0.9237100142709498, "QAAlignedF1Score (MoverScore)": 0.646258531258488}}
eval/samples.test.hyp.paragraph.questions_answers.lmqg_qag_tweetqa.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph.questions_answers.lmqg_qag_tweetqa.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6de481d78e08319db5c05315e56fe506310cc08fb807168fc12425f511319949
- size 891614207
+ oid sha256:494cd04d8952275505d4e477022569e499b9fb9d27d4f5c5c6cb7e707df4590d
+ size 891617855
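`pytorch_model.bin` is tracked with Git LFS, so the diff shows only the pointer (sha256 oid and byte size), not the weights themselves. A sketch that verifies a downloaded checkpoint against the new pointer, assuming `huggingface_hub` is installed:

```python
import hashlib
import os
from huggingface_hub import hf_hub_download

# Download (or reuse a cached copy of) the checkpoint.
path = hf_hub_download("lmqg/t5-base-tweetqa-qag", "pytorch_model.bin")

# Hash in chunks to avoid loading ~890 MB into memory at once.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print(os.path.getsize(path) == 891617855)
print(sha256.hexdigest() == "494cd04d8952275505d4e477022569e499b9fb9d27d4f5c5c6cb7e707df4590d")
```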
tokenizer_config.json CHANGED
@@ -104,7 +104,7 @@
  "eos_token": "</s>",
  "extra_ids": 100,
  "model_max_length": 512,
- "name_or_path": "lmqg_output/t5_base_tweetqa/best_model",
+ "name_or_path": "lmqg_output/t5_base_tweetqa/model_eszyci/epoch_10",
  "pad_token": "<pad>",
  "special_tokens_map_file": null,
  "tokenizer_class": "T5Tokenizer",
trainer_config.json CHANGED
@@ -1 +1 @@
- {"dataset_path": "lmqg/qag_tweetqa", "dataset_name": "default", "input_types": ["paragraph"], "output_types": ["questions_answers"], "prefix_types": ["qag"], "model": "t5-base", "max_length": 256, "max_length_output": 128, "epoch": 14, "batch": 32, "lr": 0.0001, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 2, "label_smoothing": 0.0}
+ {"dataset_path": "lmqg/qag_tweetqa", "dataset_name": "default", "input_types": ["paragraph"], "output_types": ["questions_answers"], "prefix_types": ["qag"], "model": "t5-base", "max_length": 256, "max_length_output": 128, "epoch": 15, "batch": 32, "lr": 0.0001, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 2, "label_smoothing": 0.15}