asahi417 committed on
Commit 348a0f1 • 1 Parent(s): bd49219

model update

Files changed (1): README.md (+6 -6)
README.md CHANGED
@@ -19,7 +19,7 @@ widget:
  - text: "question: who created the post as we know it today?, context: 'So much of The Post is Ben,' Mrs. Graham said in 1994, three years after Bradlee retired as editor. 'He created it as we know it today.'— Ed O'Keefe (@edatpost) October 21, 2014"
    example_title: "Question Answering Example 2"
  model-index:
- - name: lmqg/bart-base-tweetqa-question-answering
+ - name: lmqg/bart-base-tweetqa-qa
    results:
    - task:
      name: Text2text Generation
@@ -52,7 +52,7 @@ model-index:
        value: 48.38
  ---
 
- # Model Card of `lmqg/bart-base-tweetqa-question-answering`
+ # Model Card of `lmqg/bart-base-tweetqa-qa`
  This model is fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for question answering task on the [lmqg/qg_tweetqa](https://huggingface.co/datasets/lmqg/qg_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
@@ -70,7 +70,7 @@ This model is fine-tuned version of [facebook/bart-base](https://huggingface.co/
  from lmqg import TransformersQG
 
  # initialize model
- model = TransformersQG(language="en", model="lmqg/bart-base-tweetqa-question-answering")
+ model = TransformersQG(language="en", model="lmqg/bart-base-tweetqa-qa")
 
  # model prediction
  answers = model.answer_q(list_question="What is a person called is practicing heresy?", list_context=" Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
@@ -81,7 +81,7 @@ answers = model.answer_q(list_question="What is a person called is practicing he
  ```python
  from transformers import pipeline
 
- pipe = pipeline("text2text-generation", "lmqg/bart-base-tweetqa-question-answering")
+ pipe = pipeline("text2text-generation", "lmqg/bart-base-tweetqa-qa")
  output = pipe("question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
 
  ```
@@ -89,7 +89,7 @@ output = pipe("question: What is a person called is practicing heresy?, context:
  ## Evaluation
 
 
- - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/bart-base-tweetqa-question-answering/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_tweetqa.default.json)
+ - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/bart-base-tweetqa-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_tweetqa.default.json)
 
  |                  |   Score | Type    | Dataset                                                              |
  |:-----------------|--------:|:--------|:-------------------------------------------------------------------|
@@ -125,7 +125,7 @@ The following hyperparameters were used during fine-tuning:
  - gradient_accumulation_steps: 2
  - label_smoothing: 0.15
 
- The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-tweetqa-question-answering/raw/main/trainer_config.json).
+ The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-tweetqa-qa/raw/main/trainer_config.json).
 
  ## Citation
  ```
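Two of the renamed URLs in this diff point at files the card only links to: the raw evaluation metrics and the fine-tuning configuration. As a minimal sketch of how those could be pulled from the renamed repo with `huggingface_hub` (assuming both links resolve to JSON objects; the `load_repo_json` helper is illustrative and not part of `lmqg`):

```python
import json

from huggingface_hub import hf_hub_download

# Renamed repo id used throughout the updated card.
REPO_ID = "lmqg/bart-base-tweetqa-qa"


def load_repo_json(filename: str) -> dict:
    """Download a JSON file from the model repo and parse it."""
    path = hf_hub_download(repo_id=REPO_ID, filename=filename)
    with open(path) as f:
        return json.load(f)


# Fine-tuning configuration linked in the last hunk of the diff.
trainer_config = load_repo_json("trainer_config.json")

# Raw question-answering metrics linked in the "Evaluation" hunk.
metrics = load_repo_json(
    "eval/metric.first.answer.paragraph_question.answer.lmqg_qg_tweetqa.default.json"
)

print("fine-tuning hyperparameters:")
for key, value in sorted(trainer_config.items()):
    print(f"  {key}: {value}")

print("question-answering metrics:")
for key, value in sorted(metrics.items()):
    print(f"  {key}: {value}")
```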