---
language: en
tags:
- summarization
- bart
- medical question answering
- question understanding
- consumer health question
- prompt engineering
license: apache-2.0
datasets:
- bigbio/meqsum
widget:
- text: >-
    SUBJECT: high inner eye pressure above 21 possible glaucoma
    MESSAGE: have seen inner eye pressure increase as I have begin taking
    Rizatriptan. I understand the med narrows blood vessels. Can this med.
    cause or effect the closed or wide angle issues with the eyelense/glacoma.
model-index:
- name: medqsum-bart-large-xsum-meqsum
  results: []
task:
- type: summarization
  name: Summarization
metrics:
- rouge
pipeline_tag: summarization
library_name: transformers
---
# BART XSum model fine-tuned on MeQSum
`medqsum-bart-large-xsum-meqsum` is the best fine-tuned model from the paper *Enhancing Large Language Models' Utility for Medical Question-Answering: A Patient Health Question Summarization Approach*, which introduces a solution for getting the most out of LLMs when answering health-related questions. We address the challenge of crafting accurate prompts by summarizing consumer health questions (CHQs) into clear and concise medical questions. Our approach involves fine-tuning Transformer-based models, including Flan-T5 in resource-constrained environments, on three medical question summarization datasets.
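The sketch below illustrates this two-step workflow end to end. It is not the authors' code: the choice of `google/flan-t5-base` as the answering model and the prompt wording are placeholders for illustration.

```python
from transformers import pipeline

# Step 1: summarize a noisy consumer health question (CHQ)
# into a concise medical question.
summarizer = pipeline("summarization", model="NouRed/medqsum-bart-large-xsum-meqsum")
chq = "SUBJECT: possible glaucoma MESSAGE: have seen inner eye pressure increase ..."
concise_question = summarizer(chq)[0]["summary_text"]

# Step 2: use the concise question as a cleaner prompt for an answering LLM.
# google/flan-t5-base is a stand-in choice here, not the paper's setup.
answerer = pipeline("text2text-generation", model="google/flan-t5-base")
answer = answerer(f"Answer the medical question: {concise_question}")[0]["generated_text"]
print(answer)
```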
## Hyperparameters
```json
{
  "dataset_name": "MeQSum",
  "learning_rate": 3e-05,
  "model_name_or_path": "facebook/bart-large-xsum",
  "num_train_epochs": 4,
  "per_device_eval_batch_size": 4,
  "per_device_train_batch_size": 4,
  "predict_with_generate": true,
  "seed": 7
}
```
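For context, here is a minimal sketch of how these values map onto the standard `transformers` `Seq2SeqTrainingArguments`; the `output_dir` and the surrounding data pipeline are assumptions, not the paper's exact training script.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
)

model_name = "facebook/bart-large-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hyperparameters taken from the JSON above; output_dir is an assumed placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="medqsum-bart-large-xsum-meqsum",
    learning_rate=3e-5,
    num_train_epochs=4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    predict_with_generate=True,
    seed=7,
)
```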
## Usage
```python
from transformers import pipeline

# Load the fine-tuned summarization model from the Hugging Face Hub
summarizer = pipeline("summarization", model="NouRed/medqsum-bart-large-xsum-meqsum")

# A raw consumer health question (CHQ), kept verbatim with its typos
chq = '''SUBJECT: high inner eye pressure above 21 possible glaucoma
MESSAGE: have seen inner eye pressure increase as I have begin taking
Rizatriptan. I understand the med narrows blood vessels. Can this med.
cause or effect the closed or wide angle issues with the eyelense/glacoma.
'''

summarizer(chq)
```
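The call returns a list with one dictionary per input; the generated summary is stored under the `summary_text` key:

```python
print(summarizer(chq)[0]["summary_text"])
```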
## Results
| Metric         | Value |
|----------------|-------|
| eval_rouge1    | 54.32 |
| eval_rouge2    | 38.08 |
| eval_rougeL    | 51.98 |
| eval_rougeLsum | 51.99 |
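Comparable ROUGE numbers can be computed with the Hugging Face `evaluate` library. The sketch below uses a made-up prediction/reference pair purely for illustration; note that `evaluate` typically reports scores in the 0–1 range, while the table above is scaled to 0–100.

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Does Rizatriptan cause glaucoma?"]  # hypothetical model output
references = ["Can Rizatriptan cause glaucoma?"]    # hypothetical gold summary
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```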