Automatic Personalized Impression Generation for PET Reports Using Large Language Models
Authored by: Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw
Model Description
This is the domain-adapted BARTScore for evaluating the quality of PET impressions.
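Conceptually, BARTScore scores a generated impression by the likelihood that the underlying BART model assigns to it given the source text; higher (less negative) scores indicate better agreement. The sketch below only illustrates that idea, assuming the domain-adapted weights are stored locally in `checkpoints/bart-large`. The function name `bart_score` and the `max_length` value are illustrative rather than the repository's API; use `compute_metrics_text_generation.py` (see Usage) for the actual metric computation.

```python
# Minimal sketch of BARTScore-style scoring with the domain-adapted weights.
# Assumes the weights (and tokenizer files) live in checkpoints/bart-large.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

def bart_score(source: str, target: str, model_dir: str = "checkpoints/bart-large") -> float:
    """Return the average token log-likelihood of `target` given `source`."""
    tokenizer = BartTokenizer.from_pretrained(model_dir)
    model = BartForConditionalGeneration.from_pretrained(model_dir).eval()

    src = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
    tgt = tokenizer(target, return_tensors="pt", truncation=True, max_length=1024)

    with torch.no_grad():
        # The forward pass with `labels` returns the cross-entropy over the
        # target tokens; negating it gives the average log-likelihood.
        loss = model(
            input_ids=src["input_ids"],
            attention_mask=src["attention_mask"],
            labels=tgt["input_ids"],
        ).loss
    return -loss.item()

# Example: score a generated impression against the physician's impression.
# print(bart_score(generated_impression, clinician_impression))
```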
To use our domain-adapted, text-generation-based evaluation metrics, follow the steps in the Usage section below.
Usage
Clone this GitHub repository into a local folder:

```bash
git clone https://github.com/xtie97/PET-Report-Summarization.git
```

Go to the folder containing the code for computing BARTScore and create a new folder called `checkpoints`:

```bash
cd ./PET-Report-Summarization/evaluation_metrics/metrics/BARTScore
mkdir checkpoints
mkdir checkpoints/bart-large
```

Download the model weights and put them in the folder `checkpoints/bart-large` (a scripted download sketch follows these steps). Then run the code for computing the text-generation-based metrics:

```bash
python compute_metrics_text_generation.py
```
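If you prefer to script the download of the model weights into `checkpoints/bart-large`, a sketch using the `huggingface_hub` library could look like the following. The repository id below is a placeholder; replace it with this model's actual Hub id.

```python
# Hedged sketch: fetch the model weights into checkpoints/bart-large with
# huggingface_hub. "your-username/your-bartscore-repo" is a placeholder,
# not this model's actual repository id.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="your-username/your-bartscore-repo",  # placeholder repo id
    local_dir="checkpoints/bart-large",           # folder created above
)
```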
Additional Resources
- Codebase for evaluation metrics: https://github.com/xtie97/PET-Report-Summarization