title-generator
---
license: apache-2.0
language:
  - en
metrics:
  - rouge
pipeline_tag: summarization
tags:
  - t5
  - t5-small
  - summarization
  - medical-research
---

Model Card for title-generator

This model summarizes text paragraphs from medical research journals into brief titles.


Model Details

Model Description

This is a generative text model that summarizes long abstracts from medical journals into one-liners. These one-liners can be used as titles for the articles.

  • Developed by: Tushar Joshi
  • Shared by [optional]: Tushar Joshi
  • Model type: t5-small
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model [optional]: t5-small baseline

Uses

  • As a text summarizer for medical journal paragraphs
  • As a tunable language model for other medical tasks

Direct Use

  • As a text summarizer for medical abstracts and journals.

Out-of-Scope Use

Should not be used as a summarizer for very long inputs: the maximum input size is 1024 tokens.

Bias, Risks, and Limitations

  • Maximum input length of 1024 tokens (longer inputs are truncated)
  • Maximum output length of 24 tokens
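Because inputs beyond the 1024-token limit are truncated, longer documents can be split into overlapping chunks and summarized piece by piece. Below is a minimal sketch of such a splitter; `chunk_text` is a hypothetical helper (not part of this model), and it uses whitespace words as a rough proxy for model tokens. The real T5 tokenizer usually produces more tokens than words, so passing a smaller limit (e.g. 700) leaves a safety margin.

```python
def chunk_text(text, max_tokens=1024, overlap=64):
    """Split text into chunks of at most max_tokens words.

    Whitespace words are only a rough proxy for T5 tokens; use a
    lower max_tokens in practice to stay under the model's limit.
    Consecutive chunks share `overlap` words of context.
    """
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        end = min(start + max_tokens, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        start = end - overlap  # step back to keep some shared context
    return chunks
```

Each chunk can then be summarized separately and the candidate titles compared or merged downstream.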

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import pipeline

text = """Text that needs to be summarized"""

# truncation=True keeps inputs within the 1024-token limit;
# max_length=24 matches the model's maximum output length.
summarizer = pipeline("summarization", model="TusharJoshi89/title-generator")
summary = summarizer(text, max_length=24, truncation=True)[0]["summary_text"]

print(summary)
```

Training Details

Training Data

The training data is internally curated and cannot be exposed.

Training Procedure

[More Information Needed]

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

Training was done on 2× NVIDIA T4 GPUs and took 4:09:47 to complete. A dataset of 10,000 examples was used to train the generative model.
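Those figures give a rough sense of training throughput. A back-of-the-envelope calculation, assuming 20 epochs (an assumption based on the last epoch shown in the Results table):

```python
# Rough training throughput from the run statistics above.
examples = 10_000
epochs = 20                         # assumption: results table ends at epoch 20
seconds = 4 * 3600 + 9 * 60 + 47    # wall-clock time 4:09:47

throughput = examples * epochs / seconds
print(f"~{throughput:.1f} examples/s")  # ~13.3 examples/s across both GPUs
```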

Evaluation

The quality of summarization was tested on 5,000 medical journal articles published over the last 20 years. The medical journal data was scraped from various sources.

Testing Data, Factors & Metrics

Test Data Size: 5000 examples

Testing Data

The testing data is internally generated and curated.

Factors

[More Information Needed]

Metrics

The model was evaluated on ROUGE metrics; the baseline results achieved are below.

Results

| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|-------|---------------|-----------------|--------|--------|--------|-----------|---------|
| 18    | 2.442800      | 2.375408        | 0.3137 | 0.1346 | 0.2854 | 0.2854    | 16.4141 |
| 19    | 2.454800      | 2.372553        | 0.3129 | 0.1341 | 0.2849 | 0.2850    | 16.4451 |
| 20    | 2.438900      | 2.372551        | 0.3123 | 0.1340 | 0.2845 | 0.2846    | 16.4355 |
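For intuition, ROUGE-1 F1 measures unigram overlap between a generated title and a reference title. A simplified, self-contained sketch (real ROUGE implementations typically also apply stemming and report ROUGE-2/ROUGE-L, so this is illustrative rather than the exact evaluation code):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference string."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical title pair for illustration:
score = rouge1_f1("effects of aspirin on heart disease",
                  "aspirin effects on heart disease")
print(round(score, 2))  # 0.91
```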

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: GPU T4 x 2
  • Hours used: 4.5
  • Cloud Provider: GCP
  • Compute Region: Ireland
  • Carbon Emitted: Unknown
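The missing carbon figure can be roughly bounded from the hardware and runtime listed above. A back-of-the-envelope sketch: the 70 W figure is the NVIDIA T4's rated board power (actual draw varies), and the grid intensity of 300 gCO2e/kWh is an assumed placeholder, not a measured value for the region.

```python
# Rough energy/carbon estimate for the training run above.
gpus = 2
tdp_watts = 70          # NVIDIA T4 rated board power
hours = 4.5
grid_g_per_kwh = 300    # assumed grid carbon intensity (gCO2e/kWh)

energy_kwh = gpus * tdp_watts * hours / 1000
carbon_g = energy_kwh * grid_g_per_kwh
print(f"{energy_kwh:.2f} kWh, ~{carbon_g:.0f} gCO2e")  # 0.63 kWh, ~189 gCO2e
```

This counts GPU power only; CPU, memory, and data-center overhead (PUE) would push the real number somewhat higher.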

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

Tushar Joshi

Model Card Contact

Tushar Joshi (LinkedIn: https://www.linkedin.com/in/tushar-joshi-816133100/)