---
license:
  - apache-2.0
  - bsd-3-clause
tags:
  - summarization
  - summary
  - booksum
  - long-document
  - long-form
  - tglobal-xl
  - XL
  - 8bit
  - quantized
datasets:
  - kmfoda/booksum
metrics:
  - rouge
inference: false
pipeline_tag: summarization
---

# long-t5-tglobal-xl-16384-book-summary: 8-bit quantized version


This is an 8-bit quantized version of the [pszemraj/long-t5-tglobal-xl-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) model. The model was compressed using `bitsandbytes` and can be loaded with low memory usage.

Refer to the original model card for full details on the model architecture and training process. For more information on loading 8-bit models, see the `transformers` 4.28.0 release notes and the example repository.

- The total size of the model is only ~3.5 GB (vs. ~12 GB for the original)
- Enables low-RAM loading, making it easier to use in memory-limited environments like Colab
- Requires `bitsandbytes`; as far as I know, at the time of writing 8-bit inference only works on GPU (see the quick check after this list)
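
Since 8-bit loading requires a CUDA GPU, a quick availability check before downloading ~3.5 GB of weights can save time. This is a minimal sketch using plain `torch`, not from the original card:

```python
import torch

# bitsandbytes' 8-bit kernels require a CUDA-capable GPU
assert torch.cuda.is_available(), "8-bit loading requires a CUDA GPU"
print(f"Using GPU: {torch.cuda.get_device_name(0)}")
```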

## Basic Usage

To use the model, install or upgrade `transformers`, `accelerate`, and `bitsandbytes`. Make sure you have `transformers>=4.28.0` and `bitsandbytes>0.37.2`:

```bash
pip install -U -q transformers bitsandbytes accelerate
```

Load the model with `AutoTokenizer` and `AutoModelForSeq2SeqLM`:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# the 8-bit quantization config is saved with the checkpoint,
# so no extra arguments are needed when loading
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
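
Once loaded, inference works the same as with the full-precision model. The snippet below is a minimal sketch, not from the original card: the `long_text` placeholder and the generation parameters (`max_new_tokens`, `no_repeat_ngram_size`) are illustrative and worth tuning for your inputs.

```python
import torch

long_text = "Replace me with the document you want summarized."

# tokenize and move the inputs to the model's device (8-bit models live on GPU)
inputs = tokenizer(long_text, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,      # illustrative cap on summary length
        no_repeat_ngram_size=3,  # discourage verbatim repetition
    )

summary = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(summary)
```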

## More information about long-t5-tglobal-xl-16384-book-summary

- This is an 8-bit quantized version of [pszemraj/long-t5-tglobal-xl-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary).
- It generalizes reasonably well to academic and narrative text.
- The XL checkpoint typically generates summaries that are considerably better, from a human-evaluation standpoint, than those of the smaller checkpoints.