flan-t5-small-summary-peft

Model Details

Model Description

A dialogue summarization model built with Parameter-Efficient Fine-Tuning (PEFT) using LoRA adapters on google/flan-t5-small. It improves summary quality while training only 0.16% of the parameters.

  • Developed by: Paul
  • Model type: Seq2Seq LM with LoRA adapters
  • Language(s): English
  • License: Apache 2.0 (inherited from base model)
  • Finetuned from: google/flan-t5-small
  • Training efficiency: 94% parameter reduction vs. full fine-tuning
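
The exact LoRA hyperparameters are not listed in this card. The sketch below shows how an adapter of this kind is typically attached to google/flan-t5-small with the peft library; the rank, alpha, dropout, and target modules are placeholders, not the values used for this adapter.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Hypothetical LoRA configuration -- illustrative values only.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # low-rank dimension (assumption)
    lora_alpha=16,              # scaling factor (assumption)
    lora_dropout=0.05,          # dropout on LoRA layers (assumption)
    target_modules=["q", "v"],  # T5 attention projections (assumption)
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports the trainable-parameter fraction
```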

Model Sources

  • Repository: [Your HF Repo Link]
  • Paper: DialogSum Paper
  • Demo: [Gradio Space Link]

Uses

Direct Use

Optimized for dialogue summarization in settings such as customer service conversations, meeting transcripts, and conversational analysis.

Downstream Use

  • Conversational AI systems
  • Dialogue content indexing
  • Customer interaction analytics

Out-of-Scope Use

  • Medical/legal document analysis
  • Multilingual summarization
  • Real-time low-latency applications

Bias & Limitations

While LoRA fine-tuning maintains a bias profile similar to full fine-tuning (and inherits the biases of the base google/flan-t5-small model), users should:

⚠️ Validate outputs for sensitive domains
⚠️ Test with diverse dialogue samples
⚠️ Monitor for hallucination in summaries (an illustrative check is sketched below)
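
As one illustrative (not prescribed) way to monitor for hallucination, the sketch below flags summary words that never appear in the source dialogue. It is a crude lexical heuristic, not part of this model, and it will produce false positives on legitimate paraphrases.

```python
def unsupported_words(dialogue: str, summary: str) -> list[str]:
    """Words in the summary that never appear in the dialogue (crude hallucination signal)."""
    vocab = {w.strip(".,!?") for w in dialogue.lower().split()}
    return [w for w in summary.split() if w.lower().strip(".,!?") not in vocab]

dialogue = "#Person1#: The meeting was moved to Friday. #Person2#: Got it, Friday at noon."
summary = "The meeting was moved to Monday at noon."
print(unsupported_words(dialogue, summary))  # ['Monday'] -> worth a manual review
```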

Quick Start
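
A minimal sketch of loading the adapter on top of the base model with transformers and peft. The prompt template below is an assumption (the card does not document the exact format used during fine-tuning); adjust it to match your data.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
model = PeftModel.from_pretrained(base_model, "gnokit/flan-t5-small-summary-peft")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

# DialogSum-style dialogue; the exact prompt format is an assumption.
dialogue = (
    "#Person1#: Hi, I'd like to return this jacket, it doesn't fit.\n"
    "#Person2#: No problem. Do you have the receipt?\n"
    "#Person1#: Yes, here it is."
)
prompt = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```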
