flan-t5
A collection of flan-t5 fine-tuned models for practice (2 items).
An enhanced dialogue summarization model built with Parameter-Efficient Fine-Tuning (PEFT), applying LoRA adapters to google/flan-t5-small. It achieves improved summary quality while training only 0.16% of the model's parameters.
Optimized for dialogue summarization tasks such as customer service conversations, meeting transcripts, and conversational analysis.
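The card does not list the adapter hyperparameters, so the values below (rank, alpha, dropout, target modules) are illustrative assumptions; only the base model and the roughly 0.16% trainable-parameter figure come from the description above. A minimal sketch of attaching LoRA adapters with the peft library:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

BASE_MODEL = "google/flan-t5-small"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForSeq2SeqLM.from_pretrained(BASE_MODEL)

# Hyperparameters below are assumptions for illustration; the card only
# states that ~0.16% of parameters are trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the LoRA update
    lora_dropout=0.05,          # dropout applied inside the LoRA layers
    target_modules=["q", "v"],  # T5 attention query/value projections
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # reports trainable vs. total parameters
```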
While LoRA fine-tuning maintains a bias profile similar to full fine-tuning, users should:
⚠️ Validate outputs for sensitive domains
⚠️ Test with diverse dialogue samples (see the evaluation sketch after this list)
⚠️ Monitor for hallucination in summaries
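One simple way to act on these warnings is to spot-check generated summaries against reference summaries with ROUGE. The sketch below uses the evaluate library with placeholder predictions and references; ROUGE does not detect hallucination directly, but unusually low overlap on your own dialogue test set is a useful flag.

```python
import evaluate  # requires: pip install evaluate rouge_score

rouge = evaluate.load("rouge")

# Placeholder data: replace with summaries generated by this model and
# human-written references from a diverse dialogue test set.
predictions = [
    "The customer reports a missing order and the agent ships a replacement.",
]
references = [
    "A customer says their order never arrived; the agent sends a replacement.",
]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # e.g. {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```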
Base model: google/flan-t5-small
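A minimal usage sketch for loading the LoRA adapters on top of the base model and summarizing a dialogue. The adapter repository id and the prompt format are assumptions, since the card does not specify them:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "google/flan-t5-small"
ADAPTER_ID = "your-username/flan-t5-small-dialogue-summary"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForSeq2SeqLM.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

dialogue = (
    "Customer: My order #1234 never arrived.\n"
    "Agent: I'm sorry about that. I've shipped a replacement today."
)
# Instruction-style prompt; the exact prompt used during fine-tuning is not
# documented on this card.
prompt = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```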