
# MeetPEFT: Parameter Efficient Fine-Tuning on LLMs for Long Meeting Summarization

We use quantized LongLoRA to fine-tune a Llama-2-7B model, extending its context length from 4K to 16K tokens.
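
The sketch below illustrates the general recipe of quantized LoRA fine-tuning on Llama-2-7B using `transformers` and `peft`. It is a minimal illustration, not this model's training script: the quantization settings and LoRA hyperparameters are assumptions, and LongLoRA's shifted sparse attention (plus its trainable embedding and normalization layers) is omitted for brevity.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit (QLoRA-style quantization).
# These exact settings are assumptions, not published hyperparameters.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters to the attention projections. LongLoRA additionally
# uses shifted sparse attention during training, which is not shown here.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```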

The model is fine-tuned on the MeetingBank and QMSum meeting-summarization datasets.
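
For inference, the model can be loaded like any causal LM on the Hub. The snippet below is a hedged usage sketch: the repo id `MeetPEFT/MeetPEFT-7B-16K` is taken from this card, but the prompt format and generation settings are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MeetPEFT/MeetPEFT-7B-16K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A long meeting transcript; the extended context supports up to ~16K tokens.
transcript = open("meeting_transcript.txt").read()
prompt = f"Summarize the following meeting:\n{transcript}\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens (the summary).
summary = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary)
```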

