---
datasets:
  - huuuyeah/meetingbank
  - pszemraj/qmsum-cleaned
language:
  - en
pipeline_tag: summarization
tags:
  - Meeting
  - Summarization
---

# MeetPEFT: Parameter-Efficient Fine-Tuning on LLMs for Long Meeting Summarization

We use quantized LongLoRA to fine-tune a Llama-2-7B model, extending its context length from 4k to 16k tokens.
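
As a rough illustration of this recipe (not the actual LongLoRA training code, which also modifies attention during training), the sketch below loads the base model in 4-bit, stretches RoPE from 4k to 16k via position interpolation, and attaches LoRA adapters with PEFT. All hyperparameters and module names here are illustrative assumptions, not the settings used for this checkpoint.

```python
# Sketch only: 4-bit base model + RoPE scaling + LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-7b-hf"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=bnb,
    rope_scaling={"type": "linear", "factor": 4.0},  # 4k -> 16k context
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base)

# Prepare the quantized model for training and add LoRA adapters
# (rank/alpha/target modules are placeholder values).
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```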

The model is fine-tuned on the MeetingBank and QMSum datasets.
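
For inference, a minimal sketch is shown below. The repo id is assumed from the model card title, the prompt format is a plain instruction rather than a documented template, and the code assumes the repository contains merged (non-adapter) weights; adjust as needed.

```python
# Minimal inference sketch; repo id and prompt format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LYQIN/MeetPEFT-7B-16K"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

transcript = "..."  # long meeting transcript, up to ~16k tokens
prompt = f"Summarize the following meeting:\n{transcript}\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
summary = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary)
```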