
DialoGPT-small fine-tuned on the Switchboard Dialogue Act (SwDA) Corpus. The repository with the additionally pre-processed SwDA dialogues (utterances concatenated per speaker, an 80/10/10 train/val/test split, and metadata) is a fork of Nathan Duran's repository. For fine-tuning, the train_dialogpt.ipynb notebook from Nathan Cooper's tutorial was used in Google Colab, with slight modifications.
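
The pre-processing code itself lives in the forked repository; purely as an illustration of the format described above, here is a minimal sketch assuming the standard DialoGPT convention of EOS-separated turns (the `build_example` helper and the sample utterances are hypothetical):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")

def build_example(context_utterances, response, max_history=5):
    """Join the last `max_history` context turns and the response into one
    training string, DialoGPT-style: each turn is terminated by EOS."""
    turns = context_utterances[-max_history:] + [response]
    return tokenizer.eos_token.join(turns) + tokenizer.eos_token

dialogue = [
    "Okay, so what do you think about the crime rate?",
    "Well, I think it has gone up quite a bit.",
    "Yeah, especially in the big cities.",
]
print(build_example(dialogue[:-1], dialogue[-1]))
```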

Dialogue history context = 5 utterances. Number of utterances: 80,704 (train set), 9,749 (test set), 9,616 (validation set). Checkpoint-84000, after fine-tuning for 2 epochs with batch size 2.
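
The card does not include a usage snippet; a minimal sketch of DialoGPT-style inference with a 5-utterance history window could look like the following (the Hub ID `your-username/DialoGPT-small-SwDA` is a placeholder, replace it with this repository's actual path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model ID -- replace with this repository's path on the Hub.
model_id = "your-username/DialoGPT-small-SwDA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

history = []      # list of token-ID tensors, one per turn
MAX_HISTORY = 5   # keep at most 5 turns of dialogue context

def respond(user_utterance: str) -> str:
    new_ids = tokenizer.encode(user_utterance + tokenizer.eos_token,
                               return_tensors="pt")
    history.append(new_ids)
    # Concatenate the most recent turns as the generation context.
    context = torch.cat(history[-MAX_HISTORY:], dim=-1)
    output = model.generate(
        context,
        max_length=context.shape[-1] + 50,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_p=0.9,
    )
    reply_ids = output[:, context.shape[-1]:]  # reply ends with EOS
    history.append(reply_ids)
    return tokenizer.decode(reply_ids[0], skip_special_tokens=True)

print(respond("Hi, how are you doing today?"))
```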

Evaluation perplexity on SwDA improved from 635.6993 (pre-trained model) to 18.1693 (fine-tuned model).
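
The evaluation script is not reproduced in this card; a minimal sketch of how a corpus-level perplexity like the numbers above can be computed (assuming EOS-separated dialogue lines as input, not the repository's exact procedure) is:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/DialoGPT-small"  # or the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def perplexity(lines):
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for line in lines:
            ids = tokenizer.encode(line, return_tensors="pt")
            # With labels == input_ids, the model returns the mean
            # next-token negative log-likelihood over the sequence.
            loss = model(ids, labels=ids).loss
            n_pred = ids.shape[-1] - 1  # number of predicted tokens
            total_nll += loss.item() * n_pred
            total_tokens += n_pred
    return math.exp(total_nll / total_tokens)

test_lines = ["Hi, how are you?<|endoftext|>I'm fine, thanks.<|endoftext|>"]
print(perplexity(test_lines))
```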
