
DialoGPT-small fine-tuned on the Map Task Corpus. The repository containing the additionally pre-processed Map Task dialogues (utterances concatenated per speaker, an 80/10/10 train/val/test split, and metadata) is a fork of Nathan Duran's repository. For fine-tuning, the train_dialogpt.ipynb notebook from Nathan Cooper's tutorial was used in Google Colab, with slight modifications.
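
A minimal usage sketch with the `transformers` library, following the standard DialoGPT input format (utterance followed by the end-of-sequence token). The model ID below is a hypothetical placeholder, not the actual repository name; substitute the published checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/DialoGPT-small-maptask"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode one user turn terminated by EOS, per the DialoGPT convention.
input_ids = tokenizer.encode(
    "go around the left of the lake" + tokenizer.eos_token,
    return_tensors="pt",
)

# Generate a reply; the sampling parameters here are illustrative defaults.
output_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(
    output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```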

Dialogue history context of 5 turns. Number of utterances: 14,712 (train), 2,017 (validation), 1,964 (test). Fine-tuned for 3 epochs with batch size 2. A sketch of how such a context window can be flattened into a single training example is shown below.
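
The sketch below illustrates how the last 5 turns of dialogue history plus the target response can be joined into one input string, separating utterances with the end-of-sequence token as DialoGPT expects. The `build_example` helper is illustrative, not the exact preprocessing code from the notebook.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")

def build_example(history, response, context_size=5):
    """Join the last `context_size` turns and the response with EOS tokens."""
    turns = history[-context_size:] + [response]
    return tokenizer.eos_token.join(turns) + tokenizer.eos_token

dialogue = [
    "right, start at the caravan park",
    "okay, got it",
    "head north towards the old mill",
    "north, past the fenced meadow?",
    "yes, keep the meadow on your right",
]
print(build_example(dialogue, "then stop just below the waterfall"))
```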

Evaluation perplexity on Map Task improved from 410.7796 (pre-trained model) to 19.7469 (fine-tuned model).
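
A hedged sketch of how test-set perplexity can be computed for a causal language model: accumulate per-token cross-entropy over the test utterances and exponentiate the average. This reproduces the metric in spirit; the notebook's exact evaluation may differ in batching and context handling.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/DialoGPT-small"  # swap in the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def perplexity(texts):
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            ids = tokenizer.encode(text + tokenizer.eos_token, return_tensors="pt")
            # With labels == input_ids the model returns the mean
            # cross-entropy over the shifted (predicted) tokens.
            loss = model(ids, labels=ids).loss
            n = ids.shape[-1] - 1  # number of predicted tokens
            total_nll += loss.item() * n
            total_tokens += n
    return math.exp(total_nll / total_tokens)

test_utterances = ["go around the left of the lake", "okay, and then straight down?"]
print(f"perplexity: {perplexity(test_utterances):.4f}")
```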
