Update README.md
A variant of the Persona-Chat dataset, which contains 19,319 short dialogues, was used. The dataset was translated into Greek with MarianMT, a free and efficient Neural Machine Translation framework.
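As a rough illustration, a translation pass with the MarianMT models in the transformers library might look like the sketch below. The README does not state which checkpoint was used, so "Helsinki-NLP/opus-mt-en-el" (an English-to-Greek MarianMT model) is an assumption here:

```python
# Minimal sketch: translating Persona-Chat utterances into Greek with MarianMT.
# The checkpoint name is an assumption; the README does not specify it.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-el"  # assumed English-to-Greek checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(sentences):
    # Tokenize a batch of English sentences and generate Greek translations.
    batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(translate(["Hi! How are you today?"]))
```

In practice each persona sentence and dialogue utterance would be passed through such a function to produce the Greek version of the dataset.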
## Fine-tuning for the task of dialogue:
Starting from the pre-trained "gpt2-greek" model (https://huggingface.co/nikokons/gpt2-greek), we fine-tune it on the Greek translation of the Persona-Chat dataset for 3 epochs, stopping once the validation loss no longer improves. The model's input representation is adapted to the Greek version of the PERSONA-CHAT dataset for the fine-tuning procedure. A batch size of 4 is used, and gradients are accumulated over 8 iterations, giving an effective batch size of 32. The Adam optimization scheme is used with a learning rate of 5.7e-5. The fine-tuning procedure is based on the https://github.com/huggingface/transfer-learning-conv-ai repository.
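For illustration, here is a minimal sketch of this hyperparameter configuration using the Hugging Face Trainer API rather than the original transfer-learning-conv-ai training scripts. `train_dataset` and `val_dataset` are placeholder names for the tokenized Greek PERSONA-CHAT splits, and the Trainer's default AdamW optimizer stands in for the Adam scheme mentioned above:

```python
# Sketch of the fine-tuning setup described above (not the original training loop).
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("nikokons/gpt2-greek")
model = AutoModelForCausalLM.from_pretrained("nikokons/gpt2-greek")

args = TrainingArguments(
    output_dir="gpt2-greek-personachat",
    num_train_epochs=3,              # fine-tune for up to 3 epochs
    per_device_train_batch_size=4,   # batch size of 4
    gradient_accumulation_steps=8,   # 4 * 8 = effective batch size of 32
    learning_rate=5.7e-5,            # Adam-style optimizer (Trainer uses AdamW)
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,     # keep the checkpoint with the best eval loss
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized Greek PERSONA-CHAT train split
    eval_dataset=val_dataset,     # placeholder: tokenized validation split
    # Stop training once validation loss stops improving, as described above.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
)
trainer.train()
```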
## Interact with the Chatbot: