---
license: cc-by-nc-4.0
---

Collated conversational data from 10 sources and preprocessed them into ["texts", "labels"] columns for training/finetuning sequence-to-sequence models such as T5 or Blenderbot.

The 10 source datasets:
1. blended_skill_talk
2. conv_ai_2
3. empathetic_dialogues
4. wizard_of_wikipedia
5. meta_woz
6. multi_woz
7. spolin
8. dailydialog
9. cornell_movie_dialogues
10. taskmaster

The data access and preprocessing code is [here](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/data_preprocessing/DataPreprocessing.ipynb)
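
Below is a minimal sketch of how the ["texts", "labels"] columns could be tokenized for a seq2seq model once the dataset is loaded. The repo id `<hub-user>/<dataset-name>` and the choice of `t5-small` are placeholders, not part of this card; substitute the actual Hub id and checkpoint you intend to finetune.

```python
# Minimal sketch: load the collated dataset and tokenize the "texts" column as
# inputs and the "labels" column as targets for a seq2seq model.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("<hub-user>/<dataset-name>")  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder checkpoint

def preprocess(batch):
    # Encode dialogue context ("texts") as model inputs and the response
    # ("labels") as target token ids, as expected by seq2seq trainers.
    model_inputs = tokenizer(batch["texts"], max_length=512, truncation=True)
    targets = tokenizer(text_target=batch["labels"], max_length=128, truncation=True)
    model_inputs["labels"] = targets["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=["texts", "labels"])
```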