---
license: cc-by-nc-4.0
---

This dataset collates dialogue data from 10 sources and preprocesses them into ["texts", "labels"] columns for training or fine-tuning sequence-to-sequence models such as T5 and Blenderbot (see the loading sketch after the list). The 10 source datasets are:
1. blended_skill_talk
2. conv_ai_2
3. empathetic_dialogues
4. wizard_of_wikipedia
5. meta_woz
6. multi_woz
7. spolin
8. dailydialog
9. cornell_movie_dialogues
10. taskmaster
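
Below is a minimal sketch of loading the collated data and tokenizing it for T5-style fine-tuning. The Hub ID is a placeholder for this repository's actual path, and the tokenizer and length settings are assumptions, not part of the released preprocessing:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder Hub ID -- substitute this repository's actual dataset path.
ds = load_dataset("your-username/collated-dialogue-data", split="train")

tokenizer = AutoTokenizer.from_pretrained("t5-small")

def tokenize(batch):
    # "texts" holds the dialogue context, "labels" the target response.
    model_inputs = tokenizer(batch["texts"], max_length=512, truncation=True)
    targets = tokenizer(text_target=batch["labels"], max_length=128, truncation=True)
    model_inputs["labels"] = targets["input_ids"]
    return model_inputs

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)
```

Swapping in a larger T5 checkpoint or a Blenderbot one only changes the tokenizer and model names; the ["texts", "labels"] column mapping stays the same.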

The data access and preprocessing code is available [here](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/data_preprocessing/DataPreprocessing.ipynb).
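
For reference, a rough sketch of the general collation pattern is shown below with toy stand-ins for the sources; the field names are hypothetical, and the linked notebook contains the actual per-source access and mapping code:

```python
from datasets import Dataset, concatenate_datasets

# Toy stand-ins for two of the ten sources, already reduced to lists of turns.
source_a = Dataset.from_dict({"turns": [["Hi!", "Hello, how are you?", "Great, thanks."]]})
source_b = Dataset.from_dict({"turns": [["Any plans today?", "A walk in the park."]]})

def to_pairs(example):
    turns = example["turns"]
    # Context = all turns except the last; target = the final response.
    return {"texts": " ".join(turns[:-1]), "labels": turns[-1]}

collated = concatenate_datasets(
    [src.map(to_pairs, remove_columns=["turns"]) for src in (source_a, source_b)]
)
print(collated.column_names)  # ['texts', 'labels']
```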