---
dataset_info:
  features:
  - name: text
    sequence: string
  splits:
  - name: train
    num_bytes: 327344603
    num_examples: 668582
  - name: validation
    num_bytes: 8406146
    num_examples: 17144
  download_size: 189165954
  dataset_size: 335750749
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
# Tiny Conversations
## Overview
This dataset consists of dialogue samples drawn from two sources: the **Cornell Movie Dialogs** and the **Taiga TV Series Subtitles**. The dialogues are primarily in Russian, and the dataset is intended for natural language processing tasks such as language modeling and training dialogue systems.
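As the header above declares, each example's `text` field is a sequence of strings (one dialogue turn per entry), and the data ships with `train` and `validation` splits. Below is a minimal loading sketch using the `datasets` library; the repository path is a placeholder for wherever this dataset is actually hosted on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-namespace/tiny-conversations")

print(ds)                   # DatasetDict with 'train' and 'validation' splits
example = ds["train"][0]
print(example["text"])      # a list of strings: one dialogue turn per entry
```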
### Sources
1. **Cornell Movie Dialogs**:
- **Source**: [Cornell Movie Dialogs](https://github.com/Koziev/NLP_Datasets)
- **License**: CC0-1.0
- **Description**: This dataset includes cleaned subtitles from a collection of movie dialogues. Notably, many dialogues are sampled from the middle of conversations.
2. **Taiga TV Series Subtitles**:
- **Source**: [Russian Subtitles Dataset](https://github.com/dbklim/Russian_subtitles_dataset)
- **License**: Apache-2.0
- **Description**: This dataset is based on the Taiga corpus, specifically on a collection of subtitles from 347 TV series in multiple languages. Only the Russian-language subtitles were retained here.
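Because each example stores its dialogue as a list of turns, a common preprocessing step for language modeling is to flatten those turns into a single training string. The sketch below assumes the placeholder repo id from above; the newline separator is an illustrative choice, not part of the dataset specification:

```python
from datasets import load_dataset

ds = load_dataset("your-namespace/tiny-conversations")  # placeholder repo id

def flatten(example):
    # Join the dialogue turns into one newline-separated string;
    # the separator is an assumption made for this example.
    return {"dialogue": "\n".join(example["text"])}

train = ds["train"].map(flatten, remove_columns=["text"])
print(train[0]["dialogue"])
```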