  - split: validation
    path: data/validation-*
---
# Tiny Conversations
## Overview
This dataset consists of dialogue samples drawn from two main sources: the **Cornell Movie Dialogs** and the **Taiga TV Series Subtitles**. It contains primarily Russian-language dialogues and is intended for natural language processing tasks such as language modeling and dialogue systems.
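The splits declared in the YAML header can be loaded with the Hugging Face `datasets` library. A minimal sketch; the repository id passed in is a placeholder, since this card does not state one, and the call requires the `datasets` package plus network access:

```python
def load_validation(repo_id: str):
    """Load the validation split declared in this card's YAML header
    (path: data/validation-*).

    `repo_id` is a hypothetical Hugging Face dataset id, e.g.
    "username/tiny-conversations"; this README does not name one.
    """
    # Third-party dependency: pip install datasets
    from datasets import load_dataset

    return load_dataset(repo_id, split="validation")
```

The same call with `split="train"` would load the training split, if one is declared alongside validation in the `configs` block.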
### Sources
1. **Cornell Movie Dialogs**:
   - **Source**: [Cornell Movie Dialogs](https://github.com/Koziev/NLP_Datasets)
   - **License**: CC0-1.0
   - **Description**: This dataset includes cleaned subtitles from a collection of movie dialogues. Notably, many dialogues are sampled from the middle of conversations.

2. **Taiga TV Series Subtitles**:
   - **Source**: [Russian Subtitles Dataset](https://github.com/dbklim/Russian_subtitles_dataset)
   - **License**: Apache-2.0
   - **Description**: Based on the Taiga corpus, specifically a collection of subtitles from 347 TV series in multiple languages; only the Russian-language subtitles were retained for this dataset.
|