---
license: other
task_categories:
- text-generation
- question-answering
language:
- ru
size_categories:
- 100K<n<1M
dataset_info:
  features:
  - name: question_id
    dtype: uint32
  - name: url
    dtype: string
  - name: answer_count
    dtype: uint32
  - name: text_html
    dtype: string
  - name: text_markdown
    dtype: string
  - name: score
    dtype: int32
  - name: title
    dtype: string
  - name: tags
    sequence: string
  - name: views
    dtype: uint64
  - name: author
    dtype: string
  - name: timestamp
    dtype: uint64
  - name: comments
    sequence:
    - name: text
      dtype: string
    - name: author
      dtype: string
    - name: comment_id
      dtype: uint32
    - name: score
      dtype: int32
    - name: timestamp
      dtype: uint64
  - name: answers
    sequence:
    - name: answer_id
      dtype: uint32
    - name: is_accepted
      dtype: uint8
    - name: text_html
      dtype: string
    - name: text_markdown
      dtype: string
    - name: score
      dtype: int32
    - name: author
      dtype: string
    - name: timestamp
      dtype: uint64
    - name: comments
      sequence:
      - name: text
        dtype: string
      - name: author
        dtype: string
      - name: comment_id
        dtype: uint32
      - name: score
        dtype: int32
      - name: timestamp
        dtype: uint64
  splits:
  - name: train
    num_bytes: 3013377174
    num_examples: 437604
  download_size: 670468664
  dataset_size: 3013377174
---

# Russian StackOverflow dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)

## Description

**Summary:** Dataset of questions, answers, and comments from [ru.stackoverflow.com](https://ru.stackoverflow.com/).

**Script:** [create_stackoverflow.py](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py)

**Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)

**Languages:** The dataset is in Russian with some programming code.


## Usage

Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```

Loading:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/ru_stackoverflow', split="train")
for example in dataset:
    print(example["text_markdown"])
    print()
```
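If you prefer not to download the full archive (about 670 MB compressed) before iterating, the standard `datasets` streaming mode should also work. A minimal sketch:
```python
from datasets import load_dataset

# Stream examples on the fly instead of downloading everything first
dataset = load_dataset('IlyaGusev/ru_stackoverflow', split="train", streaming=True)
for example in dataset:
    print(example["title"])
    break
```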

## Data Instances

```
{
  "question_id": 11235,
  "answer_count": 1,
  "url": "https://ru.stackoverflow.com/questions/11235",
  "score": 2,
  "tags": ["c++", "сериализация"],
  "title": "Извлечение из файла, запись в файл",
  "views": 1309,
  "author": "...",
  "timestamp": 1303205289,
  "text_html": "...",
  "text_markdown": "...",
  "comments": {
    "text": ["...", "...",
    "author": ["...", "..."],
    "comment_id": [11236, 11237],
    "score": [0, 0],
    "timestamp": [1303205411, 1303205678]
  },
  "answers": {
    "answer_id": [11243, 11245],
    "timestamp": [1303207791, 1303207792],
    "is_accepted": [1, 0],
    "text_html": ["...", "..."],
    "text_markdown": ["...", "..."],
    "score": [3, 0],
    "author": ["...", "..."],
    "comments": {
      "text": ["...", "..."],
      "author": ["...", "..."],
      "comment_id": [11246, 11249],
      "score": [0, 0],
      "timestamp": [1303207961, 1303207800]
    }
  }
}
```
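The per-answer fields are parallel lists, so, for example, the accepted answer can be looked up by index. A small illustrative helper (not part of the dataset tooling; it only assumes the layout shown above):
```python
def accepted_answer_markdown(example):
    # Answer fields are parallel lists: is_accepted[i] belongs to text_markdown[i]
    answers = example["answers"]
    for i, is_accepted in enumerate(answers["is_accepted"]):
        if is_accepted:
            return answers["text_markdown"][i]
    return None
```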

You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
    # Convert a dict of parallel lists (a flattened sequence)
    # back into a list of dicts, one dict per item.
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records
```
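For example, applied to the flattened `comments` field of a record (illustrative usage, reusing the `dataset` object from the Usage section):
```python
example = dataset[0]

# Turn {"text": [...], "author": [...], ...} into a list of per-comment dicts
comments = revert_flattening(example["comments"])
for comment in comments:
    print(comment["author"], comment["text"])
```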

The original JSONL is already unflattened.

## Source Data

* The data source is the [Russian StackOverflow](https://ru.stackoverflow.com/) website.
* Original XMLs: [ru.stackoverflow.com.7z](https://ia600107.us.archive.org/27/items/stackexchange/ru.stackoverflow.com.7z).
* The processing script is [here](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py).

## Personal and Sensitive Information

The dataset is not anonymized, so individuals' names can be found in it. Information about the original authors is included where possible.

## Licensing Information

In accordance with the license of the original data, this dataset is distributed under [CC BY-SA 2.5](https://creativecommons.org/licenses/by-sa/2.5/).