---
license: other
task_categories:
  - text-generation
  - question-answering
language:
  - ru
size_categories:
  - 100K<n<1M
dataset_info:
  features:
    - name: question_id
      dtype: uint32
    - name: url
      dtype: string
    - name: answer_count
      dtype: uint32
    - name: text_html
      dtype: string
    - name: text_markdown
      dtype: string
    - name: score
      dtype: int32
    - name: title
      dtype: string
    - name: tags
      sequence: string
    - name: views
      dtype: uint64
    - name: author
      dtype: string
    - name: timestamp
      dtype: uint64
    - name: comments
      sequence:
        - name: text
          dtype: string
        - name: author
          dtype: string
        - name: comment_id
          dtype: uint32
        - name: score
          dtype: int32
        - name: timestamp
          dtype: uint64
    - name: answers
      sequence:
        - name: answer_id
          dtype: uint32
        - name: is_accepted
          dtype: uint8
        - name: text_html
          dtype: string
        - name: text_markdown
          dtype: string
        - name: score
          dtype: int32
        - name: author
          dtype: string
        - name: timestamp
          dtype: uint64
  splits:
    - name: train
      num_bytes: 3013377174
      num_examples: 437604
  download_size: 670468664
  dataset_size: 3013377174
---

# Russian StackOverflow dataset


## Description

Summary: Dataset of questions, answers, and comments from ru.stackoverflow.com.

Script: `create_stackoverflow.py`

Point of Contact: Ilya Gusev

Languages: The dataset is in Russian with some programming code.

## Usage

Prerequisites:

```shell
pip install datasets zstandard jsonlines pysimdjson
```

Loading:

```python
from datasets import load_dataset

dataset = load_dataset("IlyaGusev/ru_stackoverflow", split="train")
for example in dataset:
    print(example["text_markdown"])
    print()
```
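Beyond plain iteration, `datasets.Dataset.filter` can be used to narrow the split. The predicate below is a sketch, demonstrated on a hypothetical miniature record that mirrors the flattened layout of real examples:

```python
def is_answered(example: dict) -> bool:
    # Keep questions that have at least one answer, one of which is accepted.
    return example["answer_count"] > 0 and any(example["answers"]["is_accepted"])

# Hypothetical miniature record with the same flattened layout as the dataset.
sample = {"answer_count": 2, "answers": {"is_accepted": [0, 1]}}
print(is_answered(sample))  # True

# Applied to the real dataset (downloads the full split):
# dataset = load_dataset("IlyaGusev/ru_stackoverflow", split="train")
# answered = dataset.filter(is_answered)
```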

## Data Instances

```json
{
  "question_id": 11235,
  "answer_count": 1,
  "url": "https://ru.stackoverflow.com/questions/11235",
  "score": 2,
  "tags": ["c++", "сериализация"],
  "title": "Извлечение из файла, запись в файл",
  "views": 1309,
  "author": "...",
  "timestamp": 1303205289,
  "text_html": "...",
  "text_markdown": "...",
  "comments": {
    "text": ["...", "..."],
    "author": ["...", "..."],
    "comment_id": [11236, 11237],
    "score": [0, 0],
    "timestamp": [1303205411, 1303205678]
  },
  "answers": {
    "answer_id": [11243, 11245],
    "timestamp": [1303207791, 1303207792],
    "is_accepted": [1, 0],
    "text_html": ["...", "..."],
    "text_markdown": ["...", "..."],
    "score": [3, 0],
    "author": ["...", "..."],
    "comments": {
      "text": ["...", "..."],
      "author": ["...", "..."],
      "comment_id": [11246, 11249],
      "score": [0, 0],
      "timestamp": [1303207961, 1303207800]
    }
  }
}
```
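The `timestamp` fields store Unix epoch seconds. A minimal sketch of converting them to readable UTC datetimes; the sample record is hypothetical, trimmed to two fields:

```python
from datetime import datetime, timezone

def to_datetime(ts: int) -> datetime:
    """Convert a Unix timestamp in seconds to an aware UTC datetime."""
    return datetime.fromtimestamp(ts, tz=timezone.utc)

# Hypothetical record mirroring the dataset schema.
example = {"question_id": 11235, "timestamp": 1303205289}
print(to_datetime(example["timestamp"]).isoformat())
# 2011-04-19T09:28:09+00:00
```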

You can use this little helper to unflatten sequences:

```python
def revert_flattening(records):
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records
```

The original JSONL is already unflattened.
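For example, applied to a hypothetical flattened `comments` block (the helper definition is repeated so the snippet runs standalone):

```python
def revert_flattening(records):
    # Turn a dict of parallel lists into a list of per-item dicts.
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records

# Hypothetical flattened "comments" block, as returned by the dataset.
comments = {
    "text": ["первый", "второй"],
    "author": ["a", "b"],
    "score": [0, 1],
}
records = revert_flattening(comments)
print(records)
# [{'text': 'первый', 'author': 'a', 'score': 0},
#  {'text': 'второй', 'author': 'b', 'score': 1}]
```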

## Source Data

### Personal and Sensitive Information

The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.

## Licensing Information

According to the license of the original data, this dataset is distributed under the CC BY-SA 2.5 license.