---
license: apache-2.0
dataset_info:
  features:
  - name: word
    dtype: string
  - name: form
    dtype: string
  - name: sentence
    dtype: string
  - name: paraphrase
    dtype: string
  splits:
  - name: train
    num_bytes: 480909
    num_examples: 1007
  - name: test
    num_bytes: 42006
    num_examples: 77
  download_size: 290128
  dataset_size: 522915
task_categories:
- text-generation
- text2text-generation
language:
- ru
size_categories:
- 1K<n<10K
---

# Dataset Card for Ru Anglicism

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
  - [Usage](#usage)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Splits](#data-splits)

## Dataset Description

### Dataset Summary

A dataset for detecting anglicisms in Russian sentences and replacing them with native paraphrases. Sentences containing anglicisms were automatically parsed from the National Corpus of the Russian Language, Habr, and Pikabu. The paraphrases for the sentences were created manually.

### Languages

The dataset is in Russian.

### Usage

Loading dataset:
```python
from datasets import load_dataset
dataset = load_dataset('shershen/ru_anglicism')
```

## Dataset Structure

### Data Instances

Each instance contains four string fields: `word` (the anglicism), `form` (the inflected form as it appears in the sentence), `sentence`, and `paraphrase`.

```
{
  'word': 'коллаб',
  'form': 'коллабу',
  'sentence': 'Сделаем коллабу, раскрутимся.',
  'paraphrase': 'Сделаем совместный проект, раскрутимся.'
}
```
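The relationship between the fields can be illustrated with a minimal sketch (this helper is not part of the dataset release, just an assumption of how one might validate an instance): `form` is the inflected anglicism that occurs in `sentence`, while `paraphrase` rewrites the sentence with a native Russian substitute.

```python
# Hypothetical validation helper, not part of the dataset itself.
example = {
    'word': 'коллаб',
    'form': 'коллабу',
    'sentence': 'Сделаем коллабу, раскрутимся.',
    'paraphrase': 'Сделаем совместный проект, раскрутимся.',
}

def contains_anglicism(instance):
    """True if the inflected form occurs in the sentence but not in the paraphrase."""
    return (instance['form'] in instance['sentence']
            and instance['form'] not in instance['paraphrase'])

print(contains_anglicism(example))  # True for this example
```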

### Data Splits

The full dataset contains 1084 sentences, split as follows:

| Dataset Split | Number of Rows |
|:---------|:---------|
| Train | 1007 |
| Test | 77 |
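The counts above correspond to roughly a 93/7 train/test split, which can be checked directly:

```python
# Split sizes taken from the table above.
train, test = 1007, 77
total = train + test

test_share = test / total  # fraction of examples held out for testing
print(total, round(test_share * 100, 1))  # 1084 examples, about 7.1% test
```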