---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Persian NER
dataset_info:
- config_name: fold1
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': I-event
          '2': I-fac
          '3': I-loc
          '4': I-org
          '5': I-pers
          '6': I-pro
          '7': B-event
          '8': B-fac
          '9': B-loc
          '10': B-org
          '11': B-pers
          '12': B-pro
  splits:
  - name: train
    num_bytes: 3362102
    num_examples: 5121
  - name: test
    num_bytes: 1646481
    num_examples: 2560
  download_size: 1931170
  dataset_size: 5008583
- config_name: fold2
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': I-event
          '2': I-fac
          '3': I-loc
          '4': I-org
          '5': I-pers
          '6': I-pro
          '7': B-event
          '8': B-fac
          '9': B-loc
          '10': B-org
          '11': B-pers
          '12': B-pro
  splits:
  - name: train
    num_bytes: 3344561
    num_examples: 5120
  - name: test
    num_bytes: 1664022
    num_examples: 2561
  download_size: 1931170
  dataset_size: 5008583
- config_name: fold3
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': I-event
          '2': I-fac
          '3': I-loc
          '4': I-org
          '5': I-pers
          '6': I-pro
          '7': B-event
          '8': B-fac
          '9': B-loc
          '10': B-org
          '11': B-pers
          '12': B-pro
  splits:
  - name: train
    num_bytes: 3310491
    num_examples: 5121
  - name: test
    num_bytes: 1698092
    num_examples: 2560
  download_size: 1931170
  dataset_size: 5008583
---

# Dataset Card for Persian NER

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/HaniehP/PersianNER)
- **Repository:** [Github](https://github.com/HaniehP/PersianNER)
- **Paper:** [PersoNER: Persian Named-Entity Recognition (COLING 2016)](https://www.aclweb.org/anthology/C16-1319)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The dataset contains 7,682 Persian sentences comprising 250,015 tokens, each token annotated with a named-entity label in IOB format. It is provided in three folds so that each fold can serve in turn as the training and test set.

### Supported Tasks and Leaderboards

The dataset supports named-entity recognition, framed as a token-classification task.

### Languages

The text in the dataset is in Persian (Farsi), ISO code `fa`.

## Dataset Structure

### Data Instances

Each instance is a sentence represented as a list of `tokens` aligned with a list of integer `ner_tags` (see the label mapping under Data Fields).

### Data Fields

- `id`: the id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token

The NER tags correspond to this list:
```
"O", "I-event", "I-fac", "I-loc", "I-org", "I-pers", "I-pro", "B-event", "B-fac", "B-loc", "B-org", "B-pers", "B-pro"
```
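
The integer `ner_tags` can be mapped back to the label names above via the dataset's `ClassLabel` feature. A minimal sketch, assuming the dataset is published on the Hugging Face Hub under the identifier `persian_ner` (the identifier is not stated in this card and may differ):

```python
from datasets import load_dataset

# Load the first fold's training split. The Hub identifier "persian_ner"
# is an assumption; adjust it to match your copy of the dataset.
ds = load_dataset("persian_ner", "fold1", split="train")

# `ner_tags` is a sequence of ClassLabel ids; int2str recovers the strings.
label_feature = ds.features["ner_tags"].feature
example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(token, label_feature.int2str(tag_id))
```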

### Data Splits

Each of the three fold configurations (`fold1`, `fold2`, `fold3`) provides a training split and a test split. Per the metadata above, each training split contains roughly 5,120 sentences and each test split roughly 2,560, so the folds can be rotated for cross-validation.
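
The fold layout can be inspected directly. A minimal sketch, again assuming the Hub identifier `persian_ner`:

```python
from datasets import load_dataset

# Print the number of sentences in each split of each fold configuration.
# The identifier "persian_ner" is an assumption; adjust it to your copy.
for config in ("fold1", "fold2", "fold3"):
    fold = load_dataset("persian_ner", config)
    print(config, {split: fold[split].num_rows for split in fold})
```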

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

Hanieh Poostchi, Ehsan Zare Borzeshi, Mohammad Abdous, Massimo Piccardi

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

Hanieh Poostchi, Ehsan Zare Borzeshi, Mohammad Abdous, Massimo Piccardi

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

The dataset is published for academic use only.

### Dataset Curators

[More Information Needed]

### Licensing Information

Creative Commons Attribution 4.0 International License.

### Citation Information

```
@inproceedings{poostchi-etal-2016-personer,
    title = "{P}erso{NER}: {P}ersian Named-Entity Recognition",
    author = "Poostchi, Hanieh  and
      Zare Borzeshi, Ehsan  and
      Abdous, Mohammad  and
      Piccardi, Massimo",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://www.aclweb.org/anthology/C16-1319",
    pages = "3381--3389",
    abstract = "Named-Entity Recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequent problematic training of an effective NER pipeline. To abridge this gap, in this paper we target the Persian language that is spoken by a population of over a hundred million people world-wide. We first present and provide ArmanPerosNERCorpus, the first manually-annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach is capable of achieving interesting MUC7 and CoNNL scores while outperforming two alternatives based on a CRF and a recurrent neural network.",
}
```

### Contributions

Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset.