---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- bsd-2-clause
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
- text-scoring
pretty_name: HuSST
---

# Dataset Card for HuSST

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Language](#language)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [HuSST dataset](https://github.com/nytud/HuSST)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)

### Dataset Summary

This is the dataset card for the Hungarian version of the Stanford Sentiment Treebank. The dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit, [HuLU](https://hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original SST (Socher et al., 2013).

### Supported Tasks and Leaderboards

- sentiment classification
- sentiment scoring

### Language

The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU. 

## Dataset Structure

### Data Instances

For each instance, there is an id, a sentence and a sentiment label.

An example:

```
{
  "Sent_id": "dev_0",
  "Sent": "Nos, a Jason elment Manhattanbe és a Pokolba kapcsán, azt hiszem, az elkerülhetetlen folytatások ötletlistájáról kihúzhatunk egy űrállomást 2455-ben (hé, ne lődd le a poént).",
  "Label": "neutral"
}

```
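
A minimal sketch of loading the dataset with the Hugging Face `datasets` library and inspecting one record (the repository id `NYTK/HuSST` is an assumption; check the Hub for the exact name):

```python
# Load HuSST and print one validation instance.
# NOTE: the repository id below is an assumption, not confirmed by this card.
from datasets import load_dataset

husst = load_dataset("NYTK/HuSST")

example = husst["validation"][0]
print(example["Sent_id"], example["Label"])
print(example["Sent"])
```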

### Data Fields

- Sent_id: unique id of the instance;

- Sent: the sentence, a translation of a sentence from the SST dataset;

- Label: the sentiment label, one of "negative", "neutral", or "positive".
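
For the three-way classification task, the string labels can be mapped to integer ids before training. This is a minimal sketch; the particular id assignment is an illustrative choice, not part of the dataset:

```python
# Illustrative mapping from HuSST string labels to integer class ids.
# The specific id values are an assumption made for this example only.
LABEL2ID = {"negative": 0, "neutral": 1, "positive": 2}
ID2LABEL = {i: label for label, i in LABEL2ID.items()}

def encode_label(instance):
    """Return an integer `label_id` derived from the string `Label` field."""
    return {"label_id": LABEL2ID[instance["Label"]]}

# With the `datasets` library this can be applied per split, e.g.:
# husst["train"] = husst["train"].map(encode_label)
```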

### Data Splits

HuSST has 3 splits: *train*, *validation* and *test*. 

| Dataset split | Number of instances in the split |
|---------------|----------------------------------|
| train         | 9344                              |
| validation    | 1168                              |
| test          | 1168                              |

The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](https://hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment). 
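
Assuming the dataset was loaded as in the earlier sketch, the split sizes can be sanity-checked against the table above:

```python
# Compare the loaded split sizes with the figures given in this card.
expected = {"train": 9344, "validation": 1168, "test": 1168}
for split, n in expected.items():
    assert len(husst[split]) == n, f"{split}: got {len(husst[split])}, expected {n}"
```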

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The data is a translation of the content of the SST dataset (only the complete sentences were used, not the phrase-level constituents). Each sentence was translated by a human translator, and each translation was manually checked and further refined by another annotator. 

### Annotations

#### Annotation process

The translated sentences were annotated by three human annotators with one of the following labels: negative, neutral, and positive. Each sentence was then curated by a fourth annotator (the 'curator'). The final label is the curator's decision, made on the basis of the three annotators' labels.
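
The card does not state the exact decision rule the curator follows; purely as an illustration, a majority vote with ties deferred to the curator could be sketched like this (the function name and tie handling are assumptions made for this example):

```python
from collections import Counter

def aggregate_labels(annotator_labels):
    """Illustrative aggregation of the three annotator labels.

    The real final label is always the curator's decision; this sketch only
    shows a majority vote, with ties left open for the curator to resolve.
    """
    label, freq = Counter(annotator_labels).most_common(1)[0]
    return label if freq >= 2 else None  # None = no majority, curator decides

print(aggregate_labels(["positive", "positive", "neutral"]))  # -> positive
print(aggregate_labels(["negative", "neutral", "positive"]))  # -> None
```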

#### Who are the annotators?

The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background. 

## Additional Information

### Licensing Information

HuSST is released under the BSD 2-Clause License.
### Citation Information

If you use this resource or any part of its documentation, please refer to:

Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Vadász, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. pp. 431–446.

```

@inproceedings{ligetinagy2022hulu,
  title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
  author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Vadász, T.},
  booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
  year={2022},
  pages = {431--446}
}

```

and to:

Socher et al. (2013) Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pp. 1631–1642.

```

@inproceedings{socher-etal-2013-recursive,
    title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
    author = "Socher, Richard  and
      Perelygin, Alex  and
      Wu, Jean  and
      Chuang, Jason  and
      Manning, Christopher D.  and
      Ng, Andrew  and
      Potts, Christopher",
    booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
    month = oct,
    year = "2013",
    address = "Seattle, Washington, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D13-1170",
    pages = "1631--1642",
}

```

### Contributions

Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset.