emilio-ariza committed on
Commit
de32b11
1 Parent(s): 82b1055

Update README.md

Completing missing fields of the dataset card.

Files changed (1)
  1. README.md +48 -14
README.md CHANGED
@@ -87,66 +87,100 @@ Labels can be "entailment" if the premise entails the hypothesis, "contradiction

  ### Data Fields

- [Needs More Information]
+ gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it, or "neutral" if it neither implies nor denies it.
+
+ pairID: A string identifying a sentence pair, inherited from the original datasets. NOTE: For the moment we are having trouble loading this column, so every string has been replaced with the integer 0 as a placeholder. We hope to have the pairID back up soon.
+
+ sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)
+
+ sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
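
As an illustration of these fields, here is a minimal sketch of loading the corpus with the `datasets` library and mapping the string labels to integer ids. It assumes the dataset is hosted on the Hugging Face Hub; the repository id below is only a placeholder.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub id of this dataset.
dataset = load_dataset("your-username/nli-es")

example = dataset["train"][0]
print(example["sentence1"])   # premise, in Spanish
print(example["sentence2"])   # hypothesis, in Spanish
print(example["gold_label"])  # "entailment", "contradiction" or "neutral"

# Map gold_label strings to integer ids for model training.
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}
train = dataset["train"].map(lambda ex: {"label": label2id[ex["gold_label"]]})
```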

  ### Data Splits

+ The whole dataset was used for training. We did not set aside an evaluation split, since evaluation was carried out on SemEval-2015 Task 2.
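
Since only a training split is provided, users who are not evaluating on SemEval-2015 Task 2 may want to carve out their own held-out set. A minimal sketch with the `datasets` library follows, using the same placeholder repository id as above.

```python
from datasets import load_dataset

dataset = load_dataset("your-username/nli-es")  # placeholder repository id

# Hold out 10% of the training data as a development set.
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_set, dev_set = splits["train"], splits["test"]
print(len(train_set), len(dev_set))
```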

  ## Dataset Creation

  ### Curation Rationale

+ This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating the original SNLI dataset into Spanish using Argos. While machine translation is far from an ideal source for a semantic classification task, it helps enlarge the amount of data available.
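
The exact translation pipeline is not documented in this card; purely as an illustration of the kind of step involved, here is a minimal English-to-Spanish sketch with the open-source Argos Translate Python package (argostranslate). This is not necessarily the setup the authors used.

```python
import argostranslate.package
import argostranslate.translate

# Download and install the English -> Spanish model once.
argostranslate.package.update_package_index()
en_es = next(
    pkg for pkg in argostranslate.package.get_available_packages()
    if pkg.from_code == "en" and pkg.to_code == "es"
)
argostranslate.package.install_from_path(en_es.download())

# Translate an SNLI-style premise.
premise_en = "A man inspects the uniform of a figure in some East Asian country."
premise_es = argostranslate.translate.translate(premise_en, "en", "es")
print(premise_es)
```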

  ### Source Data

  #### Initial Data Collection and Normalization

- [Needs More Information]
+ Please refer to the documentation of the original datasets:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  #### Who are the source language producers?

- [Needs More Information]
+ Please refer to the documentation of the original datasets:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  ### Annotations

  #### Annotation process

- [Needs More Information]
+ Please refer to the documentation of the original datasets:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  #### Who are the annotators?

- [Needs More Information]
+ Please refer to the documentation of the original datasets:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  ### Personal and Sensitive Information

- [Needs More Information]
+ In general, no sensitive information is conveyed in the sentences.
+ Please refer to the documentation of the original datasets:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  ## Considerations for Using the Data

  ### Social Impact of Dataset

- [Needs More Information]
+ The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.

  ### Discussion of Biases

- [Needs More Information]
+ Please refer to the documentation of the original datasets:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  ### Other Known Limitations

- [Needs More Information]
+ The translation of the sentences was mostly unsupervised and may introduce some noise into the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
+ For discussion of the biases and limitations of the original datasets, please refer to their documentation:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  ## Additional Information

  ### Dataset Curators

- [Needs More Information]
+ The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.

  ### Licensing Information

- [Needs More Information]
+ This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
+ Please refer to the documentation of the original datasets for information on their licenses:
+ https://nlp.stanford.edu/projects/snli/
+ https://arxiv.org/pdf/1809.05053.pdf
+ https://cims.nyu.edu/~sbowman/multinli/

  ### Citation Information

- [Needs More Information]
+ If you need to cite this dataset, you can link to this README.