Modalities: Text · Formats: parquet · Libraries: Datasets, pandas
lhoestq (HF staff) committed
Commit 04d6090 · 1 parent: 5f13a8c

add dataset_info in dataset metadata
Files changed (1): README.md (+162, -1)
@@ -33,6 +33,167 @@ configs:
  - es-fr
  - es-sv
  - fr-sv
36
+ dataset_info:
37
+ - config_name: de-en
38
+ features:
39
+ - name: id
40
+ dtype: string
41
+ - name: translation
42
+ dtype:
43
+ translation:
44
+ languages:
45
+ - de
46
+ - en
47
+ splits:
48
+ - name: train
49
+ num_bytes: 38683
50
+ num_examples: 177
51
+ download_size: 16029
52
+ dataset_size: 38683
53
+ - config_name: de-es
54
+ features:
55
+ - name: id
56
+ dtype: string
57
+ - name: translation
58
+ dtype:
59
+ translation:
60
+ languages:
61
+ - de
62
+ - es
63
+ splits:
64
+ - name: train
65
+ num_bytes: 2316
66
+ num_examples: 24
67
+ download_size: 2403
68
+ dataset_size: 2316
69
+ - config_name: de-fr
70
+ features:
71
+ - name: id
72
+ dtype: string
73
+ - name: translation
74
+ dtype:
75
+ translation:
76
+ languages:
77
+ - de
78
+ - fr
79
+ splits:
80
+ - name: train
81
+ num_bytes: 41300
82
+ num_examples: 173
83
+ download_size: 16720
84
+ dataset_size: 41300
85
+ - config_name: de-sv
86
+ features:
87
+ - name: id
88
+ dtype: string
89
+ - name: translation
90
+ dtype:
91
+ translation:
92
+ languages:
93
+ - de
94
+ - sv
95
+ splits:
96
+ - name: train
97
+ num_bytes: 37414
98
+ num_examples: 178
99
+ download_size: 15749
100
+ dataset_size: 37414
101
+ - config_name: en-es
102
+ features:
103
+ - name: id
104
+ dtype: string
105
+ - name: translation
106
+ dtype:
107
+ translation:
108
+ languages:
109
+ - en
110
+ - es
111
+ splits:
112
+ - name: train
113
+ num_bytes: 2600
114
+ num_examples: 25
115
+ download_size: 2485
116
+ dataset_size: 2600
117
+ - config_name: en-fr
118
+ features:
119
+ - name: id
120
+ dtype: string
121
+ - name: translation
122
+ dtype:
123
+ translation:
124
+ languages:
125
+ - en
126
+ - fr
127
+ splits:
128
+ - name: train
129
+ num_bytes: 39503
130
+ num_examples: 175
131
+ download_size: 16038
132
+ dataset_size: 39503
133
+ - config_name: en-sv
134
+ features:
135
+ - name: id
136
+ dtype: string
137
+ - name: translation
138
+ dtype:
139
+ translation:
140
+ languages:
141
+ - en
142
+ - sv
143
+ splits:
144
+ - name: train
145
+ num_bytes: 35778
146
+ num_examples: 180
147
+ download_size: 15147
148
+ dataset_size: 35778
149
+ - config_name: es-fr
150
+ features:
151
+ - name: id
152
+ dtype: string
153
+ - name: translation
154
+ dtype:
155
+ translation:
156
+ languages:
157
+ - es
158
+ - fr
159
+ splits:
160
+ - name: train
161
+ num_bytes: 2519
162
+ num_examples: 21
163
+ download_size: 2469
164
+ dataset_size: 2519
165
+ - config_name: es-sv
166
+ features:
167
+ - name: id
168
+ dtype: string
169
+ - name: translation
170
+ dtype:
171
+ translation:
172
+ languages:
173
+ - es
174
+ - sv
175
+ splits:
176
+ - name: train
177
+ num_bytes: 3110
178
+ num_examples: 28
179
+ download_size: 2726
180
+ dataset_size: 3110
181
+ - config_name: fr-sv
182
+ features:
183
+ - name: id
184
+ dtype: string
185
+ - name: translation
186
+ dtype:
187
+ translation:
188
+ languages:
189
+ - fr
190
+ - sv
191
+ splits:
192
+ - name: train
193
+ num_bytes: 38627
194
+ num_examples: 175
195
+ download_size: 15937
196
+ dataset_size: 38627
  ---
 
  # Dataset Card for [Dataset Name]
@@ -169,4 +330,4 @@ English (en), Spanish (es), German (de), French (fr), Swedish (sv)
 
  ### Contributions
 
- Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset.
+ Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset.
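
For reference, each config_name declared in the dataset_info block above is a loadable configuration of this dataset, and the declared features describe what the Datasets library returns per example. A minimal sketch of that usage follows; the repo id is a placeholder, since the dataset's actual Hub id does not appear in this diff.

from datasets import load_dataset

# Placeholder repo id: substitute the dataset's actual id on the Hub.
ds = load_dataset("<namespace>/<dataset-name>", "de-en", split="train")

# Per the metadata, the "de-en" config's train split declares 177 examples and
# two features: a string "id" and a "translation" dict keyed by the config's
# two language codes ("de" and "en").
print(len(ds))         # expected to match num_examples (177 per the metadata)
print(ds.features)     # shows the id (string) and translation (Translation) schema

example = ds[0]
print(example["id"])
print(example["translation"]["de"], "->", example["translation"]["en"])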