Update README.md

README.md CHANGED
@@ -1,4 +1,25 @@
 ---
+language:
+- en
+- multilingual
+- ar
+- cs
+- de
+- es
+- fr
+- it
+- ja
+- nl
+- pt
+- ru
+size_categories:
+- 100K<n<1M
+task_categories:
+- feature-extraction
+- sentence-similarity
+pretty_name: News-Commentary
+tags:
+- sentence-transformers
 dataset_info:
 - config_name: all
   features:
@@ -178,3 +199,48 @@ configs:
   - split: train
     path: en-ru/train-*
 ---
+
+# Dataset Card for Parallel Sentences - News Commentary
+
+This dataset contains parallel sentences (i.e., an English sentence paired with the same sentence in another language) for numerous languages. Most of the sentences originate from the [OPUS website](https://opus.nlpl.eu/).
+In particular, this dataset contains the [News-Commentary](https://opus.nlpl.eu/News-Commentary/corpus/version/News-Commentary) dataset.
+
+## Related Datasets
+
+The following datasets are also part of the Parallel Sentences collection:
+* [parallel-sentences-europarl](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-europarl)
+* [parallel-sentences-global-voices](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-global-voices)
+* [parallel-sentences-muse](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-muse)
+* [parallel-sentences-jw300](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-jw300)
+* [parallel-sentences-news-commentary](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-news-commentary)
+* [parallel-sentences-opensubtitles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-opensubtitles)
+* [parallel-sentences-talks](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks)
+* [parallel-sentences-tatoeba](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-tatoeba)
+* [parallel-sentences-wikimatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikimatrix)
+* [parallel-sentences-wikititles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikititles)
+
+These datasets can be used to train multilingual sentence embedding models. For more information, see [sbert.net - Multilingual Models](https://www.sbert.net/examples/training/multilingual/README.html).
+
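As a usage sketch (editorial, not part of the card's diff): the language-pair subsets can be loaded with the 🤗 `datasets` library. The `en-ru` config name comes from the card's `configs` section; the helper name `load_pair` is invented for illustration.

```python
def load_pair(config: str = "en-ru"):
    """Load one language-pair subset of this dataset (train split only).

    Requires the `datasets` library (`pip install datasets`); the import is
    kept inside the function so the sketch itself has no hard dependency.
    """
    from datasets import load_dataset

    return load_dataset(
        "sentence-transformers/parallel-sentences-news-commentary",
        config,
        split="train",
    )

# Each row then pairs an English sentence with its translation:
#   row = load_pair("en-ru")[0]
#   row["english"], row["non_english"]
```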
+## Dataset Subsets
+
+### `all` subset
+
+* Columns: "english", "non_english"
+* Column types: `str`, `str`
+* Examples:
+```python
+
+```
+* Collection strategy: Combining all other subsets from this dataset.
+* Deduplicated: No
+
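The "combining all other subsets" strategy for the `all` config amounts to concatenating the per-pair subsets without deduplication. A minimal pure-Python sketch (the sentence pairs below are invented placeholders, not rows from the dataset):

```python
# Invented placeholder rows standing in for two language-pair subsets.
en_de = [{"english": "Hello.", "non_english": "Hallo."}]
en_fr = [{"english": "Hello.", "non_english": "Bonjour."}]

# The "all" subset simply concatenates every pair subset, so the same
# English sentence can appear once per language pair.
all_rows = en_de + en_fr
print(len(all_rows))  # 2
```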
+### `en-...` subsets
+
+* Columns: "english", "non_english"
+* Column types: `str`, `str`
+* Examples:
+```python
+
+```
+* Collection strategy: Processing the raw data from [parallel-sentences](https://huggingface.co/datasets/sentence-transformers/parallel-sentences) and formatting it as Parquet, followed by deduplication.
+* Deduplicated: Yes
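The deduplication step mentioned above can be sketched in plain Python; the helper name and the sentence pairs are invented for illustration, and this is not the actual processing code used for the dataset.

```python
def deduplicate(rows):
    """Drop exact duplicate (english, non_english) pairs, keeping the first occurrence."""
    seen = set()
    unique = []
    for row in rows:
        key = (row["english"], row["non_english"])
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

pairs = [
    {"english": "Hello.", "non_english": "Hallo."},
    {"english": "Hello.", "non_english": "Hallo."},  # exact duplicate
    {"english": "Thanks.", "non_english": "Danke."},
]
print(len(deduplicate(pairs)))  # 2
```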