Languages: German
Multilinguality: monolingual
Size Categories: 10M<n<100M
PhilipMay committed
Commit 09ece4c
1 Parent(s): a99ca95

improve formatting

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -18,15 +18,15 @@ The source of the paraphrases are different parallel German / English text corpo
  The English texts were machine translated back into German. This is how the paraphrases were obtained.
 
  ## Columns description
- - `uuid`: a uuid calculated with Python `uuid.uuid4()`
- - `de`: the original German texts from the corpus
- - `en_de`: the German texts translated back from English
- - `corpus`: the name of the corpus
- - `min_char_len`: the number of characters of the shortest text
- - `jaccard_similarity`: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
- - `de_token_count`: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- - `en_de_token_count`: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- - `cos_sim`: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
+ - **`uuid`**: a uuid calculated with Python `uuid.uuid4()`
+ - **`de`**: the original German texts from the corpus
+ - **`en_de`**: the German texts translated back from English
+ - **`corpus`**: the name of the corpus
+ - **`min_char_len`**: the number of characters of the shortest text
+ - **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
+ - **`de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
+ - **`en_de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
+ - **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
 
  ## Load this dataset with Pandas
  If you want to download the csv file and then load it with Pandas you can do it like this:
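The hunk above ends right before the Pandas snippet it announces. A minimal sketch of what that loading step could look like, assuming the dataset ships as a single CSV file (the `train.csv` filename is illustrative, not taken from the commit):

```python
import pandas as pd

# Hypothetical filename: adjust to the CSV actually distributed with the dataset.
df = pd.read_csv("train.csv")

# The columns documented in the README should then be available directly.
print(df[["uuid", "de", "en_de", "corpus", "min_char_len",
          "jaccard_similarity", "de_token_count", "en_de_token_count",
          "cos_sim"]].head())
```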
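The column descriptions also name the tools behind the derived fields (`uuid.uuid4()`, the deepset/gbert-large tokenizer, the Jaccard index, and the multilingual mpnet sentence embedder). The following is only a rough, unofficial sketch of how such values could be reproduced for one `de` / `en_de` pair under those assumptions; in particular, computing the Jaccard similarity over the token sets is a guess, since the README only says "of both sentences":

```python
import uuid

from transformers import AutoTokenizer
from sentence_transformers import SentenceTransformer, util

de_text = "Das ist ein Beispielsatz."
en_de_text = "Dies ist ein Beispielsatz."

# `uuid` column: a random UUID per row
row_uuid = str(uuid.uuid4())

# Token counts with the gbert-large tokenizer referenced in the README
tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
de_tokens = tokenizer.tokenize(de_text)
en_de_tokens = tokenizer.tokenize(en_de_text)
de_token_count = len(de_tokens)
en_de_token_count = len(en_de_tokens)

# `min_char_len`: character count of the shorter of the two texts
min_char_len = min(len(de_text), len(en_de_text))

# `jaccard_similarity`: intersection over union of the two token sets
# (assumption: token-level computation)
set_de, set_en_de = set(de_tokens), set(en_de_tokens)
jaccard_similarity = len(set_de & set_en_de) / len(set_de | set_en_de)

# `cos_sim`: cosine similarity of the sentence embeddings from the mpnet model
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
emb = model.encode([de_text, en_de_text])
cos_sim = util.cos_sim(emb[0], emb[1]).item()

print(row_uuid, de_token_count, en_de_token_count, min_char_len,
      jaccard_similarity, cos_sim)
```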