PhilipMay committed
Commit 5ca6d47
1 parent: 89e5502

add columns description

Files changed (1): README.md (+10, -9)
README.md CHANGED
@@ -18,15 +18,15 @@ The source of the paraphrases are different parallel German / English text corpo
  The English texts were machine translated back into German. This is how the paraphrases were obtained.

  ## Columns description
- - `uuid`: xxx
- - `de`: xxx
- - `en_de`: xxx
- - `corpus`: xxx
- - `min_char_len`: xxx
- - `jaccard_similarity`: xxx
- - `de_token_count`: xxx
- - `en_de_token_count`: xxx
- - `cos_sim`: xxx
+ - `uuid`: a uuid calculated with Python `uuid.uuid4()`
+ - `de`: the original German texts from the corpus
+ - `en_de`: the German texts translated back from English
+ - `corpus`: the name of the corpus
+ - `min_char_len`: the number of characters of the shorter of the two texts
+ - `jaccard_similarity`: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences
+ - `de_token_count`: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
+ - `en_de_token_count`: number of tokens of the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
+ - `cos_sim`: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)

  ## Load this dataset with Pandas
  If you want to download the csv file and then load it with Pandas you can do it like this:
@@ -48,6 +48,7 @@ df = pd.read_csv("train.csv")
  ## To-do
  - add column description
  - upload dataset
+ - add jaccard calculation

  ## Back translation
  We have made the back translation from English to German with the help of [Fairseq](https://github.com/facebookresearch/fairseq).
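
As a rough illustration of the column descriptions added in this commit, here is a minimal sketch of how such values could be computed. It is not the dataset's actual build script: it assumes a simple word-level Jaccard similarity, and the helper names (`jaccard_similarity`, `describe_pair`) are made up for the example. Only the tokenizer (deepset/gbert-large) and the sentence-transformers model (paraphrase-multilingual-mpnet-base-v2) are taken from the descriptions themselves.

```python
# Illustrative sketch only -- not the actual script used to build the dataset.
# Assumptions: word-level Jaccard on lowercased words; token counts from the
# deepset/gbert-large tokenizer; cosine similarity from the
# paraphrase-multilingual-mpnet-base-v2 sentence-transformers model.
import uuid

from transformers import AutoTokenizer
from sentence_transformers import SentenceTransformer, util

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")


def jaccard_similarity(a: str, b: str) -> float:
    # Word-level Jaccard: |intersection| / |union| of the two word sets.
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)


def describe_pair(de: str, en_de: str) -> dict:
    # Build one row with the columns described in the README.
    embeddings = model.encode([de, en_de])
    return {
        "uuid": str(uuid.uuid4()),
        "de": de,
        "en_de": en_de,
        "min_char_len": min(len(de), len(en_de)),
        "jaccard_similarity": jaccard_similarity(de, en_de),
        "de_token_count": len(tokenizer.tokenize(de)),
        "en_de_token_count": len(tokenizer.tokenize(en_de)),
        "cos_sim": float(util.cos_sim(embeddings[0], embeddings[1])),
    }
```

The word-level Jaccard here is only an assumption; the README does not yet specify how `jaccard_similarity` is calculated, which is also why "add jaccard calculation" appears in the to-do list.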
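
The Pandas snippet referenced under "Load this dataset with Pandas" is collapsed in the diff above; only its last line, `df = pd.read_csv("train.csv")`, is visible in the second hunk header. A minimal sketch of such a load, assuming the CSV has already been downloaded to the working directory as `train.csv`, could look like this:

```python
# Minimal sketch, assuming the dataset CSV has already been downloaded
# as "train.csv" (the README's actual snippet is collapsed out of the diff).
import pandas as pd

df = pd.read_csv("train.csv")
print(df.columns.tolist())  # uuid, de, en_de, corpus, min_char_len, ...
print(df.head())
```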