Languages: German
Multilinguality: monolingual
Size Categories: 10M<n<100M
PhilipMay committed
Commit a99ca95
1 Parent(s): 5ca6d47

add jaccard calculation

Files changed (1)
  1. README.md +28 -2
README.md CHANGED
@@ -23,7 +23,7 @@ The English texts were machine translated back into German. This is how the para
   - `en_de`: the German texts translated back from English
   - `corpus`: the name of the corpus
   - `min_char_len`: the number of characters of the shortest text
- - `jaccard_similarity`: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences
   - `de_token_count`: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
   - `en_de_token_count`: number of tokens of the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
   - `cos_sim`: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
@@ -48,7 +48,6 @@ df = pd.read_csv("train.csv")
   ## To-do
   - add column description
   - upload dataset
- - add jaccard calculation

   ## Back translation
   We performed the back translation from English to German using [Fairseq](https://github.com/facebookresearch/fairseq).
@@ -64,6 +63,33 @@ en2de = torch.hub.load(
   )
   ```
   ## Citations & Acknowledgements

   **OpenSubtitles**
 
   - `en_de`: the German texts translated back from English
   - `corpus`: the name of the corpus
   - `min_char_len`: the number of characters of the shortest text
+ - `jaccard_similarity`: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
   - `de_token_count`: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
   - `en_de_token_count`: number of tokens of the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
   - `cos_sim`: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
 
   ## To-do
   - add column description
   - upload dataset

   ## Back translation
   We performed the back translation from English to German using [Fairseq](https://github.com/facebookresearch/fairseq).
 
   )
   ```

+ ## How the Jaccard similarity was calculated
+ To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index)
+ we use the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo)
+ to split the texts into tokens.
+ We then `lower()` the tokens so that upper and lower case letters no longer make a difference. Below you can find a code snippet with the details:
+
+ ```python
+ from somajo import SoMaJo
+
+ LANGUAGE = "de_CMC"
+ somajo_tokenizer = SoMaJo(LANGUAGE)
+
+ def get_token_set(text, somajo_tokenizer):
+     sentences = somajo_tokenizer.tokenize_text([text])
+     tokens = [t.text.lower() for sentence in sentences for t in sentence]
+     token_set = set(tokens)
+     return token_set
+
+ def jaccard_similarity(text1, text2, somajo_tokenizer):
+     token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer)
+     token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer)
+     intersection = token_set1.intersection(token_set2)
+     union = token_set1.union(token_set2)
+     jaccard_similarity = float(len(intersection)) / len(union)
+     return jaccard_similarity
+ ```
+
   ## Citations & Acknowledgements

   **OpenSubtitles**
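
---

For illustration, the Jaccard computation added in this commit can be exercised without the SoMaJo dependency by swapping in a naive whitespace tokenizer. This is only a sketch: the whitespace tokenizer and the sample sentences are assumptions for the example, while the dataset itself uses SoMaJo as shown above.

```python
# Sketch: Jaccard similarity over lowercased token sets, following the
# snippet above, with a naive whitespace tokenizer standing in for SoMaJo.

def get_token_set(text):
    # Lowercase first so that upper and lower case make no difference.
    return set(text.lower().split())

def jaccard_similarity(text1, text2):
    token_set1 = get_token_set(text1)
    token_set2 = get_token_set(text2)
    intersection = token_set1.intersection(token_set2)
    union = token_set1.union(token_set2)
    return float(len(intersection)) / len(union)

sim = jaccard_similarity("Das ist ein Test", "das ist kein Test")
print(sim)  # 3 shared tokens out of 5 in the union -> 0.6
```

With a real tokenizer such as SoMaJo, punctuation is split into separate tokens, so the scores on actual dataset rows can differ from this whitespace-based approximation.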