nreimers committed on
Commit c997db5
1 Parent(s): 2eb8292

Update README.md

Files changed (1):
  1. README.md +38 -23
README.md CHANGED
@@ -8,7 +8,6 @@ language:
 multilinguality:
 - multilingual
 
- pretty_name: MIRACL-corpus
 size_categories: []
 source_datasets: []
 tags: []
@@ -23,7 +22,7 @@ task_ids:
 - document-retrieval
 ---
 
- # MIRACL (SW) embedded with cohere.ai `multilingual-22-12` encoder
+ # MIRACL (sw) embedded with cohere.ai `multilingual-22-12` encoder
 
 We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
 
@@ -43,6 +42,8 @@ We compute for `title+" "+text` the embeddings using our `multilingual-22-12` em
 
 ## Loading the dataset
 
+ In [miracl-sw-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
+
 You can either load the dataset like this:
 ```python
 from datasets import load_dataset
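Once the corpus is loaded, the per-document embedding vectors are typically stacked into a single matrix so they can be searched efficiently. A minimal sketch with dummy records standing in for the real dataset rows (the field names `docid` and `emb` are assumptions for illustration, not confirmed by this diff):

```python
import numpy as np

# Dummy records standing in for rows of the loaded dataset;
# the real rows would come from load_dataset(...) as in the snippet above.
docs = [
    {"docid": "doc1", "emb": [0.1, 0.3, 0.2]},
    {"docid": "doc2", "emb": [0.4, 0.1, 0.9]},
]

# Stack the per-document embeddings into one (n_docs, dim) matrix.
doc_ids = [d["docid"] for d in docs]
doc_embeddings = np.asarray([d["emb"] for d in docs], dtype=np.float32)

print(doc_embeddings.shape)  # (2, 3)
```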
@@ -63,7 +64,7 @@ for doc in docs:
 
 ## Search
 
- Have a look at [miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) where we provide also the query embeddings for the MIRACL dataset.
+ Have a look at [miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
 
 To search in the documents, you must use **dot-product**.
 
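Dot-product search, as required above, can be sketched with numpy as follows (the embedding values are made up for illustration; with the real data, `query_embedding` would come from the `multilingual-22-12` model and `doc_embeddings` from this corpus):

```python
import numpy as np

# Toy stand-ins for the real query and corpus embeddings.
doc_embeddings = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.4]], dtype=np.float32)
query_embedding = np.array([0.9, 0.1], dtype=np.float32)

# Dot-product score between the query and every document.
scores = doc_embeddings @ query_embedding

# Document indices ranked by decreasing score.
top_k = np.argsort(-scores)
print(top_k[0])  # index of the best-matching document -> 1
```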
@@ -114,24 +115,38 @@ query_embedding = response.embeddings[0] # Get the embedding for the first text
 
 ## Performance
 
- In the following table we provide the nDCG@10 scores for the cohere multilingual-22-12 model in comparison to BM25 lexical search and mDPR (as provided in the [MIRACL paper](https://arxiv.org/abs/2210.09984))
-
- | Model | cohere multilingual-22-12 | BM25 lexical search | mDPR |
- |-------|---------------------------|--------------------|------|
- | miracl-ar | **64.2** | 48.1 | 49.9 |
- | miracl-bn | **61.5** | 50.8 | 44.3 |
- | miracl-es | **47.0** | 31.9 | 47.8 |
- | miracl-fa | **44.8** | 33.3 | 48 |
- | miracl-fi | **63.7** | 55.1 | 47.2 |
- | miracl-fr | **46.8** | 18.3 | 43.5 |
- | miracl-hi | **50.7** | 45.8 | 38.3 |
- | miracl-id | 44.8 | **44.9** | 27.2 |
- | miracl-ja | **49.0** | 36.9 | 43.9 |
- | miracl-ko | **50.9** | 41.9 | 41.9 |
- | miracl-ru | **49.2** | 33.4 | 40.7 |
- | miracl-sw | **61.4** | 38.3 | 29.9 |
- | miracl-te | **67.8** | 49.4 | 35.6 |
- | miracl-th | **60.2** | 48.4 | 35.8 |
- | miracl-zh | 43.8 | 18 | **51.2** |
- | **Avg** | **53.7** | 39.6 | 41.7 |
+ In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
+
+ We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
+
+ Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
+
+ | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
+ |---|---|---|---|---|
+ | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
+ | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
+ | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
+ | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
+ | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
+ | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
+ | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
+ | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
+ | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
+ | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
+ | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
+
+ Further languages (not supported by Elasticsearch):
+ | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
+ |---|---|---|
+ | miracl-fa | 44.8 | 53.6 |
+ | miracl-ja | 49.0 | 61.0 |
+ | miracl-ko | 50.9 | 64.8 |
+ | miracl-sw | 61.4 | 74.5 |
+ | miracl-te | 67.8 | 72.3 |
+ | miracl-th | 60.2 | 71.9 |
+ | miracl-yo | 56.4 | 62.2 |
+ | miracl-zh | 43.8 | 56.5 |
+ | **Avg** | 54.3 | 64.6 |
 
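The hit@3 metric used in the performance tables can be sketched in a few lines of plain Python (a toy example with made-up rankings and relevance judgments, not the actual MIRACL evaluation code):

```python
# For each query: the ranked doc ids returned, and the set of relevant doc ids.
ranked_results = [
    ["d1", "d5", "d2", "d9"],   # query 1
    ["d7", "d8", "d3", "d1"],   # query 2
]
relevant = [
    {"d2", "d4"},               # query 1: d2 appears at rank 3 -> hit
    {"d9"},                     # query 2: d9 not in the top-3 -> miss
]

# hit@3: fraction of queries with at least one relevant doc in the top-3.
hits = sum(
    1 for ranking, rel in zip(ranked_results, relevant)
    if any(doc in rel for doc in ranking[:3])
)
hit_at_3 = hits / len(ranked_results)
print(hit_at_3)  # 0.5
```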
 
 