nreimers committed
Commit 0523de7
1 Parent(s): 5d952dc

Update README.md

Files changed (1)
  1. README.md +149 -41
README.md CHANGED
@@ -1,44 +1,152 @@
  ---
- dataset_info:
-   features:
-   - name: query_id
-     dtype: string
-   - name: query
-     dtype: string
-   - name: positive_passages
-     list:
-     - name: docid
-       dtype: string
-     - name: text
-       dtype: string
-     - name: title
-       dtype: string
-   - name: negative_passages
-     list:
-     - name: docid
-       dtype: string
-     - name: text
-       dtype: string
-     - name: title
-       dtype: string
-   - name: emb
-     sequence: float32
-   splits:
-   - name: dev
-     num_bytes: 8952102
-     num_examples: 799
-   - name: testB
-     num_bytes: 5640892
-     num_examples: 1790
-   - name: train
-     num_bytes: 31729691
-     num_examples: 2863
-   - name: testA
-     num_bytes: 2302195
-     num_examples: 734
-   download_size: 40594349
-   dataset_size: 48624880
  ---
- # Dataset Card for "miracl-en-queries-22-12"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  ---
+ annotations_creators:
+ - expert-generated
+
+ language:
+ - en
+
+ multilinguality:
+ - multilingual
+
+ size_categories: []
+ source_datasets: []
+ tags: []
+
+ task_categories:
+ - text-retrieval
+
+ license:
+ - apache-2.0
+
+ task_ids:
+ - document-retrieval
  ---
 
+ # MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder
+
+ We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
+
+ The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).
+
+ For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
+
+
+ Dataset info:
+ > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
+ >
+ > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
+
+ ## Embeddings
+ We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
+
+
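+ For illustration, a minimal sketch of how such document embeddings can be computed with the Cohere API (the `documents` list and the `api_key` variable below are placeholders, not part of this dataset):
+ ```python
+ # Sketch: embed title + " " + text for a few documents via the Cohere API.
+ # `api_key` is assumed to hold your Cohere API key; the documents are toy examples.
+ import cohere
+
+ co = cohere.Client(api_key)
+ documents = [
+     {"title": "Example title", "text": "Example passage text."},
+ ]
+ texts = [doc["title"] + " " + doc["text"] for doc in documents]
+ response = co.embed(texts=texts, model='multilingual-22-12')
+ doc_embeddings = response.embeddings  # one embedding vector per input text
+ ```
+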
+ ## Loading the dataset
+
+ In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
+
+ You can either load the dataset like this:
+ ```python
+ from datasets import load_dataset
+ docs = load_dataset("Cohere/miracl-en-corpus-22-12", split="train")
+ ```
+
+ Or you can stream it without downloading it first:
+ ```python
+ from datasets import load_dataset
+ docs = load_dataset("Cohere/miracl-en-corpus-22-12", split="train", streaming=True)
+
+ for doc in docs:
+     docid = doc['docid']
+     title = doc['title']
+     text = doc['text']
+     emb = doc['emb']
+ ```
+
+ ## Search
+
+ Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12), where we provide the query embeddings for the MIRACL dataset.
+
+ To search in the documents, you must use **dot-product**.
+
+ Compare the query embeddings against the document embeddings either with a vector database (recommended; see the index-based sketch after the example below) or by computing the dot product directly.
+
+ A full search example:
+ ```python
+ # Attention! For large datasets, this requires a lot of memory to store
+ # all document embeddings and to compute the dot product scores.
+ # Only use this for smaller datasets. For large datasets, use a vector DB.
+
+ from datasets import load_dataset
+ import torch
+
+ # Load documents + embeddings
+ docs = load_dataset("Cohere/miracl-en-corpus-22-12", split="train")
+ doc_embeddings = torch.tensor(docs['emb'])
+
+ # Load queries
+ queries = load_dataset("Cohere/miracl-en-queries-22-12", split="dev")
+
+ # Select the first query as example
+ qid = 0
+ query = queries[qid]
+ query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, dim)
+
+ # Compute dot scores between the query embedding and all document embeddings
+ dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
+ top_k = torch.topk(dot_scores, k=3)
+
+ # Print results
+ print("Query:", query['query'])
+ for doc_id in top_k.indices[0].tolist():
+     print(docs[doc_id]['title'])
+     print(docs[doc_id]['text'])
+ ```
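+
+ As a rough sketch of the index-based route (using a FAISS inner-product index purely as an example of a vector index; `docs` and `query` are the variables from the example above, and for very large corpora you would typically use an approximate index or a hosted vector database):
+ ```python
+ # Sketch: dot-product search served from a FAISS inner-product index.
+ # Run: pip install faiss-cpu
+ import faiss
+ import numpy as np
+
+ doc_embeddings_np = np.asarray(docs['emb'], dtype='float32')
+ index = faiss.IndexFlatIP(doc_embeddings_np.shape[1])  # exact inner-product (dot-product) index
+ index.add(doc_embeddings_np)
+
+ query_np = np.asarray([query['emb']], dtype='float32')
+ scores, doc_ids = index.search(query_np, 3)  # top-3 documents for the query
+
+ for doc_id in doc_ids[0].tolist():
+     print(docs[doc_id]['title'])
+ ```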
+
+ You can get embeddings for new queries using our API:
+ ```python
+ # Run: pip install cohere
+ import cohere
+ co = cohere.Client(api_key)  # You should add your Cohere API key here
+ texts = ['my search query']
+ response = co.embed(texts=texts, model='multilingual-22-12')
+ query_embedding = response.embeddings[0]  # Get the embedding for the first text
+ ```
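+
+ The returned embedding can then be scored against the corpus embeddings exactly as in the full search example above, for instance `dot_scores = torch.mm(torch.tensor([query_embedding]), doc_embeddings.transpose(0, 1))`, or used as the query vector in the index-based sketch.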
+
+ ## Performance
+
+ In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
+
+
+ We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it reflects the fraction of queries for which a relevant document is found among the top-3 results.
+
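+ For illustration, hit@3 can be computed from the query annotations roughly like this (a minimal sketch; `retrieve_top3` is a placeholder for any retrieval step, such as the dot-product search above, that returns the top-3 docids for a query):
+ ```python
+ # Sketch: compute hit@3 over the dev queries.
+ # `retrieve_top3(query)` is a placeholder returning a list of 3 docids.
+ from datasets import load_dataset
+
+ queries = load_dataset("Cohere/miracl-en-queries-22-12", split="dev")
+
+ hits = 0
+ for query in queries:
+     relevant_docids = {p['docid'] for p in query['positive_passages']}
+     top3_docids = retrieve_top3(query)  # placeholder retrieval step
+     if any(docid in relevant_docids for docid in top3_docids):
+         hits += 1
+
+ print("hit@3:", hits / len(queries))
+ ```
+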
+ Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The true nDCG@10 and hit@3 performance is likely higher than reported.
+
+
+ | Dataset | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
+ |---|---|---|---|---|
+ | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
+ | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
+ | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
+ | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
+ | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
+ | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
+ | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
+ | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
+ | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
+ | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
+ | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
+
+ Further languages (not supported by Elasticsearch):
+ | Dataset | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
+ |---|---|---|
+ | miracl-fa | 44.8 | 53.6 |
+ | miracl-ja | 49.0 | 61.0 |
+ | miracl-ko | 50.9 | 64.8 |
+ | miracl-sw | 61.4 | 74.5 |
+ | miracl-te | 67.8 | 72.3 |
+ | miracl-th | 60.2 | 71.9 |
+ | miracl-yo | 56.4 | 62.2 |
+ | miracl-zh | 43.8 | 56.5 |
+ | **Avg** | 54.3 | 64.6 |
+