---
license: cc-by-nc-nd-4.0
language:
- hi
datasets:
- MIRACL
tags:
- miniMiracle
- passage-retrieval
- knowledge-distillation
- middle-training
pretty_name: >-
  miniMiracle is a family of high-quality, lightweight, and easy-to-deploy
  multilingual embedders / retrievers, primarily focused on Indo-Aryan and
  Indo-Dravidian languages.
library_name: transformers
pipeline_tag: sentence-similarity
---


<center>
<img src="./logo.png" width=150/>
<img src="./hi_intro.png" width=120%/>
</center>


<center>
<img src="./hi_metrics_1.png" width=110%/>
<b><p>Table 1: Hindi retrieval performance on the MIRACL dev set (measured by nDCG@10)</p></b>
</center>


<br/>

<center>
<h1> Table Of Contents </h1>
</center>

- [License and Terms:](#license-and-terms)
- [Detailed comparison & Our Contribution:](#detailed-comparison--our-contribution)
- [ONNX & GGUF Variants:](#detailed-comparison--our-contribution)
- [Usage:](#usage)
  - [With Sentence Transformers:](#with-sentence-transformers)
  - [With Huggingface Transformers:](#with-huggingface-transformers)
- [How do I optimise vector index cost?](#how-do-i-optimise-vector-index-cost)
- [How do I offer hybrid search to address Vocabulary Mismatch Problem?](#how-do-i-offer-hybrid-search-to-address-vocabulary-mismatch-problem)
- [Notes on Reproducing:](#notes-on-reproducing)
- [Reference:](#reference)
- [Note on model bias](#note-on-model-bias)

## License and Terms:

<center>
<img src="./terms.png" width=200%/>
</center>


## Detailed comparison & Our Contribution:

English famously has the **all-minilm** series of models, which are great for quick experimentation and for certain production workloads. The idea is to offer the same for other popular languages, starting with the Indo-Aryan and Indo-Dravidian languages. Our innovation is in bringing high-quality models that are easy to serve and whose embeddings are cheaper to store, without ANY pretraining or expensive finetuning. For instance, the **all-minilm** models are finetuned on 1 billion pairs; we offer a very lean model, but with a huge vocabulary of around 250K tokens.
We will add more details here.


<center>
<img src="./hi_metrics_2.png" width=120%/>
<b><p>Table 2: Detailed Hindi retrieval performance on the MIRACL dev set (measured by nDCG@10)</p></b>
</center>

The full set of evaluation numbers for our model:

```python
{'NDCG@1': 0.42571, 'NDCG@3': 0.42062, 'NDCG@5': 0.44842, 'NDCG@10': 0.5039, 'NDCG@100': 0.56175, 'NDCG@1000': 0.57772}
{'MAP@1': 0.22683, 'MAP@3': 0.33514, 'MAP@5': 0.37345, 'MAP@10': 0.40861, 'MAP@100': 0.42833, 'MAP@1000': 0.42916}
{'Recall@10': 0.63964, 'Recall@50': 0.80537, 'Recall@100': 0.87136, 'Recall@200': 0.9211, 'Recall@500': 0.96851, 'Recall@1000': 0.97987}
{'P@1': 0.42571, 'P@3': 0.27429, 'P@5': 0.212, 'P@10': 0.13943, 'P@100': 0.01911, 'P@1000': 0.00211}
{'MRR@10': 0.53057, 'MRR@100': 0.53736, 'MRR@1000': 0.5377}
```

<br/>

## Usage:

#### With Sentence Transformers:

```python
import scipy.spatial
from sentence_transformers import SentenceTransformer

# Load the retriever (model id assumed from this card: prithivida/miniMiracle_hi_v1).
model = SentenceTransformer('prithivida/miniMiracle_hi_v1')

corpus = [
    'एक आदमी खाना खा रहा है।',  # A man is eating food.
    'लोग ब्रेड का एक टुकड़ा खा रहे हैं।',  # People are eating a piece of bread.
    'लड़की एक बच्चे को उठाए हुए है।',  # The girl is carrying a baby.
    'एक आदमी घोड़े पर सवार है।',  # A man is riding a horse.
    'एक महिला वायलिन बजा रही है।',  # A woman is playing the violin.
    'दो आदमी जंगल में गाड़ी धकेल रहे हैं।',  # Two men are pushing a cart through the woods.
    'एक आदमी एक सफेद घोड़े पर एक बंद मैदान में सवारी कर रहा है।',  # A man is riding a white horse in an enclosed field.
    'एक बंदर ड्रम बजा रहा है।',  # A monkey is playing drums.
    'एक चीता अपने शिकार के पीछे दौड़ रहा है।',  # A cheetah is chasing its prey.
    'एक बड़ा डिनर है।'  # There is a big dinner.
]

corpus_embeddings = model.encode(corpus)

queries = [
    'एक आदमी पास्ता खा रहा है।',  # A man is eating pasta.
    'एक गोरिल्ला सूट पहने व्यक्ति ड्रम बजा रहा है।'  # A person in a gorilla suit is playing drums.
]

query_embeddings = model.encode(queries)

# Find the closest 3 sentences of the corpus for each query sentence based on cosine similarity
closest_n = 3
for query, query_embedding in zip(queries, query_embeddings):
    distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0]

    results = zip(range(len(distances)), distances)
    results = sorted(results, key=lambda x: x[1])

    print("\n======================\n")
    print("Query:", query)
    print("\nTop 3 most similar sentences in corpus:\n")

    for idx, distance in results[0:closest_n]:
        print(corpus[idx].strip(), "(Score: %.4f)" % (1 - distance))

# Optional: quantize the embeddings (see "How do I optimise vector index cost?" below)
# from sentence_transformers.quantization import quantize_embeddings
# binary_embeddings = quantize_embeddings(corpus_embeddings, precision="ubinary")
```

#### With Huggingface Transformers:
- T.B.A. (in the meantime, a minimal sketch is shown below)
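
Until the official snippet is added, here is a minimal sketch of plain-`transformers` usage, assuming the checkpoint loads with `AutoModel` (the model id `prithivida/miniMiracle_hi_v1` is inferred from this card); CLS pooling and inner product follow the recommendation in [Notes on reproducing](#notes-on-reproducing).

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = 'prithivida/miniMiracle_hi_v1'  # assumed model id, inferred from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = [
    'एक आदमी खाना खा रहा है।',  # A man is eating food.
    'एक आदमी पास्ता खा रहा है।',  # A man is eating pasta.
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling: use the hidden state of the first token as the sentence embedding.
embeddings = outputs.last_hidden_state[:, 0]

# Inner product as the similarity function, per the notes below.
print(embeddings @ embeddings.T)
```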

#### How do I optimise vector index cost?
[Use Binary and Scalar Quantisation](https://huggingface.co/blog/embedding-quantization)
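
As a quick illustration, here is a sketch using `quantize_embeddings` from `sentence_transformers` (the model id is again assumed from this card); binary precision packs each float32 dimension into one bit, for a roughly 32x smaller index at a modest recall cost.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

model = SentenceTransformer('prithivida/miniMiracle_hi_v1')  # assumed model id
embeddings = model.encode(['एक आदमी खाना खा रहा है।', 'एक आदमी घोड़े पर सवार है।'])

# float32 -> 1 bit per dimension.
binary_embeddings = quantize_embeddings(embeddings, precision="ubinary")
print(embeddings.nbytes, "->", binary_embeddings.nbytes)

# int8 scalar quantisation estimates value ranges from calibration embeddings.
int8_embeddings = quantize_embeddings(embeddings, precision="int8", calibration_embeddings=embeddings)
```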

#### How do I offer hybrid search to address Vocabulary Mismatch Problem?
The MIRACL paper shows that simply combining with BM25 is a good starting point for a hybrid option. The numbers below are for the mDPR model, but miniMiracle_hi_v1 should give even better hybrid performance; one simple fusion recipe is sketched after the table.

| Language | ISO | nDCG@10 BM25 | nDCG@10 mDPR | nDCG@10 Hybrid |
|-----------|-----|--------------|--------------|----------------|
| **Hindi** | **hi** | **0.458** | **0.383** | **0.616** |
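
The sketch below shows one common fusion recipe (min-max normalisation followed by a linear mix), not necessarily the exact method used in the MIRACL paper; `alpha` and the toy scores are made up for illustration.

```python
def hybrid_scores(bm25_scores, dense_scores, alpha=0.5):
    """Min-max normalise each per-query score list, then mix them linearly."""
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo + 1e-9) for x in xs]
    return [alpha * b + (1 - alpha) * d
            for b, d in zip(minmax(bm25_scores), minmax(dense_scores))]

# Toy per-document scores for one query: BM25 (lexical) and the dense retriever.
print(hybrid_scores([12.1, 9.4, 3.2], [0.71, 0.64, 0.58]))
```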


## Notes on reproducing:

We welcome everyone to reproduce our results. Here are some tips and observations:

- Use CLS Pooling and Inner Product.
- There *may be* minor differences in the numbers when reproducing; for instance, BGE-M3 reports an nDCG@10 of 59.3 for MIRACL Hindi, while we observed only 58.9. (A minimal scoring sketch is shown below.)

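For scoring a run against qrels, something like `pytrec_eval` (which reports the `ndcg_cut` and `map_cut` family of measures matching the numbers above) is a reasonable starting point; the qrels and run below are made-up placeholders, not MIRACL data.

```python
import pytrec_eval

# Made-up placeholders: real MIRACL qrels / runs map query ids to document ids.
qrels = {'q1': {'doc1': 1, 'doc3': 1}}
run = {'q1': {'doc1': 0.9, 'doc2': 0.7, 'doc3': 0.4}}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {'ndcg_cut.10', 'map_cut.10', 'recall.10'})
print(evaluator.evaluate(run))  # per-query dict, e.g. {'q1': {'ndcg_cut_10': ..., ...}}
```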
Here are our numbers for the full Hindi run on BGE-M3:

```python
{'NDCG@1': 0.49714, 'NDCG@3': 0.5115, 'NDCG@5': 0.53908, 'NDCG@10': 0.58936, 'NDCG@100': 0.6457, 'NDCG@1000': 0.65336}
{'MAP@1': 0.28845, 'MAP@3': 0.42424, 'MAP@5': 0.46455, 'MAP@10': 0.49955, 'MAP@100': 0.51886, 'MAP@1000': 0.51933}
{'Recall@10': 0.73032, 'Recall@50': 0.8987, 'Recall@100': 0.93974, 'Recall@200': 0.95763, 'Recall@500': 0.97813, 'Recall@1000': 0.9902}
{'P@1': 0.49714, 'P@3': 0.33048, 'P@5': 0.24629, 'P@10': 0.15543, 'P@100': 0.0202, 'P@1000': 0.00212}
{'MRR@10': 0.60893, 'MRR@100': 0.615, 'MRR@1000': 0.6151}
```

Fair warning: BGE-M3 is expensive (in dollar terms) to evaluate; that is probably why it is not part of any of the MTEB benchmarks.


## Reference:
- [All Cohere numbers are copied from here](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12)


## Note on model bias:
- Like any model, this model may carry inherent biases from the base models and the datasets it was pretrained and finetuned on. Please use responsibly.