prithivida committed
Commit 0d1ccc3 · 1 Parent(s): 1eeec15
Update README.md
README.md
CHANGED
@@ -170,7 +170,13 @@ Fair warning BGE-M3 is $ expensive to evaluate, probably that's why it's not par
 
 # Reference:
 - [All Cohere numbers are copied from here](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12)
-
+- [BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation](https://arxiv.org/pdf/2402.03216.pdf)
+- [Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages](https://arxiv.org/pdf/2210.09984.pdf)
+- [IndicIRSuite: Multilingual Dataset and Neural Information Models for Indian Languages](https://arxiv.org/pdf/2312.09508)
+
 
 # Note on model bias:
 - Like any model, this model might carry inherent biases from the base models and the datasets it was pretrained and finetuned on. Please use responsibly.