prithivida committed
Commit
bfcc611
1 Parent(s): 161a8f9

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -187,7 +187,7 @@ Refer tables above
 
 #### Long Document Retrieval
 
-This is very ambitious eval because we have not trained for long context, the max_len was 512 for all the models below.
+This is a very ambitious eval because we have not trained for long context; max_len was 512 for all the models below except BGE-M3, which had an 8192 context and was fine-tuned for long documents.
 
 <center>
 <img src="./ar_metrics_4.png" width=150%/>
@@ -197,11 +197,11 @@ This is very ambitious eval because we have not trained for long context, the ma
 
 #### X-lingual Retrieval
 
-Almost all models below are monolingual arabic models so they have no notion of any other languages. But the below table shows how our model excels in cross-lingual scenarios owing to its deep multilingual understanding.
-This also explains its competitive performance when compared to models lot larger.
+Except for BGE-M3, all models below are monolingual Arabic models, so they have no notion of any other languages. The table below shows how our model understands Arabic in the context of other languages.
+This explains its overall competitive performance when compared to models that are a lot larger.
 
 <center>
-<img src="./ar_metrics_5.png" width=80%/>
+<img src="./ar_metrics_5.png" width=120%/>
 <b><p>Table 4: Detailed Arabic retrieval performance on the 3 X-lingual test set (measured by nDCG@10)</p></b>
 </center>
 
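
The tables cited in this diff report nDCG@10 for dense retrieval. As a rough, hedged sketch (not part of the commit), the snippet below shows how the X-lingual setting described in the updated paragraph could be exercised with sentence-transformers: an English query is scored against Arabic passages by cosine similarity, with max_seq_length pinned to the 512 max_len noted above. The model id, query, and passages are placeholders, not values from this repository.

```python
# Illustrative only -- not from this commit. Assumes `sentence-transformers` is installed
# and that "your-arabic-retriever" is replaced with the actual checkpoint id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-arabic-retriever")  # placeholder model id
model.max_seq_length = 512  # mirrors the 512 max_len used in the long-doc eval

query = "What causes earthquakes?"  # English query: the cross-lingual setting
passages = [
    "تحدث الزلازل بسبب حركة الصفائح التكتونية في القشرة الأرضية.",  # relevant Arabic passage
    "كرة القدم هي الرياضة الأكثر شعبية في العالم العربي.",  # irrelevant Arabic passage
]

# Normalized embeddings make cosine similarity a plain dot product.
q_emb = model.encode(query, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)

scores = util.cos_sim(q_emb, p_emb)[0]  # one score per passage
order = scores.argsort(descending=True).tolist()  # ranking; such rankings feed nDCG@10
for rank, idx in enumerate(order, start=1):
    print(rank, float(scores[idx]), passages[idx])
```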