Update README.md
README.md CHANGED
@@ -68,7 +68,7 @@ In-batch negative loss was applied, and we did not use any distillation methods
See the table below for an overview of results, vs previous Japanese-only models and the current multilingual state-of-the-art (multilingual-e5).

-Worth noting: JaColBERT is evaluated out-of-domain on all three datasets, whereas JSQuAD is partially (English version) and MIRACL & Mr.TyDi are fully in-domain for e5, likely contributing to their strong performance. In a real-world setting, I'm hopeful this could be bridged with moderate, quick (
+Worth noting: JaColBERT is evaluated out-of-domain on all three datasets, whereas JSQuAD is partially (English version) and MIRACL & Mr.TyDi are fully in-domain for e5, likely contributing to their strong performance. In a real-world setting, I'm hopeful this could be bridged with moderate, quick (>2hrs) fine-tuning.

(refer to the technical report for exact evaluation method + code. * indicates the best monolingual/out-of-domain result. **bold** is best overall result. _italic_ indicates the task is in-domain for the model.)