ColBERT-X for English-Persian CLIR using Translate-Distill
Model Description
Translate-Distill is a training technique that produces state-of-the-art CLIR dense retrieval models through translation and distillation.
plaidx-large-fas-tdist-t53b-engeng
is trained with a KL-Divergence loss against the t53b MonoT5 reranker, whose teacher scores come from
running inference on English MS MARCO training queries paired with English passages.
Teacher model: the t53b MonoT5 reranker described above.
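As a rough sketch of what this KL-Divergence distillation objective looks like (not the actual Translate-Distill training code), the student ColBERT-X scores for a query's nway passages can be pushed toward the teacher MonoT5 scores as follows; the tensor names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

# Illustrative only: one training example is a query paired with nway candidate passages.
# student_scores: ColBERT-X late-interaction (MaxSim) scores, shape [batch, nway]
# teacher_scores: t53b MonoT5 relevance scores for the same passages, shape [batch, nway]
def distillation_loss(student_scores: torch.Tensor, teacher_scores: torch.Tensor) -> torch.Tensor:
    # Turn both score sets into distributions over the nway passages and
    # minimize the KL-Divergence of the student distribution from the teacher's.
    student_log_probs = F.log_softmax(student_scores, dim=-1)
    teacher_probs = F.softmax(teacher_scores, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction='batchmean')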
Training Parameters
- learning rate: 5e-6
- update steps: 200,000
- nway (number of passages per query): 6 (randomly selected from 50)
- per-device batch size (number of query-passage sets): 8
- training GPUs: 8 NVIDIA V100 with 32 GB memory each
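A minimal configuration sketch of how these hyperparameters could be passed to the ColBERT-style trainer interface that PLAID-X inherits; the trainer entry point, file names, and base encoder (xlm-roberta-large) are assumptions for illustration, not details taken from this card.
from colbert import Trainer
from colbert.infra import Run, RunConfig, ColBERTConfig

if __name__ == '__main__':
    with Run().context(RunConfig(nranks=8)):  # 8 training GPUs
        config = ColBERTConfig(
            lr=5e-6,           # learning rate
            maxsteps=200_000,  # update steps
            nway=6,            # passages per query, sampled from 50 candidates
            bsize=8,           # per-device batch size (query-passage sets)
        )
        # Hypothetical file names; the actual training data layout is not described in this card.
        trainer = Trainer(
            triples='msmarco-triples-with-teacher-scores.jsonl',
            queries='msmarco-queries.tsv',
            collection='msmarco-passages.tsv',
            config=config,
        )
        trainer.train(checkpoint='xlm-roberta-large')  # assumed multilingual base encoder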
Usage
To properly load ColBERT-X models from the Huggingface Hub, please install the following version of PLAID-X.
pip install git+https://github.com/hltcoe/ColBERT-X.git@plaid-x
The following code snippet loads the model through the Huggingface API.
from colbert.modeling.checkpoint import Checkpoint
from colbert.infra import ColBERTConfig

# Download the checkpoint from the Huggingface Hub and load it as a ColBERT-X model.
ckpt = Checkpoint('plaidx-large-fas-tdist-t53b-engeng', colbert_config=ColBERTConfig())
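For completeness, a self-contained sketch that loads the checkpoint and encodes an English query and a Persian passage into ColBERT's multi-vector representations; the queryFromText / docFromText method names follow the upstream ColBERT Checkpoint API, which PLAID-X is assumed to keep, and the example texts are made up.
from colbert.modeling.checkpoint import Checkpoint
from colbert.infra import ColBERTConfig

ckpt = Checkpoint('plaidx-large-fas-tdist-t53b-engeng', colbert_config=ColBERTConfig())

# Encode an English query and a Persian passage into token-level embeddings.
q_embs = ckpt.queryFromText(['treatment options for seasonal allergies'])  # [1, query_len, dim]
d_embs = ckpt.docFromText(['تهران پایتخت ایران است.'])  # [1, doc_len, dim]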
BibTeX entry and Citation Info
Please cite the following two papers if you use the model.
@inproceedings{colbert-x,
author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard},
title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models},
booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)},
year = {2022},
url = {https://arxiv.org/abs/2201.08471}
}
@inproceedings{translate-distill,
author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller},
title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation},
booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)},
year = {2024},
url = {tba}
}