# Margin-MSE trained Bert_Dot (or BERT Dense Retrieval)

We provide a retrieval-trained, DistilBERT-based Bert_Dot instance (trained with Margin-MSE using a 3-teacher Bert_Cat ensemble on MSMARCO-Passage). This instance can be used to **re-rank a candidate set** or **directly for vector-index-based dense retrieval**.

The architecture is a 6-layer DistilBERT without any architecture additions or modifications (we only change the weights during training). To obtain a query/passage representation, we pool the CLS vector.

If you want to know more about our simple yet effective knowledge distillation method for efficient information retrieval models, which covers a variety of student architectures and was used to train this instance, check out our paper: https://arxiv.org/abs/2010.02666 🎉

For more information and a minimal usage example, please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd

If you use our model checkpoint, please cite our work as:

```
@misc{hofstaetter2020_crossarchitecture_kd,
      title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
      author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury},
      year={2020},
      eprint={2010.02666},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```
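
Below is a minimal scoring sketch of the CLS-pooling + dot-product setup described above. It loads the checkpoint with the standard `transformers` API, encodes a query and a passage with the plain DistilBERT encoder, pools the CLS vector, and scores the pair with a dot product. The model identifier used here is an assumption; replace it with this repository's actual Hugging Face Hub id.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed Hub id for this checkpoint -- substitute the actual repository id if it differs.
MODEL_ID = "sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def encode(text: str) -> torch.Tensor:
    """Run the plain DistilBERT encoder and pool the CLS vector (first token)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=200)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :]

query_vec = encode("what is knowledge distillation")
passage_vec = encode(
    "Knowledge distillation transfers knowledge from a larger teacher model "
    "to a smaller student model."
)

# Relevance score = dot product between the query and passage representations.
score = torch.matmul(query_vec, passage_vec.transpose(0, 1)).item()
print(score)
```

The same `encode` function can be used offline to build a vector index over a passage collection, with queries encoded at search time and scored by dot product, or online to re-rank a candidate set.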