
# Beam Retrieval: General End-to-End Retrieval for Multi-Hop Question Answering (Zhang et al., 2023)

Unofficial mirror of Beam Retriever

This is the fine-tuned, encoder-only DeBERTa-v3-large component of the Beam Retriever model, which can be used for maximum inner product search.

## Usage

```python
from transformers import DebertaV2Model

finetuned_encoder = DebertaV2Model.from_pretrained(
    "scholarly-shadows-syndicate/beam_retriever_unofficial_encoder_only"
)
```
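Below is a minimal sketch of how the encoder could be used for maximum inner product search: embed a question and candidate passages, then rank passages by dot product. The mean-pooling step and the example texts are illustrative assumptions, not part of this repository; if the repo does not ship tokenizer files, the base `microsoft/deberta-v3-large` tokenizer should be compatible.

```python
import torch
from transformers import AutoTokenizer, DebertaV2Model

repo = "scholarly-shadows-syndicate/beam_retriever_unofficial_encoder_only"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumption: tokenizer files present
encoder = DebertaV2Model.from_pretrained(repo)
encoder.eval()

def embed(texts):
    # Mean-pool the last hidden state over non-padding tokens
    # (an illustrative pooling choice, not prescribed by the paper).
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)

question_emb = embed(["Who wrote the novel that inspired the film Blade Runner?"])
passage_embs = embed([
    "Blade Runner is a 1982 film directed by Ridley Scott.",
    "The film is loosely based on Philip K. Dick's novel "
    "Do Androids Dream of Electric Sheep?",
])

# Maximum inner product search: score passages by dot product with the question.
scores = question_emb @ passage_embs.T
best = scores.argmax(dim=-1)
```

For large corpora, the same inner-product scores can be served by an approximate nearest-neighbor index instead of the dense matrix product shown here.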

## Citations

```bibtex
@article{Zhang2023BeamRG,
  title={Beam Retrieval: General End-to-End Retrieval for Multi-Hop Question Answering},
  author={Jiahao Zhang and H. Zhang and Dongmei Zhang and Yong Liu and Sheng Huang},
  journal={ArXiv},
  year={2023},
  volume={abs/2308.08973},
  url={https://api.semanticscholar.org/CorpusID:261030563}
}
@article{He2020DeBERTaDB,
  title={DeBERTa: Decoding-enhanced BERT with Disentangled Attention},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.03654},
  url={https://api.semanticscholar.org/CorpusID:219531210}
}
```