BERT-Base model (`google/bert_uncased_L-12_H-768_A-12`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a
ForSequenceClassification model; see the Capreolus BERT-MaxP implementation for a usage example.
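
For example, the weights can be loaded with the Transformers `AutoModelForSequenceClassification` class and used to score a query–passage pair. This is a minimal sketch, not the Capreolus implementation itself; the hub identifier `Capreolus/bert-base-msmarco`, the example texts, and the choice of which logit to read as the relevance score are assumptions for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hub id assumed for illustration; substitute the actual repository name for these weights.
model_id = "Capreolus/bert-base-msmarco"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score a (query, passage) pair: both texts are packed into one input,
# separated by the [SEP] token, as in standard BERT reranking.
query = "what is the effect of caffeine on sleep"
passage = "Caffeine is a stimulant that can delay sleep onset and reduce total sleep time."

inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

# Assumption: the last class corresponds to "relevant"; its softmax probability
# is used as the passage relevance score.
relevance = torch.softmax(logits, dim=-1)[0, -1].item()
print(f"relevance score: {relevance:.4f}")
```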
This corresponds to the BERT-Base model used to initialize BERT-MaxP and the PARADE variants in *PARADE: Passage Representation Aggregation for Document Reranking* by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights.