Introduction

This is a context/passage encoder based on the DPRContextEncoder architecture. It uses the transformer's pooler output as the context/passage representation.

Training

We trained vblagoje/dpr-ctx_encoder-single-lfqa-base using FAIR's dpr-scale, starting from a PAQ-based pretrained checkpoint and fine-tuning the retriever on question-answer pairs from the LFQA dataset. As dpr-scale requires DPR-formatted training input with positive, negative, and hard-negative samples, we created a training file in which a question's own answer is the positive sample, answers to unrelated questions are the negatives, and hard negatives are answers to questions whose cosine similarity to the target question falls between 0.55 and 0.65. A minimal sketch of how such a record might be assembled follows.
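The sketch below is illustrative only: the make_example helper is hypothetical, embed is assumed to return unit-normalized question embeddings (so a dot product equals cosine similarity), and the positive_ctxs/negative_ctxs/hard_negative_ctxs field names follow the DPR training format.

# Hypothetical sketch of building one DPR-formatted training record.
# embed(text) is assumed to return a unit-normalized embedding, so the
# dot product of two embeddings equals their cosine similarity.
def make_example(question, answer, all_pairs, embed):
    q_emb = embed(question)
    negatives, hard_negatives = [], []
    for other_question, other_answer in all_pairs:
        if other_question == question:
            continue
        sim = float(q_emb @ embed(other_question))
        if 0.55 <= sim <= 0.65:
            # Answers to moderately similar questions become hard negatives
            hard_negatives.append({"title": "", "text": other_answer})
        elif sim < 0.55:
            # Answers to unrelated questions serve as plain negatives
            negatives.append({"title": "", "text": other_answer})
    return {
        "question": question,
        "positive_ctxs": [{"title": "", "text": answer}],  # the question's own answer
        "negative_ctxs": negatives,
        "hard_negative_ctxs": hard_negatives,
    }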

Performance

The LFQA DPR-based retriever (vblagoje/dpr-question_encoder-single-lfqa-base and vblagoje/dpr-ctx_encoder-single-lfqa-base) achieves an R-precision of 6.69 and a Recall@5 of 14.5 on the KILT benchmark.
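For context, here is a minimal sketch of how these two metrics are commonly computed per query; this is a generic definition, not KILT's exact evaluation code.

def r_precision(retrieved, relevant):
    # Fraction of the top-R retrieved passages that are relevant, with R = len(relevant)
    r = len(relevant)
    return len(set(retrieved[:r]) & set(relevant)) / r

def recall_at_k(retrieved, relevant, k=5):
    # Fraction of the relevant passages found among the top-k retrieved
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)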

Usage

import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = DPRContextEncoderTokenizer.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-base")
model = DPRContextEncoder.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-base").to(device)
# Encode an example passage; the pooler output is the passage embedding
passage = "Contrails form when water vapor in hot jet exhaust condenses and freezes in the cold air at cruising altitude."
input_ids = tokenizer(passage, return_tensors="pt")["input_ids"].to(device)
embeddings = model(input_ids).pooler_output
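
For retrieval, these passage embeddings are meant to be paired with the companion question encoder mentioned above and scored by dot product. A minimal sketch, continuing from the snippet above (device and embeddings are reused):

from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-base")
q_model = DPRQuestionEncoder.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-base").to(device)
q_ids = q_tokenizer("Why do airplanes leave contrails in the sky?", return_tensors="pt")["input_ids"].to(device)
q_emb = q_model(q_ids).pooler_output
# DPR ranks passages by the dot product between question and passage embeddings
scores = q_emb @ embeddings.T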

Author

vblagoje
