soheeyang committed on
Commit a6ad80d
1 Parent(s): cf2a637

Update readme

Files changed (1): README.md (+4 -4)
@@ -6,13 +6,13 @@ Dense Passage Retrieval (`DPR`)
 
 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih, [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906), EMNLP 2020.
 
-This model is the context encoder of DPR trained on TriviaQA using the [official implementation of DPR](https://github.com/facebookresearch/DPR).
+This model is the context encoder of DPR trained solely on TriviaQA (single-trivia) using the [official implementation of DPR](https://github.com/facebookresearch/DPR).
 
-Disclaimer: This model is not from the authors of DPR. It is my own reproduction. The authors did not release the DPR weights for TriviaQA.
+Disclaimer: This model is not from the authors of DPR, but my reproduction. The authors did not release the DPR weights trained solely on TriviaQA. I hope this model checkpoint can be helpful for those who want to use DPR trained only on TriviaQA.
 
 ## Performance
 
-The performance is answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
+The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
 
 The values in parentheses are those reported in the paper.
 
@@ -38,4 +38,4 @@ ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/dpr-ctx_encoder-singl
 
 data = tokenizer("context comes here", return_tensors="pt")
 ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context
-```
+```
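The context embedding produced in the README's snippet is used by DPR for retrieval: passages are ranked by the inner product between the question embedding and each context embedding. A minimal sketch of that scoring step, using placeholder NumPy vectors in place of the actual `pooler_output` tensors (the real DPR-BERT vectors are 768-dimensional):

```python
import numpy as np

# Placeholder embeddings standing in for the encoders' pooler_output
# vectors; in practice these come from the question and context encoders.
rng = np.random.default_rng(0)
question_emb = rng.normal(size=768)        # one question vector
ctx_embs = rng.normal(size=(3, 768))       # three candidate passage vectors

# DPR relevance score: dot product between question and passage vectors.
scores = ctx_embs @ question_emb

# The highest-scoring passage is the top retrieval result.
best = int(np.argmax(scores))
```

This is the similarity function DPR uses at retrieval time; at scale the same dot-product search is typically delegated to FAISS rather than computed with a dense matrix product.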