Update readme
README.md
CHANGED
@@ -4,7 +4,7 @@ Reader-Distilled Retriever (`RDR`)
 
 Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020
 
-The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a DPR retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at
+The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a DPR retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k.
 
 This model is the context encoder of RDR trained solely on TriviaQA (single-trivia). This model is trained by the authors and is the official checkpoint of RDR.
 
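Since RDR keeps the DPR architecture, the context encoder described in the README above can presumably be loaded with the standard `DPRContextEncoder` classes from Hugging Face Transformers. The sketch below is only an illustration under that assumption; `"<model-id>"` is a placeholder for this repository's identifier, not a confirmed hub name.

```python
# Illustrative sketch only: assumes this checkpoint is compatible with the
# standard DPR context-encoder classes in Hugging Face Transformers.
# "<model-id>" is a placeholder for this repository's model identifier.
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

tokenizer = DPRContextEncoderTokenizer.from_pretrained("<model-id>")
model = DPRContextEncoder.from_pretrained("<model-id>")

# Encode a passage into the dense vector used for retrieval.
inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
passage_embedding = model(**inputs).pooler_output  # (1, 768) for a BERT-base encoder
```

In a DPR-style setup, passage embeddings like this are typically pre-computed and indexed (e.g. with FAISS), and queries encoded with the matching question encoder are scored against them by inner product.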