michaelrglass committed on
Commit
517a132
1 Parent(s): b8ccc5d

Added citation, github repo and paper link to model card.

Files changed (1):
  1. README.md +94 -0
README.md CHANGED
---
tags:
- information retrieval
- reranking
license: apache-2.0
---

# Model Card for T-REx Reranker in Re2G

# Model Details

> The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output.
>
> It has been previously established that results from initial retrieval can be greatly improved through the use of a reranker. Therefore we hypothesized that natural language generation systems incorporating retrieval can benefit from reranking.
>
> In addition to improving the ranking of passages returned from DPR, a reranker can be used after merging the results of multiple retrieval methods with incomparable scores. For example, the scores returned by BM25 are not comparable to the inner products from DPR. Using the scores from a reranker, we can find the top-k documents from the union of DPR and BM25 results. The figure below illustrates our extension of RAG with a reranker. We call our system Re2G (*Re*trieve, *Re*rank, *G*enerate).

<img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%">
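
As a concrete illustration of the merge step described in the quote above, the sketch below unions BM25 and DPR candidates and keeps the top-k passages by reranker score. This is a minimal illustration rather than the released pipeline; `merge_and_rerank` and `score_fn` are hypothetical names, with `score_fn` standing in for any query/passage reranker score (for example, the cross-encoder call shown in the Usage section).

```python
# Minimal sketch of score-based merging (not the released Re2G code):
# BM25 and DPR scores are not comparable, so the union of their candidates
# is re-scored with the reranker and only the top-k passages are kept.

def merge_and_rerank(query, bm25_passages, dpr_passages, score_fn, k=5):
    """Union two candidate lists, score each (query, passage) pair with the
    reranker via `score_fn`, and return the k highest-scoring passages."""
    candidates = list(dict.fromkeys(bm25_passages + dpr_passages))  # dedupe, keep order
    ranked = sorted(candidates, key=lambda p: score_fn(query, p), reverse=True)
    return ranked[:k]
```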

## Training, Evaluation and Inference
The code for training, evaluation and inference is in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g) of our GitHub repository.

## Usage

The best way to use the model is by adapting [reranker_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/reranker/reranker_apply.py).
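
For a quick start without the full pipeline, the checkpoint can also be loaded as a standard sequence-classification cross-encoder with the `transformers` library. The snippet below is a minimal sketch, not taken from `reranker_apply.py`; the model identifier, example query, and passages are assumptions, so substitute the actual Hub id of this checkpoint.

```python
# Minimal sketch: scoring (query, passage) pairs with the reranker as a
# cross-encoder. The model id below is an assumption for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ibm/re2g-reranker-trex"  # hypothetical identifier; check the Hub page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

query = "Which company developed Watson?"
passages = [
    "Watson is a question-answering computer system developed by IBM.",
    "The Eiffel Tower is located in Paris, France.",
]

# Encode each (query, passage) pair jointly, as a cross-encoder expects.
inputs = tokenizer([query] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# For a two-class relevance head, use the probability of the "relevant" class;
# for a single-logit head, the raw score can be used directly.
scores = logits.softmax(dim=-1)[:, 1] if logits.shape[-1] > 1 else logits.squeeze(-1)
for passage, score in sorted(zip(passages, scores.tolist()),
                             key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {passage}")
```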

## Model Description
The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf):
> As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.

- **Developed by:** IBM
- **Shared by:** IBM
- **Model type:** Query/Passage Reranker
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Parent Model:** [BERT-base trained on MSMARCO](https://huggingface.co/nboost/pt-bert-base-uncased-msmarco)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/IBM/kgi-slot-filling)
  - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf)

# Uses

## Direct Use
This model can be used for the task of reranking passage results for a question.

# Citation

**BibTeX:**

```bibtex
@inproceedings{glass-etal-2022-re2g,
    title = "{R}e2{G}: Retrieve, Rerank, Generate",
    author = "Glass, Michael and
      Rossiello, Gaetano and
      Chowdhury, Md Faisal Mahbub and
      Naik, Ankita and
      Cai, Pengshan and
      Gliozzo, Alfio",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.194",
    doi = "10.18653/v1/2022.naacl-main.194",
    pages = "2701--2715",
    abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```