---
language: en
tags:
- question-answering
---

# ReAtt

ReAtt is a retrieval-augmented model for knowledge-intensive tasks, proposed in [Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer](https://arxiv.org/pdf/2212.02027.pdf). The original GitHub repository is [https://github.com/jzbjyb/ReAtt](https://github.com/jzbjyb/ReAtt).

## Description

`neulab/reatt-large-nq` (based on the T5 architecture) is initialized from `google/t5-large-lm-adapt` and fine-tuned on Natural Questions with end-to-end retrieval-augmented training.

## Usage

Please refer to [https://github.com/jzbjyb/ReAtt](https://github.com/jzbjyb/ReAtt) for instructions on how to use this model; a minimal loading sketch is also shown below.

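For quick experimentation, the snippet below is a minimal sketch (not from the original repository) that assumes the checkpoint can be loaded with the standard T5 classes in Hugging Face Transformers. The retrieval-as-attention functionality itself requires the custom model code from the repository above; here the checkpoint is used only as a plain seq2seq reader.

```python
# Minimal sketch, assuming the checkpoint is compatible with the vanilla
# T5 classes in Hugging Face Transformers. End-to-end retrieval requires
# the custom ReAtt code from https://github.com/jzbjyb/ReAtt.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("neulab/reatt-large-nq")
model = T5ForConditionalGeneration.from_pretrained("neulab/reatt-large-nq")

# Ask a Natural Questions-style question (the question is illustrative).
question = "who wrote the novel moby dick"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Without the retrieval components, this only exercises the reader; see the repository for end-to-end retrieval over a document corpus.
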
## Reference

```bibtex
@inproceedings{jiang-etal-2022-reatt,
    title = {Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer},
    author = {Zhengbao Jiang and Luyu Gao and Jun Araki and Haibo Ding and Zhiruo Wang and Jamie Callan and Graham Neubig},
    booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    address = {Abu Dhabi, UAE},
    month = {December},
    year = {2022}
}
```