---
tags:
- Generative QA
- LFQA
- ELI5
- facebook/bart-large
license: apache-2.0
---
# Model description
A Fusion-in-Decoder (FiD) model based on BART for the [KILT-ELI5](https://github.com/facebookresearch/KILT) task.
The FiD model was introduced in the paper 'Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering'.
This model has been initialized with [facebook/bart-large](https://huggingface.co/facebook/bart-large).
It was trained on the KILT-ELI5 questions and supporting passages retrieved from a ColBERT index of the 2019/08/01 Wikipedia dump.
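For intuition, here is a minimal sketch of the Fusion-in-Decoder idea using plain Hugging Face `transformers`: each retrieved passage is concatenated with the question, every pair is encoded independently, and the decoder attends over the concatenated encoder states. The `question: ... context: ...` input template and the use of the base `facebook/bart-large` checkpoint are illustrative assumptions, not the exact PrimeQA training setup.

```python
# Illustrative Fusion-in-Decoder sketch; the input template and checkpoint
# are assumptions, not the exact PrimeQA training/inference code.
from transformers import BartForConditionalGeneration, BartTokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

question = "Why is the sky blue?"
passages = [
    "Rayleigh scattering affects shorter (blue) wavelengths of sunlight more strongly.",
    "Sunlight entering the atmosphere is scattered by molecules of air.",
]

# Encode each (question, passage) pair independently.
enc = tokenizer(
    [f"question: {question} context: {p}" for p in passages],  # assumed template
    return_tensors="pt", padding=True, truncation=True, max_length=256,
)
encoder_states = model.get_encoder()(
    input_ids=enc.input_ids, attention_mask=enc.attention_mask
).last_hidden_state  # shape: (num_passages, seq_len, hidden)

# "Fusion": concatenate the per-passage encoder states along the sequence
# axis so the decoder cross-attends over all passages jointly.
fused = encoder_states.reshape(1, -1, encoder_states.size(-1))
fused_mask = enc.attention_mask.reshape(1, -1)

answer_ids = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=fused),
    attention_mask=fused_mask,
    max_length=64,
    num_beams=4,
)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```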
## Intended uses & limitations
You can use this raw model for generative (long-form) question answering. Biases present in the underlying pre-trained language model, facebook/bart-large, may also be present in our fine-tuned model.
## Usage
You can use this model directly with [PrimeQA](https://github.com/primeqa/primeqa)'s [Generative FiD Reader](https://github.com/primeqa/primeqa/tree/primeqa-fid/primeqa/pipelines#readme).
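If you only need to load the checkpoint directly, the following sketch uses plain `transformers` with a single question+passage input; multi-passage fusion is what the PrimeQA reader handles for you. The repo ID is a placeholder for this model's actual Hugging Face ID.

```python
# Minimal fallback sketch with plain transformers; the PrimeQA Generative
# FiD Reader linked above is the supported path.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "<this-model's-hub-id>"  # placeholder: substitute the real repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = ("question: Why is the sky blue? "
        "context: Rayleigh scattering affects shorter wavelengths of sunlight more strongly.")
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
output_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```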
### BibTeX entry and citation info
```bibtex
@inproceedings{Izacard2021LeveragingPR,
    title = "Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering",
    author = "Izacard, Gautier and Grave, Edouard",
    booktitle = "EACL",
    year = "2021"
}
```
```bibtex
@inproceedings{petroni-etal-2021-kilt,
    title = "{KILT}: a Benchmark for Knowledge Intensive Language Tasks",
    author = {Petroni, Fabio and Piktus, Aleksandra and
              Fan, Angela and Lewis, Patrick and
              Yazdani, Majid and De Cao, Nicola and
              Thorne, James and Jernite, Yacine and
              Karpukhin, Vladimir and Maillard, Jean and
              Plachouras, Vassilis and Rockt{\"a}schel, Tim and
              Riedel, Sebastian},
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    year = "2021"
}
```