
Model description

A Fusion-in-Decoder (FiD) model based on BART for the KILT-ELI5 task.

The FiD model was introduced in the paper "Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering".

This model has been initialized with facebook/bart-large.

This model was trained on the KILT-ELI5 questions, with supporting passages retrieved from a ColBERT index of the 2019/08/01 Wikipedia dump.

Intended uses & limitations

You can use this model for generative (long-form) question answering. Biases present in the pre-trained language model we started from, facebook/bart-large, may carry over into this fine-tuned model.

Usage

You can use this model directly with PrimeQA's generative FiD reader.
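For intuition, the FiD encoding scheme from Izacard and Grave (2021) concatenates the question with each retrieved passage, encodes every pair independently, and lets the decoder attend over all encoder outputs jointly. The sketch below illustrates only the input-formatting step; the "question:/title:/context:" prefixes follow the paper, while the function name and passage structure are illustrative assumptions, not the PrimeQA API (which handles this internally).

```python
# Minimal sketch of FiD-style input formatting (Izacard & Grave, 2021).
# Each retrieved passage is paired with the question to form one encoder
# input; a FiD model encodes each string independently and fuses them in
# the decoder. Function name and passage format are hypothetical.

def format_fid_inputs(question, passages):
    """Build one encoder input string per (title, text) passage pair."""
    return [
        f"question: {question} title: {title} context: {text}"
        for title, text in passages
    ]

# Example: two retrieved passages for one ELI5-style question.
inputs = format_fid_inputs(
    "why is the sky blue?",
    [
        ("Rayleigh scattering", "Shorter wavelengths scatter more strongly."),
        ("Sky", "The daytime sky appears blue to the human eye."),
    ],
)
```

Each formatted string would then be tokenized and passed to the encoder; the number of passages (typically up to 100 in the FiD paper) trades answer quality against compute.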

BibTeX entry and citation info

@inproceedings{Izacard2021LeveragingPR,
  title={Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering},
  author={Gautier Izacard and Edouard Grave},
  booktitle={EACL},
  year={2021}
}
@inproceedings{petroni-etal-2021-kilt,
    title = "{KILT}: a Benchmark for Knowledge Intensive Language Tasks",
    author = {Petroni, Fabio  and Piktus, Aleksandra  and
      Fan, Angela  and Lewis, Patrick  and
      Yazdani, Majid  and De Cao, Nicola  and
      Thorne, James  and Jernite, Yacine  and
      Karpukhin, Vladimir  and Maillard, Jean  and
      Plachouras, Vassilis  and Rockt{\"a}schel, Tim  and
      Riedel, Sebastian},
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    year = "2021",
}