<p align="center">
<h3 align="center">Awesome RAG 📄🔍</h3>
<p align="center">
  A curated list of awesome resources for RAG (Retrieval-Augmented Generation) exploration.
</p>
<p align="center">
  <a href="https://github.com/sindresorhus/awesome">
    <img alt="Awesome" src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg">
  </a>
</p>
</p>

## Table of Contents

- [Papers](#papers)
  - [Retrieval](#retrieval)
  - [RAG vs Finetuning](#rag-vs-finetuning)
  - [RAG With Knowledge Graphs](#rag-with-knowledge-graphs)
  - [Evaluation](#evaluation)
  - [Agents/Tools](#agentstools)
  - [Survey Papers](#survey-papers)
- [Blogs](#blogs)

## Papers

### Retrieval

- [RAG: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](./papers/rag.md) - Combines a pre-trained seq2seq generator with a dense retriever over a Wikipedia index, establishing the foundational retrieval-augmented generation approach for knowledge-intensive NLP tasks. (Lewis, Patrick, et al. 2020)

- [CRAG - Corrective Retrieval Augmented Generation](./papers/crag.md) - Enhances the robustness of language model generation by evaluating the relevance of retrieved documents with a retrieval evaluator and augmenting them with large-scale web searches when retrieval quality is low. (Shi-Qi Yan, Jia-Chen Gu, et al. 2024) [(llamapack)](https://github.com/run-llama/llama_index/tree/main/llama-index-packs/llama-index-packs-corrective-rag)

- [Dense X Retrieval: What Retrieval Granularity Should We Use?](./papers/dense-retrieval.md) - Improves dense retrieval by indexing at a finer-grained retrieval granularity known as propositions. (Tong Chen, et al. 2023) [(llamapack)](https://github.com/run-llama/llama_index/tree/main/llama-index-packs/llama-index-packs-dense-x-retrieval)

- [In-Context Learning for Extreme Multi-Label Classification](./papers/in-context-learning.md) - Decomposes extreme multi-label classification into an infer-retrieve-rerank pipeline of in-context learning programs. (D'Oosterlinck, Karel, et al. 2024) [(llamapack)](https://github.com/run-llama/llama_index/tree/main/llama-index-packs/llama-index-packs-infer-retrieve-rerank)

- [Self-Discover: Large Language Models Self-Compose Reasoning Structures](./papers/self-discover.md) - A framework in which LLMs self-compose task-specific reasoning structures to solve complex reasoning problems. (Zhou, Pei, et al. 2024) [(llamapack)](https://github.com/run-llama/llama_index/tree/main/llama-index-packs/llama-index-packs-self-discover)

- [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](./papers/self-rag.md) - Trains a language model to adaptively retrieve passages on demand and to critique its own generations using special reflection tokens. (Asai, Akari, et al. 2023) [(llamapack)](https://github.com/run-llama/llama_index/tree/main/llama-index-packs/llama-index-packs-self-rag)

- [Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding](./papers/chain-of-table.md) - Improves table understanding by iteratively transforming the table itself as an intermediate step in the reasoning chain. (Wang, Zilong, et al. 2024) [(llamapack)](https://github.com/run-llama/llama_index/tree/main/llama-index-packs/llama-index-packs-tables)

- [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](./papers/raptor.md) - An approach to enhance RAG by creating a summary tree from text chunks, providing deeper insights and overcoming the limitations of short, contiguous text retrieval. (Sarthi, Parth, et al. 2024)

- [HiQA: A Hierarchical Contextual Augmentation RAG for Massive Documents QA](./papers/hiqa.md) - An advanced multi-document question-answering framework that integrates cascading metadata and a multi-route retrieval mechanism, enhancing the accuracy of RAG pipeline. (Chen, Xinyue, et al. 2024)

- [ActiveRAG: Revealing the Treasures of Knowledge via Active Learning](./papers/active_rag.md) - Enhances RAG by active learning to deepen LLMs' understanding of external knowledge through innovative Knowledge Construction and Cognitive Nexus mechanisms. (Xu, Zhipeng, et al. 2024)
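Most of the papers above extend the same retrieve-then-generate loop. A minimal sketch of that loop, assuming a toy bag-of-words retriever and a hypothetical three-chunk corpus (a real pipeline would use dense embeddings and send the prompt to an actual LLM):

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(corpus, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def generate(query, chunks):
    # Stand-in for an LLM call: a real system would send this augmented
    # prompt to a generator model instead of returning it.
    context = "\n".join(chunks)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG pairs a dense retriever with a seq2seq generator.",
    "RAPTOR builds a recursive summary tree over text chunks.",
    "Self-RAG critiques its own generations with reflection tokens.",
]
query = "How does a RAG generator use its retriever?"
prompt = generate(query, retrieve(query, corpus, k=1))
```

The papers in this section each refine one stage of this loop: what to index (Dense X Retrieval, RAPTOR, HiQA), how to judge and repair retrieval (CRAG, Self-RAG), and how to use the retrieved context (ActiveRAG).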

### RAG vs Finetuning

- [RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture](./papers/rag_finetuning_agriculture.md) - RAG vs Fine-tuning case study on agriculture domain datasets. (Gupta, Aman, et al. 2024)

- [RA-DIT: Retrieval-Augmented Dual Instruction Tuning](./papers/ra-dit.md) - Improves retrieval-augmented generation by fine-tuning the LLM to better use retrieved knowledge and fine-tuning the retriever to return more relevant results. (Lin, Xi Victoria, et al. 2023)

- [InstructRetro: Instruction Tuning Post Retrieval-Augmented Pretraining](./papers/instructretro.md) - A large language model pretrained with retrieval augmentation and then instruction-tuned. (Ping, Wei, et al. 2023)

### RAG With Knowledge Graphs

### Evaluation

### Agents/Tools

### Survey Papers


## Contributing
Interested in contributing? Please read the [contribution guidelines](CONTRIBUTING.md) first.
