Sefika committed on
Commit 21e2f4c
1 Parent(s): f05576d

update the paper link


The paper link had a spelling mistake, so it was not working. It has been fixed.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -5,7 +5,7 @@ This model is a 7B [Self-RAG](https://selfrag.github.io/) model that generates o
 
 Self-RAG is trained on our instruction-following corpora with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback.
 At inference, we leverage reflection tokens covering diverse aspects of generations to sample the best output aligning users' preferences.
-See full descriptions in See full descriptions in [our paper](hhttps://arxiv.org/abs/2310.11511).
+See full descriptions in See full descriptions in [our paper](https://arxiv.org/abs/2310.11511).
 
 ## Usage
 Here, we show an easy way to quickly download our model from HuggingFace and run with `vllm` with pre-given passages. Make sure to install dependencies listed at [self-rag/requirements.txt](https://github.com/AkariAsai/self-rag/requirements.txt).
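The Usage paragraph in the diff above refers to downloading the checkpoint from HuggingFace and running it with `vllm`. As a rough illustration, a minimal sketch along those lines could look like the following; the model ID `selfrag/selfrag_llama2_7b` and the instruction-style prompt template are assumptions taken from the Self-RAG project, not something shown in this commit.

```python
# Minimal sketch (not part of this commit): load the Self-RAG checkpoint with vllm
# and generate from a single query. Model ID and prompt format are assumptions.
from vllm import LLM, SamplingParams

model = LLM("selfrag/selfrag_llama2_7b", dtype="half")  # assumed HF model ID
sampling_params = SamplingParams(
    temperature=0.0,
    top_p=1.0,
    max_tokens=100,
    skip_special_tokens=False,  # keep the reflection tokens visible in the output
)

query = "What is the capital of France?"
# Assumed instruction-style template; see the Self-RAG README for the exact format.
prompt = f"### Instruction:\n{query}\n\n### Response:\n"

outputs = model.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```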