---
license: llama2
---

# RepLLaMA-7B-Passage

[Fine-Tuning LLaMA for Multi-Stage Text Retrieval](https://arxiv.org/abs/2310.08319).
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin, arXiv 2023

This model is fine-tuned from LLaMA-2-7B using LoRA, and the embedding size is **flexible**: the last-token embedding can be truncated to a smaller dimension before normalization (the usage example below truncates to 512; see also the sketch after it).

## Training Data
The model is fine-tuned on the training split of the [MS MARCO Passage Ranking](https://microsoft.github.io/msmarco/Datasets) dataset for 1 epoch.
Please check our paper for details.

## Usage

Below is an example of encoding a query and a passage, then computing their similarity score from their embeddings.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('castorini/repllama-v1-mrl-7b-lora-passage')
model = AutoModel.from_pretrained('castorini/repllama-v1-mrl-7b-lora-passage')
dim = 512

# Define query and passage inputs
query = "What is llama?"
title = "Llama"
passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
query_input = tokenizer(f'query: {query}</s>', return_tensors='pt')
passage_input = tokenizer(f'passage: {title} {passage}</s>', return_tensors='pt')

# Run the model forward to compute embeddings and the query-passage similarity score
with torch.no_grad():
    # compute the query embedding: last token's hidden state, truncated to dim and L2-normalized
    query_outputs = model(**query_input)
    query_embedding = query_outputs.last_hidden_state[0][-1][:dim]
    query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=0)

    # compute the passage embedding in the same way
    passage_outputs = model(**passage_input)
    passage_embedding = passage_outputs.last_hidden_state[0][-1][:dim]
    passage_embedding = torch.nn.functional.normalize(passage_embedding, p=2, dim=0)

    # similarity is the dot product of the two normalized embeddings
    score = torch.dot(query_embedding, passage_embedding)
    print(score)
```
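
Since the embedding size is flexible, the hidden states computed above can be re-truncated at several dimensions. The sketch below is illustrative only; the specific dimensions (128/256/512/1024) are assumptions of this sketch, not values prescribed by the paper.

```python
# Sketch: re-truncate and re-normalize the last-token hidden states from the example above
# at a few candidate dimensions; smaller embeddings trade some accuracy for cheaper storage/search.
for d in (128, 256, 512, 1024):
    q = torch.nn.functional.normalize(query_outputs.last_hidden_state[0][-1][:d], p=2, dim=0)
    p = torch.nn.functional.normalize(passage_outputs.last_hidden_state[0][-1][:d], p=2, dim=0)
    print(f'dim={d}: score={torch.dot(q, p).item():.4f}')
```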
## Batch inference and training
An unofficial replication of the inference and training code can be found [here](https://github.com/texttron/tevatron/tree/main/examples/repllama).
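
For quick experiments without that code, the sketch below shows one way to batch-encode several texts with the model loaded above. The padding setup and last-token gathering are assumptions of this sketch, not the official pipeline.

```python
# Sketch (not the official implementation): batch-encode by padding on the right and
# gathering each sequence's last real (non-pad) token as its embedding.
texts = [f'query: {q}</s>' for q in ["What is llama?", "Where do llamas live?"]]
tokenizer.pad_token = tokenizer.eos_token   # LLaMA tokenizers ship without a pad token
tokenizer.padding_side = 'right'
batch = tokenizer(texts, padding=True, return_tensors='pt')
with torch.no_grad():
    hidden = model(**batch).last_hidden_state                  # [batch, seq_len, hidden]
    last = batch['attention_mask'].sum(dim=1) - 1               # index of each last real token
    emb = hidden[torch.arange(hidden.size(0)), last, :dim]      # truncate to the chosen dim
    emb = torch.nn.functional.normalize(emb, p=2, dim=-1)
scores = emb @ emb.T  # pairwise similarities
print(scores)
```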

## Citation

If you find our paper or models helpful, please consider citing us as follows:

```
@article{rankllama,
  title={Fine-Tuning LLaMA for Multi-Stage Text Retrieval},
  author={Xueguang Ma and Liang Wang and Nan Yang and Furu Wei and Jimmy Lin},
  year={2023},
  journal={arXiv:2310.08319},
}
```
|