MrLight committed
Commit 5ffbc1f
Parent: 670bed5

Update README.md

Files changed (1): README.md (+9 −4)
@@ -9,6 +9,11 @@ Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin, arXiv 2023
 
 This model is fine-tuned from LLaMA-2-13B using LoRA for passage reranking.
 
+## Training Data
+The model is fine-tuned on the training split of the [MS MARCO Passage Ranking](https://microsoft.github.io/msmarco/Datasets) dataset for 1 epoch.
+Please check our paper for details.
+
+
 ## Usage
 
 Below is an example to compute the similarity score of a query-passage pair
@@ -30,13 +35,13 @@ def get_model(peft_model_name):
 tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-13b-hf')
 model = get_model('castorini/rankllama-v1-13b-lora-passage')
 
-# Define a query-document pair
+# Define a query-passage pair
 query = "What is llama?"
 title = "Llama"
-document = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
+passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
 
-# Tokenize the query-document pair
-inputs = tokenizer(f'query: {query}', f'document: {title} {document}</s>', return_tensors='pt')
+# Tokenize the query-passage pair
+inputs = tokenizer(f'query: {query}', f'document: {title} {passage}</s>', return_tensors='pt')
 
 # Run the model forward
 with torch.no_grad():
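For reference, the edited example can be assembled into one self-contained script. The hunks above only show fragments, so this is a sketch: the body of `get_model` and the final score extraction are assumptions based on the rest of the RankLLaMA model card, and the `format_pair` helper is introduced here purely for illustration of the input format.

```python
def format_pair(query: str, title: str, passage: str) -> tuple[str, str]:
    """Build the two strings the tokenizer receives for one query-passage pair.

    Mirrors the tokenizer call in the diff: a 'query: '-prefixed string paired
    with a 'document: '-prefixed title+passage ending in the </s> token.
    """
    return f'query: {query}', f'document: {title} {passage}</s>'


def get_model(peft_model_name: str):
    """Load the LLaMA-2-13B base model and merge the RankLLaMA LoRA adapter.

    Assumed implementation (not part of this commit). Heavy libraries are
    imported lazily so format_pair stays usable without the ML stack installed.
    """
    from peft import PeftConfig, PeftModel
    from transformers import AutoModelForSequenceClassification

    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = AutoModelForSequenceClassification.from_pretrained(
        config.base_model_name_or_path, num_labels=1)
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()
    model.eval()
    return model


def main():
    import torch
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-13b-hf')
    model = get_model('castorini/rankllama-v1-13b-lora-passage')

    # Define and tokenize a query-passage pair.
    query = "What is llama?"
    title = "Llama"
    passage = ("The llama is a domesticated South American camelid, widely "
               "used as a meat and pack animal by Andean cultures since the "
               "pre-Columbian era.")
    q, d = format_pair(query, title, passage)
    inputs = tokenizer(q, d, return_tensors='pt')

    # Run the model forward; the single regression logit is the relevance score.
    with torch.no_grad():
        outputs = model(**inputs)
        score = outputs.logits[0][0].item()
    print(score)


if __name__ == '__main__':
    main()  # requires access to the gated meta-llama/Llama-2-13b-hf weights
```

Note that running `main()` downloads the 13B base model and the LoRA adapter, so it needs substantial GPU memory and Hugging Face access to the gated LLaMA-2 weights.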