---
inference: false
license: llama2
language:
- en
tags:
- information retrieval
- reranker
---

# RankVicuna (FP16) Model Card

## Model Details

RankVicuna is a listwise document reranker created by fine-tuning Vicuna, a chat assistant itself fine-tuned from Llama 2 on user-shared conversations collected from ShareGPT.

- **Developed by:** [Castorini](https://github.com/castorini)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from base model:** [Llama 2](https://arxiv.org/abs/2307.09288)

This specific model is the 7B variant, trained with data augmentation; its weights have been converted to FP16.

### Model Sources

- **Repository:** https://github.com/castorini/rank_llm
- **Paper:** https://arxiv.org/abs/2309.15088

## Uses

The primary use of RankVicuna is research at the intersection of large language models and retrieval.
The primary intended users of the model are researchers and hobbyists in natural language processing and information retrieval.

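As a listwise reranker, RankVicuna is given a query plus a numbered list of candidate passages in its prompt, and it generates an ordering of identifiers such as `[2] > [1] > [3]`. The sketch below illustrates that interaction pattern only; the prompt wording and the helper names are illustrative assumptions, not the exact training template — see the [rank_llm](https://github.com/castorini/rank_llm) repository for the supported pipeline.

```python
# Minimal sketch of the listwise reranking pattern used by RankVicuna-style
# models: build a numbered prompt for the model, then parse the ordering it
# generates back into passage indices. The prompt text is illustrative.
import re


def build_prompt(query: str, passages: list[str]) -> str:
    """Number the candidate passages and ask for a ranked list of identifiers."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"I will provide you with {len(passages)} passages, each indicated by "
        "a numerical identifier [].\n"
        f"{numbered}\n"
        f"Search Query: {query}\n"
        "Rank the passages based on their relevance to the search query. "
        "Answer with identifiers only, e.g., [2] > [1]."
    )


def parse_ranking(generation: str, num_passages: int) -> list[int]:
    """Extract 0-based passage indices from output like '[2] > [1] > [3]'."""
    seen, order = set(), []
    for match in re.findall(r"\[(\d+)\]", generation):
        idx = int(match) - 1
        if 0 <= idx < num_passages and idx not in seen:
            seen.add(idx)
            order.append(idx)
    # Append any identifiers the model omitted, keeping their original order.
    order.extend(i for i in range(num_passages) if i not in seen)
    return order


passages = [
    "Llama 2 is a family of large language models.",
    "A reranker reorders retrieved passages by relevance.",
    "Vicuna is a chat model fine-tuned from Llama 2.",
]
prompt = build_prompt("what does a reranker do?", passages)
ranking = parse_ranking("[2] > [3] > [1]", len(passages))
print(ranking)  # → [1, 2, 0]
```

In practice the string passed to `parse_ranking` would be the model's decoded generation; the returned index list can then be used to reorder the original candidate passages.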
## Training Details

RankVicuna is fine-tuned from `lmsys/vicuna-7b-v1.5` with supervised instruction fine-tuning.

## Evaluation

RankVicuna is currently evaluated on the TREC 2019 and 2020 Deep Learning Tracks (DL19/DL20). See more details in our [paper](https://arxiv.org/pdf/2309.15088.pdf).