Commit a6d6fb8 (parent: d4be5c0) by zyznull: Update README.md
---
license: mit
---
# RankingGPT-qwen-7b

RankingGPT is a text ranker built on large language models, with strong in-domain and out-of-domain effectiveness.
We provide RankingGPT in several sizes and base-model types: bloom-560m, bloom-1b1, bloom-3b, bloom-7b, llama2-7b, baichuan2-7b, and qwen-7b.

For more details, please refer to our [paper](https://arxiv.org/abs/2311.16720) and [GitHub repository](https://github.com/Alibaba-NLP/RankingGPT).
11
+
12
+
13
+ ## Usage
14
+
15
+ Code example
16
+ ```python
17
+ import torch
18
+ from transformers import AutoTokenizer, AutoModelForCausalLM
19
+
20
+ tokenizer = AutoTokenizer.from_pretrained('zyznull/RankingGPT-qwen-7b',trust_remote_code=True)
21
+ model = AutoModelForCausalLM.from_pretrained('zyznull/RankingGPT-qwen-7b',trust_remote_code=True).eval()
22
+
23
+ query='when should a baby walk'
24
+ document='Most babies start to walk around 13 months, but your baby may start walking as early as 9 or 10 months or as late as 15 or 16 months.'
25
+
26
+ context=f'Document: {document} Query:'
27
+ example=context+query
28
+
29
+ context_enc = tokenizer.encode(context, add_special_tokens=False)
30
+ continuation_enc = tokenizer.encode(query, add_special_tokens=False)
31
+ model_input = torch.tensor(context_enc+continuation_enc[:-1])
32
+ continuation_len = len(continuation_enc)
33
+ input_len, = model_input.shape
34
+
35
+
36
+ with torch.no_grad():
37
+ logprobs = torch.nn.functional.log_softmax(model(model_input.unsqueeze(dim=0))[0], dim=-1)[0]
38
+
39
+ logprobs = logprobs[input_len-continuation_len:]
40
+ logprobs = torch.gather(logprobs, 1, torch.tensor(continuation_enc).unsqueeze(-1)).squeeze(-1)
41
+ score = torch.sum(logprobs)/logprobs.shape[0]
42
+
43
+ print(f"Document: {document[:20] + '...'} Score: {score}")
44
+ ```
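The snippet above scores a single document; to rank a set of candidates, you compute the same length-normalized score for each and sort. A minimal sketch of that ranking step (the `score_document` helper and the per-token log-probs below are illustrative stand-ins for the model call above, so the example runs without loading the model):

```python
# Sketch: rank documents by the length-normalized sum of query-token
# log-probs, the same quantity the model-based snippet above computes.

def score_document(token_logprobs):
    """Mean log-prob of the query tokens given the document (higher = more relevant)."""
    return sum(token_logprobs) / len(token_logprobs)

# Assumed per-token log-probs of the query under each candidate document.
candidates = {
    "doc_a": [-0.5, -1.2, -0.3],
    "doc_b": [-2.1, -1.8, -2.5],
    "doc_c": [-0.9, -0.7, -1.1],
}

ranking = sorted(candidates, key=lambda d: score_document(candidates[d]), reverse=True)
print(ranking)  # ['doc_a', 'doc_c', 'doc_b']
```

In a real pipeline, `token_logprobs` for each document would come from the gather step in the snippet above, applied once per (query, document) pair.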

### Results

| Model | DL19 | DL20 | BEIR | Links |
|---------|------|------|------|-----------------|
| MonoBERT-340M | 72.3 | 70.3 | 50.5 | [huggingface](https://huggingface.co/veneres/monobert-msmarco) |
| MonoT5-220M | 71.5 | 69.7 | 49.3 | [huggingface](https://huggingface.co/castorini/monot5-base-msmarco) |
| MonoT5-770M | 73.2 | 71.2 | 53.1 | [huggingface](https://huggingface.co/castorini/monot5-large-msmarco) |
| MonoT5-3B | 72.8 | 74.5 | 54.6 | [huggingface](https://huggingface.co/castorini/monot5-3b-msmarco) |
| RankT5-770M | - | - | 53.7 | [huggingface](https://huggingface.co/bergum/rank-T5-flan) |
| RankLLaMA | 74.6 | 76.6 | 52.5 | [huggingface](https://huggingface.co/castorini/rankllama-v1-7b-lora-passage) |
| RankingGPT-bloom-560m | 75.3 | 73.2 | 53.7 | [huggingface](https://huggingface.co/zyznull/RankingGPT-bloom-560m) [modelscope](https://modelscope.cn/models/damo/RankingGPT-bloom-560m) |
| RankingGPT-bloom-1b1 | 75.6 | 73.2 | 54.5 | [huggingface](https://huggingface.co/zyznull/RankingGPT-bloom-1b1) [modelscope](https://modelscope.cn/models/damo/RankingGPT-bloom-1b1) |
| RankingGPT-bloom-3b | 76.8 | 73.6 | 56.2 | [huggingface](https://huggingface.co/zyznull/RankingGPT-bloom-3b) [modelscope](https://modelscope.cn/models/damo/RankingGPT-bloom-3b) |
| RankingGPT-bloom-7b | 77.3 | 74.6 | 56.6 | [huggingface](https://huggingface.co/zyznull/RankingGPT-bloom-7b) [modelscope](https://modelscope.cn/models/damo/RankingGPT-bloom-7b) |
| RankingGPT-llama2-7b | 76.2 | 76.3 | 57.8 | [huggingface](https://huggingface.co/zyznull/RankingGPT-llama2-7b) [modelscope](https://modelscope.cn/models/damo/RankingGPT-llama2-7b) |
| RankingGPT-baichuan2-7b | 75.9 | 74.3 | 57.5 | [huggingface](https://huggingface.co/zyznull/RankingGPT-baichuan2-7b) [modelscope](https://modelscope.cn/models/damo/RankingGPT-baichuan2-7b) |
| RankingGPT-qwen-7b | 75.8 | 74.3 | 58.3 | [huggingface](https://huggingface.co/zyznull/RankingGPT-qwen-7b) [modelscope](https://modelscope.cn/models/damo/RankingGPT-qwen-7b) |
### Citation

If you find our paper or models helpful, please consider citing them as follows:

```bibtex
@misc{zhang2023rankinggpt,
      title={RankingGPT: Empowering Large Language Models in Text Ranking with Progressive Enhancement},
      author={Longhui Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang and Min Zhang},
      year={2023},
      eprint={2311.16720},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```