# SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval

Paper available at [https://arxiv.org/pdf/2207.02578](https://arxiv.org/pdf/2207.02578)

Code available at [https://github.com/microsoft/unilm/tree/master/simlm](https://github.com/microsoft/unilm/tree/master/simlm)

## Paper abstract

In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval.
It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training.
We use a replaced language modeling objective, which is inspired by ELECTRA,
to improve the sample efficiency and reduce the mismatch of the input distribution between pre-training and fine-tuning.
SimLM only requires access to an unlabeled corpus, and is more broadly applicable when no labeled data or queries are available.
We conduct experiments on several large-scale passage retrieval datasets, and show substantial improvements over strong baselines under various settings.
Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2, which incur significantly more storage cost.

## Results on MS-MARCO passage ranking task

| Model | dev MRR@10 | dev R@50 | dev R@1k | TREC DL 2019 nDCG@10 | TREC DL 2020 nDCG@10 |
|--|---|---|---|---|---|
| **SimLM (this model)** | 43.8 | 89.2 | 98.6 | 74.6 | 72.7 |

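For context, MRR@10 (mean reciprocal rank) averages, over all queries, the reciprocal rank of the first relevant passage among the top 10 results. Below is a minimal sketch of the computation; the variable names are illustrative and not taken from the official evaluation scripts:

```python
from typing import List

def mrr_at_10(rankings: List[List[int]]) -> float:
    # Each inner list holds binary relevance labels (1 = relevant) for the
    # top-10 passages retrieved for one query, in rank order.
    total = 0.0
    for labels in rankings:
        for rank, label in enumerate(labels[:10], start=1):
            if label == 1:
                total += 1.0 / rank
                break
    return total / len(rankings)

# First query: relevant hit at rank 2; second query: hit at rank 1.
print(mrr_at_10([[0, 1, 0], [1, 0, 0]]))  # (1/2 + 1) / 2 = 0.75
```
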
## Usage

Since we use a listwise loss to train the re-ranker,
the relevance score is not bounded to a specific numerical range.
A higher score means the given query and passage are more relevant to each other.

Get relevance scores from our re-ranker:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, BatchEncoding, PreTrainedTokenizerFast
from transformers.modeling_outputs import SequenceClassifierOutput

def encode(tokenizer: PreTrainedTokenizerFast,
           query: str, passage: str, title: str = '-') -> BatchEncoding:
    # Pair the query with a "title: passage" string, truncated to 192 tokens.
    return tokenizer(query,
                     text_pair='{}: {}'.format(title, passage),
                     max_length=192,
                     padding=True,
                     truncation=True,
                     return_tensors='pt')

tokenizer = AutoTokenizer.from_pretrained('intfloat/simlm-msmarco-reranker')
model = AutoModelForSequenceClassification.from_pretrained('intfloat/simlm-msmarco-reranker')
model.eval()

with torch.no_grad():
    # A relevant passage: expect a higher score.
    batch_dict = encode(tokenizer, 'how long is super bowl game', 'The Super Bowl is typically four hours long. The game itself takes about three and a half hours, with a 30 minute halftime show built in.')
    outputs: SequenceClassifierOutput = model(**batch_dict, return_dict=True)
    print(outputs.logits[0])

    # An off-topic passage for the same query: expect a lower score.
    batch_dict = encode(tokenizer, 'how long is super bowl game', 'The cost of a Super Bowl commercial runs about $5 million for 30 seconds of airtime. But the benefits that the spot can bring to a brand can help to justify the cost.')
    outputs: SequenceClassifierOutput = model(**batch_dict, return_dict=True)
    print(outputs.logits[0])
```
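
To re-rank a list of candidate passages for a single query, you can score all pairs in one padded batch and sort by score. The following is a minimal sketch reusing the model and tokenizer above; the `rerank` helper, its batching, and the assumption of a single relevance logit per pair are illustrative, not part of the official repository:

```python
from typing import List, Tuple

def rerank(query: str, passages: List[str], title: str = '-') -> List[Tuple[float, str]]:
    # Tokenize all (query, "title: passage") pairs as one padded batch,
    # mirroring the encode() function above.
    batch_dict = tokenizer([query] * len(passages),
                           text_pair=['{}: {}'.format(title, p) for p in passages],
                           max_length=192,
                           padding=True,
                           truncation=True,
                           return_tensors='pt')
    with torch.no_grad():
        # Assumes the classifier head outputs one relevance logit per pair.
        scores = model(**batch_dict, return_dict=True).logits.squeeze(-1).tolist()
    # Higher score = more relevant.
    return sorted(zip(scores, passages), key=lambda x: x[0], reverse=True)
```

For large candidate sets you would additionally split the passages into fixed-size batches and move the inputs and model to a GPU.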

## Citation

```bibtex
@article{Wang2022SimLMPW,
  title={SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval},
  author={Liang Wang and Nan Yang and Xiaolong Huang and Binxing Jiao and Linjun Yang and Daxin Jiang and Rangan Majumder and Furu Wei},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.02578}
}
```