---
language:
- en
tags:
- search
- ranking
- vespa
license: mit
base_model: "cross-encoder/ms-marco-MiniLM-L-6-v2"
---

# MS Marco Ranking with ColBERT on Vespa.ai 

This model is based on [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT](https://arxiv.org/abs/2004.12832).
The underlying BERT model is [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2), trained using the
original [ColBERT training routine](https://github.com/stanford-futuredata/ColBERT/).
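
For intuition, here is a minimal sketch (not part of this repository) of the late-interaction "MaxSim" scoring the ColBERT paper introduces: each query-term embedding is matched against its best document-term embedding, and the per-term maxima are summed into the relevance score.

```python
import torch

def maxsim(Q: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
    # Q: (num_query_terms, dim) query term embeddings
    # D: (num_doc_terms, dim) document term embeddings
    # Both are L2-normalized, so the dot product is cosine similarity.
    # For each query term, take the best-matching document term, then sum.
    return (Q @ D.T).max(dim=1).values.sum()
```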

This model has 22.3M trainable parameters and is approximately 2x faster than
[vespa-engine/colbert-medium](https://huggingface.co/vespa-engine/colbert-medium), with better or on-par MRR@10 on the dev query set.

The model weights were tuned by training on a randomized sample of MS Marco training triplets from
[MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking).
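
As a rough sketch of how such triplets are consumed during training (following the original ColBERT routine; exact hyperparameters are not reproduced here), the model scores the positive and negative passage for each query and minimizes a pairwise softmax cross-entropy:

```python
import torch
import torch.nn.functional as F

def pairwise_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    # pos_scores/neg_scores: (batch,) MaxSim scores for the positive and
    # negative passage of each (query, positive, negative) triplet.
    scores = torch.stack([pos_scores, neg_scores], dim=1)  # (batch, 2)
    labels = torch.zeros(scores.size(0), dtype=torch.long)  # positive = index 0
    return F.cross_entropy(scores, labels)
```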

To use this model with Vespa.ai for MS Marco Passage Ranking, see the
[MS Marco Ranking using Vespa.ai sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking).

# MS Marco Passage Ranking

| MS Marco Passage Ranking Query Set | MRR@10 ColBERT on Vespa.ai |
|------------------------------------|----------------|
| Dev                                | 0.364          |

Recall@K on Dev (6,980 queries)

|K                                   | Recall@K |
|------------------------------------|----------------|
| 50                                 | 0.816          |
| 200                                | 0.905          |
| 1000                               | 0.939          |


The MRR@10 on dev is achieved by re-ranking the top 1K passages retrieved by a dense retriever based on
[sentence-transformers/msmarco-MiniLM-L-6-v3](https://huggingface.co/sentence-transformers/msmarco-MiniLM-L-6-v3). Re-ranking the official top 1000 dev candidates
instead gives 0.354 MRR@10 (Recall@1K 0.82).
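
For reference, MRR@10 is the mean reciprocal rank of the first relevant passage within the top 10 results (0 if none appears), averaged over queries. A minimal sketch (illustrative helper, not part of the official evaluation tooling):

```python
def mrr_at_10(ranked_ids_per_query, relevant_ids_per_query):
    # ranked_ids_per_query: list of ranked passage-id lists, one per query
    # relevant_ids_per_query: list of sets of relevant passage ids
    total = 0.0
    for ranked, relevant in zip(ranked_ids_per_query, relevant_ids_per_query):
        for rank, pid in enumerate(ranked[:10], start=1):
            if pid in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_ids_per_query)
```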

The official BM25 baseline ranking model reaches MRR@10 0.16 on eval and 0.167 on the dev query set.
See the [MS Marco Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).

## Export ColBERT query encoder to ONNX
The ColBERT query encoder runs inside the Vespa serving container, mapping the textual query representation to the tensor representation. For this
we use Vespa's support for running ONNX models. The following snippet exports the model for serving:

```python
from transformers import BertModel, BertPreTrainedModel
import torch
import torch.nn as nn


class VespaColBERT(BertPreTrainedModel):
    """BERT encoder followed by a linear projection to 32-dim term vectors."""

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        self.linear = nn.Linear(config.hidden_size, 32, bias=False)
        self.init_weights()

    def forward(self, input_ids, attention_mask):
        Q = self.bert(input_ids, attention_mask=attention_mask)[0]
        Q = self.linear(Q)
        # L2-normalize each term embedding so dot products are cosine similarities
        return torch.nn.functional.normalize(Q, p=2, dim=2)


colbert_query_encoder = VespaColBERT.from_pretrained("vespa-engine/col-minilm")

# Export the model to ONNX for serving in Vespa.
# The query encoder accepts at most 32 query terms.
input_names = ["input_ids", "attention_mask"]
output_names = ["contextual"]
input_ids = torch.ones(1, 32, dtype=torch.int64)
attention_mask = torch.ones(1, 32, dtype=torch.int64)
args = (input_ids, attention_mask)
torch.onnx.export(colbert_query_encoder,
                  args=args,
                  f="query_encoder_colbert.onnx",
                  input_names=input_names,
                  output_names=output_names,
                  dynamic_axes={
                      "input_ids": {0: "batch"},
                      "attention_mask": {0: "batch"},
                      "contextual": {0: "batch"},
                  },
                  opset_version=11)
```
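
To sanity-check the exported model, one can run it with onnxruntime. This is a minimal sketch that assumes the standard BERT tokenizer files from this repository; it skips ColBERT-specific query preprocessing (the [Q] marker and [MASK] query augmentation from the paper), which the Vespa sample app handles at serving time:

```python
import numpy as np
import onnxruntime as ort
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("vespa-engine/col-minilm")
session = ort.InferenceSession("query_encoder_colbert.onnx")

encoded = tokenizer("what is late interaction", padding="max_length",
                    max_length=32, truncation=True, return_tensors="np")
(embeddings,) = session.run(
    ["contextual"],
    {"input_ids": encoded["input_ids"].astype(np.int64),
     "attention_mask": encoded["attention_mask"].astype(np.int64)})
print(embeddings.shape)  # (1, 32, 32): batch x query terms x embedding dim
```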

# Representing the model on Vespa.ai
See [Ranking with ONNX models](https://docs.vespa.ai/documentation/onnx.html) and the [MS Marco Ranking sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking).
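
As a hypothetical sketch of the schema-side declaration (the model name and input/output wiring are illustrative; the linked documentation and sample app are authoritative), the exported encoder could be declared like this:

```
onnx-model query_encoder {
    file: models/query_encoder_colbert.onnx
    input input_ids: query(input_ids)
    input attention_mask: query(attention_mask)
    output contextual: contextual
}
```

The encoder output can then be referenced in a rank profile as `onnx(query_encoder).contextual`.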