Zhichao Geng committed on
Commit 5428781
1 Parent(s): b07df81

Update README.md

Files changed (1)
  1. README.md +100 -0

README.md CHANGED
---
language: en
license: apache-2.0
tags:
- learned sparse
- opensearch
- transformers
- retrieval
---

# opensearch-neural-sparse-encoding-v1

This is a learned sparse retrieval model. It encodes queries and documents into 30522-dimensional **sparse vectors**. Each non-zero dimension corresponds to a token in the model's vocabulary, and its weight reflects the importance of that token.
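
For intuition, an encoded text can be read as a mapping from vocabulary tokens to non-negative weights in which almost all of the 30522 entries are zero. The snippet below is illustrative only and not part of the original card; it shows the query from the usage example further down rendered this way, using its top query-side weights.

```python
# Illustrative only: a sparse vector viewed as a token-to-weight mapping.
# These weights are the top query-side values printed by the usage example below;
# the remaining vocabulary dimensions are zero and are simply omitted.
query_sparse_representation = {
    "ny": 2.9262,
    "weather": 2.5206,
    "york": 2.0373,
    "cool": 1.5786,
    "current": 1.4636,
}
```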

The OpenSearch neural sparse feature supports learned sparse retrieval with a Lucene inverted index (see https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/). Indexing and search can be performed with the OpenSearch high-level API, as sketched below.
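
The card itself only links to the documentation above. As a rough, hypothetical illustration, a neural sparse query issued through the `opensearch-py` client might look like the following sketch; the index name, field name, host settings, and `model_id` are placeholders, and the exact query syntax should be checked against the neural sparse documentation for your OpenSearch version.

```python
# Hypothetical sketch, not from the original model card: querying a cluster that
# already has this model deployed and documents indexed into a rank_features field.
from opensearchpy import OpenSearch

# placeholder connection settings
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

response = client.search(
    index="my-nlp-index",  # placeholder index with a rank_features field "passage_embedding"
    body={
        "query": {
            "neural_sparse": {
                "passage_embedding": {
                    "query_text": "What's the weather in ny now?",
                    "model_id": "<model_id>",  # id of the sparse encoding model deployed in the cluster
                }
            }
        }
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"])
```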

## Usage (HuggingFace)

This model is intended to run inside an OpenSearch cluster, but you can also use it outside the cluster with the Hugging Face `transformers` API.
18
+
19
+ ```
20
+ import itertools
21
+ import torch
22
+ from transformers import AutoModelForMaskedLM, AutoTokenizer
23
+
24
+ # get sparse vector from dense vectors with shape batch_size * seq_len * vocab_size
25
+ def get_sparse_vector(feature, output):
26
+ values, _ = torch.max(output*feature["attention_mask"].unsqueeze(-1), dim=1)
27
+ values = torch.log(1 + torch.relu(values))
28
+ return values
29
+
30
+ # transform the sparse vector to a dict of (token, weight)
31
+ def transform_sparse_vector_to_dict(sparse_vector, id_to_token):
32
+ sample_indices,token_indices=torch.nonzero(sparse_vector,as_tuple=True)
33
+ non_zero_values = sparse_vector[(sample_indices,token_indices)].tolist()
34
+ number_of_tokens_for_each_sample = torch.bincount(sample_indices).cpu().tolist()
35
+ tokens = [id_to_token[_id] for _id in token_indices.tolist()]
36
+
37
+ output = []
38
+ end_idxs = list(itertools.accumulate([0]+number_of_tokens_for_each_sample))
39
+ for i in range(len(end_idxs)-1):
40
+ token_strings = tokens[end_idxs[i]:end_idxs[i+1]]
41
+ weights = non_zero_values[end_idxs[i]:end_idxs[i+1]]
42
+ output.append(dict(zip(token_strings, weights)))
43
+ return output
44
+
45
+
46
+ # load the model
47
+ model = AutoModelForMaskedLM.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-v1")
48
+ tokenizer = AutoTokenizer.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-v1")
49
+
50
+ query = "What's the weather in ny now?"
51
+ document = "Currently New York is rainy."
52
+
53
+ # encode the query & document
54
+ feature = tokenizer([query, document], padding=True, truncation=True, return_tensors='pt', return_token_type_ids=False)
55
+ output = model(**feature)[0]
56
+ sparse_vector = get_sparse_vector(feature, output)
57
+
58
+ # get similarity score
59
+ sim_score = torch.matmul(sparse_vector[0],sparse_vector[1])
60
+ print(sim_score) # tensor(22.3299, grad_fn=<DotBackward0>)
61
+
62
+ # get the array to transform token id to token string
63
+ id_to_token = ["" for i in range(tokenizer.vocab_size)]
64
+ for token, _id in tokenizer.vocab.items():
65
+ id_to_token[_id] = token
66
+
67
+
68
+ query_token_weight, document_query_token_weight = transform_sparse_vector_to_dict(sparse_vector, id_to_token)
69
+ for token in sorted(query_token_weight, key=lambda x:query_token_weight[x], reverse=True):
70
+ if token in document_query_token_weight:
71
+ print("score in query: %.4f, score in document: %.4f, token: %s"%(query_token_weight[token],document_query_token_weight[token],token))
72
+
73
+
74
+
75
+ # result:
76
+ # score in query: 2.9262, score in document: 2.1335, token: ny
77
+ # score in query: 2.5206, score in document: 1.5277, token: weather
78
+ # score in query: 2.0373, score in document: 2.3489, token: york
79
+ # score in query: 1.5786, score in document: 0.8752, token: cool
80
+ # score in query: 1.4636, score in document: 1.5132, token: current
81
+ # score in query: 0.7761, score in document: 0.8860, token: season
82
+ # score in query: 0.7560, score in document: 0.6726, token: 2020
83
+ # score in query: 0.7222, score in document: 0.6292, token: summer
84
+ # score in query: 0.6888, score in document: 0.6419, token: nina
85
+ # score in query: 0.6451, score in document: 0.8200, token: storm
86
+ # score in query: 0.4698, score in document: 0.7635, token: brooklyn
87
+ # score in query: 0.4562, score in document: 0.1208, token: julian
88
+ # score in query: 0.3484, score in document: 0.3903, token: wow
89
+ # score in query: 0.3439, score in document: 0.4160, token: usa
90
+ # score in query: 0.2751, score in document: 0.8260, token: manhattan
91
+ # score in query: 0.2013, score in document: 0.7735, token: fog
92
+ # score in query: 0.1989, score in document: 0.2961, token: mood
93
+ # score in query: 0.1653, score in document: 0.3437, token: climate
94
+ # score in query: 0.1191, score in document: 0.1533, token: nature
95
+ # score in query: 0.0665, score in document: 0.0600, token: temperature
96
+ # score in query: 0.0552, score in document: 0.3396, token: windy
97
+
98
+ ```

The code sample above shows an example of neural sparse search. Although the original query and document share no overlapping tokens, the model still matches them well.
101
+
102
+ ## Performance
103
+ This model is trained on MS MARCO dataset. The search relevance score of it can be found here (Neural sparse search bi-encoder) https://opensearch.org/blog/improving-document-retrieval-with-sparse-semantic-encoders/.