---
language: en
license: apache-2.0
tags:
- learned sparse
- opensearch
- transformers
- retrieval
- passage-retrieval
- query-expansion
- document-expansion
- bag-of-words
---

# opensearch-neural-sparse-encoding-v1

This is a learned sparse retrieval model. It encodes queries and documents into 30522-dimensional **sparse vectors**. Each non-zero dimension corresponds to a token in the vocabulary, and its weight reflects the importance of that token.

The OpenSearch neural sparse feature supports learned sparse retrieval with a Lucene inverted index. Link: https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/. Indexing and search can be performed with the OpenSearch high-level API (a brief OpenSearch-side sketch is included at the end of this card).

## Usage (HuggingFace)

This model is intended to run inside an OpenSearch cluster, but you can also use it outside the cluster with the Hugging Face models API.

```python
import itertools

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer


# get the sparse vector from the dense model output with shape batch_size * seq_len * vocab_size
def get_sparse_vector(feature, output):
    values, _ = torch.max(output * feature["attention_mask"].unsqueeze(-1), dim=1)
    values = torch.log(1 + torch.relu(values))
    values[:, get_sparse_vector.special_token_ids] = 0
    return values


# transform the sparse vector to a dict of (token, weight)
def transform_sparse_vector_to_dict(sparse_vector):
    sample_indices, token_indices = torch.nonzero(sparse_vector, as_tuple=True)
    non_zero_values = sparse_vector[(sample_indices, token_indices)].tolist()
    number_of_tokens_for_each_sample = torch.bincount(sample_indices).cpu().tolist()
    tokens = [transform_sparse_vector_to_dict.id_to_token[_id] for _id in token_indices.tolist()]

    output = []
    end_idxs = list(itertools.accumulate([0] + number_of_tokens_for_each_sample))
    for i in range(len(end_idxs) - 1):
        token_strings = tokens[end_idxs[i]:end_idxs[i + 1]]
        weights = non_zero_values[end_idxs[i]:end_idxs[i + 1]]
        output.append(dict(zip(token_strings, weights)))
    return output


# load the model
model = AutoModelForMaskedLM.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-v1")
tokenizer = AutoTokenizer.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-v1")

# set the special token ids and the id_to_token mapping used for post-processing
special_token_ids = [tokenizer.vocab[token] for token in tokenizer.special_tokens_map.values()]
get_sparse_vector.special_token_ids = special_token_ids
id_to_token = ["" for i in range(tokenizer.vocab_size)]
for token, _id in tokenizer.vocab.items():
    id_to_token[_id] = token
transform_sparse_vector_to_dict.id_to_token = id_to_token


query = "What's the weather in ny now?"
document = "Currently New York is rainy."
# encode the query & document
feature = tokenizer([query, document], padding=True, truncation=True, return_tensors='pt', return_token_type_ids=False)
output = model(**feature)[0]
sparse_vector = get_sparse_vector(feature, output)

# get similarity score
sim_score = torch.matmul(sparse_vector[0], sparse_vector[1])
print(sim_score)   # tensor(22.3299, grad_fn=<DotBackward0>)


query_token_weight, document_query_token_weight = transform_sparse_vector_to_dict(sparse_vector)
for token in sorted(query_token_weight, key=lambda x: query_token_weight[x], reverse=True):
    if token in document_query_token_weight:
        print("score in query: %.4f, score in document: %.4f, token: %s" % (query_token_weight[token], document_query_token_weight[token], token))


# result:
# score in query: 2.9262, score in document: 2.1335, token: ny
# score in query: 2.5206, score in document: 1.5277, token: weather
# score in query: 2.0373, score in document: 2.3489, token: york
# score in query: 1.5786, score in document: 0.8752, token: cool
# score in query: 1.4636, score in document: 1.5132, token: current
# score in query: 0.7761, score in document: 0.8860, token: season
# score in query: 0.7560, score in document: 0.6726, token: 2020
# score in query: 0.7222, score in document: 0.6292, token: summer
# score in query: 0.6888, score in document: 0.6419, token: nina
# score in query: 0.6451, score in document: 0.8200, token: storm
# score in query: 0.4698, score in document: 0.7635, token: brooklyn
# score in query: 0.4562, score in document: 0.1208, token: julian
# score in query: 0.3484, score in document: 0.3903, token: wow
# score in query: 0.3439, score in document: 0.4160, token: usa
# score in query: 0.2751, score in document: 0.8260, token: manhattan
# score in query: 0.2013, score in document: 0.7735, token: fog
# score in query: 0.1989, score in document: 0.2961, token: mood
# score in query: 0.1653, score in document: 0.3437, token: climate
# score in query: 0.1191, score in document: 0.1533, token: nature
# score in query: 0.0665, score in document: 0.0600, token: temperature
# score in query: 0.0552, score in document: 0.3396, token: windy
```

The code sample above shows an example of neural sparse search. Although there is no token overlap between the original query and document, the model still produces a good match.

## Performance

This model was trained on the MS MARCO dataset. Its search relevance scores (reported as "Neural sparse search bi-encoder") can be found in the OpenSearch blog post: https://opensearch.org/blog/improving-document-retrieval-with-sparse-semantic-encoders/.
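## Usage (OpenSearch)

Inside an OpenSearch cluster, the usual workflow is to register this model, ingest documents through a `sparse_encoding` pipeline, and search with the `neural_sparse` query described in the documentation linked above. The snippet below is only a minimal sketch of the search step using the `opensearch-py` client; the endpoint, credentials, index name (`my-nlp-index`), sparse-vector field (`passage_embedding`), text field (`passage_text`), and `model_id` are illustrative placeholders, not values shipped with this model.

```python
# Minimal sketch (placeholder names throughout): searching an OpenSearch index that
# already stores sparse vectors produced by a sparse_encoding ingest pipeline.
from opensearchpy import OpenSearch

# connect to the cluster (placeholder endpoint and credentials)
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

# neural_sparse query: the cluster encodes query_text with the registered model
# and matches it against the sparse vectors stored in the "passage_embedding" field
query_body = {
    "query": {
        "neural_sparse": {
            "passage_embedding": {              # placeholder sparse-vector field
                "query_text": "What's the weather in ny now?",
                "model_id": "<your-model-id>",  # placeholder: id of the deployed model
            }
        }
    }
}

response = client.search(index="my-nlp-index", body=query_body)  # placeholder index name
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("passage_text"))     # placeholder text field
```

At search time the cluster expands the query text into the same 30522-dimensional token/weight space shown in the HuggingFace example, so scoring happens directly against the Lucene inverted index built from the ingested sparse vectors.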