nreimers committed
Commit 887c72d
1 Parent(s): daea9a4
README.md ADDED
@@ -0,0 +1,137 @@
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# dense_encoder-msmarco-distilbert-word2vec256k-MLM_210k

This model is based on [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_445k](https://huggingface.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_445k), which uses a 256k-entry vocabulary initialized with word2vec and was trained with MLM for 445k steps on the MS MARCO corpus with a frozen embedding matrix.

It was then trained on MS MARCO with [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py), again with a frozen embedding matrix. See train_script.py in this repository.
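MarginMSELoss trains the bi-encoder so that the score *margin* between a positive and a (hard) negative passage matches the margin given by a cross-encoder teacher. A minimal, framework-free sketch of the loss computation (illustrative only; the actual implementation lives in sentence-transformers and operates on tensors):

```python
# Margin-MSE sketch: mean squared error between the student (bi-encoder)
# score margin (pos - neg) and the teacher (cross-encoder) margin.

def margin_mse(bi_pos, bi_neg, teacher_pos, teacher_neg):
    """MSE between student and teacher score margins over a batch."""
    margins = [
        ((bp - bn) - (tp - tn)) ** 2
        for bp, bn, tp, tn in zip(bi_pos, bi_neg, teacher_pos, teacher_neg)
    ]
    return sum(margins) / len(margins)

# Toy batch: student margins 0.5 and 0.2 vs. teacher margins 0.6 and 0.2
loss = margin_mse([0.9, 0.7], [0.4, 0.5], [1.0, 0.8], [0.4, 0.6])  # loss ≈ 0.005
```

Because only the margin is supervised, the absolute scale of the bi-encoder scores is free to differ from the cross-encoder's.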

Performance:
- MS MARCO dev: (running) (MRR@10)
- TREC-DL 2019: 66.72 (nDCG@10)
- TREC-DL 2020: 69.14 (nDCG@10)

## Usage (Sentence-Transformers)

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```


## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
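The masked mean in `mean_pooling` can be sanity-checked with a small plain-Python analogue (toy numbers, no torch): padding positions must not contribute to the average.

```python
# Plain-Python analogue of masked mean pooling: average token vectors,
# counting only positions where the attention mask is 1.

def mean_pool(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    pooled = []
    for d in range(dim):
        total = sum(vec[d] * m for vec, m in zip(token_embeddings, attention_mask))
        count = max(sum(attention_mask), 1e-9)  # clamp like torch.clamp(min=1e-9)
        pooled.append(total / count)
    return pooled

# Two real tokens and one padding token: padding must not affect the mean.
tokens = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 3.0]
```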



## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`

Parameters of the fit()-method:
```
{
    "epochs": 30,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```
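The `WarmupLinear` scheduler named above ramps the learning rate linearly from 0 to the base LR over the warmup steps, then decays it linearly back to 0. A minimal sketch of that shape (illustrative, not the sentence-transformers implementation; `total_steps` is inferred here as DataLoader length 7858 × 30 epochs = 235,740):

```python
def warmup_linear_lr(step, base_lr=2e-05, warmup_steps=1000, total_steps=235740):
    """Linear warmup to base_lr, then linear decay to 0 (sketch)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Linear decay over the remaining training steps
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / max(total_steps - warmup_steps, 1)

print(warmup_linear_lr(500))  # halfway through warmup: 1e-05
```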


## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
config.json ADDED
@@ -0,0 +1,24 @@
{
  "_name_or_path": "/home/word2vec-bert/models/distilbert-word2vec256k-MLM_445k/",
  "activation": "gelu",
  "architectures": [
    "DistilBertModel"
  ],
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "torch_dtype": "float32",
  "transformers_version": "4.16.2",
  "vocab_size": 256000
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
{
  "__version__": {
    "sentence_transformers": "2.2.0",
    "transformers": "4.16.2",
    "pytorch": "1.10.2"
  }
}
modules.json ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f3949a0afdc8861a68f84f55cf30eb4434d68c12a086db809b9248d4a37cb78
size 958156601
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 250,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "model_input_names": ["input_ids", "attention_mask"], "special_tokens_map_file": "/home/ukp-reimers/.cache/huggingface/transformers/fe09c361189d8238b9e387f10a088e93f70620bfe74b82036baff1fed512a153.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d", "name_or_path": "/home/word2vec-bert/models/distilbert-word2vec256k-MLM_445k/", "tokenizer_class": "PreTrainedTokenizerFast"}
train_script.py ADDED
@@ -0,0 +1,234 @@
import sys
import json
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, LoggingHandler, util, models, evaluation, losses, InputExample
import logging
from datetime import datetime
import gzip
import os
import tarfile
import tqdm
from torch.utils.data import Dataset
import random
from shutil import copyfile
import pickle
import argparse

#### Just some code to print debug information to stdout
logging.basicConfig(format='%(asctime)s - %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    level=logging.INFO,
                    handlers=[LoggingHandler()])
#### /print debug information to stdout


parser = argparse.ArgumentParser()
parser.add_argument("--train_batch_size", default=64, type=int)
parser.add_argument("--max_seq_length", default=250, type=int)
parser.add_argument("--model_name", default="nicoladecao/msmarco-word2vec256000-distilbert-base-uncased")
parser.add_argument("--max_passages", default=0, type=int)
parser.add_argument("--epochs", default=30, type=int)
parser.add_argument("--pooling", default="mean")
parser.add_argument("--negs_to_use", default=None, help="From which systems should negatives be used? Multiple systems separated by comma. None = all")
parser.add_argument("--warmup_steps", default=1000, type=int)
parser.add_argument("--lr", default=2e-5, type=float)
parser.add_argument("--num_negs_per_system", default=5, type=int)
parser.add_argument("--use_all_queries", default=False, action="store_true")
args = parser.parse_args()

logging.info(str(args))



# The model we want to fine-tune
train_batch_size = args.train_batch_size  # Increasing the train batch size improves the model performance, but requires more GPU memory
model_name = args.model_name
max_passages = args.max_passages
max_seq_length = args.max_seq_length  # Max length for passages. Increasing it requires more GPU memory

num_negs_per_system = args.num_negs_per_system  # We used different systems to mine hard negatives. Number of hard negatives to add from each system
num_epochs = args.epochs  # Number of epochs we want to train

# Load our embedding model

logging.info("Create new SBERT model")
word_embedding_model = models.Transformer(model_name, max_seq_length=max_seq_length)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), args.pooling)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Freeze the embedding layer (requires_grad must be set on the parameters; setting it on the module itself has no effect)
for param in word_embedding_model.auto_model.embeddings.parameters():
    param.requires_grad = False

model_save_path = f'output/train_bi-encoder-margin_mse-word2vec-{model_name.replace("/", "-")}-batch_size_{train_batch_size}-{datetime.now().strftime("%Y-%m-%d_%H-%M-%S")}'


# Write self to path
os.makedirs(model_save_path, exist_ok=True)

train_script_path = os.path.join(model_save_path, 'train_script.py')
copyfile(__file__, train_script_path)
with open(train_script_path, 'a') as fOut:
    fOut.write("\n\n# Script was called via:\n#python " + " ".join(sys.argv))


### Now we read the MS MARCO dataset
data_folder = 'msmarco-data'

#### Read the corpus file that contains all the passages. Store them in the corpus dict
corpus = {}  # dict in the format: passage_id -> passage. Stores all existing passages
collection_filepath = os.path.join(data_folder, 'collection.tsv')
if not os.path.exists(collection_filepath):
    tar_filepath = os.path.join(data_folder, 'collection.tar.gz')
    if not os.path.exists(tar_filepath):
        logging.info("Download collection.tar.gz")
        util.http_get('https://msmarco.blob.core.windows.net/msmarcoranking/collection.tar.gz', tar_filepath)

    with tarfile.open(tar_filepath, "r:gz") as tar:
        tar.extractall(path=data_folder)

logging.info("Read corpus: collection.tsv")
with open(collection_filepath, 'r', encoding='utf8') as fIn:
    for line in fIn:
        pid, passage = line.strip().split("\t")
        pid = int(pid)
        corpus[pid] = passage


### Read the train queries, store them in the queries dict
queries = {}  # dict in the format: query_id -> query. Stores all training queries
queries_filepath = os.path.join(data_folder, 'queries.train.tsv')
if not os.path.exists(queries_filepath):
    tar_filepath = os.path.join(data_folder, 'queries.tar.gz')
    if not os.path.exists(tar_filepath):
        logging.info("Download queries.tar.gz")
        util.http_get('https://msmarco.blob.core.windows.net/msmarcoranking/queries.tar.gz', tar_filepath)

    with tarfile.open(tar_filepath, "r:gz") as tar:
        tar.extractall(path=data_folder)


with open(queries_filepath, 'r', encoding='utf8') as fIn:
    for line in fIn:
        qid, query = line.strip().split("\t")
        qid = int(qid)
        queries[qid] = query


# Load a dict (qid, pid) -> ce_score that maps query-ids (qid) and paragraph-ids (pid)
# to the CrossEncoder score computed by the cross-encoder/ms-marco-MiniLM-L-6-v2 model
ce_scores_file = os.path.join(data_folder, 'cross-encoder-ms-marco-MiniLM-L-6-v2-scores.pkl.gz')
if not os.path.exists(ce_scores_file):
    logging.info("Download cross-encoder scores file")
    util.http_get('https://huggingface.co/datasets/sentence-transformers/msmarco-hard-negatives/resolve/main/cross-encoder-ms-marco-MiniLM-L-6-v2-scores.pkl.gz', ce_scores_file)

logging.info("Load CrossEncoder scores dict")
with gzip.open(ce_scores_file, 'rb') as fIn:
    ce_scores = pickle.load(fIn)

# As training data we use hard negatives that have been mined with various systems
hard_negatives_filepath = os.path.join(data_folder, 'msmarco-hard-negatives.jsonl.gz')
if not os.path.exists(hard_negatives_filepath):
    logging.info("Download hard negatives file")
    util.http_get('https://huggingface.co/datasets/sentence-transformers/msmarco-hard-negatives/resolve/main/msmarco-hard-negatives.jsonl.gz', hard_negatives_filepath)


logging.info("Read hard negatives train file")
train_queries = {}
negs_to_use = None
with gzip.open(hard_negatives_filepath, 'rt') as fIn:
    for line in tqdm.tqdm(fIn):
        if max_passages > 0 and len(train_queries) >= max_passages:
            break
        data = json.loads(line)

        # Get the positive passage ids
        pos_pids = data['pos']

        # Get the hard negatives
        neg_pids = set()
        if negs_to_use is None:
            if args.negs_to_use is not None:  # Use specific systems for negatives
                negs_to_use = args.negs_to_use.split(",")
            else:  # Use all systems
                negs_to_use = list(data['neg'].keys())
            logging.info("Using negatives from the following systems: {}".format(", ".join(negs_to_use)))

        for system_name in negs_to_use:
            if system_name not in data['neg']:
                continue

            system_negs = data['neg'][system_name]
            negs_added = 0
            for pid in system_negs:
                if pid not in neg_pids:
                    neg_pids.add(pid)
                    negs_added += 1
                    if negs_added >= num_negs_per_system:
                        break

        if args.use_all_queries or (len(pos_pids) > 0 and len(neg_pids) > 0):
            train_queries[data['qid']] = {'qid': data['qid'], 'query': queries[data['qid']], 'pos': pos_pids, 'neg': neg_pids}

logging.info("Train queries: {}".format(len(train_queries)))

# We create a custom MSMARCO dataset that returns triplets (query, positive, negative)
# on-the-fly based on the information from the mined-hard-negatives jsonl file.
class MSMARCODataset(Dataset):
    def __init__(self, queries, corpus, ce_scores):
        self.queries = queries
        self.queries_ids = list(queries.keys())
        self.corpus = corpus
        self.ce_scores = ce_scores

        for qid in self.queries:
            self.queries[qid]['pos'] = list(self.queries[qid]['pos'])
            self.queries[qid]['neg'] = list(self.queries[qid]['neg'])
            random.shuffle(self.queries[qid]['neg'])

    def __getitem__(self, item):
        query = self.queries[self.queries_ids[item]]
        query_text = query['query']
        qid = query['qid']

        if len(query['pos']) > 0:
            pos_id = query['pos'].pop(0)  # Pop positive and add at end
            pos_text = self.corpus[pos_id]
            query['pos'].append(pos_id)
        else:  # We only have negatives, use two negs
            pos_id = query['neg'].pop(0)  # Pop negative and add at end
            pos_text = self.corpus[pos_id]
            query['neg'].append(pos_id)

        # Get a negative passage
        neg_id = query['neg'].pop(0)  # Pop negative and add at end
        neg_text = self.corpus[neg_id]
        query['neg'].append(neg_id)

        pos_score = self.ce_scores[qid][pos_id]
        neg_score = self.ce_scores[qid][neg_id]

        return InputExample(texts=[query_text, pos_text, neg_text], label=pos_score - neg_score)

    def __len__(self):
        return len(self.queries)

# For training the SentenceTransformer model, we need a dataset, a dataloader, and a loss used for training.
train_dataset = MSMARCODataset(queries=train_queries, corpus=corpus, ce_scores=ce_scores)
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=train_batch_size, drop_last=True)
train_loss = losses.MarginMSELoss(model=model)

# Train the model
model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=num_epochs,
          warmup_steps=args.warmup_steps,
          use_amp=True,
          checkpoint_path=model_save_path,
          checkpoint_save_steps=10000,
          optimizer_params={'lr': args.lr},
          )

# Save the final model
model.save(model_save_path)

# Script was called via:
#python train_bi-encoder_margin-mse_word2vec.py --model /home/word2vec-bert/models/distilbert-word2vec256k-MLM_445k/