---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers

---


# MSMARCO SENTENCE SIMILARITY

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

The model was trained with a triplet loss function on a 100k sample of the MS MARCO dataset.
Base PLM: roberta-base adapted to MS MARCO (famube/roberta-base-msmarco).

## Usage (Sentence-Transformers)

Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
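
Since the model targets semantic search, a retrieval-style example may be more illustrative than raw embeddings. Below is a minimal sketch using `sentence_transformers.util.cos_sim`; `{MODEL_NAME}` is the same unfilled placeholder as above, and the corpus and query strings are made up for illustration:

```python
from sentence_transformers import SentenceTransformer, util

# '{MODEL_NAME}' is the unfilled placeholder used throughout this card;
# substitute this model's Hub id when running the example.
model = SentenceTransformer('{MODEL_NAME}')

# Toy corpus and query, made up for illustration.
corpus = [
    "MS MARCO is a collection of datasets for deep learning in search.",
    "A triplet loss pulls an anchor towards a positive and away from a negative.",
    "The weather in Lisbon is mild in spring.",
]
query = "What is MS MARCO used for?"

# Encode the corpus once, then score the query against every passage.
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(f"Best match (score {scores[best].item():.3f}): {corpus[best]}")
```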


## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
import torch
from transformers import AutoTokenizer, AutoModel


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
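
If you want similarity scores rather than raw vectors, a common follow-up (a sketch continuing directly from the `sentence_embeddings` tensor in the snippet above) is to L2-normalize the embeddings so that dot products become cosine similarities:

```python
import torch.nn.functional as F

# L2-normalize so that dot products equal cosine similarities.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)

# Cosine similarity between the two example sentences above.
similarity = (normalized[0] @ normalized[1]).item()
print(f"Cosine similarity: {similarity:.4f}")
```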


## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})


## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 6250 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
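
With the Euclidean distance metric and a margin of 5, this is the standard triplet objective, restated here for reference (not part of the original card), with $f$ the sentence encoder and $(a, p, n)$ an (anchor, positive, negative) triple:

$$
\mathcal{L}(a, p, n) = \max\bigl(\lVert f(a) - f(p)\rVert_2 - \lVert f(a) - f(n)\rVert_2 + 5,\ 0\bigr)
$$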

Parameters of the fit() method:
```
{
    "epochs": 4,
    "evaluation_steps": 10000,
    "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 100,
    "weight_decay": 0.01
}
```
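
Put together, a training run with these parameters would look roughly like the following. This is a reconstruction from the listed hyperparameters, not the original training script; the triplet file name and its `query \t positive \t negative` format are assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

# Base PLM adapted to MS MARCO (see above); loading it directly with
# SentenceTransformer yields the Transformer + mean-pooling stack listed below.
model = SentenceTransformer('famube/roberta-base-msmarco')

# Hypothetical triplet file: one "query \t positive \t negative" per line.
train_examples = []
with open('msmarco_triplets_100k.tsv') as f:
    for line in f:
        query, positive, negative = line.rstrip('\n').split('\t')
        train_examples.append(InputExample(texts=[query, positive, negative]))

# Matches the DataLoader and loss parameters listed above.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

# The card also lists a TripletEvaluator with evaluation_steps=10000,
# omitted here for brevity.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=100,
    weight_decay=0.01,
    max_grad_norm=1,
    optimizer_params={'lr': 2e-05},
)
```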


## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
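
Equivalently, the same two-module stack can be assembled explicitly from `sentence_transformers.models`; a minimal sketch, assuming the base PLM named above:

```python
from sentence_transformers import SentenceTransformer, models

# Transformer module: RoBERTa backbone with the settings listed above.
word_embedding_model = models.Transformer(
    'famube/roberta-base-msmarco',
    max_seq_length=512,
)

# Pooling module: mean pooling over token embeddings (768 dimensions).
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```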

## Citing & Authors

<!--- Describe where people can find more information -->

The model was trained with a triplet loss function on a 100k sample of the MS MARCO dataset.
Base PLM: roberta-base adapted to MS MARCO (famube/roberta-base-msmarco).