nreimers committed on
Commit
5630066
1 Parent(s): 54bce0f
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -98,6 +98,17 @@ for doc, score in doc_score_pairs:
     print(score, doc)
 ```
 
+## Technical Details
+
+The following table lists some technical details on how this model should be used:
+
+| Setting | Value |
+| --- | :---: |
+| Dimensions | 384 |
+| Produces normalized embeddings | No |
+| Pooling-Method | CLS pooling |
+| Suitable score functions | dot-product (e.g. `util.dot_score`) |
+
 ----
 
 
@@ -127,11 +138,13 @@ The full training script is accessible in this current repository: `train_script
 
 We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
 
-#### Training data
+#### Training
 
 We use a concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
 We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
 
+The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using CLS-pooling, dot-product as similarity function, and a scale of 1.
+
 
 
 
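As a reading aid for the Technical Details table added in the first hunk, here is a minimal usage sketch that follows those settings (384-dimensional, non-normalized embeddings, CLS pooling, dot-product scoring). It assumes the `sentence-transformers` library already used in the README snippet; the model id and example texts are placeholders, not taken from this commit.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder model id -- substitute the id of the model this card describes.
model = SentenceTransformer("sentence-transformers/your-model-id")

query = "How many people live in London?"
docs = [
    "Around 9 million people live in London.",
    "London is known for its financial district.",
]

# Embeddings are 384-dimensional and NOT normalized (CLS pooling),
# so dot-product -- not cosine similarity -- is the intended score function.
query_emb = model.encode(query, convert_to_tensor=True)
doc_emb = model.encode(docs, convert_to_tensor=True)

# util.dot_score returns a (1, len(docs)) tensor of dot-product scores.
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Rank documents by score, highest first.
doc_score_pairs = sorted(zip(docs, scores), key=lambda x: x[1], reverse=True)
for doc, score in doc_score_pairs:
    print(score, doc)
```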
 
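The training description added in the second hunk maps onto the `sentence-transformers` training API roughly as follows. This is a sketch under stated assumptions, not the actual training script shipped in the repository: the example pairs, batch size, and epoch count are illustrative only.

```python
from torch.utils.data import DataLoader

from sentence_transformers import InputExample, SentenceTransformer, losses, models, util

# Encoder: the pretrained MiniLM checkpoint with CLS pooling on top.
word_embedding_model = models.Transformer("nreimers/MiniLM-L6-H384-uncased")
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode="cls",
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Illustrative (question, answer) pairs; the real run samples ~215M pairs
# with the weights given in data_config.json.
train_examples = [
    InputExample(texts=["How many people live in London?", "Around 9 million people live in London."]),
    InputExample(texts=["What is the capital of France?", "Paris is the capital of France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss with dot-product similarity and scale 1,
# as stated in the commit; other answers in the batch act as in-batch negatives.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=1, similarity_fct=util.dot_score)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```

With a scale of 1 the raw dot-products are used directly as logits, which is consistent with the non-normalized, dot-product-scored embeddings described in the table above.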