Tags: Sentence Similarity · sentence-transformers · Safetensors · bert · feature-extraction · dense · Generated from Trainer · dataset_size:647236 · loss:MultipleNegativesSymmetricRankingLoss · Eval Results (legacy) · text-embeddings-inference
Instructions to use LamaDiab/MiniLM-V22Data-256ConstantBATCH-SemanticEngine with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use LamaDiab/MiniLM-V22Data-256ConstantBATCH-SemanticEngine with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LamaDiab/MiniLM-V22Data-256ConstantBATCH-SemanticEngine")

sentences = [
    "essence multi task concealer 15 natural nude",
    "pure oxygen 20 vol",
    "essence",
    "face make-up",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Notebooks
  - Google Colab
  - Kaggle
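The `similarity` call in the snippet above returns pairwise cosine similarity between the embeddings. A minimal NumPy sketch of the same computation, using random stand-in vectors (384 dimensions is assumed here, the usual MiniLM output size) so it runs without downloading the model:

```python
import numpy as np

# Stand-in for model.encode(sentences): 4 random 384-dim vectors.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 384))

# Cosine similarity: L2-normalize each row, then take the dot-product matrix.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized = embeddings / norms
similarities = normalized @ normalized.T

print(similarities.shape)  # (4, 4)
```

Each diagonal entry is 1.0 (a vector compared with itself), and the matrix is symmetric.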
Training in progress, epoch 4
Changed files:
- eval/triplet_evaluation_results.csv +3 -0
- model.safetensors +1 -1
eval/triplet_evaluation_results.csv CHANGED

```diff
@@ -6,3 +6,6 @@ epoch,steps,accuracy_cosine
 1.9762939549585146,5000,0.9638237357139587
 2.3713947056499407,6000,0.9671889543533325
 2.7664954563413673,7000,0.9656115174293518
+3.1615962070327934,8000,0.9663476943969727
+3.5566969577242196,9000,0.9673992991447449
+3.951797708415646,10000,0.9675044417381287
```
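A quick way to inspect the evaluation trend is to parse the CSV and pick the best checkpoint. A minimal sketch using only the rows visible in the diff above (the file's earlier rows are omitted here):

```python
import csv
import io

# Rows copied from eval/triplet_evaluation_results.csv as shown above.
csv_text = """epoch,steps,accuracy_cosine
1.9762939549585146,5000,0.9638237357139587
2.3713947056499407,6000,0.9671889543533325
2.7664954563413673,7000,0.9656115174293518
3.1615962070327934,8000,0.9663476943969727
3.5566969577242196,9000,0.9673992991447449
3.951797708415646,10000,0.9675044417381287
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
best = max(rows, key=lambda r: float(r["accuracy_cosine"]))
print(best["steps"], best["accuracy_cosine"])  # 10000 0.9675044417381287
```

Of the rows shown, the final checkpoint at 10,000 steps has the highest cosine triplet accuracy (~0.9675).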
model.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a7e78f05d1b909a1bb3e88f3ed9dcb2e26eedc64fdd8089b0fd3331f531184ea
 size 90864192
```
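The `oid sha256:` line in the Git LFS pointer above is the SHA-256 digest of the full weight file, so a downloaded `model.safetensors` can be checked against it. A hedged sketch (the path is a placeholder for wherever the file was downloaded):

```python
import hashlib

def lfs_oid(path: str) -> str:
    """Compute the SHA-256 hex digest that Git LFS records as the pointer's oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large weight files don't load fully into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (placeholder path):
# lfs_oid("model.safetensors") should equal the oid shown in the pointer diff.
```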