Tags:
- Sentence Similarity
- sentence-transformers
- Safetensors
- bert
- feature-extraction
- dense
- Generated from Trainer
- dataset_size:556626
- loss:MultipleNegativesSymmetricRankingLoss
- Eval Results (legacy)
- text-embeddings-inference
Instructions for using LamaDiab/MiniLM-V7-128BATCH-V6Data-SemanticEngine with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- sentence-transformers
How to use LamaDiab/MiniLM-V7-128BATCH-V6Data-SemanticEngine with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LamaDiab/MiniLM-V7-128BATCH-V6Data-SemanticEngine")
sentences = [
    "dimlaj orchid printed finest durable glass terkish tea set",
    "v3 pro purple",
    "glass tea set",
    "easy cleaning beanbag",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
- Notebooks
- Google Colab
- Kaggle
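By default, `model.similarity` in sentence-transformers computes pairwise cosine similarity between embeddings. A minimal sketch of that computation with plain NumPy, using toy vectors in place of real model output (the function name and the random embeddings are illustrative, not from the model):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity, mirroring model.similarity's default metric."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

# Toy 4x384 matrix standing in for model.encode output (4 sentences, 384 dims).
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 384))
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (4, 4)
# The diagonal is 1.0: each vector is maximally similar to itself.
```

The row-wise argmax of such a matrix (with the diagonal masked out) gives each sentence's nearest neighbor, which is the basic building block of a semantic search engine.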
Pooling configuration:

```json
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
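With `pooling_mode_mean_tokens` enabled, the sentence embedding is the mean of the token embeddings, with padding positions masked out via the attention mask. A minimal sketch of that operation, assuming hypothetical token embeddings and mask rather than real BERT output:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over the sequence, ignoring padding (mask == 0)."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid div by zero
    return summed / counts

# Toy batch: 2 sentences, 5 token slots, 384-dim embeddings (matching the config).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(2, 5, 384))
mask = np.array([[1, 1, 1, 0, 0],   # 3 real tokens, 2 padding
                 [1, 1, 1, 1, 1]])  # no padding
pooled = mean_pool(tokens, mask)
print(pooled.shape)  # (2, 384)
```

For the first sentence, only the three unmasked token vectors contribute to the average, so the sentence embedding is unaffected by padding length.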