Sentence Similarity
sentence-transformers
Safetensors
Transformers
PyTorch
English
gemma3_text
feature-extraction
mteb
Eval Results (legacy)
text-embeddings-inference
Instructions for using Surpem/Supertron-embedding-300M with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- sentence-transformers
How to use Surpem/Supertron-embedding-300M with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Surpem/Supertron-embedding-300M")

sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
- Transformers
How to use Surpem/Supertron-embedding-300M with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Surpem/Supertron-embedding-300M")
model = AutoModel.from_pretrained("Surpem/Supertron-embedding-300M")
```
- Notebooks
- Google Colab
- Kaggle
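Note that `AutoModel` returns token-level hidden states, not sentence embeddings. This model card does not specify the pooling strategy, so as an assumption, a common approach is masked mean pooling over the last hidden state; a minimal sketch with PyTorch (the `mean_pool` helper is hypothetical, not part of the model's API):

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Expand the attention mask to the hidden dimension so padding
    # tokens contribute nothing to the sum.
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# Hypothetical usage with the model loaded above:
# inputs = tokenizer(sentences, padding=True, return_tensors="pt")
# with torch.no_grad():
#     outputs = model(**inputs)
# embeddings = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])

# Quick shape check with dummy tensors: batch of 2, seq len 3, hidden size 4
hidden = torch.ones(2, 3, 4)
mask = torch.tensor([[1, 1, 0], [1, 1, 1]])
print(mean_pool(hidden, mask).shape)  # torch.Size([2, 4])
```

If this model was trained with a different pooling scheme (e.g. last-token or CLS pooling), the sentence-transformers route above applies the correct one automatically and is the safer choice.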