|
# Run the model like this:

```python
from sentence_transformers import SentenceTransformer

# Load the 128-dimensional distilled model from the Hugging Face Hub
model = SentenceTransformer("ClovenDoug/small_128_all-MiniLM-L6-v2")

sentence_one = "I like cats"

# Encode the sentence into a 128-dimensional dense vector and print it
embedding = model.encode(sentence_one)
print(embedding)
```
|
|
|
|
|
|
|
# small_128_all-MiniLM-L6-v2 |
|
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for tasks like clustering or semantic search.
|
|
|
The smaller embedding dimension makes similarity comparisons faster, although inference time remains about the same as for the original model.
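For example, similarity search over the 128-dimensional embeddings works exactly as with the original model; only the vectors being compared are smaller. A minimal sketch (the queries and corpus below are placeholders, not from this model card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ClovenDoug/small_128_all-MiniLM-L6-v2")

# Placeholder queries and corpus purely for illustration
queries = ["I like cats", "The weather is nice today"]
corpus = ["Cats make great pets", "It is sunny outside", "Stock prices fell sharply"]

# Encoding cost is roughly unchanged; the gain is that the resulting
# 128-dimensional vectors are cheaper to compare and store than 384-dimensional ones.
query_embeddings = model.encode(queries)
corpus_embeddings = model.encode(corpus)

scores = util.cos_sim(query_embeddings, corpus_embeddings)  # shape: (2, 3)
print(scores)
```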
|
|
|
This model was created by applying knowledge-distillation techniques to the original 384-dimensional all-MiniLM-L6-v2 model.
|
|
|
|
|
The script for distilling this model into various sizes can be found here: |
|
|
|
https://github.com/dorenwick/sentence_encoder_distillation |
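For orientation only (this is not the linked script), one common way to shrink a sentence-transformers model to a smaller output dimension is to fit PCA on teacher embeddings and append the projection as a `Dense` layer, along the lines of the sentence-transformers dimensionality-reduction example. The training sentences and output path below are placeholders:

```python
import torch
from sentence_transformers import SentenceTransformer, models
from sklearn.decomposition import PCA

# Start from the original 384-dimensional model
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder corpus; a real run would use a large, representative text collection
train_sentences = [f"This is placeholder training sentence number {i}." for i in range(2000)]
train_embeddings = model.encode(train_sentences, convert_to_numpy=True)

# Fit a 128-component PCA on the teacher embeddings
pca = PCA(n_components=128)
pca.fit(train_embeddings)

# Append a linear projection layer initialized with the PCA components
dense = models.Dense(
    in_features=384,
    out_features=128,
    bias=False,
    activation_function=torch.nn.Identity(),
)
dense.linear.weight = torch.nn.Parameter(
    torch.tensor(pca.components_, dtype=torch.float32)
)
model.add_module("dense", dense)

# The model now outputs 128-dimensional embeddings
model.save("small_128_all-MiniLM-L6-v2")  # hypothetical output directory
```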
|
|
|
## Usage (Sentence-Transformers) |
|
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
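```bash
pip install -U sentence-transformers
```

With the package installed, the snippet at the top of this card can be used as-is to encode sentences into 128-dimensional embeddings.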
|
|
|
|
|
|