This is an SCT (Self-supervised Cross-view Training) model: it maps sentences to a dense vector space and can be used for tasks like semantic search.
Usage
Using this model is straightforward once SCT is installed:
pip install -U git+https://github.com/mrpeerat/SCT
Then you can use the model like this:
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

# Load the model from the Hugging Face Hub and encode the sentences into dense vectors
model = SentenceTransformer('mrp/SCT_BERT_Large')
embeddings = model.encode(sentences)
print(embeddings)
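The embeddings can be compared directly for semantic search. The following is a minimal sketch, not part of the official usage example: the corpus and query sentences are illustrative placeholders, and the ranking is done with cosine similarity via the sentence_transformers util helpers:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mrp/SCT_BERT_Large')

# Illustrative corpus and query (placeholders, not taken from the model card)
corpus = ["A man is eating food.", "A monkey is playing drums.", "A cheetah chases its prey."]
query = "Someone is having a meal."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Pairwise cosine similarity; util.cos_sim is named util.pytorch_cos_sim in older releases
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))

util.cos_sim returns a matrix of pairwise similarities, so the query's row can be sorted or arg-maxed to retrieve the closest corpus sentences.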
Evaluation Results
For an automated evaluation of this model, see the Sentence Embeddings Benchmark results on Semantic Textual Similarity.
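For a quick local sanity check on STS-style data, sentence-transformers ships an EmbeddingSimilarityEvaluator. The sketch below is only an assumption of how one might run it; the sentence pairs and gold scores are made-up placeholders, not benchmark data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('mrp/SCT_BERT_Large')

# Placeholder STS-style pairs with invented gold similarity scores in [0, 1]
sentences1 = ["A plane is taking off.", "A man is playing a guitar."]
sentences2 = ["An air plane is taking off.", "A woman is slicing an onion."]
gold_scores = [0.95, 0.05]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores)
# Reports correlation between cosine similarities and gold scores
# (a float in older sentence-transformers releases, a dict of metrics in newer ones)
print(evaluator(model))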
Citing & Authors
@article{limkonchotiwat-etal-2023-sct,
    title = "An Efficient Self-Supervised Cross-View Training For Sentence Embedding",
    author = "Limkonchotiwat, Peerat and
      Ponwitayarat, Wuttikorn and
      Lowphansirikul, Lalita and
      Udomcharoenchaikit, Can and
      Chuangsuwanich, Ekapol and
      Nutanong, Sarana",
    journal = "Transactions of the Association for Computational Linguistics",
    year = "2023",
    address = "Cambridge, MA",
    publisher = "MIT Press",
}