Add Sentence Transformers support

#3
by tomaarsen - opened

Hello!

Pull Request overview

  • Add Sentence Transformers support.

Details

With this PR, it will become possible to load this model with Sentence Transformers without triggering any warnings. The embeddings match those of the transformers approach, but the Sentence Transformers project simplifies some of the steps for the end user (e.g. no need to worry about tokenization or output shapes).
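For comparison, this is roughly what the plain transformers route looks like. It is only a sketch: the mean pooling over the last hidden state and the use of last_hidden_state from the custom model class are assumptions on my end, and may not match the pooling actually configured in this PR.

import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "codesage/codesage-small"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)
model.eval()

inputs = tokenizer(
    "def print_hello_world():\tprint('Hello World!')",
    return_tensors="pt",
).to(device)

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # assumed output attribute

# Mean-pool the token embeddings, ignoring padding (assumed pooling strategy)
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 1024])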

If you want to experiment with this PR before merging it, then you can use the revision option like so:

from sentence_transformers import SentenceTransformer

checkpoint = "codesage/codesage-small"
device = "cuda"  # for GPU usage or "cpu" for CPU usage
revision = "refs/pr/3"

model = SentenceTransformer(checkpoint, device=device, trust_remote_code=True, revision=revision)

embedding = model.encode("def print_hello_world():\tprint('Hello World!')")
print(f'Dimension of the embedding: {embedding.size}')
# => Dimension of the embedding: 1024

after doing

pip install -U sentence-transformers
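
Once the model is loaded, the embeddings can be used directly, e.g. to compare two snippets with cosine similarity. A small usage sketch (the second snippet is made up purely for illustration):

from sentence_transformers import util

snippets = [
    "def print_hello_world():\tprint('Hello World!')",
    "def add(a, b):\treturn a + b",
]
embeddings = model.encode(snippets)  # one 1024-dimensional embedding per snippet
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity between the two
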
- Tom Aarsen
tomaarsen changed pull request status to open
Cannot merge
This branch has merge conflicts in the following files:
  • 1_Pooling/config.json
  • modules.json
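
For context on those two files: modules.json declares the Sentence Transformers module pipeline (a Transformer module followed by a Pooling module), and 1_Pooling/config.json holds the settings of that Pooling module. Once the model is loaded from the PR revision as above, the pipeline can be inspected; a small sketch (the exact pooling settings depend on the files in this PR):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "codesage/codesage-small",
    trust_remote_code=True,
    revision="refs/pr/3",
)
# Printing the model lists the modules declared in modules.json,
# including the Pooling module configured by 1_Pooling/config.json.
print(model)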
