aari1995 committed
Commit d932401
1 Parent(s): f100bc7

Update README.md

Files changed (1):
  1. README.md +6 -1
README.md CHANGED

````diff
@@ -286,6 +286,7 @@ The successor of German_Semantic_STS_V2 is here!
 - **Matryoshka Embeddings:** The model is trained for embedding sizes from 1024 down to 64, allowing you to store much smaller embeddings with little quality loss.
 - **License:** Apache 2.0
 - **German only:** This model is German-only, causing the model to learn more efficient and deal better with shorter queries.
+- **Flexibility:** Trained with flexible sequence-length and embedding truncation, flexibility is a core feature of the model, while improving on V2-performance.
 
 ## Usage:
 
@@ -296,7 +297,9 @@ from sentence_transformers import SentenceTransformer
 matryoshka_dim = 1024 # How big your embeddings should be, choose from: 64, 128, 256, 512, 1024
 model = SentenceTransformer("aari1995/German_Semantic_V3", trust_remote_code=True, truncate_dim=matryoshka_dim)
 
-#model.max_seq_length = 512 #optionally, set your maximum sequence length lower if your hardware is limited
+# model.truncate_dim = 64 # truncation dimensions can also be changed after loading
+# model.max_seq_length = 512 #optionally, set your maximum sequence length lower if your hardware is limited
+
 # Run inference
 sentences = [
     'Eine Flagge weht.',
@@ -307,6 +310,8 @@ embeddings = model.encode(sentences)
 
 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
+
+
 ```
````
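The `truncate_dim` option this commit documents keeps only the leading dimensions of each Matryoshka embedding. As a minimal sketch of that idea — using a random vector as a stand-in for a real model output, since it avoids downloading the model, and with `truncate_embedding` as a hypothetical helper name — truncation plus re-normalization looks like:

```python
import numpy as np

def truncate_embedding(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` dimensions and re-normalize to unit length,
    mirroring what Matryoshka-style truncation does to an embedding."""
    truncated = embedding[:dim]
    return truncated / np.linalg.norm(truncated)

# Stand-in for a model output; the real model produces 1024-dim embeddings.
full = np.random.default_rng(0).normal(size=1024)
small = truncate_embedding(full, 64)  # one of the supported sizes: 64...1024
print(small.shape)  # (64,)
```

Because the model was trained so that prefixes of the embedding remain meaningful, the 64-dim vector can be stored and compared with cosine similarity at a fraction of the memory cost.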