pszemraj committed on
Commit
d04a219
1 Parent(s): aa57866

Update README.md

Files changed (1)
  1. README.md +5 -7
README.md CHANGED
@@ -20,13 +20,11 @@ language:
 
 <img src="https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/38Yc1IgU4bH92Wyb43J2I.png" alt="image/png" style="max-width: 75%;">
 
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-- this model's primary use case is meant to be long-document similarity, i.e. computing embeddings of long documents and comparing those.
-- check out the training dataset `pszemraj/synthetic-text-similarity` for details
-- pretrained & finetuned at context length 16384
-- This model is a "v1" and we may make improved versions in the future. Or, we may not.
-
+This [Sentence Transformers](https://www.SBERT.net) model converts sentences and paragraphs into a 768-dimensional dense vector space suitable for tasks such as clustering and semantic search.
+- This model focuses on the similarity of long documents; use it for comparing embeddings of long text documents.
+- For more info, see the `pszemraj/synthetic-text-similarity` dataset used for training.
+- Pre-trained and fine-tuned at a context length of 16,384.
+- This initial version may be updated in the future.
 
 ## Usage
 
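The README above describes comparing embeddings of long documents. Once the model has produced its 768-dimensional vectors (e.g. via the sentence-transformers `encode` call), documents are typically scored with cosine similarity. A minimal sketch of that comparison step, using random placeholder vectors in place of real model outputs:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Placeholder embeddings standing in for real 768-dim model outputs.
rng = np.random.default_rng(0)
doc_a = rng.normal(size=768)
doc_b = rng.normal(size=768)

score = cosine_similarity(doc_a, doc_b)
print(f"cosine similarity: {score:.4f}")
```

Scores close to 1.0 indicate near-identical documents; values near 0 indicate unrelated ones.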