Update README.md
README.md
CHANGED
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

- This model has been further trained from [BEE-spoke-data/bert-plus-L8-v1.0-allNLI_matryoshka](https://hf.co/BEE-spoke-data/bert-plus-L8-v1.0-allNLI_matryoshka) on `v3.0` of the `synthetic text similarity` dataset.
- Intended for use in comparing the cosine similarity of longer document embeddings and/or clustering them.
- Matryoshka dims: [768, 512, 256, 128, 64] (see the truncation sketch after this list)
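The Matryoshka dims indicate that the 768-dim embeddings can be truncated to any of the listed sizes before comparison. A minimal sketch, assuming the model is loaded with the `sentence-transformers` library named above and using 256 dims as the example truncation:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BEE-spoke-data/bert-plus-L8-v1.0-synthSTSv3-4k")
sentences = ["This is an example sentence", "Each sentence is converted"]

# Full 768-dim embeddings
embeddings = model.encode(sentences, convert_to_tensor=True)

# Keep only the first 256 dims (one of the Matryoshka dims listed above);
# cosine similarity is scale-invariant, so re-normalization is optional here
truncated = embeddings[:, :256]
print(util.cos_sim(truncated[0], truncated[1]))
```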
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
import torch
from transformers import AutoModel, AutoTokenizer


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = (
        attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    )
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
        input_mask_expanded.sum(1), min=1e-9
    )


# Sentences we want sentence embeddings for
sentences = ["This is an example sentence", "Each sentence is converted"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("BEE-spoke-data/bert-plus-L8-v1.0-synthSTSv3-4k")
model = AutoModel.from_pretrained("BEE-spoke-data/bert-plus-L8-v1.0-synthSTSv3-4k")

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])

print("Sentence embeddings:")
print(sentence_embeddings)
```
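Continuing from the snippet above, the pooled embeddings can then be compared with cosine similarity, which is the intended use noted at the top of this card; a minimal follow-up sketch:

```python
import torch.nn.functional as F

# L2-normalize so that the dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)  # 2x2 matrix of pairwise cosine similarities
```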