Integrate with Sentence Transformers
#1
by tomaarsen (HF staff) - opened
- 1_Pooling/config.json +9 -0
- README.md +21 -1
- config_sentence_transformers.json +7 -0
- modules.json +20 -0
- sentence_bert_config.json +4 -0
1_Pooling/config.json
ADDED
@@ -0,0 +1,9 @@
+{
+  "word_embedding_dimension": 768,
+  "pooling_mode_cls_token": false,
+  "pooling_mode_mean_tokens": true,
+  "pooling_mode_max_tokens": false,
+  "pooling_mode_mean_sqrt_len_tokens": false,
+  "pooling_mode_weightedmean_tokens": false,
+  "pooling_mode_lasttoken": false
+}
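This config enables mean pooling only: every other pooling mode is switched off, so sentence embeddings are the average of the 768-dimensional token embeddings. For reference, a minimal sketch of that operation; the function name and shapes are illustrative, not part of this PR:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).type_as(token_embeddings)
    # Average only over real (non-padding) tokens
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
```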
README.md
CHANGED
@@ -7,6 +7,8 @@ tags:
 - String Matching
 - Fuzzy Join
 - Entity Retrieval
+- transformers
+- sentence-transformers
 ---
 ## PEARL-base
 
@@ -45,7 +47,25 @@ Cost comparison of FastText and PEARL. The estimated memory is calculated by the
 
 ## Usage
 
-
+### Sentence Transformers
+PEARL is integrated with the Sentence Transformers library, and can be used like so:
+
+```python
+from sentence_transformers import SentenceTransformer, util
+
+query_texts = ["The New York Times"]
+doc_texts = ["NYTimes", "New York Post", "New York"]
+input_texts = query_texts + doc_texts
+
+model = SentenceTransformer("Lihuchen/pearl_base")
+embeddings = model.encode(input_texts)
+scores = util.cos_sim(embeddings[0], embeddings[1:]) * 100
+print(scores.tolist())
+# [[85.61601257324219, 73.65623474121094, 70.36174774169922]]
+```
+
+### Transformers
+You can also use `transformers` to run PEARL. Below is an example of entity retrieval; we reuse the code from E5.
 
 ```python
 import torch.nn.functional as F
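For reviewers who want to cross-check the Sentence Transformers output above against raw `transformers`, here is a hedged sketch of the equivalent computation (mean pooling plus L2 normalization, matching the added modules). The model card's actual Transformers example is truncated in this diff, so this is only an approximation, and it assumes the checkpoint loads with `AutoModel`:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lihuchen/pearl_base")
model = AutoModel.from_pretrained("Lihuchen/pearl_base")

texts = ["The New York Times", "NYTimes", "New York Post", "New York"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Mean pooling over non-padding tokens (1_Pooling/config.json) ...
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
# ... followed by L2 normalization (2_Normalize)
embeddings = F.normalize(embeddings, p=2, dim=1)

# Cosine similarity of the query against the candidates, scaled like the README example
scores = embeddings[:1] @ embeddings[1:].T * 100
print(scores.tolist())
```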
config_sentence_transformers.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.3.1",
+    "transformers": "4.37.0",
+    "pytorch": "2.1.0+cu121"
+  }
+}
modules.json
ADDED
@@ -0,0 +1,20 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  },
+  {
+    "idx": 2,
+    "name": "2",
+    "path": "2_Normalize",
+    "type": "sentence_transformers.models.Normalize"
+  }
+]
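The three entries define the inference pipeline Sentence Transformers assembles when loading the repo: a Transformer encoder, mean pooling, and L2 normalization. As an illustration only (not code from this PR), the same pipeline could be built explicitly:

```python
from sentence_transformers import SentenceTransformer, models

# Mirrors modules.json: Transformer -> Pooling (mean) -> Normalize
word_embedding_model = models.Transformer("Lihuchen/pearl_base", max_seq_length=512)
pooling = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode="mean")
normalize = models.Normalize()
model = SentenceTransformer(modules=[word_embedding_model, pooling, normalize])
```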
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 512,
+  "do_lower_case": false
+}
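This caps tokenization at 512 tokens and keeps the original casing. Once the model is loaded, the limit is exposed as an attribute; a small sketch (the override value is arbitrary, not part of this PR):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Lihuchen/pearl_base")
print(model.max_seq_length)  # 512, read from sentence_bert_config.json

# Entity strings are usually short, so the limit can be lowered to speed up encoding
model.max_seq_length = 64
```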