---
license: mit
language:
- en
tags:
- NLP
pipeline_tag: feature-extraction
---

# Usage
```python
import torch
from transformers import AutoTokenizer
from model import (
    BERTContrastiveLearning_simcse,
    BERTContrastiveLearning_simcse_w,
    BERTContrastiveLearning_samp,
    BERTContrastiveLearning_samp_w,
)

str_list = data["string"].tolist()  # your list of strings, e.g. a pandas DataFrame column
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
tokenized_inputs = tokenizer(
    str_list, padding=True, max_length=50, truncation=True, return_tensors="pt"
)
input_ids = tokenized_inputs["input_ids"]
attention_mask = tokenized_inputs["attention_mask"]

# ckpt1 through ckpt4 are paths to the downloaded checkpoint files for each variant
model1 = BERTContrastiveLearning_simcse.load_from_checkpoint(ckpt1).eval()
model2 = BERTContrastiveLearning_simcse_w.load_from_checkpoint(ckpt2).eval()
model3 = BERTContrastiveLearning_samp.load_from_checkpoint(ckpt3).eval()
model4 = BERTContrastiveLearning_samp_w.load_from_checkpoint(ckpt4).eval()

with torch.no_grad():
    cls, _ = model1(input_ids, attention_mask)  # [CLS] embeddings; call model2-model4 the same way
```
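
The returned `cls` tensor holds one embedding per input string and can be used directly as sentence-level features. As a minimal sketch (assuming `cls` has shape `[batch_size, hidden_dim]`, which is not stated explicitly in this repo), pairwise cosine similarity between the embedded strings can be computed like this:

```python
import torch.nn.functional as F

# L2-normalize the [CLS] embeddings so the dot product equals cosine similarity.
normalized = F.normalize(cls, p=2, dim=1)
similarity = normalized @ normalized.T  # [batch_size, batch_size] similarity matrix
```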