---
pipeline_tag: sentence-similarity
language: en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- onnx
---

# ONNX convert all-MiniLM-L12-v2
## Conversion of [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2)
This is a [sentence-transformers](https://www.SBERT.net) ONNX model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. This custom model outputs both `last_hidden_state` and `pooler_output`, whereas the sentence-transformers model exported with the default ONNX config only outputs `last_hidden_state`.

## Usage (HuggingFace Optimum)
Using this model is easy once you have [optimum](https://github.com/huggingface/optimum) installed:
```
python -m pip install optimum[onnxruntime]
```
Then you can use the model like this:
```python
from optimum.onnxruntime.modeling_ort import ORTModelForCustomTasks
from transformers import AutoTokenizer

# Load the ONNX model and its tokenizer from the Hugging Face Hub
model = ORTModelForCustomTasks.from_pretrained("vamsibanda/sbert-all-MiniLM-L12-with-pooler")
tokenizer = AutoTokenizer.from_pretrained("vamsibanda/sbert-all-MiniLM-L12-with-pooler")

# Tokenize the input and run it through the ONNX model
inputs = tokenizer("I love burritos!", return_tensors="pt")
pred = model(**inputs)

# The 384-dimensional sentence embedding from the pooler output
embedding = pred['pooler_output']
```
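
For semantic search or clustering, the resulting embeddings are usually compared with cosine similarity. The snippet below is a minimal sketch, not part of the original card: the `embed` helper and the second example sentence are illustrative, and it assumes the model returns torch tensors when called with `return_tensors="pt"` inputs.

```python
import torch
import torch.nn.functional as F
from optimum.onnxruntime.modeling_ort import ORTModelForCustomTasks
from transformers import AutoTokenizer

model = ORTModelForCustomTasks.from_pretrained("vamsibanda/sbert-all-MiniLM-L12-with-pooler")
tokenizer = AutoTokenizer.from_pretrained("vamsibanda/sbert-all-MiniLM-L12-with-pooler")

def embed(sentence: str) -> torch.Tensor:
    # Run the ONNX model and use the 384-dimensional pooler output
    # as the sentence embedding.
    inputs = tokenizer(sentence, return_tensors="pt")
    outputs = model(**inputs)
    return outputs["pooler_output"]

a = embed("I love burritos!")
b = embed("Burritos are my favorite food.")
print(f"cosine similarity: {F.cosine_similarity(a, b).item():.3f}")
```

Note that the original sentence-transformers model derives its sentence embeddings by mean-pooling `last_hidden_state` over the attention mask rather than by taking `pooler_output`; because this export also returns `last_hidden_state`, that pooling scheme can be reproduced if embeddings identical to the original model are required.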