---
language:
- en
tags:
- formality
license: cc-by-nc-sa-4.0
---

This model is an ONNX-optimized version of the original [roberta-base-formality-ranker](https://huggingface.co/s-nlp/roberta-base-formality-ranker) model.
It is tailored for GPU execution and may perform differently when run on CPU.

## Dependencies

Please install the following dependency before you begin working with the model:
```sh
pip install "optimum[onnxruntime-gpu]"
```
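
Since GPU execution depends on the ONNX Runtime GPU build being picked up, it can be worth confirming that the CUDA execution provider is visible after installation. A minimal check, assuming `onnxruntime-gpu` was installed by the command above:
```python
import onnxruntime as ort

# "CUDAExecutionProvider" should appear in this list when the GPU build is installed;
# if only "CPUExecutionProvider" shows up, inference may fall back to CPU.
print(ort.get_available_providers())
```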

## How to use
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/roberta_base_formality_ranker_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/roberta_base_formality_ranker_onnx')

# prepare the pipeline and run inference
user_inputs = ["I hope this email finds you well", "I hope this email find you swell", "What's up doc?"]
pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=0, accelerator="ort")  # device=0 selects the first GPU; use -1 for CPU
res = pipe(user_inputs, batch_size=64, truncation="only_first")
```
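
As with a standard `transformers` text-classification pipeline, `res` is a list of dictionaries with `label` and `score` keys (label names follow the original model's configuration). A short sketch of inspecting the predictions:
```python
# print each input together with its predicted label and confidence
for text, prediction in zip(user_inputs, res):
    print(f"{text!r} -> {prediction['label']} ({prediction['score']:.3f})")
```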

## Licensing Information

This model is distributed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png