---
language:
- en
tags:
- formality
license: cc-by-nc-sa-4.0
---

This model is an ONNX-optimized version of the original [roberta-base-formality-ranker](https://huggingface.co/s-nlp/roberta-base-formality-ranker) model. It has been tailored specifically for GPUs and may perform differently when run on CPUs.

## Dependencies

Please install the following dependency before working with the model:

```sh
pip install optimum[onnxruntime-gpu]
```

## How to use

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load the tokenizer and the ONNX model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/roberta_base_formality_ranker_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/roberta_base_formality_ranker_onnx')

# prepare the pipeline and generate inferences
user_inputs = ["I hope this email finds you well",
               "I hope this email find you swell",
               "What's up doc?"]

pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=0, accelerator="ort")  # device=0 runs on the first GPU
res = pipe(user_inputs, batch_size=64, truncation="only_first")
```

A sketch showing how to select the ONNX Runtime execution provider explicitly is included at the end of this card.

## Licensing Information

[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
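
## Selecting the execution provider

Since the model is tailored for GPUs, you may want to request the CUDA execution provider explicitly rather than relying on the default. The snippet below is a minimal sketch, not part of the original card: the CPU fallback and the GPU-availability check via `torch.cuda.is_available()` are assumptions for illustration, and the provider names are the standard ONNX Runtime ones.

```python
import torch
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

model_id = 'Deepchecks/roberta_base_formality_ranker_onnx'
use_gpu = torch.cuda.is_available()  # assumption: fall back to CPU when no GPU is present

tokenizer = AutoTokenizer.from_pretrained(model_id)

# request the CUDA execution provider on GPU machines, otherwise use the CPU provider
model = ORTModelForSequenceClassification.from_pretrained(
    model_id,
    provider="CUDAExecutionProvider" if use_gpu else "CPUExecutionProvider",
)

pipe = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
    accelerator="ort",
    device=0 if use_gpu else -1,  # -1 keeps the pipeline on CPU
)

# the pipeline returns a list of dicts with 'label' and 'score' keys
print(pipe(["What's up doc?"]))
```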