---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# BERT base model (uncased)

## Model description

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it makes no distinction
between english and English.
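
As a quick illustration of the uncased behavior, the tokenizer lowercases its input, so differently cased spellings map to the same token ids. A minimal sketch using the standard `bert-base-uncased` tokenizer:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# The uncased tokenizer lowercases its input, so both spellings
# produce identical token ids.
assert tokenizer("english")["input_ids"] == tokenizer("English")["input_ids"]
```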

## Original implementation

Follow [this link](https://huggingface.co/bert-base-uncased) to see the original implementation.

## How to use

Download the model by cloning the repository via `git clone https://huggingface.co/OWG/bert-base-uncased`.

Then you can use the model with the following code:

```python
from onnxruntime import InferenceSession, SessionOptions, GraphOptimizationLevel
from transformers import BertTokenizer


tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Enable all ONNX Runtime graph optimizations.
options = SessionOptions()
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL

session = InferenceSession("path/to/model.onnx", sess_options=options)
session.disable_fallback()

text = "Replace me by any text you want to encode."

# Tokenize straight to NumPy arrays, the input format ONNX Runtime expects.
encoding = tokenizer(text, return_tensors="np", return_attention_mask=True)
inputs = dict(encoding)

output_name = session.get_outputs()[0].name
outputs = session.run(output_names=[output_name], input_feed=inputs)
```
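
`session.run` returns a list of NumPy arrays, one per requested output. The exact output names and shapes depend on how the model was exported; for a BERT encoder exported with default settings, the first output is typically the last hidden state. A small sketch, continuing from the code above, that inspects what this particular graph declares:

```python
# Output names and shapes depend on the export configuration.
for out in session.get_outputs():
    print(out.name, out.shape)

# For a default bert-base-uncased encoder export, outputs[0] is usually
# the last hidden state with shape (batch_size, sequence_length, 768).
print(outputs[0].shape)
```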