
csarron/bert-base-uncased-squad-v1

This is the csarron/bert-base-uncased-squad-v1 model converted to OpenVINO, for accelerated inference.
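For reference, a conversion like this can typically be reproduced with the OpenVINO export support in optimum-intel. The snippet below is a minimal sketch, not the exact command used for this repository; the output directory name is illustrative.

from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer

# Load the original PyTorch checkpoint and export it to OpenVINO IR on the fly
source_id = "csarron/bert-base-uncased-squad-v1"
model = OVModelForQuestionAnswering.from_pretrained(source_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(source_id)

# Save the converted model and tokenizer locally (directory name is illustrative)
model.save_pretrained("bert-base-uncased-squad-v1-ov-fp32")
tokenizer.save_pretrained("bert-base-uncased-squad-v1-ov-fp32")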

An example of how to do inference on this model:

from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline

# model_id can be a local directory or a model ID on the Hugging Face Hub.
model_id = "helenai/csarron-bert-base-uncased-squad-v1-ov-fp32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForQuestionAnswering.from_pretrained(model_id)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = pipe(question="What is OpenVINO?", context="OpenVINO is a framework that accelerates deep learning inference")
print(result)
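The question-answering pipeline returns a dictionary with the extracted answer, its confidence score, and the character span of the answer in the context, of the form {'score': ..., 'start': ..., 'end': ..., 'answer': ...}.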