
mistralai/Mistral-7B-Instruct-v0.2

This is the mistralai/Mistral-7B-Instruct-v0.2 model converted to the OpenVINO format for accelerated inference.
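As a sketch, a conversion like this can be reproduced with the `optimum-cli export openvino` command from Optimum Intel (the output directory name below is illustrative; the exact quantization options used for this checkpoint are not stated here):

```shell
# Export the original PyTorch checkpoint to OpenVINO IR format.
# Requires: pip install optimum[openvino]
optimum-cli export openvino --model mistralai/Mistral-7B-Instruct-v0.2 mistral-7b-instruct-v0.2-ov
```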

An example of how to run inference with this model using Optimum Intel:

from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

# model_id should be set to either a local directory or a model available on the Hugging Face Hub.
model_id = "helenai/mistralai-Mistral-7B-Instruct-v0.2-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = pipe("hello world")
print(result)
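Since this is an instruct-tuned model, prompts generally work best when wrapped in Mistral's `[INST] ... [/INST]` chat format. The canonical way to produce it is `tokenizer.apply_chat_template()` on a list of chat messages; as a minimal sketch, the single-turn format can also be built by hand (the `build_prompt` helper name is illustrative, not part of any library API):

```python
# Mistral-7B-Instruct expects prompts wrapped in [INST] ... [/INST] tags.
# tokenizer.apply_chat_template() produces this automatically from a list of
# chat messages; the hypothetical helper below shows the resulting
# single-turn format for illustration.
def build_prompt(user_message: str) -> str:
    """Wrap a single user message in the Mistral instruct format."""
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("Write one sentence about OpenVINO.")
# result = pipe(prompt)  # pass the formatted prompt to the pipeline above
```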
