This model was converted to OpenVINO format from Qwen/Qwen2.5-1.5B-Instruct using optimum-intel via the export space.
First make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```
To load the model, run:

```python
from optimum.intel import OVModelForCausalLM

model_id = "HelloSun/Qwen2.5-1.5B-Instruct-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
```
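
Once loaded, the model works with the standard `transformers` generation API. Below is a minimal generation sketch, assuming the usual `AutoTokenizer` / chat-template workflow; the prompt is purely illustrative:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "HelloSun/Qwen2.5-1.5B-Instruct-openvino"

# Load the tokenizer and the OpenVINO model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Build a chat prompt with the model's chat template (illustrative prompt)
messages = [{"role": "user", "content": "What is OpenVINO?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a reply and decode only the newly generated tokens
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The OpenVINO runtime executes the model on CPU by default, so no GPU is required for this example.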