
TinyLlama-1.1B-Chat-v1.0-fp16-ov

Description

This is the TinyLlama/TinyLlama-1.1B-Chat-v1.0 model converted to the OpenVINO™ IR (Intermediate Representation) format, with weights stored in FP16 precision.
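
An export like this can be reproduced with the optimum-cli tool; a minimal sketch, assuming optimum-intel with the OpenVINO extra is installed (the output directory name is arbitrary):

optimum-cli export openvino --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --weight-format fp16 TinyLlama-1.1B-Chat-v1.0-fp16-ov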

Compatibility

The provided OpenVINO™ IR model is compatible with:

  • OpenVINO version 2024.1.0 and higher
  • Optimum Intel 1.16.0 and higher
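
To check the installed versions against these requirements, a quick sketch using pip:

pip show openvino optimum-intel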

Running Model Inference with Optimum Intel

  1. Install packages required for using Optimum Intel integration with the OpenVINO backend:
pip install optimum[openvino]
  2. Run model inference:
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

# Load the tokenizer and the OpenVINO model from the Hugging Face Hub
model_id = "OpenVINO/TinyLlama-1.1B-Chat-v1.0-fp16-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Tokenize the prompt and generate a completion
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)

text = tokenizer.batch_decode(outputs)[0]
print(text)
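
Because this is a chat-tuned model, wrapping the prompt in the model's chat template usually improves responses. A minimal sketch reusing the model and tokenizer loaded above (apply_chat_template is the standard transformers API; the max_new_tokens value is arbitrary):

messages = [{"role": "user", "content": "What is OpenVINO?"}]
# Build the chat-formatted prompt and append the assistant turn marker
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))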

For more examples and possible optimizations, refer to the OpenVINO Large Language Model Inference Guide.

Running Model Inference with OpenVINO GenAI

  1. Install packages required for using OpenVINO GenAI:
pip install openvino-genai huggingface_hub
  2. Download the model from the Hugging Face Hub:
import huggingface_hub as hf_hub

model_id = "OpenVINO/TinyLlama-1.1B-Chat-v1.0-fp16-ov"
model_path = "TinyLlama-1.1B-Chat-v1.0-fp16-ov"

# Download the OpenVINO IR files and tokenizer into a local directory
hf_hub.snapshot_download(model_id, local_dir=model_path)
  3. Run model inference:
import openvino_genai as ov_genai

# Device can be "CPU", "GPU", or another OpenVINO device string
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?"))
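
Generation can also be tuned and streamed token by token. A sketch based on the OpenVINO GenAI samples (the streamer callback and the max_new_tokens keyword are part of the LLMPipeline API):

# Print each new token as soon as it is produced
def streamer(subword):
    print(subword, end="", flush=True)
    return False  # False means "continue generating"

pipe.generate("What is OpenVINO?", max_new_tokens=200, streamer=streamer)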

More GenAI usage examples can be found in the OpenVINO GenAI library docs and samples.

Legal information

The original model is distributed under the Apache 2.0 license. More details can be found in TinyLlama/TinyLlama-1.1B-Chat-v1.0.

Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
