
TinyLlama-1.1B-Chat-v0.4-q4f32_1-MLC

This is the TinyLlama-1.1B-Chat-v0.4 model converted to MLC format with the q4f32_1 quantization. The model can be used with the MLC-LLM and WebLLM projects.

Example Usage

Here are some examples of using this model with MLC LLM. Before running the examples, please install MLC LLM by following the installation documentation.
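As a sketch, a typical CPU-only pip install looks like the following; the exact wheel names vary by platform and GPU backend, so confirm the command against the installation docs:

python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cpu mlc-ai-nightly-cpu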

Chat

In the command line, run

mlc_llm chat HF://mlc-ai/TinyLlama-1.1B-Chat-v0.4-q4f32_1-MLC
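This opens an interactive chat session in the terminal. The CLI picks a device automatically by default; if your installed version supports the --device option, a backend can be selected explicitly, for example:

mlc_llm chat HF://mlc-ai/TinyLlama-1.1B-Chat-v0.4-q4f32_1-MLC --device cuda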

REST Server

In the command line, run

mlc_llm serve HF://mlc-ai/TinyLlama-1.1B-Chat-v0.4-q4f32_1-MLC
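The server exposes OpenAI-compatible endpoints. Here is a minimal sketch of querying it from Python, assuming the server's default address of http://127.0.0.1:8000 (adjust if you passed different host/port options):

import requests

# Query the OpenAI-compatible /v1/chat/completions endpoint.
# The "model" field should match the model the server is serving.
response = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "HF://mlc-ai/TinyLlama-1.1B-Chat-v0.4-q4f32_1-MLC",
        "messages": [{"role": "user", "content": "What is the meaning of life?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])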

Python API

from mlc_llm import MLCEngine

# Create engine
model = "HF://mlc-ai/TinyLlama-1.1B-Chat-v0.4-q4f32_1-MLC"
engine = MLCEngine(model)

# Run a streaming chat completion using the OpenAI-compatible API.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")

engine.terminate()
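For a single non-streaming completion, the same call can be made without stream=True, and the full message is read from the first choice. A minimal sketch, assuming the same engine and run before engine.terminate():

response = engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
)
print(response.choices[0].message.content)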

Documentation

For more information on the MLC LLM project, please visit our documentation and GitHub repo.
