---
library_name: mlx
license: llama3.2
base_model: scb10x/llama3.2-typhoon2-t1-3b-research-preview
language:
  - en
  - th
datasets:
  - scb10x/typhoon-t1-3b-research-preview-data
pipeline_tag: text-generation
tags:
  - mlx
model-index:
  - name: llama3.2-typhoon2-3b-instruct
    results: []
---

# scb10x/llama3.2-typhoon2-t1-3b-research-preview-mlx-4bit
This model [scb10x/llama3.2-typhoon2-t1-3b-research-preview-mlx-4bit](https://huggingface.co/scb10x/llama3.2-typhoon2-t1-3b-research-preview-mlx-4bit) was converted to MLX format from [scb10x/llama3.2-typhoon2-t1-3b-research-preview](https://huggingface.co/scb10x/llama3.2-typhoon2-t1-3b-research-preview) using mlx-lm version **0.25.2**.
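For reference, conversions like this are typically produced with the `mlx_lm.convert` command; the exact invocation used for this repository is not recorded, so the following is a sketch, assuming the default quantization settings (`-q` quantizes to 4-bit by default):

```bash
# Hypothetical reconstruction of the conversion step (exact flags unknown);
# -q applies mlx-lm's default quantization, which is 4-bit.
mlx_lm.convert --hf-path scb10x/llama3.2-typhoon2-t1-3b-research-preview -q
```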
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("scb10x/llama3.2-typhoon2-t1-3b-research-preview-mlx-4bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
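mlx-lm also ships a command-line entry point, so you can sample from the model without writing any Python. A minimal sketch; in recent mlx-lm versions the tokenizer's chat template is applied automatically when one is present:

```bash
# Generate from the quantized model directly from the shell.
mlx_lm.generate --model scb10x/llama3.2-typhoon2-t1-3b-research-preview-mlx-4bit --prompt "hello"
```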

