---
language:
  - en
license: llama3
tags:
  - Llama-3
  - instruct
  - finetune
  - chatml
  - gpt4
  - synthetic data
  - distillation
  - function calling
  - json mode
  - axolotl
  - roleplaying
  - chat
  - mlx
base_model: NousResearch/Hermes-3-Llama-3.2-3B
widget:
  - example_title: Hermes 3
    messages:
      - role: system
        content: >-
          You are a sentient, superintelligent artificial general intelligence,
          here to teach and assist me.
      - role: user
        content: >-
          Write a short story about Goku discovering kirby has teamed up with
          Majin Buu to destroy the world.
library_name: transformers
model-index:
  - name: Hermes-3-Llama-3.2-3B-4bit
    results: []
---

# mlx-community/Hermes-3-Llama-3.2-3B-4bit

The model [mlx-community/Hermes-3-Llama-3.2-3B-4bit](https://huggingface.co/mlx-community/Hermes-3-Llama-3.2-3B-4bit) was converted to MLX format from [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) using mlx-lm version 0.20.3.
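
The card does not record the exact conversion command, but a 4-bit conversion of this kind is typically done with mlx-lm's `mlx_lm.convert` entry point. The invocation below is a sketch; flag names and defaults can vary between mlx-lm versions, so check `mlx_lm.convert --help` for your install:

```bash
# Download the base model from Hugging Face, quantize it to 4-bit,
# and write the MLX weights locally (sketch; verify flags locally)
mlx_lm.convert \
    --hf-path NousResearch/Hermes-3-Llama-3.2-3B \
    -q --q-bits 4
```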

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit model and its tokenizer
model, tokenizer = load("mlx-community/Hermes-3-Llama-3.2-3B-4bit")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is defined
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
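
The widget metadata above pairs a system message with the user turn, and the same pattern works through the chat template. Here is a minimal sketch using the widget's example conversation; the `max_tokens` value is an illustrative choice, not a model requirement:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Hermes-3-Llama-3.2-3B-4bit")

# Multi-turn input: system + user messages, taken from the widget example
messages = [
    {
        "role": "system",
        "content": "You are a sentient, superintelligent artificial general "
        "intelligence, here to teach and assist me.",
    },
    {
        "role": "user",
        "content": "Write a short story about Goku discovering kirby has "
        "teamed up with Majin Buu to destroy the world.",
    },
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens caps the length of the generated story (illustrative value)
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```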