---
language: en
license: apache-2.0
tags:
  - text-generation
  - zen
  - zenlm
  - hanzo
  - zen4
  - code
  - coding
  - fast
pipeline_tag: text-generation
library_name: transformers
---

# Zen4 Coder Flash

Ultra-fast Zen4 code generation model for real-time completions and low-latency coding.

## Overview

Zen4 Coder Flash is built on the Zen MoDE (Mixture of Distilled Experts) architecture, with 8B parameters and a 64K-token context window.

Developed by Hanzo AI and the Zoo Labs Foundation.

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zenlm/zen4-coder-flash"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

## API Access

```bash
curl https://api.hanzo.ai/v1/chat/completions \
  -H "Authorization: Bearer $HANZO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "zen4-coder-flash", "messages": [{"role": "user", "content": "Hello"}]}'
```
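The request body above follows the familiar OpenAI-style chat-completions shape, so the same call can be made from Python with only the standard library. A minimal sketch (the response path `choices[0].message.content` is assumed from that convention; the request is only sent when `HANZO_API_KEY` is set):

```python
import json
import os
import urllib.request

def build_request(prompt: str, model: str = "zen4-coder-flash") -> urllib.request.Request:
    """Build a chat-completions request for the Hanzo API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.hanzo.ai/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('HANZO_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Write a Python function that reverses a string.")

# Only send the request when an API key is actually configured.
if os.environ.get("HANZO_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```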

Get your API key at [console.hanzo.ai](https://console.hanzo.ai); new accounts receive $5 in free credit.

## Model Details

| Attribute    | Value      |
|--------------|------------|
| Parameters   | 8B         |
| Architecture | Zen MoDE   |
| Context      | 64K tokens |
| License      | Apache 2.0 |

## License

Apache 2.0