---
library_name: transformers
tags:
  - Uncensored
  - Abliterated
  - Cubed Reasoning
  - QwQ-32B
  - reasoning
  - thinking
  - r1
  - cot
  - deepseek
  - Qwen2.5
  - Hermes
  - DeepHermes
  - DeepSeek
  - DeepSeek-R1-Distill
  - 128k context
  - merge
  - mlx
  - mlx-my-repo
base_model: DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored
---

# bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit

The model [bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit](https://huggingface.co/bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit) was converted to MLX format from [DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored](https://huggingface.co/DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored) using mlx-lm version **0.21.5**.
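For reference, a conversion like this one can be reproduced with the `mlx_lm.convert` entry point that ships with mlx-lm. The exact flags used for this repository are not recorded here, so the quantization and upload options below are assumptions (a minimal sketch, not the verbatim command):

```bash
# Sketch of the conversion step (flags are illustrative):
# -q quantizes the weights, --q-bits sets the bit width,
# --upload-repo pushes the result to the Hugging Face Hub.
mlx_lm.convert \
    --hf-path DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored \
    -q --q-bits 4 \
    --upload-repo bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit
```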

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer
model, tokenizer = load("bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
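
If you just want to try the model without writing Python, mlx-lm also ships a small command-line generator; the prompt and token limit below are only examples:

```bash
# One-off generation from the command line
mlx_lm.generate \
    --model bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit \
    --prompt "hello" \
    --max-tokens 256
```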