---
base_model: Qwen/Qwen3-30B-A3B
library_name: peft
pipeline_tag: text-generation
tags:
  - prefix-tuning
  - persona
  - einstein
  - philosophy
  - debate
---

# Einstein Prefix Adapter

A prefix-tuning adapter that conditions the base model to adopt Albert Einstein's reasoning patterns, voice, and philosophical positions in debate-style responses.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-30B-A3B")
model = PeftModel.from_pretrained(base_model, "debaterhub/prefix-einstein")

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B")
```
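With the model and tokenizer loaded as above, generation follows the standard chat-template flow. This is a sketch: the prompt and sampling settings below are illustrative choices, not recommendations from this card.

```python
# Illustrative prompt; any debate-style question works the same way.
messages = [{"role": "user", "content": "Is quantum mechanics a complete theory?"}]

# Qwen3 ships a chat template, so apply_chat_template builds the input ids.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters here are placeholder values, not tuned for this adapter.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```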

## Training Details

- **Method:** PEFT prefix-tuning (40 virtual tokens)
- **Base model:** Qwen/Qwen3-30B-A3B (30B-parameter MoE, ~3B active per token)
- **Dataset:** 447 examples of Einstein-style debate responses
- **Epochs:** 3
- **Hardware:** 8× A100-40GB

## Evaluation

Evaluated using LLM-as-Judge (Claude Opus 4.5) on 5 dimensions:

- Ideational Fidelity (35%)
- Reasoning Pattern (25%)
- Voice Authenticity (20%)
- Engagement Quality (15%)
- Anti-Patterns (5%)

| Model    | Overall score |
|----------|---------------|
| Baseline | 3.3 / 5.0     |
| Trained  | 3.4 / 5.0     |
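The overall score is a weighted combination of the five judge dimensions. A minimal sketch of that aggregation, assuming each dimension is scored on the same 1–5 scale (the per-dimension scores below are placeholders, not real results):

```python
# Dimension weights taken from the evaluation section above.
weights = {
    "ideational_fidelity": 0.35,
    "reasoning_pattern": 0.25,
    "voice_authenticity": 0.20,
    "engagement_quality": 0.15,
    "anti_patterns": 0.05,
}

def aggregate(scores: dict) -> float:
    """Weighted mean of per-dimension judge scores (each on a 1-5 scale)."""
    return sum(weights[dim] * score for dim, score in scores.items())

# Placeholder: a uniform 3.4 across dimensions yields an overall 3.4,
# since the weights sum to 1.0.
example = {dim: 3.4 for dim in weights}
print(round(aggregate(example), 2))
```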

Key improvement: fewer meta-roleplay anti-patterns and more direct in-character responses.

## Framework Versions

- PEFT 0.18.0
- Transformers 4.46.0