🧠 ThoughtSwitch V1 1.7B Instruct: A Mode-Adaptive Reasoning Language Model
Model ID: BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct
Architecture: Decoder-only transformer (GPT-style)
Parameters: 1.7 Billion
Capabilities: Dynamic "Thinking" vs. "Non-Thinking" mode-switching
Fine-Tuned for: Instruction-following
📖 Overview
ThoughtSwitch V1 is a next-generation instruction-tuned language model that brings a new paradigm to text generation: Autonomous Cognitive Mode Switching.
It is capable of interpreting user prompts and switching between two distinct modes of behavior:
- 🧠 Thinking Mode: Deep reasoning, logical step-by-step solutions, slow but deliberate outputs.
- 💬 Non-Thinking Mode: Quick completions, casual replies, storytelling, and chat-like fluency.
Whether you're building reasoning agents, fast assistants, or multi-modal chain-of-thought applications, ThoughtSwitch adapts intelligently, so you don't have to force a mode through the prompt.
🔧 Key Features
✅ Autonomous Mode Switching
Understands when to think deeply and when to generate fluently, based on prompt phrasing.
✅ Instruction Tuned
Trained to follow human-like instructions and align closely with user intent.
✅ 1.7B Parameters
Small enough for efficient inference, yet powerful enough for sophisticated reasoning.
✅ Open Weights
Fully accessible under a permissive license (see the Hugging Face model card for the exact license).
✨ Example Prompts
Prompt (Thinking Mode): "Think step by step to solve this math problem: What is 17 multiplied by 23?"
→ Reasoned output with intermediate steps and justification.
Prompt (Non-Thinking Mode): "Write a quick sci-fi story about a robot discovering love."
→ Smooth, creative storytelling without unnecessary reasoning.
🔧 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct")
model = AutoModelForCausalLM.from_pretrained("BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct")

prompt = "Think step by step: Why does ice float on water?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
🧪 Intended Use Cases
- 🧠 Reasoning Agents: for multi-hop question answering, logical puzzles, or decision support.
- 📚 Tutoring & Education: adaptive explanations that vary depth based on student prompts.
- 🤖 Conversational AI: more natural and flexible interactions with variable "thinking effort".
- ✍️ Creative Writing: generate stories, poems, and ideas with or without deep context.
⚠️ Limitations
- Like all LLMs, it may hallucinate or generate biased content.
- Mode switching is probabilistic, not guaranteed; prompt clearly for best results (see the sketch after this list).
- Performance may vary outside of English or in unfamiliar domains.
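If you need the deliberate mode more reliably, a general prompting tactic (not a documented control interface for ThoughtSwitch) is to lead with an explicit cue and disable sampling. This reuses the `tokenizer` and `model` from the Usage section:

```python
# Explicit "thinking" cue plus greedy decoding for more repeatable reasoning output.
prompt = "Think step by step: Is 391 divisible by 17? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```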
📊 Performance (Unofficial Benchmarks)
| Task | Performance |
|---|---|
| Commonsense Reasoning | ✅ Strong |
| Instruction Following | ✅ Strong |
| Fast Casual Generation | ✅ Very Strong |
| Math (Step-by-Step) | ⚠️ Moderate |
| Factual QA | ⚠️ May hallucinate |
🛠️ Model Details
- Architecture: GPT-style decoder (causal LM)
- Training: Custom pretraining on a hybrid reasoning/non-reasoning dataset
- Instruction Fine-Tuning: Yes, using curated prompt-response pairs
- Token Limit: 2048 tokens (extendable with RoPE scaling; see the sketch below)
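If the backbone is a RoPE-based, LLaMA-style architecture, the 2048-token window can in principle be stretched through the `rope_scaling` field of the config. The snippet below is a sketch under that assumption; check the model's actual config (and evaluate on long prompts) before relying on it:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Assumption: the underlying config exposes a LLaMA-style `rope_scaling` field.
config = AutoConfig.from_pretrained("BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct")
config.rope_scaling = {"type": "linear", "factor": 2.0}  # roughly doubles the 2048-token window
model = AutoModelForCausalLM.from_pretrained(
    "BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct", config=config
)
```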
📦 Quantized Version
Looking for fast inference?
Check out the GGUF-quantized version (by @mradermacher) for compatibility with llama.cpp, KoboldAI, and other lightweight runtimes.
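For example, with the `llama-cpp-python` bindings (the GGUF filename below is hypothetical; substitute the actual file from the quantized repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical filename: use the GGUF file you actually downloaded.
llm = Llama(model_path="ThoughtSwitch-V1-1.7b-Instruct.Q4_K_M.gguf", n_ctx=2048)
result = llm("Think step by step: Why does ice float on water?", max_tokens=150)
print(result["choices"][0]["text"])
```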
📚 Citation
If you use this model in your research or application, please cite it as:
```bibtex
@misc{thoughtswitch2025,
  title={ThoughtSwitch V1 1.7B Instruct: A Mode-Adaptive Reasoning Language Model},
  author={BrainWave-ML},
  year={2025},
  howpublished={\url{https://huggingface.co/BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct}}
}
```
💬 Contact
For issues, feedback, or collaboration:
- 🤗 Hugging Face Page: https://huggingface.co/BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct
- 📧 Email: [YourContact@domain.com]
- 🌐 Website: [https://brainwave-ml.ai] (optional)
- 💬 Discord or Community: Coming Soon
🙏 Acknowledgments
Developed by the team at BrainWave-ML. Inspired by the question:
"What if language models could choose when to think?"
ThoughtSwitch: Think when you need to. Generate when you don't.