
🎭 Cakrawala-123B

Where Worlds Converge and Adventures Begin!

🌟 What's Special About This Model?

Cakrawala-123B is a fine-tuned variant of Mistral-Large-Instruct-2411, optimised for rich roleplaying conversations and character interactions. It is trained to produce detailed, contextually appropriate character dialogue, with vivid descriptions of physical actions, expressions, and emotional states, while keeping character voices and perspectives consistent across extended interactions.
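
This repository hosts an exl2 3.5 bpw quantisation of the model (per the repo name), which needs an ExLlamaV2-compatible backend such as TabbyAPI or text-generation-webui rather than plain transformers. As a rough illustration of the chat-style prompting the model expects, here is a minimal transformers sketch against the unquantised base model id named in this card; the character prompt and sampling settings are illustrative assumptions.

```python
# Minimal chat-style inference sketch. Assumes unquantised weights and enough
# GPU memory; the exl2 3.5 bpw files in this repo need an ExLlamaV2 loader instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Large-Instruct-2411"  # base model named in this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Kaelen, a weary mercenary guarding a mountain pass."},
    {"role": "user", "content": "*approaches the campfire* Mind if I warm my hands?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=300, temperature=0.8, do_sample=True)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```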

🧪 The Secret Sauce

Training Diet:

  • Fed with the NarrativAI/CakrawalaRP dataset (see the loading sketch after this list)
  • Conversation pairs with detailed interactions
  • Focused on maintaining character consistency and rich descriptions
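
For a quick look at the training data, here is a minimal sketch using the datasets library (the split name is an assumption):

```python
# Peek at the RP conversation pairs used for fine-tuning.
from datasets import load_dataset

ds = load_dataset("NarrativAI/CakrawalaRP", split="train")  # split name assumed
print(ds)     # column names and row count
print(ds[0])  # one raw conversation record
```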

Tech Wizardry:

  • Base Model: Mistral-Large-Instruct-2411
  • Fine-tuned using QLoRA (see the 4-bit loading sketch after this list)
  • Trained over 2 epochs
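
In practice, QLoRA means the frozen base model is loaded in 4-bit while small LoRA adapters are trained on top. A minimal sketch of that loading step, assuming common NF4 defaults (the exact quantisation settings are not stated in this card):

```python
# QLoRA-style 4-bit base load; quantisation settings are common defaults,
# not values confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Large-Instruct-2411",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # Flash Attention enabled per the card
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # freezes base weights, enables checkpointing hooks
```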

Training Parameters

  • Gradient Accumulation Steps: 1
  • Micro Batch Size: 4
  • Learning Rate: 0.000015
  • Optimizer: AdamW
  • Scheduler: Cosine
  • Mixed Precision: BF16 & FP16 with TF32 support
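
These map fairly directly onto transformers TrainingArguments; a minimal sketch, with the output path and the exact AdamW variant as assumptions:

```python
# The listed hyperparameters as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cakrawala-123b-qlora",  # assumed path
    num_train_epochs=2,
    per_device_train_batch_size=4,      # micro batch size
    gradient_accumulation_steps=1,
    learning_rate=1.5e-5,
    optim="adamw_torch",                 # assumed AdamW variant
    lr_scheduler_type="cosine",
    bf16=True,                           # transformers accepts bf16 or fp16, not both
    tf32=True,
    gradient_checkpointing=True,
)
```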

🔧 Under the Hood

  • LoRA Configuration:
    • Rank (r): 32
    • Alpha: 64
    • Dropout: 0.1
  • Sequence Length: 2048
  • Gradient Checkpointing: Enabled
  • Flash Attention: Enabled
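
Expressed as a peft LoraConfig, the adapter settings above look roughly like this; the target modules are a typical choice for Mistral-style models, not something the card confirms:

```python
# LoRA adapter configuration matching the values above.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)  # `base` from the 4-bit load sketch above
model.print_trainable_parameters()
```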

🎬 License & Credits

  • Licensed under MIT
  • Based on mistralai/Mistral-Large-Instruct-2411

Built with ❤️ for roleplayers, by roleplayers
