Model Visualization

Hamanasu 4B

🌌 Overview

This is a model designed to replicate the prose quality of the Claude 3 series of models. specifically Sonnet and Opus - Made with a prototype magnum V5 datamix.

The model is suited for traditional RP, All thanks to Tav for funding the train.

Support me and my finetunes on Ko-Fi https://ko-fi.com/deltavector

⚔️ Hardware

  • 8x H100s
  • Epochs: 4
  • Base: Delta-Vector/Hamanasu-4B-Instruct-KTO-V2

💰 Prompting

This model uses ChatML formatting

<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
Take off your helmet.<|im_end|>
<|im_start|>No i shall not. This is the way.

🎲 Recommended Sampler Preset

Temp: 1.0
Min-P: 0.1
System prompt: Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n\n\n\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n\n\nFollow the instructions in , avoiding the items listed in .

Axolotl Config ꒰(˶• ᴗ •˶)꒱

base_model: NewEden/Hamanasu-KTO-V2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true

datasets:
  - path: PocketDoc/Dans-Personamaxx-Logs
    type: dan-chat-advanced
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: dan-chat-advanced
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: dan-chat-advanced
  - path: anthracite-org/nopm_claude_writing_fixed
    type: dan-chat-advanced
  - path: anthracite-org/kalo_opus_misc_240827
    type: dan-chat-advanced
  - path: anthracite-org/kalo_misc_part2
    type: dan-chat-advanced
  - path: NewEden/Claude-Instruct-5K
    type: dan-chat-advanced
  - path: NewEden/Claude-Instruct-2.7K
    type: dan-chat-advanced

val_set_size: 0.01
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

wandb_project: tavbussy
wandb_entity:
wandb_watch:
wandb_name: magnum-attempt-02
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.02
max_grad_norm: 0.2

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 40
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1

debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>      

⚡ Credits


Made by
Delta-Vector
Downloads last month
10
Safetensors
Model size
4.51B params
Tensor type
BF16
·
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Model tree for Delta-Vector/Hamanasu-Magnum-4B

Datasets used to train Delta-Vector/Hamanasu-Magnum-4B

Collection including Delta-Vector/Hamanasu-Magnum-4B