Hamanasu 4B

🌌 Overview

This model is a finetune of Hamanasu-4B-PT, trained on instruct data.

It is a generalist model that adapts quickly to any style of roleplay.

Many thanks to Tav for funding the training run.

⚔️ Hardware

  • 8x H100s
  • Epochs: 2
  • Base: Delta-Vector/Hamanasu-4B-PT

💰 Prompting

This model uses ChatML formatting:

<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>user
Take off your helmet.<|im_end|>
<|im_start|>assistant
No, I shall not. This is the way.<|im_end|>
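
For programmatic use, the same layout can be produced with the tokenizer's chat template. Below is a minimal sketch, assuming the repo id Delta-Vector/Hamanasu-4B-Instruct and that the shipped tokenizer carries a ChatML chat_template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the tokenizer for this repo ships a ChatML chat_template.
model_id = "Delta-Vector/Hamanasu-4B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
    {"role": "user", "content": "Take off your helmet."},
]
# Renders the ChatML turns shown above and appends the opening
# <|im_start|>assistant tag so the model continues as the assistant.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```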

🎲 Recommended Sampler Preset

temperature: 1.5
min_p: 0.2
system_prompt: |
  Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

  <Guidelines>
  • Maintain the character persona but allow it to evolve with the story.
  • Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
  • All types of outputs are encouraged; respond accordingly to the narrative.
  • Include dialogues, actions, and thoughts in each response.
  • Utilize all five senses to describe scenarios within {{char}}'s dialogue.
  • Use emotional symbols such as "!" and "~" in appropriate contexts.
  • Incorporate onomatopoeia when suitable.
  • Allow time for {{user}} to respond with their own input, respecting their agency.
  • Act as secondary characters and NPCs as needed, and remove them when appropriate.
  • When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
  </Guidelines>

  <Forbidden>
  • Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
  • Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
  • Repetitive and monotonous outputs.
  • Positivity bias in your replies.
  • Being overly extreme or NSFW when the narrative context is inappropriate.
  </Forbidden>

  Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
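
Continuing the sketch above, the preset maps onto a transformers GenerationConfig. Note that min_p sampling needs a reasonably recent transformers release, and max_new_tokens is an illustrative choice, not part of the preset:

```python
from transformers import GenerationConfig

# Recommended sampler preset expressed as generation settings.
preset = GenerationConfig(
    do_sample=True,
    temperature=1.5,     # from the preset
    min_p=0.2,           # from the preset; needs a recent transformers version
    max_new_tokens=512,  # illustrative only
)
output = model.generate(inputs, generation_config=preset)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```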

Axolotl Config ꒰(˶• ᴗ •˶)꒱

base_model: NewEden_4B-PT
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

  #hub_model_id: NewEden/4B-Inst
  #hub_strategy: "all_checkpoints"
  #push_dataset_to_hub:
  #hf_use_auth_token: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
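# Liger replaces the ops flagged below with fused Triton kernels, and Cut
# Cross-Entropy computes the loss without materializing the full logits
# matrix; both mainly reduce VRAM use during training.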
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: NewEden/Helpsteer-3-Filtered
    type: dan-chat-advanced
  - path: NewEden/GSM8K-R1-filtered
    type: dan-chat-advanced
  - path: NewEden/Hydrus-R1-Thinking-Sharegpt
    type: dan-chat-advanced
  - path: NewEden/Hydrus-SonnetOrca
    type: dan-chat-advanced
  - path: NewEden/Hydrus-HelpSteer2
    type: dan-chat-advanced
  - path: NewEden/Claude-Instruct-5K
    type: dan-chat-advanced
  - path: PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
    type: dan-chat-advanced
  - path: Nitral-AI/ARES-ShareGPT
    type: dan-chat-advanced 
  - path: NewEden/Hydrus-Instruct-SmolTalk
    type: dan-chat-advanced
  - path: NewEden/Hydrus-Chat_error-Pure-Dove-sharegpt
    type: dan-chat-advanced  
  - path: NewEden/Claude-Instruct-2.7K
    type: dan-chat-advanced 
  - path: PocketDoc/Dans-Assistantmaxx-Tulu3-IF
    type: dan-chat-advanced  
dataset_prepared_path: prepared_data
val_set_size: 0.0
output_dir: ./4b-inst-r2

sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
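# sample_packing packs multiple short examples into each 16384-token window
# and pad_to_sequence_len keeps batch shapes fixed, minimizing padding waste.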


wandb_project: 4B-mng
wandb_entity:
wandb_watch:
wandb_name: attempt-2
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 5e-6
max_grad_norm: 0.2
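# Effective global batch size: 1 (micro) x 2 (grad accumulation) x 8 (GPUs) = 16 sequences.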

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 40
saves_per_epoch: 2
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
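# zero3_bf16: ZeRO stage-3 sharding of parameters, gradients, and optimizer
# state across all 8 GPUs, with bf16 mixed precision.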
weight_decay: 0.02
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|finetune_right_pad_id|>
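
To reproduce the run, the config can typically be launched through axolotl's standard entry point, e.g. `accelerate launch -m axolotl.cli.train config.yml`; the exact invocation depends on the installed axolotl version.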

⚡ Credits


Made by Delta-Vector