Built with [Axolotl](https://github.com/axolotl-ai-cloud/axolotl), axolotl version `0.8.0`. The full axolotl config used for this run is reproduced below.

```yaml

## model
base_model: hardlyworking/Noodles-Merge-12B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
## upload
hub_model_id: hardlyworking/Beef-12B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
## qlora COPE
load_in_8bit: false
load_in_4bit: false
strict: false

## data 
datasets:
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: dan-chat-advanced
  - path: ResplendentAI/bluemoon
    type: dan-chat-advanced
  - path: hardlyworking/openerotica-freedomrp-sharegpt-system
    type: dan-chat-advanced
  - path: MinervaAI/Aesir-Preview
    type: dan-chat-advanced
  - path: anthracite-core/c2_logs_32k_v1.1
    type: dan-chat-advanced
  - path: Nitral-AI/Creative_Writing-ShareGPT
    type: dan-chat-advanced
  - path: PJMixers/lodrick-the-lafted_OpusStories-Story2Prompt-ShareGPT
    type: dan-chat-advanced

shuffle_merged_datasets: true
dataset_prepared_path: dataset_prepared
val_set_size: 0.01
output_dir: outputs/out

## LIGER & CCE
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: false

## CTX settings
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

## Lora 
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
  - embed_tokens
  - lm_head

## WandB
wandb_project: JoeyBoy
wandb_entity:
wandb_watch:
wandb_name: 
wandb_log_model:

## evals
evals_per_epoch: 8
eval_table_size:
eval_max_new_tokens: 128

## hoe params
gradient_accumulation_steps: 2
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

warmup_steps: 40
saves_per_epoch: 2
debug:
## for ademiamix 
deepspeed: ./deepspeed_configs/zero3_bf16.json
## for adamw
## deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
```
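Since the run trains a LoRA adapter (`adapter: lora`) with PEFT, the uploaded weights should load on top of the base model. A minimal inference sketch, assuming `hardlyworking/Beef-12B` contains a standard PEFT adapter as pushed by Axolotl:

```python
# Minimal inference sketch, assuming the repo holds a standard PEFT LoRA
# adapter on top of the base model named in the config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "hardlyworking/Noodles-Merge-12B",
    torch_dtype=torch.bfloat16,  # matches bf16: auto in the config
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "hardlyworking/Beef-12B")
tokenizer = AutoTokenizer.from_pretrained("hardlyworking/Beef-12B")

prompt = "Write the opening line of a story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because `lora_modules_to_save` includes `embed_tokens` and `lm_head`, the adapter also carries the embedding and output layers, so loading the tokenizer from the adapter repo keeps the `<pad>` token consistent. Calling `model.merge_and_unload()` afterwards would produce a standalone merged checkpoint.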

# Beef-12B

This model is a fine-tuned version of hardlyworking/Noodles-Merge-12B on the Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned, ResplendentAI/bluemoon, hardlyworking/openerotica-freedomrp-sharegpt-system, MinervaAI/Aesir-Preview, anthracite-core/c2_logs_32k_v1.1, Nitral-AI/Creative_Writing-ShareGPT, and PJMixers/lodrick-the-lafted_OpusStories-Story2Prompt-ShareGPT datasets. It achieves the following results on the evaluation set:

- Loss: 1.5655
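For context, cross-entropy loss maps to perplexity via `exp(loss)`; a one-line check using the final validation loss reported above:

```python
import math

eval_loss = 1.5655  # final validation loss from the results table below
print(f"perplexity = {math.exp(eval_loss):.2f}")  # perplexity = 4.79
```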

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: PAGED_ADAMW_8BIT (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0
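The reported totals follow directly from the per-device settings and the four-GPU setup; a quick check of the arithmetic:

```python
# How the trainer's reported batch-size totals are derived (values from above).
micro_batch_size = 4             # per-device train batch size
gradient_accumulation_steps = 2
num_devices = 4                  # multi-GPU (DeepSpeed ZeRO-3)

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices  # no accumulation at eval

assert total_train_batch_size == 32
assert total_eval_batch_size == 16
```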

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7653        | 0.0028 | 1    | 1.7865          |
| 1.3533        | 0.1255 | 45   | 1.6828          |
| 1.2807        | 0.2510 | 90   | 1.6545          |
| 1.3957        | 0.3766 | 135  | 1.6300          |
| 1.2727        | 0.5021 | 180  | 1.6176          |
| 1.2438        | 0.6276 | 225  | 1.6074          |
| 1.3147        | 0.7531 | 270  | 1.5958          |
| 1.2466        | 0.8787 | 315  | 1.5905          |
| 1.3144        | 1.0028 | 360  | 1.5844          |
| 1.1868        | 1.1283 | 405  | 1.5784          |
| 1.3102        | 1.2538 | 450  | 1.5750          |
| 1.2746        | 1.3794 | 495  | 1.5734          |
| 1.1794        | 1.5049 | 540  | 1.5692          |
| 1.2141        | 1.6304 | 585  | 1.5671          |
| 1.1795        | 1.7559 | 630  | 1.5660          |
| 1.4297        | 1.8815 | 675  | 1.5655          |
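The eval cadence in the table is consistent with the config: `evals_per_epoch: 8` with an evaluation every 45 steps implies roughly 360 optimizer steps per epoch, and step 360 indeed logs at epoch 1.0028. A small sanity check:

```python
# Consistency check between the config (evals_per_epoch: 8) and the log above.
evals_per_epoch = 8
eval_interval_steps = 45  # spacing of the Step column in the table

steps_per_epoch = evals_per_epoch * eval_interval_steps
print(steps_per_epoch)  # 360, and step 360 logs at epoch ~1.0028
```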

### Framework versions

- PEFT 0.15.1
- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1