Built with Axolotl

See axolotl config

axolotl version: 0.6.0

```yaml
base_model: mrcuddle/tiny-darkllama
bf16: auto
datasets:
- chat_template: tokenizer_default
  field_messages: conversations
  message_field_content: value
  message_field_role: from
  path: Nitral-AI/Reddit-NSFW-Writing_Prompts_ShareGPT
  split: train
  type: chat_template
debug: null
deepspeed: null
early_stopping_patience: null
eval_sample_packing: true
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
learning_rate: 2e-5
load_in_4bit: false
load_in_8bit: false
logging_steps: 1
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
saves_per_epoch: 1
sdp_attention: true
sequence_len: 2048
special_tokens:
  pad_token: <|end_of_text|>
strict: false
tf32: false
train_on_inputs: false
warmup_steps: 1
weight_decay: 0.0
xformers_attention: null
```
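The `datasets` block above expects ShareGPT-style rows: each record's `conversations` list holds the messages, with the role in `from` and the text in `value`, and the `chat_template: tokenizer_default` type renders them through the tokenizer's own chat template. A hypothetical record in that shape (the actual rows in Nitral-AI/Reddit-NSFW-Writing_Prompts_ShareGPT may differ in detail):

```python
# Illustrative ShareGPT-style record matching the dataset config:
# field_messages=conversations, message_field_role=from, message_field_content=value.
example_row = {
    "conversations": [
        {"from": "human", "value": "Writing prompt: a lighthouse keeper finds a message in a bottle."},
        {"from": "gpt", "value": "The keeper turned the bottle over in his hands..."},
    ]
}
```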

# model-out

This model is a fine-tuned version of mrcuddle/tiny-darkllama on the Nitral-AI/Reddit-NSFW-Writing_Prompts_ShareGPT dataset.
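A minimal inference sketch with `transformers`, assuming the checkpoint is published as `mrcuddle/tiny-darkllama-dpo` and that the tokenizer ships a default chat template (as the training config implies):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the published checkpoint; adjust if it lives elsewhere.
model_id = "mrcuddle/tiny-darkllama-dpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Format the prompt with the tokenizer's chat template, mirroring training.
messages = [{"role": "user", "content": "Write a short story about a lighthouse keeper."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```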

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 4 (see the check after this list)
- total_eval_batch_size: 2
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 25
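The total train batch size reported above is not an independent setting; it follows from the per-device micro batch size, gradient accumulation, and device count. A quick check in Python, using only numbers from this card:

```python
# Effective batch size = micro batch per device x gradient accumulation x number of GPUs.
micro_batch_size = 1
gradient_accumulation_steps = 2
num_devices = 2
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 4  # matches total_train_batch_size above
```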

### Training results

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0