Magnum-v5-70B-SFT-Alpha-LoRA
This is an experimental model finetuned from meta-llama/Llama-3.3-70B-Instruct as an rsLoRA adapter. The prototype v5 SFT dataset expands on the v4 dataset with additional data and a custom prompt strategy.
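Since this repository contains only the rsLoRA adapter weights, they need to be applied on top of the base model at load time. Below is a minimal sketch assuming the standard transformers + peft loading path, bf16 weights, and that the adapter repo id matches this card's title; adjust dtype, device_map, or quantization to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.3-70B-Instruct"
adapter_id = "Doctor-Shotgun/Magnum-v5-70B-SFT-Alpha-LoRA"  # assumed to match this card's repo id

# Load the base model, then attach the rsLoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
```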
The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.
This model performs best with an assistant prefill and with any option that prepends character names disabled; otherwise it can be a bit more finicky to work with than L3.3-70B-Magnum-v4-SE. In particular, it shows a strong bias toward markdown/asterisk formatting when character names are prepended. Feedback is appreciated!
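To illustrate the prefill recommendation, and continuing from the loading sketch above (reusing its model and tokenizer), one approach is to render the Llama 3 chat template up to the assistant header and then append the text the reply should start with. The system prompt and prefill string here are placeholder examples, not values prescribed by this card.

```python
messages = [
    {"role": "system", "content": "You are a narrator for an interactive story."},
    {"role": "user", "content": "Continue the scene at the harbor."},
]

# Render the chat template up to the assistant header...
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
# ...then append the prefill so generation continues from it.
prompt += "The salt wind pulled at her coat as"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=400, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```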
Intended uses and limitations
This model is intended for creative writing and roleplay purposes. It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model. All outputs should be considered fiction, as this model is not intended to provide factual information or advice.
Training procedure
See axolotl config
axolotl version: 0.6.0

```yaml
base_model: meta-llama/Llama-3.3-70B-Instruct
base_model_ignore_patterns: "*/*"
# optionally might have model_type or tokenizer_type
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: Doctor-Shotgun/magnum-v5-sft-prototype-70b-lora
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: AquaV/c1-sharegpt-advanced-prefills-filtered
    type: dan-chat-advanced-llama3
  - path: AquaV/c2-sharegpt-advanced-prefills-filtered
    type: dan-chat-advanced-llama3
  - path: AquaV/rainy-sharegpt-advanced-prefills-filtered
    type: dan-chat-advanced-llama3
  - path: anthracite-core/Gryphe-Opus-Charcard-Roleplay
    type: dan-chat-advanced-llama3
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: dan-chat-advanced-llama3
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: dan-chat-advanced-llama3
  - path: anthracite-org/nopm_claude_writing_fixed
    type: dan-chat-advanced-llama3
  - path: anthracite-org/kalo_opus_misc_240827
    type: dan-chat-advanced-llama3
  - path: anthracite-org/kalo_misc_part2
    type: dan-chat-advanced-llama3
  - path: NewEden/Claude-Instruct-5K
    type: dan-chat-advanced-llama3
  - path: NewEden/Claude-Instruct-2.7K
    type: dan-chat-advanced-llama3
shuffle_merged_datasets: true
dataset_prepared_path: /home/docshotgun/data/magnum-70b-data
val_set_size: 0.0
output_dir: /home/docshotgun/data/70b-lora-out
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
  - embed_tokens
  - lm_head
wandb_project: 70b-magnum-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 2
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 4.0e-5
max_grad_norm: 3.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|finetune_right_pad_id|>
```
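A brief note on the lora_r / lora_alpha pair above, for anyone adjusting them: with peft_use_rslora: true the adapter uses rank-stabilized LoRA scaling, which replaces the standard LoRA factor alpha / r with alpha / sqrt(r). With the values in this config that works out to 16 / 128 = 0.125 under standard scaling versus 16 / sqrt(128) ≈ 1.41 under rsLoRA, which is why the relatively low lora_alpha of 16 still yields a meaningful effective scale at rank 128.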
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0
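The total batch size reported above is simply the product of the per-device settings from the config: total_train_batch_size = micro_batch_size × num_devices × gradient_accumulation_steps = 2 × 8 × 1 = 16 packed sequences of up to 32,768 tokens per optimizer step.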
Framework versions
- PEFT 0.14.0
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0