Axolotl config (axolotl version: 0.8.0.dev0):
# beta11
adapter: lora
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
dataset_processes: 32
chat_template: jinja
chat_template_jinja: "{%- set today = strftime_now(\"%Y-%m-%d\") %}\n{%- set default_system_message = \"You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\\nYour knowledge base was last updated on 2023-10-01. The current date is \" + today + \".\\n\\nWhen you're not sure about some information, you say that you don't have the information and don't make up anything.\\nIf the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \\\"What are some good restaurants around me?\\\" => \\\"Where are you?\\\" or \\\"When is the next flight to Tokyo\\\" => \\\"Where do you travel from?\\\")\" %}\n\n{{- bos_token }}\n\n{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content'] %}\n {%- set loop_messages = messages[1:] %}\n{%- else %}\n {%- set system_message = default_system_message %}\n {%- set loop_messages = messages %}\n{%- endif %}\n{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}\n\n{%- for message in loop_messages %}\n {%- if message['role'] == 'user' %}\n\t {%- if message['content'] is string %}\n {{- '[INST]' + message['content'] + '[/INST]' }}\n\t {%- else %}\n\t\t {{- '[INST]' }}\n\t\t {%- for block in message['content'] %}\n\t\t\t {%- if block['type'] == 'text' %}\n\t\t\t\t {{- block['text'] }}\n\t\t\t {%- elif block['type'] == 'image' or block['type'] == 'image_url' %}\n\t\t\t\t {{- '[IMG]' }}\n\t\t\t\t{%- else %}\n\t\t\t\t {{- raise_exception('Only text and image blocks are supported in message content!') }}\n\t\t\t\t{%- endif %}\n\t\t\t{%- endfor %}\n\t\t {{- '[/INST]' }}\n\t\t{%- endif %}\n {%- elif message['role'] == 'system' %}\n {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}\n {%- elif message['role'] == 'assistant' %}\n {{- message['content'] + eos_token }}\n {%- else %}\n {{- raise_exception('Only user, system and assistant roles are supported!') }}\n {%- endif %}\n{%- endfor %}"
dataset_prepared_path: ./last_run_prepared
datasets:
  - message_property_mappings:
      content: content
      role: role
    path: ZeroAgency/ru-instruct-conversation-v3.1-small
    trust_remote_code: false
    field_messages: messages
    type: chat_template
test_datasets:
  - message_property_mappings:
      content: content
      role: role
    path: ZeroAgency/ru-instruct-conversation-v3.1-small
    trust_remote_code: false
    field_messages: messages
    type: chat_template
    split: test
#dataset_exact_deduplication: true
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
#learning_rate: 0.0001
learning_rate: 2e-5
lisa_layers_attribute: model.layers
#is_mistral_derived_model: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
lora_alpha: 256
lora_dropout: 0.00
# lora_target_linear: true
lora_r: 256
# lora_mlp_kernel: true
# lora_qkv_kernel: true
# lora_o_kernel: true
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
#loraplus_lr_embedding: 1.0e-06
lr_scheduler: cosine
#max_prompt_len: 8192
mean_resizing_embeddings: false
micro_batch_size: 8
num_epochs: 1.0
optimizer: adamw_torch_fused
output_dir: ./outputs/zero-mistral-beta11
sample_packing_bin_size: 200
sample_packing_group_size: 100000
save_only_model: false
save_safetensors: true
sequence_len: 8192
min_sample_len: 64
shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
train_on_inputs: false
val_set_size: 0.0
weight_decay: 0.01
wandb_project: Zero-Mistral
wandb_name: Zero-Mistral-Small-3.1-beta11
bf16: true
fp16: false
tf32: false
flash_attention: false
save_strategy: epoch
eval_strategy: epoch
logging_steps: 1
save_total_limit: 5
warmup_steps: 0
sample_packing: true
pad_to_sequence_len: true
#group_by_length: true
seed: 42
data_seed: 42
deepspeed: deepspeed_configs/zero1.json
log_with: wandb
trust_remote_code: true
use_fast_tokenizer: true
special_tokens:
  pad_token: "<pad>"
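The chat_template_jinja above reproduces Mistral Small 3.1's prompt format: system turns are wrapped in [SYSTEM_PROMPT]…[/SYSTEM_PROMPT], user turns in [INST]…[/INST], and assistant turns are terminated with the EOS token. A minimal rendering sketch, assuming the YAML above is saved as config.yml (transformers exposes the strftime_now helper the template relies on):

```python
# Sketch: render the config's chat template to inspect the exact training format.
import yaml
from transformers import AutoTokenizer

with open("config.yml") as f:  # the axolotl YAML shown above
    cfg = yaml.safe_load(f)

tok = AutoTokenizer.from_pretrained(cfg["base_model"])

messages = [{"role": "user", "content": "Привет! Как дела?"}]
text = tok.apply_chat_template(
    messages,
    chat_template=cfg["chat_template_jinja"],  # override the tokenizer's built-in template
    tokenize=False,
)
print(text)
# <s>[SYSTEM_PROMPT]You are Mistral Small 3, ...[/SYSTEM_PROMPT][INST]Привет! Как дела?[/INST]
```

Note also that with lora_alpha equal to lora_r (256/256), the standard LoRA scaling factor alpha/r is 1.0, so adapter updates are applied at full strength.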
outputs/zero-mistral-beta11
This model is a fine-tuned version of mistralai/Mistral-Small-3.1-24B-Instruct-2503 on the ZeroAgency/ru-instruct-conversation-v3.1-small dataset. It achieves the following results on the evaluation set:
- Loss: 0.6220
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
Both training and evaluation use ZeroAgency/ru-instruct-conversation-v3.1-small; the test split is held out for evaluation (see test_datasets in the config above).
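A quick inspection sketch (dataset path, messages field, and test split are taken from the config; the "train" split name is an assumption):

```python
# Sketch: peek at the chat data this run trained on.
from datasets import load_dataset

ds = load_dataset("ZeroAgency/ru-instruct-conversation-v3.1-small")
print(ds)  # available splits and row counts
for turn in ds["train"][0]["messages"]:  # "train" split name assumed
    print(turn["role"], "=>", turn["content"][:80])
```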
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- total_train_batch_size: 48
- total_eval_batch_size: 48
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1.0
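The total batch sizes follow directly from the config: total_train_batch_size = micro_batch_size × num_devices × gradient_accumulation_steps = 8 × 6 × 1 = 48, and the same product gives the total eval batch size of 48.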
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6302        | 1.0   | 420  | 0.6220          |
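At a total batch size of 48, the epoch's 420 optimizer steps correspond to 48 × 420 = 20,160 training sequences; since sample_packing: true packs multiple conversations into each 8192-token sequence, the number of underlying dataset rows is at least that large.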
Framework versions
- PEFT 0.15.0
- Transformers 4.50.0
- Pytorch 2.5.1+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
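A hypothetical usage sketch for the resulting LoRA adapter. The adapter repo id (ZeroAgency/zero-mistral-beta11) is inferred from this card, and because the Mistral Small 3.1 checkpoint is registered as an image-text-to-text model in Transformers 4.50, the corresponding Auto class is used here; both are assumptions, not confirmed by this card:

```python
# Hypothetical inference sketch; repo ids and model class are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForImageTextToText, AutoTokenizer

base_id = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
adapter_id = "ZeroAgency/zero-mistral-beta11"  # assumed published adapter repo

model = AutoModelForImageTextToText.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
tok = AutoTokenizer.from_pretrained(base_id)

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Привет!"}], tokenize=False
)
# The chat template already emits <s>, so skip adding special tokens again.
inputs = tok(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```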