
# opus-samantha-phi-3-4k

opus-samantha-phi-3-4k is a LoRA fine-tune of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the [macadeliccc/opus_samantha](https://huggingface.co/datasets/macadeliccc/opus_samantha) dataset, trained with Axolotl using the configuration below.

## Axolotl Config

```yaml
base_model: microsoft/Phi-3-mini-4k-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: true
load_in_4bit: false
strict: false
sequence_len: 4096
bf16: auto
fp16:
tf32: false
flash_attention: true
# Data
datasets:
  - path: macadeliccc/opus_samantha
    type: sharegpt
    conversation: chatml

# Iterations
num_epochs: 3

# Evaluation
val_set_size: 0.05
evals_per_epoch: 5
eval_table_size:
eval_max_new_tokens: 128
eval_sample_packing: false
eval_batch_size: 1

# LoRA
output_dir: ./lora-out
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:

lora_modules_to_save:
    - embed_tokens
    - lm_head

# Sampling
sample_packing: false
pad_to_sequence_len: false

# Batching
gradient_accumulation_steps: 4
micro_batch_size: 4
gradient_checkpointing: true

# wandb
wandb_project:

# Optimizer
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002

# Misc
train_on_inputs: false
group_by_length: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
debug:
deepspeed:
weight_decay: 0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens: # these are delimiters
  - "<|im_start|>"
  - "<|im_end|>"

Built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
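
## Usage

The model was trained on ChatML-formatted conversations (`type: sharegpt`, `conversation: chatml`), so prompts should use the same `<|im_start|>`/`<|im_end|>` layout. A minimal Transformers sketch follows; the generation settings are illustrative, not taken from the training run:

```python
# Minimal inference sketch; assumes `transformers` is installed and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/opus-samantha-phi-3-4k"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the published weights are FP16
    device_map="auto",
    trust_remote_code=True,
)

# Build a ChatML prompt with the same delimiters the model was trained on.
prompt = (
    "<|im_start|>user\n"
    "Hello, who are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
# Strip the prompt tokens and decode only the newly generated reply.
reply = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```

Stopping on `<|im_end|>` mirrors the delimiter added during training; without it, generation would run on until the default EOS token or the `max_new_tokens` limit.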
