See axolotl config

axolotl version: `0.3.0`

```yaml
base_model: chargoddard/internlm2-20b-llama
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
  - path: ARB/arb_law.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: ARB/arb_math.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: ARB/arb_mcat_reading.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: ARB/arb_mcat_science.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: ARB/arb_physics.json
    ds_type: json
    type: alpaca
    conversation: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./Weyaxi-test
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 512
lora_alpha: 256
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
lora_modules_to_save:
  - embed_tokens
  - lm_head
wandb_project: huggingface
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
hub_model_id: Weyaxi/Weyaxi-test
gradient_accumulation_steps: 4 # change
micro_batch_size: 2 # change
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
save_steps: 20
save_total_limit: 5
debug:
#deepspeed: deepspeed/zero3_bf16.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
tokens:
  - "<|im_start|>"
```
# Weyaxi-test

This model is a fine-tuned version of [chargoddard/internlm2-20b-llama](https://huggingface.co/chargoddard/internlm2-20b-llama) on the ARB datasets listed in the configuration above.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
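For context, the same settings can be expressed directly with a `BitsAndBytesConfig` when loading the base model in plain `transformers`. This is a sketch rather than the training code, and `device_map="auto"` is an assumption:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit (LLM.int8) quantization, mirroring the settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "chargoddard/internlm2-20b-llama",
    quantization_config=bnb_config,
    device_map="auto",
)
```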
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
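For reference, the total train batch size follows directly from the axolotl config: micro_batch_size × gradient_accumulation_steps = 2 × 4 = 8 examples per optimizer step (a multi-GPU run would further multiply this by the number of devices).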
### Training results

### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
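For inference, something along these lines should work with PEFT. The repo id is taken from `hub_model_id` in the config and may differ from the actually published adapter; loading the tokenizer from the adapter repo and resizing the embeddings first is an assumption about how the extended vocabulary was saved:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "chargoddard/internlm2-20b-llama"
adapter_id = "Weyaxi/Weyaxi-test"  # hub_model_id from the config; adjust to the published repo

# The adapter repo should carry the tokenizer with the added ChatML tokens.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Match the base model's vocabulary to the extended tokenizer before loading
# the adapter, since embed_tokens/lm_head were saved with the new size.
model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "<|im_start|>user\nExplain the rule against perpetuities.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0]))
```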