---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- Open-platypus-Commercial
base_model: liminerity/M7-7b
datasets:
- kyujinpy/Open-platypus-Commercial
model-index:
- name: T3Q-Platypus-Mistral7B
  results: []
---

Update @ 2024.03.07

## T3Q-Platypus-MistralM7-7B

This model is a fine-tuned version of liminerity/M7-7b.

**Model Developers** Chihoon Lee (chlee10), T3Q

## Training hyperparameters

The following hyperparameters were used during training:

```python
# Hyperparameters for the dataset and number of training epochs
batch_size = 16
num_epochs = 1
micro_batch = 1
gradient_accumulation_steps = batch_size // micro_batch

# Hyperparameters for the training procedure
cutoff_len = 4096
lr_scheduler = 'cosine'
warmup_ratio = 0.06  # warmup_steps = 100
learning_rate = 4e-4
optimizer = 'adamw_torch'
weight_decay = 0.01
max_grad_norm = 1.0

# Q-LoRA config
lora_r = 16
lora_alpha = 16
lora_dropout = 0.05
lora_target_modules = ["gate_proj", "down_proj", "up_proj"]

# Options for the inputs produced by the tokenizer
train_on_inputs = False
add_eos_token = False

# NEFTune params
noise_alpha: int = 5
```
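The card lists the hyperparameters but not the training script itself. Below is a minimal sketch of how these values might be wired into a Q-LoRA + NEFTune fine-tune of liminerity/M7-7b with `transformers`, `peft`, and `trl`. The prompt template, the dataset field names (`instruction`, `output`), the output directory, and the exact argument placement are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch only, not the authors' training script.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, prepare_model_for_kbit_training
from trl import SFTTrainer

base_model = "liminerity/M7-7b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# 4-bit quantization so the LoRA adapters train on a quantized base (Q-LoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter matching the card's Q-LoRA config
peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    bias="none", task_type="CAUSAL_LM",
)

dataset = load_dataset("kyujinpy/Open-platypus-Commercial", split="train")

def format_examples(batch):
    # Assumed Alpaca-style prompt; the actual template is not given in the card.
    return [
        f"### Instruction:\n{ins}\n\n### Response:\n{out}"
        for ins, out in zip(batch["instruction"], batch["output"])
    ]

training_args = TrainingArguments(
    output_dir="t3q-platypus-m7",        # assumed output path
    per_device_train_batch_size=1,       # micro_batch
    gradient_accumulation_steps=16,      # batch_size // micro_batch
    num_train_epochs=1,
    learning_rate=4e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    optim="adamw_torch",
    weight_decay=0.01,
    max_grad_norm=1.0,
    bf16=True,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
    formatting_func=format_examples,
    max_seq_length=4096,        # cutoff_len
    neftune_noise_alpha=5,      # NEFTune noise_alpha
)
trainer.train()
```

Note that `max_seq_length` and `neftune_noise_alpha` were direct `SFTTrainer` arguments in trl releases around early 2024; later versions moved them into `SFTConfig`. The sketch also does not reproduce `train_on_inputs = False`, which would additionally require masking the instruction tokens from the loss (for example with trl's `DataCollatorForCompletionOnlyLM`).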