---
license: apache-2.0
base_model: NovoCode/Novocode7b-v2
tags:
- generated_from_trainer
model-index:
- name: out
results: []
---
[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

See the axolotl config used for this run (axolotl version `0.4.0`):
```yaml
base_model: NovoCode/Novocode7b-v2
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: Intel/orca_dpo_pairs
    type:
      system_prompt: ""
      field_system: system
      field_instruction: question
      field_output: chosen
      field_output: rejected  # duplicate key: in YAML the second value overrides the first
      format: "[INST] {instruction} [/INST]"
      no_input_format: "[INST] {instruction} [/INST]"
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```
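For reference, axolotl configs like this one are typically launched with the project's CLI, e.g. `accelerate launch -m axolotl.cli.train config.yml` (the config file name here is illustrative, not taken from this card).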
# out
This model is a fine-tuned version of [NovoCode/Novocode7b-v2](https://huggingface.co/NovoCode/Novocode7b-v2) on the Intel/orca_dpo_pairs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6792
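To use the checkpoint, it can be loaded with 🤗 Transformers as in the minimal sketch below. The model id is a placeholder (replace it with the Hub id of this repository), and the prompt follows the `[INST] ... [/INST]` format from the config above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the Hub id of this fine-tuned model.
model_id = "your-username/out"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The training config uses a Mistral-style [INST] ... [/INST] prompt format.
prompt = "[INST] Summarize what this model was trained on. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```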
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
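As a rough illustration only (training was driven by the axolotl config above, not by this code), the reported values map onto 🤗 Transformers `TrainingArguments` roughly as follows; the effective batch size is micro_batch_size (2) * gradient_accumulation_steps (4) = 8.

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; not the exact axolotl invocation.
args = TrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=2,   # micro_batch_size
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,   # total train batch size: 2 * 4 = 8
    learning_rate=5e-6,
    optim="adamw_bnb_8bit",
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=4,
    weight_decay=0.0,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=1,
    seed=42,
)
```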
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7565 | 0.01 | 1 | 0.8244 |
| 0.4845 | 0.26 | 24 | 0.4685 |
| 0.4594 | 0.51 | 48 | 0.4435 |
| 0.4399 | 0.77 | 72 | 0.4284 |
| 0.3115 | 1.01 | 96 | 0.4221 |
| 0.2008 | 1.26 | 120 | 0.4614 |
| 0.2212 | 1.52 | 144 | 0.4552 |
| 0.2101 | 1.78 | 168 | 0.4516 |
| 0.119 | 2.02 | 192 | 0.4547 |
| 0.0925 | 2.27 | 216 | 0.5502 |
| 0.096 | 2.53 | 240 | 0.5751 |
| 0.0967 | 2.78 | 264 | 0.5774 |
| 0.0537 | 3.02 | 288 | 0.5765 |
| 0.0576 | 3.28 | 312 | 0.6687 |
| 0.0526 | 3.54 | 336 | 0.6786 |
| 0.0492 | 3.79 | 360 | 0.6792 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
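
To reproduce the environment, a quick version check against the pins above (a minimal sketch, not part of the original training setup):

```python
import datasets
import tokenizers
import torch
import transformers

# Compare the installed versions against the pins listed above.
print("Transformers:", transformers.__version__)  # expected 4.37.0
print("PyTorch:", torch.__version__)              # expected 2.1.2+cu121
print("Datasets:", datasets.__version__)          # expected 2.16.1
print("Tokenizers:", tokenizers.__version__)      # expected 0.15.0
```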