
More information about the previous version, Neuronovo/neuronovo-9B-v0.2, is available here: 🔗 Don't stop DPOptimizing!

Author: Jan Kocoń     🔗LinkedIn     🔗Google Scholar     🔗ResearchGate

Changes relative to Neuronovo/neuronovo-9B-v0.2:

  1. Training Dataset: In addition to the Intel/orca_dpo_pairs dataset, this version incorporates the mlabonne/chatml_dpo_pairs dataset. The combined datasets strengthen the model in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation.

  2. Tokenizer and Formatting: The tokenizer now originates directly from the Neuronovo/neuronovo-9B-v0.2 model.

  3. Training Configuration: Training now uses num_train_epochs=1 instead of max_steps=200, i.e., epoch-based training rather than a fixed number of steps.

  4. Learning Rate: The learning rate has been reduced to 5e-6. This finer learning rate allows for more precise adjustments during training, potentially leading to better model performance. A configuration sketch covering these changes follows this list.
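The sketch below illustrates how the four changes above could be wired together with the Hugging Face `datasets`, `transformers`, and `trl` libraries. It is a minimal, hedged example: the exact column harmonization between the two datasets, the chat formatting, the batch-size settings, and the precise DPOTrainer/DPOConfig argument names are assumptions that may differ across library versions and from the author's actual training script.

```python
# Minimal sketch of the v0.3 training setup (not the author's exact script).
from datasets import load_dataset, concatenate_datasets
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Neuronovo/neuronovo-9B-v0.2"

# 1. Combine the two preference datasets. Column harmonization is omitted here;
#    both datasets must expose "prompt", "chosen", and "rejected" fields for DPO.
orca = load_dataset("Intel/orca_dpo_pairs", split="train")
chatml = load_dataset("mlabonne/chatml_dpo_pairs", split="train")
train_dataset = concatenate_datasets([orca, chatml])

# 2. Tokenizer taken directly from the v0.2 model.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# 3. & 4. Epoch-based training instead of a fixed step count, smaller learning rate.
#    Batch-size values below are illustrative placeholders.
config = DPOConfig(
    output_dir="neuronovo-9B-v0.3",
    num_train_epochs=1,        # replaces max_steps=200
    learning_rate=5e-6,        # reduced relative to the v0.2 setting
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,            # trl builds a frozen reference copy when None
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,       # newer trl releases name this `processing_class`
)
trainer.train()
```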
