More information about the previous Neuronovo/neuronovo-7B-v0.2 version is available here: 🔗Don't stop DPOptimizing!

Author: Jan Kocoń     🔗LinkedIn     🔗Google Scholar     🔗ResearchGate

Changes relative to Neuronovo/neuronovo-7B-v0.2:

  1. Training Dataset: In addition to the Intel/orca_dpo_pairs dataset, this version incorporates the mlabonne/chatml_dpo_pairs dataset (see the dataset-loading sketch after this list). The combined data strengthens the model in dialogue and interactive scenarios, further specializing it in natural language understanding and response generation.

  2. Tokenizer and Formatting: The tokenizer now originates directly from the Neuronovo/neuronovo-7B-v0.2 model.

  3. Training Configuration: Training now runs for num_train_epochs=1 instead of a fixed max_steps=200, shifting the strategy from step-based to epoch-based training (see the configuration sketch after this list).

  4. Learning Rate: The learning rate has been reduced to 5e-6. This finer learning rate allows more precise weight updates during training, potentially leading to better model performance.
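
The dataset and tokenizer changes above might be combined as in the following sketch. It assumes both datasets are normalized to the prompt/chosen/rejected columns that DPO training expects; the `format_orca` helper and the exact columns it folds together are illustrative assumptions, not the author's published preprocessing.

```python
from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer

# Tokenizer taken directly from the previous model version, per change 2.
tokenizer = AutoTokenizer.from_pretrained("Neuronovo/neuronovo-7B-v0.2")

def format_orca(example):
    # Hypothetical normalization: fold the separate system message and
    # question of Intel/orca_dpo_pairs into a single "prompt" column.
    example["prompt"] = example["system"] + "\n" + example["question"]
    return example

orca = load_dataset("Intel/orca_dpo_pairs", split="train")
orca = orca.map(format_orca, remove_columns=["system", "question"])

# Assumed to already expose prompt/chosen/rejected columns.
chatml = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

# Enforce a common column order before concatenating, per change 1.
cols = ["prompt", "chosen", "rejected"]
train_dataset = concatenate_datasets(
    [orca.select_columns(cols), chatml.select_columns(cols)]
)
```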
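
Similarly, a minimal sketch of the updated training configuration, assuming the TRL library's DPOTrainer in an older trl release where beta and the tokenizer are passed to the trainer directly. Only num_train_epochs=1 and learning_rate=5e-6 come from the changes above; the batch size, scheduler, beta, and sequence lengths are illustrative placeholders.

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Neuronovo/neuronovo-7B-v0.2")

training_args = TrainingArguments(
    output_dir="neuronovo-7B-v0.3",
    num_train_epochs=1,             # change 3: epoch-based instead of max_steps=200
    learning_rate=5e-6,             # change 4: reduced learning rate
    per_device_train_batch_size=2,  # illustrative value
    gradient_accumulation_steps=8,  # illustrative value
    lr_scheduler_type="cosine",     # illustrative value
    logging_steps=10,
)

trainer = DPOTrainer(
    model,                          # reference model is created internally when omitted
    args=training_args,
    train_dataset=train_dataset,    # combined dataset from the previous sketch
    tokenizer=tokenizer,
    beta=0.1,                       # illustrative DPO temperature
    max_prompt_length=1024,         # illustrative value
    max_length=1536,                # illustrative value
)
trainer.train()
```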

