---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
- mlabonne/chatml_dpo_pairs
language:
- en
library_name: transformers
---
More information about the previous version, [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2), is available here: 🔗[Don't stop DPOptimizing!](https://www.linkedin.com/pulse/dont-stop-dpoptimizing-jan-koco%C5%84-mq4qf)

Author: Jan Kocoń     🔗[LinkedIn](https://www.linkedin.com/in/jankocon/)     🔗[Google Scholar](https://scholar.google.com/citations?user=pmQHb5IAAAAJ&hl=en&oi=ao)     🔗[ResearchGate](https://www.researchgate.net/profile/Jan-Kocon-2)

Changes relative to [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2):

1. **Training Dataset**: In addition to the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset, this version incorporates the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. The combined datasets enhance the model's capabilities in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation (see the sketch after this list).

2. **Tokenizer and Formatting**: The tokenizer now originates directly from the [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2) model.

3. **Training Configuration**: The training configuration has shifted from `max_steps=200` to `num_train_epochs=1`, moving from a fixed number of optimization steps to epoch-based training over the full combined dataset.

4. **Learning Rate**: The learning rate has been reduced to `5e-6`. This finer learning rate allows for smaller, more precise parameter updates during training, potentially leading to better model performance.
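
Taken together, the four changes above amount to a fairly standard DPO fine-tuning setup. Below is a minimal sketch of how they fit together, assuming the `trl` 0.7-era `DPOTrainer` API (newer releases move `beta` and similar options into `DPOConfig`). The column mapping, prompt template, batch size, `beta`, and output path are assumptions for illustration; the card does not state the author's exact values.

```python
# Minimal sketch of the described setup; not the author's exact training script.
from datasets import load_dataset, concatenate_datasets
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# 1. Combine the two preference datasets.
orca = load_dataset("Intel/orca_dpo_pairs", split="train")
chatml = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

def to_dpo_format(row):
    # DPOTrainer expects "prompt", "chosen", and "rejected" columns;
    # the column names and prompt template here are assumptions.
    return {
        "prompt": row.get("question") or row.get("prompt"),
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

train_dataset = concatenate_datasets([
    orca.map(to_dpo_format, remove_columns=orca.column_names),
    chatml.map(to_dpo_format, remove_columns=chatml.column_names),
])

# 2. Tokenizer (and base weights) taken directly from the previous version.
tokenizer = AutoTokenizer.from_pretrained("Neuronovo/neuronovo-7B-v0.2")
model = AutoModelForCausalLM.from_pretrained("Neuronovo/neuronovo-7B-v0.2")

# 3 & 4. Epoch-based training with the reduced learning rate.
training_args = TrainingArguments(
    output_dir="neuronovo-7B-v0.3",  # hypothetical output path
    num_train_epochs=1,              # replaces max_steps=200
    learning_rate=5e-6,              # reduced from the previous run
    per_device_train_batch_size=1,   # assumed; not stated in the card
)

# ref_model is omitted, so trl creates a frozen reference copy internally.
trainer = DPOTrainer(
    model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    beta=0.1,                        # common DPO default; assumed
)
trainer.train()
```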