KoconJan committed
Commit 6f5c9f2 • 1 Parent(s): db4cdbb

Update README.md

Files changed (1): README.md +10 -1
README.md CHANGED
@@ -1,3 +1,12 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - Intel/orca_dpo_pairs
+ - mlabonne/chatml_dpo_pairs
+ language:
+ - en
+ library_name: transformers
+ ---
  More information about previous [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2) version available here: 🔗[Don't stop DPOptimizing!](https://www.linkedin.com/pulse/dont-stop-dpoptimizing-jan-koco%2525C5%252584-mq4qf)

  Author: Jan Kocoń     🔗[LinkedIn](https://www.linkedin.com/in/jankocon/)     🔗[Google Scholar](https://scholar.google.com/citations?user=pmQHb5IAAAAJ&hl=en&oi=ao)     🔗[ResearchGate](https://www.researchgate.net/profile/Jan-Kocon-2)
@@ -10,4 +19,4 @@ Changes concerning [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2)

  3. **Training Configuration**: The training approach has shifted from using `max_steps=200` to `num_train_epochs=1`. This represents a change in the training strategy, focusing on epoch-based training rather than a fixed number of steps.

- 4. **Learning Rate**: The learning rate has been reduced to a smaller value of `5e-6`. This finer learning rate allows for more precise adjustments during the training process, potentially leading to better model performance.
+ 4. **Learning Rate**: The learning rate has been reduced to a smaller value of `5e-6`. This finer learning rate allows for more precise adjustments during the training process, potentially leading to better model performance.
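The metadata and the two training changes in the diff above translate directly into a standard Hugging Face `datasets`/`transformers` setup. The following is a minimal sketch, not the author's actual training script: the dataset names, `num_train_epochs=1`, and `learning_rate=5e-6` come from the card, while the output directory, batch size, and split names are assumptions.

```python
from datasets import load_dataset
from transformers import TrainingArguments

# Preference-pair datasets listed in the card's new YAML metadata
# ("train" split assumed for both).
orca_pairs = load_dataset("Intel/orca_dpo_pairs", split="train")
chatml_pairs = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

# Training setup reflecting changes 3 and 4 from the diff.
training_args = TrainingArguments(
    output_dir="./neuronovo-7B-v0.3",   # hypothetical path, not from the card
    num_train_epochs=1,                 # change 3: one epoch instead of max_steps=200
    learning_rate=5e-6,                 # change 4: reduced learning rate
    per_device_train_batch_size=4,      # assumed; the card does not state it
)
```

Relative to `max_steps=200`, epoch-based training ties the number of optimizer steps to the dataset size, so every preference pair is seen exactly once per run.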