---
datasets:
- garage-bAInd/Open-Platypus
library_name: peft
tags:
- meta-llama/Llama-2-7b-hf
- code
- instruct
- instruct-code
- logical-reasoning
- Platypus2
license: apache-2.0
---

We finetuned meta-llama/Llama-2-7b-hf on the garage-bAInd/Open-Platypus dataset for 5 epochs using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
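
To try the adapter, you can load it on top of the base model with `peft`. A minimal sketch follows; the adapter repo id and the Alpaca-style prompt format are placeholders for illustration, not specified by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "your-username/llama2-7b-open-platypus"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the finetuned LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "### Instruction:\nExplain binary search.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```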

#### About OpenPlatypus Dataset
OpenPlatypus focuses on improving LLM logical reasoning skills and was used to train the Platypus2 models. It combines several sub-datasets, including PRM800K, ScienceQA, SciBench, ReClor, and TheoremQA, among others. Questions were filtered using keyword search and Sentence Transformers embeddings, removing any question with a similarity above 80% to one already kept. The dataset includes contributions under various licenses, such as MIT, Creative Commons, and Apache 2.0.
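
For illustration, here is a rough sketch of that similarity-based deduplication using `sentence-transformers`; the embedding model and the greedy keep-first strategy below are assumptions, and the Platypus authors' exact setup may differ:

```python
from sentence_transformers import SentenceTransformer, util

questions = [
    "What is the derivative of x^2?",
    "Compute the derivative of x squared.",
    "Name the capital of France.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(questions, convert_to_tensor=True, normalize_embeddings=True)

kept = []
for i in range(len(questions)):
    # Drop a question if it is more than 80% similar to one already kept.
    if all(util.cos_sim(embeddings[i], embeddings[j]).item() <= 0.80 for j in kept):
        kept.append(i)

print([questions[i] for i in kept])
```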

The finetuning run completed in 1 hour and 30 minutes and cost only `$15`!

#### Hyperparameters & Run details:
- Model Path: meta-llama/Llama-2-7b-hf
- Dataset: garage-bAInd/Open-Platypus
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
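
For reference, a sketch of how these settings map onto a `peft`/`transformers` training configuration; the LoRA rank, alpha, target modules, and precision below are assumptions not stated in this card:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from peft import LoraConfig

# 90/10 train/validation split of the dataset listed above.
dataset = load_dataset("garage-bAInd/Open-Platypus", split="train")
dataset = dataset.train_test_split(test_size=0.10)

lora_config = LoraConfig(  # assumed adapter settings
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-7b-platypus",
    learning_rate=3e-4,          # 0.0003, as listed above
    num_train_epochs=5,
    gradient_accumulation_steps=1,
    fp16=True,                   # assumed precision
    logging_steps=10,
)
```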

Loss metrics:
![training loss](train-loss.png "Training loss")
