---
datasets:
  - garage-bAInd/Open-Platypus
library_name: peft
license: apache-2.0
tags:
  - meta-llama/Llama-2-7b-hf
  - code
  - instruct
  - instruct-code
  - logical-reasoning
  - Platypus2
---

We finetuned meta-llama/Llama-2-7b-hf on the garage-bAInd/Open-Platypus dataset for 5 epochs using MonsterAPI's no-code LLM finetuner.

#### About the Open-Platypus dataset

Open-Platypus is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It combines several sub-datasets, including PRM800K, ScienceQA, SciBench, ReClor, and TheoremQA. These were filtered using keyword search and Sentence Transformers to remove questions with a similarity above 80%. The dataset includes contributions under several licenses, including MIT, Creative Commons, and Apache 2.0.
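The filtering code isn't part of this card; the sketch below shows one way such a similarity filter could look. The embedding model (all-MiniLM-L6-v2), the use of cosine similarity, and the greedy keep/drop loop are assumptions, not the Platypus authors' exact pipeline.

```python
# Hypothetical sketch of similarity-based question filtering.
# Assumptions (not from the card): all-MiniLM-L6-v2 as the embedding
# model, cosine similarity, and a greedy first-seen-wins dedup loop.
from sentence_transformers import SentenceTransformer, util

def dedup_questions(questions, threshold=0.80):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(
        questions, convert_to_tensor=True, normalize_embeddings=True
    )
    kept, kept_idx = [], []
    for i, question in enumerate(questions):
        if kept_idx:
            sims = util.cos_sim(embeddings[i], embeddings[kept_idx])
            if sims.max().item() > threshold:
                continue  # too similar to an already-kept question
        kept.append(question)
        kept_idx.append(i)
    return kept
```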

The finetuning run completed in 1 hour and 30 minutes and cost us only $15 in total!

#### Hyperparameters & Run details

- Model Path: meta-llama/Llama-2-7b-hf
- Dataset: garage-bAInd/Open-Platypus
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training 90% / Validation 10%
- Gradient accumulation steps: 1
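
Since MonsterAPI's finetuner is no-code, the training script itself isn't published. As a rough equivalent, the sketch below wires the hyperparameters above into a transformers + peft setup; the LoRA rank/alpha, target modules, and batch size are assumptions, not values reported for this run.

```python
# Sketch of an equivalent LoRA finetuning configuration.
# Only the learning rate, epochs, 90/10 split, and gradient accumulation
# steps come from this card; everything else is an assumption.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

dataset = load_dataset("garage-bAInd/Open-Platypus", split="train")
splits = dataset.train_test_split(test_size=0.10)  # 90% train / 10% validation

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="llama2-7b-open-platypus",
    learning_rate=3e-4,                 # 0.0003, as listed above
    num_train_epochs=5,
    gradient_accumulation_steps=1,
    per_device_train_batch_size=4,      # assumed; not stated in the card
)
# `args`, `model`, and `splits` would then feed a Trainer/SFTTrainer.
```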

#### Loss metrics: training loss


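To use the result, load the base model and apply the PEFT adapter on top. The adapter repo id below is a placeholder for this model's actual Hub id, and the Alpaca-style prompt format is an assumption.

```python
# Sketch: running inference with the finetuned LoRA adapter.
# "your-org/llama2-7b-open-platypus" is a placeholder repo id, and the
# Alpaca-style prompt format is an assumption, not confirmed by the card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "your-org/llama2-7b-open-platypus")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "### Instruction:\nExplain gradient accumulation.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```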