---
base_model: NousResearch/Llama-2-13b-hf
tags:
  - llama-2
  - instruct
  - finetune
  - alpaca
  - gpt4
  - synthetic data
  - distillation
model-index:
  - name: openhermes-13b
    results: []
license: mit
language:
  - en
---

# OpenHermes-13B

## Model description

OpenHermes 13B is the first fine-tune in the Hermes series of models to be trained on a fully open-source dataset!

OpenHermes was trained on 242,000 entries of primarily GPT-4-generated data, sourced from open datasets across the AI landscape, including:

- GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
- WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
- Airoboros GPT-4 (v1.0), by JonDurbin
- Camel-AI's domain expert datasets, by the Camel-AI Team
- CodeAlpaca, by Sahil2801
- GPT4-LLM and Unnatural Instructions, by Microsoft

Filtering included the removal of OpenAI refusals, disclaimers, and "As an AI"-style examples, among other cleaning steps.

The base dataset mix the model was trained on is identical to Nous-Hermes', minus the Nous-Instruct and PDACTL datasets, which were private.
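
As a rough illustration, the refusal/disclaimer filtering described above might look like the sketch below. The marker phrases and record schema are assumptions for illustration only, not the exact script used to build this dataset:

```python
# Hypothetical sketch of refusal/disclaimer filtering; the phrase list and
# the {"instruction": ..., "response": ...} schema are assumptions.
REFUSAL_MARKERS = [
    "as an ai",
    "as a language model",
    "i cannot fulfill",
    "openai",
]

def is_clean(example: dict) -> bool:
    """Keep an instruction/response pair only if the response contains no refusal-style markers."""
    response = example.get("response", "").lower()
    return not any(marker in response for marker in REFUSAL_MARKERS)

# Example usage with a list of instruction/response records:
# filtered = [ex for ex in dataset if is_clean(ex)]
```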

The WANDB Project is public and can be examined at this link: https://wandb.ai/teknium1/openhermes/runs/openhermes-v2-fullft-13b

## Benchmark Information

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 3
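
For reference, a minimal sketch of how these hyperparameters could be expressed as Hugging Face `TrainingArguments` (the output path is a placeholder, and this is not the exact training script):

```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
# Distributed training over 8 GPUs would be launched separately, e.g. with
# `torchrun --nproc_per_node 8 train.py`, giving an effective batch size of
# 2 (per device) x 8 (devices) x 8 (gradient accumulation) = 128.
training_args = TrainingArguments(
    output_dir="openhermes-13b-output",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=300,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```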

### Framework versions

- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
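
A minimal example of loading the model with the libraries listed above. The repository id is assumed to be this model card's repo (`teknium/OpenHermes-13B`), and the prompt format is only illustrative, as the card does not specify one:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for this model card; adjust if loading from a local path.
model_id = "teknium/OpenHermes-13B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B in 16-bit fits on a single large GPU
    device_map="auto",
)

# Illustrative Alpaca-style prompt (suggested by the "alpaca" tag above); verify
# the recommended format before relying on it.
prompt = "### Instruction:\nExplain what a fine-tune is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```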