
StableLM 2 (1.6B) fine-tuned on OpenHermes


Base model info

Stable LM 2 1.6B is a 1.6 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multilingual and code datasets for two epochs.

Dataset info

The OpenHermes dataset is composed of 242,000 entries of primarily GPT-4 generated data from open datasets across the AI landscape. OpenHermes 13B was the first fine-tune in the Hermes series built on a fully open-source dataset. The mix includes:

  • GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
  • WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
  • Airoboros GPT-4 (v1.0), by JonDurbin
  • Camel-AI's domain expert datasets, by the Camel-AI Team
  • CodeAlpaca, by Sahil2801
  • GPT4-LLM and Unnatural Instructions, by Microsoft

Filtering included the removal of OpenAI refusals, disclaimers, and "As an AI" type examples, among others. The base dataset mix is identical to the original Nous-Hermes', minus the Nous-Instruct and PDACTL datasets, which were private.
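For reference, here is a minimal sketch of pulling and inspecting the dataset with the 🤗 datasets library. The repo id teknium/openhermes and the single "train" split are assumptions; this card does not name the exact dataset repo.

```python
# Minimal, untested sketch: download the OpenHermes data and peek at one record.
# The repo id "teknium/openhermes" and the "train" split are assumptions.
from datasets import load_dataset

ds = load_dataset("teknium/openhermes", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # first instruction/response record
```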

Usage

WIP
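While the official usage example is a work in progress, the following is a minimal, untested sketch of loading the checkpoint with 🤗 transformers. The repo id mrm8488/stablelm2-1.6b-ft-openhermes is taken from this card; trust_remote_code=True is assumed because the repo contains custom code (see the note below), and the "### Human / ### Assistant" prompt format is illustrative only, not a documented template for this fine-tune.

```python
# Minimal, untested sketch: load the fine-tuned checkpoint and generate a reply.
# Assumptions: repo id from this card; trust_remote_code=True for the repo's
# custom code; the prompt format below is illustrative, not documented.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/stablelm2-1.6b-ft-openhermes"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the weights are stored in BF16
    trust_remote_code=True,
).to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "### Human: Explain what a decoder-only language model is.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```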

Evaluations

WIP

Model weights: Safetensors, 1.64B parameters, BF16.
Note: the serverless Inference API does not yet support model repos that contain custom code.

Dataset used to train mrm8488/stablelm2-1.6b-ft-openhermes: OpenHermes.