---
language:
  - en
license: apache-2.0
tags:
  - open-source
  - code
  - math
  - chemistry
  - biology
  - text-generation
  - question-answering
datasets:
  - Locutusque/OpenCerebrum-dpo
pipeline_tag: text-generation
---

# OpenCerebrum-1.0-7B-DPO

OpenCerebrum-1.0-7B-DPO is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset, with the aim of replicating the capabilities of Aether Research's proprietary Cerebrum model.

The model was fine-tuned on approximately 21,000 examples across 6 datasets spanning coding, math, science, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels.

I used the ChatML prompt format to train this model.
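
For reference, a ChatML prompt is structured as follows (the system message shown here is illustrative, not a required string):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```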

## Model Details

- **Base Model:** alpindale/Mistral-7B-v0.2-hf
- **Parameters:** 7 billion
- **Fine-Tuning Dataset Size:** ~21,000 examples
- **Fine-Tuning Data:** Amalgamation of 6 public datasets
- **Language:** English
- **License:** Apache 2.0
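
A minimal inference sketch with the Hugging Face `transformers` library is shown below. The repo id is an assumption based on the model name (substitute the actual repository path), and it assumes the tokenizer ships a ChatML chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/OpenCerebrum-1.0-7B-DPO"  # assumed repo id -- adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# The model was trained on ChatML, so build the prompt with the chat template.
messages = [{"role": "user", "content": "Write a Python function that checks for primality."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```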

## Quants

## Intended Use

OpenCerebrum-1.0-7B-DPO is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.

However, as an open-source replication trained on a different, public dataset than the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.

## Limitations and Biases

- The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.
- With ~21,000 training examples, the fine-tuning data is still limited compared to the data behind the proprietary Cerebrum model.
- As a 7B-parameter model, it faces computational and memory constraints compared to larger models.

## Training Details

The model was fine-tuned on the 6 datasets listed in the Datasets section, totaling approximately 21,000 examples. In the future, the fine-tuning dataset may be condensed to more closely match the ~500-example dataset reportedly used for the original Cerebrum model.
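
The model name and the Locutusque/OpenCerebrum-dpo dataset indicate a DPO stage. The sketch below shows roughly what such a run could look like with the `trl` library (recent versions); it is illustrative only, not the author's actual recipe, and the hyperparameters and dataset columns are assumptions.

```python
# Illustrative DPO fine-tuning sketch using trl -- not the author's actual recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alpindale/Mistral-7B-v0.2-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# DPOTrainer expects "prompt", "chosen", and "rejected" columns (assumed here).
dataset = load_dataset("Locutusque/OpenCerebrum-dpo", split="train")

args = DPOConfig(output_dir="opencerebrum-1.0-7b-dpo", beta=0.1)  # beta is an assumed value
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```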