---
library_name: transformers
license: other
datasets:
  - Open-Orca/SlimOrca
  - m-a-p/Code-Feedback
  - MaziyarPanahi/WizardLM_evol_instruct_V2_196k
  - camel-ai/math
  - camel-ai/physics
  - camel-ai/biology
  - camel-ai/chemistry
  - LDJnr/Capybara
  - jondurbin/airoboros-3.2
  - microsoft/orca-math-word-problems-200k
language:
  - en
inference:
  parameters:
    do_sample: true
    temperature: 0.8
    top_p: 0.95
    top_k: 40
    min_p: 0.8
    max_new_tokens: 250
    repetition_penalty: 1.1
---

# Orca-2.0-Tau-1.8B

We fine-tuned tau-1.8B on a high-quality data mix for general-purpose assistants. A DPO version of this model will be released soon.

## Model Details

### Model Description

This model has capabilities in math, coding, writing, and more. We fine-tuned it on a high-quality data mix for general-purpose assistants.

- Developed by: M4-ai
- Language(s) (NLP): English; the base model may also retain some Chinese capability
- License: tongyi-qianwen
- Finetuned from model: tau-1.8B

## Uses

General-purpose assistant, question answering, chain-of-thought reasoning, etc.
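
Below is a minimal inference sketch using the sampling parameters from the metadata above. The repo id `M4-ai/Orca-2.0-Tau-1.8B` and the presence of a built-in chat template are assumptions, not confirmed details of this card; adjust them to the actual repository.

```python
# Minimal inference sketch (repo id and chat template are assumptions; verify before use).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/Orca-2.0-Tau-1.8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`
)

messages = [{"role": "user", "content": "Explain the Pythagorean theorem in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters taken from the `inference.parameters` block in the metadata.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    min_p=0.8,  # requires a recent transformers release with min_p support
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```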

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## Evaluation

Coming soon

## Training Details

### Training Data

- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
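
The exact mixing ratios and preprocessing are not documented in this card. For illustration only, here is a sketch of how such a mix could be assembled with the `datasets` library; the uniform interleaving weights and the JSON flattening step are placeholders, not the actual recipe.

```python
# Illustrative only: the real mixing ratios and preprocessing are undocumented.
# Each source has its own schema, so this sketch first flattens every example
# into a single JSON-string column before interleaving.
import json
from datasets import load_dataset, interleave_datasets

sources = ["Open-Orca/SlimOrca", "m-a-p/Code-Feedback", "LDJnr/Capybara"]
parts = []
for name in sources:
    ds = load_dataset(name, split="train")
    ds = ds.map(
        lambda ex: {"raw": json.dumps(ex, default=str)},
        remove_columns=ds.column_names,
    )
    parts.append(ds)

# Uniform placeholder weights, not the documented recipe.
mix = interleave_datasets(parts, probabilities=[1 / len(parts)] * len(parts), seed=42)
print(mix)
```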

### Training Hyperparameters

- Training regime: bf16 non-mixed precision (see the sketch below)
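
As an illustration of what "bf16 non-mixed precision" usually means in the Transformers ecosystem (the weights themselves live in bfloat16, rather than fp32 master weights with bf16 autocast), here is a minimal sketch; the base-model repo id `M4-ai/tau-1.8B` is an assumption.

```python
# Sketch of "bf16 non-mixed precision": the model weights are stored and
# updated in bfloat16, as opposed to mixed precision (fp32 master weights
# plus bf16 autocast in the forward pass).
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "M4-ai/tau-1.8B",            # assumed repo id for the base model
    torch_dtype=torch.bfloat16,  # weights loaded directly in bf16
)
print(next(base.parameters()).dtype)  # torch.bfloat16
```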

## Technical Specifications

### Hardware

We used 8 Kaggle TPUs and trained at a global batch size of 128 with a sequence length of 2048.
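
For clarity, a small sketch of how these numbers relate; the per-device batch size and gradient-accumulation split below are assumptions chosen to illustrate the arithmetic, not documented values.

```python
# One possible factorization of the stated global batch size (illustrative only).
num_devices = 8          # Kaggle TPUs, as stated above
per_device_batch = 16    # assumed; not documented
grad_accum_steps = 1     # assumed; not documented

global_batch = num_devices * per_device_batch * grad_accum_steps
assert global_batch == 128  # matches the value stated above

tokens_per_step = global_batch * 2048  # sequence length 2048 -> 262,144 tokens/step
print(tokens_per_step)
```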