
neural-chat-mini-v2.2-1.8B

We fine-tuned tau-1.8B with supervised fine-tuning (SFT) and DPO-Positive (DPOP) on a high-quality data mix for general-purpose assistants.
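For reference, DPO-Positive (DPOP) extends standard DPO with a penalty term that keeps the log-probability of the preferred completion from dropping below the reference model's, a failure mode plain DPO can exhibit on preference pairs with small edit distance. A sketch of the objective as given in the DPO-Positive paper (policy $\pi_\theta$, reference $\pi_{\text{ref}}$, chosen/rejected completions $y_w$/$y_l$, hyperparameters $\beta$ and $\lambda$):

$$\mathcal{L}_{\text{DPOP}} = -\log \sigma\left(\beta\left[\log\frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \log\frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} - \lambda \max\left(0,\ \log\frac{\pi_{\text{ref}}(y_w \mid x)}{\pi_\theta(y_w \mid x)}\right)\right]\right)$$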

Model Details

Model Description

This model has capabilities in math, coding, writing, and more. We fine-tuned it on a high-quality data mix for general-purpose assistants.

  • Developed by: M4-ai
  • Language(s) (NLP): English, and possibly Chinese (inherited from the base model)
  • License: tongyi-qianwen
  • Finetuned from model: tau-1.8B

Uses

General-purpose assistance, question answering, chain-of-thought reasoning, etc. A quick-start example follows.
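As a quick-start illustration (a minimal sketch, not the authors' code), the model can be loaded with the transformers chat-template API; the chat format is read from the tokenizer config, so `apply_chat_template` handles the prompt layout:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/neural-chat-mini-v2.2-1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Explain chain-of-thought prompting in two sentences."}
]
# apply_chat_template wraps the conversation in the model's chat format
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```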

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

Training Details

Training Data

  • Open-Orca/SlimOrca
  • m-a-p/Code-Feedback
  • MaziyarPanahi/WizardLM_evol_instruct_V2_196k
  • camel-ai/math
  • camel-ai/physics
  • camel-ai/biology
  • camel-ai/chemistry
  • LDJnr/Capybara
  • jondurbin/airoboros-3.2
  • microsoft/orca-math-word-problems-200k
  • mlabonne/orpo-dpo-mix-40k

Evaluations

Coming soon.

Training Hyperparameters

  • Training regime: bf16 non-mixed precision
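To make the training regime concrete, below is a hypothetical sketch of the SFT stage in pure (non-mixed) bf16, where the model weights themselves are bfloat16 rather than fp32 master weights under autocast. This is not the authors' script: the trl trainer, the `M4-ai/tau-1.8B` repo id, and the exact `SFTConfig` argument names (which vary across trl versions) are all assumptions.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

# Non-mixed bf16: the weights themselves are bfloat16 throughout training,
# rather than fp32 master weights with bf16 autocast.
base_id = "M4-ai/tau-1.8B"  # assumed repo id for the tau-1.8B base model
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# SlimOrca (one of the listed datasets) ships ShareGPT-style "conversations";
# convert to the role/content "messages" format the trainer's chat template expects.
role_map = {"system": "system", "human": "user", "gpt": "assistant"}

def to_messages(example):
    return {"messages": [
        {"role": role_map[turn["from"]], "content": turn["value"]}
        for turn in example["conversations"]
    ]}

dataset = load_dataset("Open-Orca/SlimOrca", split="train").map(
    to_messages, remove_columns=["conversations"]
)

args = SFTConfig(
    output_dir="neural-chat-mini-sft",
    per_device_train_batch_size=16,  # 16 sequences x 8 cores = global batch 128
    max_seq_length=2048,             # sequence length reported on this card
    bf16=True,
)
trainer = SFTTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```

Pure bf16 halves weight memory relative to fp32 master weights, which matters on accelerator-class hardware, at the cost of some optimizer precision.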

Technical Specifications

Hardware

We trained on 8 Kaggle TPU cores with a global batch size of 128 and a sequence length of 2048, i.e. 128 × 2048 = 262,144 tokens per optimizer step.
