---
language:
- en
license: other
library_name: transformers
datasets:
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
inference:
  parameters:
    do_sample: true
    temperature: 0.8
    top_p: 0.95
    top_k: 40
    max_new_tokens: 250
    repetition_penalty: 1.1
---

# neural-chat-mini-v2.2-1.8B

<!-- Provide a quick summary of what the model is/does. -->

We fine-tuned tau-1.8B using SFT and DPOP on a high-quality data mix for general-purpose assistants.
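
For reference, DPOP (DPO-Positive) extends the DPO objective with a penalty that stops the policy from losing likelihood on the preferred completion. The card does not spell out the exact formulation, so the standard one from the paper that introduced DPOP is shown here for orientation, with $y_w$/$y_l$ the preferred/rejected completions and $\lambda$ weighting the penalty:

$$
\mathcal{L}_{\text{DPOP}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta\!\left(\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\text{ref}}(y_w\mid x)} - \log\frac{\pi_\theta(y_l\mid x)}{\pi_{\text{ref}}(y_l\mid x)} - \lambda\cdot\max\!\left(0,\ \log\frac{\pi_{\text{ref}}(y_w\mid x)}{\pi_\theta(y_w\mid x)}\right)\right)\right)\right]
$$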

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model has capabilities in math, coding, writing, and more. We fine-tuned it on a high-quality data mix for general-purpose assistants.

- **Developed by:** M4-ai
- **Language(s) (NLP):** English (Chinese may also work, but is untested)
- **License:** Tongyi Qianwen License
- **Finetuned from model:** [tau-1.8B](https://huggingface.co/M4-ai/tau-1.8B)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

General-purpose assistant, question answering, chain-of-thought reasoning, etc. A minimal inference sketch follows.
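
The sketch below loads the model with transformers and generates with the sampling parameters recommended in this card's metadata. The repo id `M4-ai/neural-chat-mini-v2.2-1.8B` and the presence of a chat template in the tokenizer config are assumptions; adjust both to match the actual checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/neural-chat-mini-v2.2-1.8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 training regime
    device_map="auto",
)

# Assumes the tokenizer ships a chat template; if not, fall back to the
# base model's prompt format.
messages = [{"role": "user", "content": "Explain the chain rule in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings mirror the card's recommended inference parameters.
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```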

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## Training Details

### Training Data

- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- mlabonne/orpo-dpo-mix-40k
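
Of these, mlabonne/orpo-dpo-mix-40k is a preference dataset and presumably supplied the pairs for the DPOP stage, while the remaining instruction datasets were used for SFT.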

## Evaluations

Coming soon.

#### Training Hyperparameters

- **Training regime:** bf16 non-mixed precision
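
("Non-mixed" here presumably means that weights, gradients, and optimizer state were all kept in bfloat16, rather than a mixed-precision setup that maintains fp32 master weights.)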

## Technical Specifications

#### Hardware

We trained on 8 Kaggle TPUs with a global batch size of 128 and a sequence length of 2048.
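
Assuming pure data parallelism across the 8 TPU cores, a global batch of 128 works out to 16 sequences per core, and each optimizer step covers 128 × 2048 = 262,144 tokens at full sequence length.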