---
library_name: transformers
license: other
datasets:
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
language:
- en
inference:
  parameters:
    do_sample: true
    temperature: 0.8
    top_p: 0.95
    top_k: 40
    min_p: 0.8
    max_new_tokens: 250
    repetition_penalty: 1.1
---

# Hercules-Mini-1.8B 

<!-- Provide a quick summary of what the model is/does. -->
We fine-tuned tau-1.8B on a high-quality data mix for general-purpose assistants. A DPO version of this model will be released soon.


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model has capabilities in math, coding, writing, and more. We fine-tuned it on a high-quality data mix for general-purpose assistants.

- **Developed by:** M4-ai
- **Language(s) (NLP):** English (Chinese may also be partially supported, but is untested)
- **License:** Tongyi Qianwen License
- **Finetuned from model:** [tau-1.8B](https://huggingface.co/M4-ai/tau-1.8B)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

General-purpose assistance, question answering, chain-of-thought reasoning, etc.
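
Below is a minimal inference sketch with Transformers, using the sampling parameters recommended in this card's metadata. The repository id `M4-ai/Hercules-Mini-1.8B` and the presence of a chat template in the tokenizer are assumptions; adjust them to match the actual repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; adjust if the model lives elsewhere.
model_id = "M4-ai/Hercules-Mini-1.8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 training regime
    device_map="auto",
)

# Assumes the tokenizer ships a chat template.
messages = [{"role": "user", "content": "Explain the Pythagorean theorem."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings taken from the inference parameters in this card's
# metadata. Note: min_p requires a recent transformers release.
output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    min_p=0.8,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```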

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.

## Evaluation
Coming soon


## Training Details

### Training Data

- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k


### Training Hyperparameters

- **Training regime:** bf16 non-mixed precision

## Technical Specifications

### Hardware

We trained on eight Kaggle TPU cores, with a global batch size of 128 and a sequence length of 2048.