---
license: llama3.1
language:
- en
pipeline_tag: text-generation
---
# Deepthought-8B

Deepthought-8B is a small and capable reasoning model built on LLaMA-3.1 8B, designed to make AI reasoning more transparent and controllable. Despite its relatively small size, it achieves sophisticated reasoning capabilities that rival much larger models.

## Model Description

Deepthought-8B approaches problem-solving by breaking its thinking down into clear, distinct, documented steps. The model emits its reasoning as structured JSON, making its decision-making process easier to follow and validate.

### Key Features

- **Transparent Reasoning**: Step-by-step documentation of the thought process
- **Programmable Approach**: Customizable reasoning patterns without model retraining
- **Test-time Compute Scaling**: Flexible reasoning depth based on task complexity
- **Efficient Scale**: Runs on 16GB+ VRAM
- **Structured Output**: JSON-formatted reasoning chains for easy integration

Try out Deepthought-8B on our Ruliad interface: https://chat.ruliad.co

## Technical Requirements

- Python 3.8+
- PyTorch
- Transformers library
- 16GB+ VRAM
- Optional: Flash Attention 2 for improved performance

## Installation

```bash
pip install torch transformers
# Optional: Install Flash Attention 2 for better performance
pip install flash-attn
```

## Usage

1. First, set your HuggingFace token as an environment variable:
```bash
export HF_TOKEN=your_token_here
# HF_HUB_ENABLE_HF_TRANSFER=1 speeds up downloads but requires: pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```

2. Load the model in your Python code (a short generation sketch follows these steps):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Initialize the model
model_name = "ruliad/deepthought-8b-llama-v0.01-alpha"
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    add_bos_token=False,
    trust_remote_code=True,
    padding_side="left",
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",  # Use "eager" (or omit) if flash_attn is not installed
    use_cache=True,
    trust_remote_code=True,
)
```

3. Run the provided example script:
```bash
python deepthought_inference.py
```
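
Once the model and tokenizer are loaded (step 2 above), you can generate a response directly. Below is a minimal sketch, assuming the tokenizer ships a chat template; the example question and sampling settings are illustrative, and the reference prompt format lives in the bundled `deepthought_inference.py`:

```python
# Minimal generation sketch; continues from the loading snippet in step 2.
# The chat-template usage and sampling settings are assumptions -- see
# deepthought_inference.py for the reference flow.
messages = [
    {"role": "user", "content": "How many r's are in 'strawberry'?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.7,
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```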

## Example Output

The model provides structured reasoning in JSON format:

```json
{
  "step": 1,
  "type": "problem_understanding",
  "thought": "Understanding the user's objective for the task."
}
```

Each reasoning chain includes multiple steps:
1. Problem understanding
2. Data gathering
3. Analysis
4. Calculation (when applicable)
5. Verification
6. Conclusion drawing
7. Implementation
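
Because each step is a JSON object, reasoning chains are straightforward to consume programmatically. A minimal sketch, assuming the output can be split into one JSON object per step (the exact framing of the model's raw output may differ):

```python
import json

# Hypothetical raw output: one JSON object per reasoning step.
raw_steps = [
    '{"step": 1, "type": "problem_understanding", "thought": "Understanding the objective."}',
    '{"step": 2, "type": "conclusion_drawing", "thought": "Summarizing the answer."}',
]

for raw in raw_steps:
    step = json.loads(raw)
    print(f"Step {step['step']} ({step['type']}): {step['thought']}")
```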

## Performance

Deepthought-8B demonstrates strong performance across a range of tasks:
- Step-by-step problem-solving
- Coding and mathematical tasks
- Instruction following with transparent reasoning
- Scalable performance with test-time compute

## Limitations

The current release remains limited in:
- Complex mathematical reasoning
- Long-context processing
- Edge case handling

## License

The model is available under a commercial license for enterprise use.

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{Deepthought2024,
  author = {Ruliad},
  title = {Deepthought-8B: A Small and Capable Reasoning Model},
  year = {2024},
  publisher = {Ruliad}
}
```

## Support

For questions and feedback:
- Twitter: @ruliad_ai
- Email: team@ruliad.co