---
base_model:
- Qwen/Qwen2.5-72B
tags:
- roleplay
- storywriting
- qwen2.5
- finetune
- transformers
- pytorch
---

# Zeus Labs ~ Chronos-Platinum-72B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630417380907b9a115c6aa9f/G05mAhqcp4S_WBfE2vBLl.png)

This is the Qwen 2.5 72B base model, trained for two epochs on the Chronos Divergence dataset using ChatML. It works well for roleplaying and storywriting, as well as general assistant tasks.

## Instruct Template

This model uses `ChatML`; an example is below. It is available as a preset in many frontends.

```
<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
Hi! I'm an AI assistant, designed to help people like you with all sorts of tasks. Is there anything you need help with?<|im_end|>
<|im_start|>user
I was wondering how transformers work?<|im_end|>
<|im_start|>assistant
```
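
For reference, here is a minimal sketch of building this prompt programmatically with the `transformers` tokenizer. The repo id below is an assumption; adjust it to the actual model path.

```python
from transformers import AutoTokenizer

# Assumed repo id; substitute the actual Hugging Face path if it differs.
tokenizer = AutoTokenizer.from_pretrained("ZeusLabs/Chronos-Platinum-72B")

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "I was wondering how transformers work?"},
]

# If the tokenizer ships a ChatML chat template, this reproduces the format
# above, ending with an open <|im_start|>assistant turn for generation.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```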

## Quantizations
Please note that we tested this model with a 5.0bpw EXL2 quant. Results are not expected to be the same when going below this quantization level. Thanks to our model quantizers!

#### LlamaCPP (GGUF)
[bartowski](https://huggingface.co/bartowski/Chronos-Platinum-72B-GGUF)  

[mradermacher](https://huggingface.co/mradermacher/Chronos-Platinum-72B-i1-GGUF)
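
If you are running the GGUF quants, a rough loading sketch with `llama-cpp-python` might look like the following. The filename and context size are placeholders; use the quant file you actually downloaded.

```python
from llama_cpp import Llama

# Placeholder path; substitute the GGUF file downloaded from one of the
# repos above.
llm = Llama(
    model_path="Chronos-Platinum-72B-Q5_K_M.gguf",
    n_ctx=8192,       # adjust to your VRAM/RAM budget
    n_gpu_layers=-1,  # offload all layers to GPU if possible
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello there!"},
    ],
    temperature=0.9,
)
print(out["choices"][0]["message"]["content"])
```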

#### Exllama2
[bartowski](https://huggingface.co/bartowski/Chronos-Platinum-72B-exl2)

## Sampling Settings
Here are some settings that work well with this model:
```
Temp -> 0.7 - 1.2
Min P -> 0.025 - 0.05 [apply Temp in sampler order, not last]
Presence Penalty -> 1.0
Repetition Penalty Range -> 4000
```
A higher temperature gives more unique, less repetitive output. Please do not take these settings as the "best"; your system prompt matters significantly, and if you're roleplaying, use the Basic system prompt in SillyTavern. You can also try other samplers like Top P.

**Note that Presence Penalty works with Repetition Penalty Range.**
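
As a rough illustration of how these settings map onto `transformers` generation: temperature and Min P are native `generate()` arguments in recent releases, while Presence Penalty and Repetition Penalty Range are applied by frontends and backends such as SillyTavern and llama.cpp, not by `transformers` itself. The repo id is again an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; a 72B model in bf16 needs well over 100 GB of memory,
# so device_map="auto" with multiple GPUs or CPU offload is expected.
model_id = "ZeusLabs/Chronos-Platinum-72B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Write a short scene set on a storm-wracked airship."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    do_sample=True,
    temperature=0.9,  # within the suggested 0.7 - 1.2 band
    min_p=0.05,       # supported natively in recent transformers releases
    max_new_tokens=512,
)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```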

## Credit
Thank you to my team consisting of [@ToastyPigeon](https://huggingface.co/ToastyPigeon), [@Fizzarolli](https://huggingface.co/Fizzarolli), and myself [@elinas](https://huggingface.co/elinas).

Additional thanks to [@AlpinDale](https://huggingface.co/AlpinDale) and the rest of the PygmalionAI team for graciously providing the compute to finetune this model!
Thank you to [anthracite-org](https://huggingface.co/anthracite-org) as well for sponsoring this model.

## Additional Details 

We used a combination of provided logs and WizardLM Evol-Instruct data, both cleaned up and de-slopped.

Thanks to Anthropic and OpenAI for the models used to generate synthetic and partially synthetic data to train this model.

Thanks to Elon Musk for being based enough to train AI that compares to the top models.

If you have any questions or concerns, please post in the community tab.

DISCLAIMER: Outputs generated by the model are not reflective of our views.