---
base_model:
- grimjim/magnum-consolidatum-v1-12b
- grimjim/mistralai-Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
license: apache-2.0
---
# Magnolia-v1-12B

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The base is itself a merge of two models trained for variety in text generation.
Instruct was added in at low weight in order to increase the steerability of the model; as a consequence, safety behavior has also been reinforced.

Tested at temperature 0.7 and minP 0.01, with ChatML prompting.
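
For reference, the sketch below shows one way to reproduce those settings with the `transformers` library. The repo id, the placeholder prompts, and the assumption that this merge's tokenizer handles raw ChatML control tokens are illustrative, not part of the original card; adjust to your setup.

```python
# Minimal inference sketch for the tested settings (temperature 0.7, min-p 0.01,
# ChatML prompting). Repo id and prompts below are assumed placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Magnolia-v1-12B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a ChatML-style prompt by hand; system_prompt is a placeholder.
system_prompt = "You are a creative writing assistant."
user_message = "Write the opening paragraph of a mystery set in a lighthouse."
prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # settings the card reports testing with
    min_p=0.01,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```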

Mistral Nemo models are prone to repetition in general. For this model at least, such issues can be mitigated somewhat with an additional system prompt, e.g.:
```
No passage shall exceed 10 lines of text, with turns limited to a maximum of 5 lines per speaker to ensure snappy and engaging dialog and action.
Ensure that all punctuation rules are adhered to without the introduction of spurious intervening spaces.
Avoid redundant phrasing and maintain forward narrative progression by utilizing varied sentence structure, alternative word choices, and active voice.
Employ descriptive details judiciously, ensuring they serve a purpose in advancing the story or revealing character or touching upon setting.
```
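
When using the sketch above, directives like these belong in the ChatML system turn, i.e. in place of the placeholder `system_prompt`.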

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
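
For intuition, SLERP (spherical linear interpolation) blends two weight tensors along the arc between their directions rather than along a straight line, which preserves weight norms better than a plain weighted average. A minimal sketch of the operation on flattened tensors (not mergekit's exact implementation) is below.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between flattened tensors v0 and v1.

    t=0 returns v0 and t=1 returns v1; intermediate t follows the arc
    between the two directions. Falls back to plain lerp when the
    tensors are nearly colinear, where slerp is numerically unstable.
    """
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    if abs(dot) > 1.0 - 1e-5:  # nearly parallel: lerp is stable here
        return (1.0 - t) * v0 + t * v1
    theta = np.arccos(dot)     # angle between the two tensor directions
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```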

### Models Merged

The following models were included in the merge:
* [grimjim/magnum-consolidatum-v1-12b](https://huggingface.co/grimjim/magnum-consolidatum-v1-12b)
* [grimjim/mistralai-Mistral-Nemo-Instruct-2407](https://huggingface.co/grimjim/mistralai-Mistral-Nemo-Instruct-2407)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
  - model: grimjim/magnum-consolidatum-v1-12b
merge_method: slerp
base_model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
parameters:
  t:
    - value: 0.1
dtype: bfloat16
```
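
Note that `t` is given here as a single scalar applied to all tensors; mergekit also accepts a list of values (optionally scoped with a tensor-name `filter`) to vary the interpolation factor across layers.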