---
base_model:
- 152334H/miqu-1-70b-sf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# miqu-1-103b

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6303ca537373aacccd85d8a7/LxO9j7OykuabKLYQHIodG.jpeg)

- HF: [wolfram/miqu-1-103b](https://huggingface.co/wolfram/miqu-1-103b)
- GGUF: [wolfram/miqu-1-103b-GGUF](https://huggingface.co/wolfram/miqu-1-103b-GGUF) | mradermacher's [static quants](https://huggingface.co/mradermacher/miqu-1-103b-GGUF) | [weighted/imatrix quants](https://huggingface.co/mradermacher/miqu-1-103b-i1-GGUF)
- EXL2: [wolfram/miqu-1-103b-5.0bpw-h6-exl2](https://huggingface.co/wolfram/miqu-1-103b-5.0bpw-h6-exl2) | LoneStriker's [2.4bpw](https://huggingface.co/LoneStriker/miqu-1-103b-2.4bpw-h6-exl2) | [3.0bpw](https://huggingface.co/LoneStriker/miqu-1-103b-3.0bpw-h6-exl2) | [3.5bpw](https://huggingface.co/LoneStriker/miqu-1-103b-3.5bpw-h6-exl2)

This is a 103B frankenmerge of [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b), created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with itself using [mergekit](https://github.com/arcee-ai/mergekit).

Inspired by [Midnight-Rose-103B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-103B-v2.0.3).

Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit), the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub!

Thanks for the quants, [Michael Radermacher](https://huggingface.co/mradermacher) and [Lone Striker](https://huggingface.co/LoneStriker)!

Also available:

- [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b) – Miqu's older, bigger twin sister; same Miqu, inflated to 120B.
- [miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0) – Miqu's younger, fresher sister; a new and improved Goliath-like merge of Miqu and lzlv.

## Model Details

- Max Context: 32768 tokens
- Layers: 120
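
The layer count follows from the merge configuration shown below under Merge Details: three 40-layer slices of the 80-layer base model are stacked, with overlapping ranges. A quick sanity-check sketch in Python (the parameter estimate is a rough approximation, not an official figure):

```python
# Derive the stats above from the slice ranges in mergekit_config.yml.
slices = [(0, 40), (20, 60), (40, 80)]  # layer_range entries, [start, end)

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 120, matching "Layers: 120" above

# miqu-1-70b is an 80-layer, ~70B-parameter model; scaling by 120/80
# gives ~105B, and counting embeddings/LM head only once lands near
# the ~103B that gives this merge its name.
print(f"{70e9 * total_layers / 80:.3g}")  # ~1.05e+11
```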

### Prompt template: Mistral

```
<s>[INST] {prompt} [/INST]
```
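
For a concrete usage sketch with the `transformers` library (prompt and generation settings are illustrative; the Llama tokenizer prepends `<s>` automatically, so only the `[INST]` wrapper is spelled out):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wolfram/miqu-1-103b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-style instruct formatting, per the template above; the
# tokenizer adds the leading <s> itself.
prompt = "[INST] What is a frankenmerge? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```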

See also: [🐺🐦‍⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stitches the selected layer slices together verbatim rather than averaging or blending weights.

### Models Merged

The following models were included in the merge:

- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)

### Configuration

The following YAML configuration was used to produce this model:

<details><summary>mergekit_config.yml</summary>

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 40]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [20, 60]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [40, 80]
    model: 152334H/miqu-1-70b-sf
```

</details>
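
To reproduce the merge, the config above can be fed to mergekit. A minimal sketch using the Python entry points from mergekit's README (the output path is a placeholder, and option names can drift between mergekit versions):

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("mergekit_config.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    "./miqu-1-103b",  # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base tokenizer over
        lazy_unpickle=True,              # lower peak RAM while reading shards
    ),
)
```

The `mergekit-yaml` CLI accepts the same file: `mergekit-yaml mergekit_config.yml ./miqu-1-103b`.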

## Credits & Special Thanks

- original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
  - ⭐⭐⭐ **[Use their newer, better, official models here!](https://console.mistral.ai/)** ⭐⭐⭐
- leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
- f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [sophosympatheia/Midnight-Rose-103B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-103B-v2.0.3)

### Support

- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!

## Disclaimer

*This model contains leaked weights and due to its content it should not be used by anyone.* 😜

But seriously:

### License

**What I *know*:** [Weights produced by a machine are not copyrightable](https://www.reddit.com/r/LocalLLaMA/comments/1amc080/psa_if_you_use_miqu_or_a_derivative_please_keep/kpmamte/) so there is no copyright owner who could grant permission or a license to use, or restrict usage, once you have acquired the files.

### Ethics

**What I *believe*:** All generative AI, including LLMs, only exists because it is trained mostly on human data (both public domain and copyright-protected, most likely acquired without express consent) and possibly synthetic data (which is ultimately derived from human data, too). It is only fair if something that is based on everyone's knowledge and data is also freely accessible to the public, the actual creators of the underlying content. Fair use, fair AI!