wolfram committed
Commit b05510a
1 Parent(s): f6e2cf9

Update README.md


[wolfram/miqu-1-103b-5.0bpw-h6-exl2 · Hugging Face](https://huggingface.co/wolfram/miqu-1-103b-5.0bpw-h6-exl2)

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ tags:
 
 - HF: wolfram/miqu-1-103b
 - GGUF: mradermacher's [static quants](https://huggingface.co/mradermacher/miqu-1-103b-GGUF) | [weighted/imatrix quants](https://huggingface.co/mradermacher/miqu-1-103b-i1-GGUF)
-- EXL2: LoneStriker's [2.4bpw](https://huggingface.co/LoneStriker/miqu-1-103b-2.4bpw-h6-exl2) | [3.0bpw](https://huggingface.co/LoneStriker/miqu-1-103b-3.0bpw-h6-exl2) | [3.5bpw](https://huggingface.co/LoneStriker/miqu-1-103b-3.5bpw-h6-exl2)
+- EXL2: [wolfram/miqu-1-103b-5.0bpw-h6-exl2](https://huggingface.co/wolfram/miqu-1-103b-5.0bpw-h6-exl2) | LoneStriker's [2.4bpw](https://huggingface.co/LoneStriker/miqu-1-103b-2.4bpw-h6-exl2) | [3.0bpw](https://huggingface.co/LoneStriker/miqu-1-103b-3.0bpw-h6-exl2) | [3.5bpw](https://huggingface.co/LoneStriker/miqu-1-103b-3.5bpw-h6-exl2)
 
 This is a 103b frankenmerge of [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with itself using [mergekit](https://github.com/cg123/mergekit).
 
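For context, the "interleaved self-merge" mentioned in the last README line above is typically expressed as a mergekit passthrough config. The diff does not include the actual recipe, so the following is only a minimal sketch with placeholder layer ranges, not the real miqu-1-103b configuration:

```yaml
# Hypothetical mergekit passthrough config illustrating an interleaved self-merge.
# The layer ranges below are placeholders, NOT the actual miqu-1-103b recipe.
merge_method: passthrough
dtype: float16
slices:
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 40]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [20, 60]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [40, 80]
```

With a config like this saved as `config.yml`, running `mergekit-yaml config.yml ./output-model` would assemble the merged checkpoint; the overlapping `layer_range` entries are what produce the interleaving of the base model with itself and push the parameter count above the original 70b.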