---
tags:
- merge
---
# Miquella 120B GGUF
GGUF quantized weights for [miquella-120b](https://huggingface.co/alpindale/miquella-120b). Contains *all* quants.

I used importance matrices generated from the Q8_0 quant of the model. The calibration dataset used for that was random junk, for optimal quality.
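
For reference, here is a minimal sketch of how such an importance matrix can be generated and then applied during quantization with llama.cpp's `imatrix` and `quantize` tools. The binary paths, calibration file name, and intermediate file names below are illustrative only and may differ between llama.cpp versions.

```sh
# Sketch only: build an importance matrix from a calibration text file using
# the Q8_0 quant, then pass it to quantize when producing a smaller quant.
./imatrix -m miquella-120b.Q8_0.gguf -f calibration.txt -o miquella-120b.imatrix
./quantize --imatrix miquella-120b.imatrix miquella-120b.f16.gguf miquella-120b.Q3_K_L.gguf Q3_K_L
```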

Due to HF's per-file size limit, the larger quants were split into multiple chunks. Instructions for joining them are below.

## Linux
Example uses Q3_K_L. Replace the names appropriately for your quant of choice.
```sh
cat miquella-120b.Q3_K_L.gguf_part_* > miquella-120b.Q3_K_L.gguf && rm miquella-120b.Q3_K_L.gguf_part_*
```
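
If you would rather verify the merge before deleting anything, run the `cat` step on its own and check that the result starts with the GGUF magic bytes (a quick sketch, assuming standard coreutils):

```sh
cat miquella-120b.Q3_K_L.gguf_part_* > miquella-120b.Q3_K_L.gguf
head -c 4 miquella-120b.Q3_K_L.gguf   # a valid GGUF file starts with the ASCII magic "GGUF"
rm miquella-120b.Q3_K_L.gguf_part_*   # remove the parts once the check passes
```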

## Windows
Example uses Q3_K_L. Replace the names appropriately for your quant of choice.
```sh
COPY /B miquella-120b.Q3_K_L.gguf_part_aa + miquella-120b.Q3_K_L.gguf_part_ab miquella-120b.Q3_K_L.gguf
```
Then delete the two splits.
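For example, from the same Command Prompt (using the file names from the example above):

```sh
DEL miquella-120b.Q3_K_L.gguf_part_aa miquella-120b.Q3_K_L.gguf_part_ab
```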