---
license: apache-2.0
datasets:
- yahma/alpaca-cleaned
language:
- en
- ar
- fr
pipeline_tag: text-generation
tags:
- gguf
- q4_k_m
- Q8_0
- ayoubkirouane/Mistral-Depth-UP-Scaled-9B
- llama.cpp
---

## Mistral-Depth-UP-Scaled-9B-AlpacaInstruct-gguf

- **Q8_0** and **F16** quantized versions of [**Mistral-Depth-UP-Scaled-9B**](https://huggingface.co/ayoubkirouane/Mistral-Depth-UP-Scaled-9B)
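
A minimal sketch of running one of these files with llama.cpp's CLI. The `.gguf` filename below is a hypothetical placeholder; check this repo's file list for the actual name, and adjust the prompt template to your use case.

```shell
# Hypothetical invocation: load the Q8_0 quantized file with llama.cpp
# (the filename is an assumption -- use the actual file from this repo).
./llama-cli \
  -m Mistral-Depth-UP-Scaled-9B.Q8_0.gguf \
  -p "### Instruction:\nSummarize what the GGUF format is.\n### Response:" \
  -n 128
```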

# GGUF

GGUF is a binary file format for storing models for inference with GGML and GGML-based executors. It is designed for fast loading and saving of models and for ease of reading. Models are typically developed in PyTorch or another framework and then converted to GGUF for use with GGML.

It is a successor file format to GGML, GGMF and GGJT, and is designed to be unambiguous by containing all the information needed to load a model. It is also designed to be extensible, so that new information can be added to models without breaking compatibility.

- [More info](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)
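
The "unambiguous, easy to read" claim above is visible in the file header itself: per the GGUF spec, every file starts with a 4-byte `GGUF` magic, a little-endian `uint32` version, a `uint64` tensor count, and a `uint64` metadata key/value count. A minimal sketch of parsing that header (the function name is our own, not part of any library):

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the start of a file's bytes.

    Layout (all little-endian, per the GGUF spec):
      bytes 0-3   : magic b"GGUF"
      bytes 4-7   : uint32 format version
      bytes 8-15  : uint64 tensor count
      bytes 16-23 : uint64 metadata key/value count
    """
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file: bad magic")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    return {
        "version": version,
        "tensors": tensor_count,
        "metadata_kvs": kv_count,
    }
```

Reading the first 24 bytes of any `.gguf` file (e.g. `read_gguf_header(open(path, "rb").read(24))`) is enough to identify the format version and how much metadata follows, without loading the tensors.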