---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- fp16
- GGUF
- transformers
- pytorch
- yi
- text-generation
- conversational
- endpoints_compatible
- text-generation-inference
license: apache-2.0
library_name: transformers
inference: false
pipeline_tag: text-generation
---
GGUF quantizations of [Fi-9B](https://huggingface.co/wenbopan/Fi-9B-200K).
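
The tags above list quantization levels from 2-bit through 8-bit plus fp16. A rough back-of-envelope sketch for the on-disk size at each level, assuming ~9B parameters (from the Fi-9B name) and ignoring the per-block scales and mixed sub-formats that real GGUF quants add on top:

```python
# Ballpark GGUF file-size estimate for a ~9B-parameter model at each
# bit-width listed in the tags. Actual GGUF quant types (Q4_K_M, etc.)
# mix block scales and several sub-formats, so real files run somewhat
# larger; treat these as rough lower bounds, not exact sizes.
PARAMS = 9e9  # assumed parameter count for Fi-9B

def approx_size_gb(bits_per_weight: float) -> float:
    """Bytes = params * bits / 8; convert to gigabytes (1e9 bytes)."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8, 16):
    print(f"{bits:>2}-bit ~ {approx_size_gb(bits):.1f} GB")
```

For example, a 4-bit quant works out to roughly 4.5 GB before block-scale overhead, while the fp16 file is about four times that.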