---
license: unknown
---

This is TinyLlama/TinyLlama-1.1B-Chat-v1.0 quantized with AutoGPTQ in GPTQ 4-bit Marlin format.
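**Usage:**

A minimal loading sketch with AutoGPTQ's Marlin kernel. It assumes AutoGPTQ 0.7 or newer (the first release with Marlin support), a CUDA GPU with compute capability 8.0+ (Marlin's requirement), and a placeholder repository id, since this card does not state the final repo name:

```python
# Minimal loading sketch (assumptions: AutoGPTQ >= 0.7, Ampere-or-newer GPU,
# placeholder repo id -- replace with the actual repository name).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "<user>/TinyLlama-1.1B-Chat-v1.0-GPTQ-Marlin"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_marlin=True,  # run the 4-bit weights through the Marlin inference kernel
)

# TinyLlama-Chat ships a Zephyr-style chat template with the base tokenizer.
messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```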

**Quantize config:**

- `bits`: 4
- `group_size`: 128
- `damp_percent`: 0.005
- `desc_act`: false
- `static_groups`: false
- `sym`: true
- `true_sequential`: true
- `model_name_or_path`: null
- `model_file_base_name`: null
- `checkpoint_format`: "marlin"
- `quant_method`: "gptq"