This model has been quantized using [GPTQModel](https://github.com/ModelCloud/GPTQModel).

- **Bits**: 4
- **Group Size**: 128
- **Desc Act**: true
- **Static Groups**: false
- **Sym**: true
- **LM Head**: false
- **Damp Percent**: 0.01
- **True Sequential**: true
- **Model Name or Path**:
- **Model File Base Name**: model
- **Quant Method**: auto_round
- **Checkpoint Format**: gptq
- **Metadata**
  - **Quantizer**: gptqmodel:0.9.8-dev0
- **Enable Full Range**: false
- **Batch Size**: 1
- **AMP**: true
- **LR Scheduler**: null
- **Enable Quanted Input**: true
- **Enable Minmax Tuning**: true
- **Learning Rate (LR)**: null
- **Minmax LR**: null
- **Low GPU Memory Usage**: true
- **Iterations (Iters)**: 200
- **Sequence Length (Seqlen)**: 2048
- **Number of Samples (Nsamples)**: 512
- **Sampler**: rand
- **Seed**: 42
- **Number of Blocks (Nblocks)**: 1
- **Gradient Accumulate Steps**: 1
- **Not Use Best MSE**: false
- **Dynamic Max Gap**: -1
- **Data Type**: int
- **Scale Data Type (Scale Dtype)**: fp16
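For reference, the core settings above can be gathered into a plain Python dict. This is a minimal sketch: the key names below are illustrative (loosely following common GPTQ `quantize_config.json` conventions), not the exact serialized schema of this checkpoint, so consult the files in this repository for the authoritative field names.

```python
# Illustrative summary of the quantization settings listed above.
# Key names are assumptions in the spirit of GPTQ quantize_config.json;
# check this checkpoint's own config files for the exact schema.
quant_settings = {
    "bits": 4,                      # 4-bit integer weights
    "group_size": 128,              # one scale per 128 weights
    "desc_act": True,
    "static_groups": False,
    "sym": True,                    # symmetric quantization
    "lm_head": False,               # lm_head left unquantized
    "damp_percent": 0.01,
    "true_sequential": True,
    "model_file_base_name": "model",
    "quant_method": "auto_round",
    "checkpoint_format": "gptq",
    "meta": {"quantizer": "gptqmodel:0.9.8-dev0"},
}

# Back-of-the-envelope storage cost for the quantized weights alone:
# 4 bits per weight plus one fp16 scale shared by each group of 128,
# ignoring zero-points and packing overhead.
bits_per_weight = quant_settings["bits"] + 16 / quant_settings["group_size"]
print(bits_per_weight)  # 4.125
```

At 4 bits with group size 128 this works out to roughly 4.125 bits per weight, i.e. close to a 4x reduction versus fp16 storage for the quantized layers.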