---
library_name: transformers
tags:
- internlm
- custom_code
---

# InternLM2-Chat NF4 Quant

## Usage

As of 2024/1/17, Transformers must be installed from source, and bitsandbytes >= 0.42.0 is required, in order to load serialized 4-bit quants.

```bash
pip install -U git+https://github.com/huggingface/transformers bitsandbytes
```

## Quantization config

```python
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

This config is shown for reference only and is not needed for inference: the quantization settings are serialized with the checkpoint, so just load the model without passing any quantization or `load_in_*bit` arguments.
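A minimal loading sketch, assuming a hypothetical repo id (`your-username/internlm2-chat-20b-nf4`) for this quant; substitute the actual repo id. Because the bitsandbytes config is stored in the checkpoint, no quantization arguments are passed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this NF4 quant; replace with the real one.
model_id = "your-username/internlm2-chat-20b-nf4"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# No quantization_config / load_in_*bit needed: the serialized 4-bit
# settings are read from the checkpoint and applied automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)
```

`trust_remote_code=True` is required because InternLM2 uses custom modeling code (hence the `custom_code` tag above).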

## Model Details

- **Repository:** https://huggingface.co/internlm/internlm2-chat-20b