---
tags:
- gpt3
- transformers
---

# ruGPT-13B-4bit

These are GPTQ model files for Sberbank's [ruGPT-3.5-13B](https://huggingface.co/ai-forever/ruGPT-3.5-13B) model.

## Technical details

The model was quantized to 4-bit with the [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) library.

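For intuition, 4-bit quantization replaces each float weight with one of 16 integer levels plus a shared per-group scale. The toy sketch below is illustrative only: GPTQ itself chooses the integers to minimize layer-wise reconstruction error, and the group size here is arbitrary. It shows the storage format and the resulting round-trip error:

```python
import numpy as np

def quantize_4bit(w, group_size=128):
    """Toy symmetric 4-bit quantization: one scale per group, integers in [-8, 7]."""
    groups = w.reshape(-1, group_size)
    # Per-group scale so that the largest |weight| in the group maps to 7.
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    # Round to the nearest of the 16 representable levels.
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # Reconstruct approximate float weights from integers and scales.
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)
max_err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

Real GPTQ checkpoints additionally store zero-points and pack two 4-bit values per byte; the sketch only conveys why a per-group scale keeps the rounding error small.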
## Examples of usage

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

```shell
GITHUB_ACTIONS=true pip install auto-gptq
```

Then try the following example code:

```python
from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# The path below is a placeholder: point it at this repository's files.
tokenizer = AutoTokenizer.from_pretrained("ruGPT-13B-4bit")
model = AutoGPTQForCausalLM.from_quantized("ruGPT-13B-4bit", device="cuda:0")
```