johnrachwanpruna committed
Update README.md

README.md CHANGED
@@ -51,7 +51,7 @@ Detailed efficiency metrics coming soon!
 
 You can run the smashed model with these steps:
 
-0. Check requirements from the original repo
+0. Check requirements of the original repo Mixtral-8x22B-v0.1. In particular, check python, cuda, and transformers versions.
 1. Make sure that you have installed quantization related packages.
 ```bash
 pip install transformers accelerate "bitsandbytes>0.37.0"
@@ -75,7 +75,7 @@ The configuration info is in `smash_config.json`.
 
 ## Credits & License
 
-The license of the smashed model follows the license of the original model. Please check the license of the original model
+The license of the smashed model follows the license of the original model. Please check the license of the original model Mixtral-8x22B-v0.1, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
 
 ## Want to compress other models?
 
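The bitsandbytes requirement added in this diff implies the smashed model is loaded with 8-bit quantization through the standard transformers API. A minimal sketch of what that loading step could look like, assuming `BitsAndBytesConfig` from transformers; the repository id below is a hypothetical placeholder, not a name taken from this diff:

```python
# Sketch of loading a bitsandbytes-quantized model, as the README's
# quantization step implies. The repo id is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 8-bit loading config; this is why the README pins bitsandbytes>0.37.0
quant_config = BitsAndBytesConfig(load_in_8bit=True)

# Uncomment to actually download and load the (very large) model:
# tokenizer = AutoTokenizer.from_pretrained("PrunaAI/Mixtral-8x22B-v0.1-bnb-8bit")
# model = AutoModelForCausalLM.from_pretrained(
#     "PrunaAI/Mixtral-8x22B-v0.1-bnb-8bit",
#     quantization_config=quant_config,
#     device_map="auto",  # accelerate places shards across available GPUs
# )
```

`device_map="auto"` is what the `accelerate` dependency enables: it shards the checkpoint across available devices instead of loading everything onto one GPU.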