Upload folder using huggingface_hub
#3 opened by sharpenb

Files changed:
- README.md (+10 -14)
- results.json (+15 -30)
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
-base_model: beomi
+base_model: PrunaAI/beomi-Llama-3-Open-Ko-8B-HQQ-4bit-smashed
 metrics:
 - memory_disk
 - memory_inference
@@ -38,9 +38,9 @@ tags:
 ![image info](./plots.png)

 **Frequently Asked Questions**
-- ***How does the compression work?*** The model is compressed with
+- ***How does the compression work?*** The model is compressed with [.
 - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
-- ***How is the model efficiency evaluated?*** These results were obtained on
+- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
 - ***What is the model format?*** We use safetensors.
 - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
 - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
@@ -52,22 +52,18 @@ tags:

 You can run the smashed model with these steps:

-0. Check requirements from the original repo beomi
+0. Check requirements from the original repo PrunaAI/beomi-Llama-3-Open-Ko-8B-HQQ-4bit-smashed installed. In particular, check python, cuda, and transformers versions.
 1. Make sure that you have installed quantization related packages.
 ```bash
-
+REQUIREMENTS_INSTRUCTIONS
 ```
 2. Load & run the model.
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-from hqq.engine.hf import HQQModelForCausalLM
-from hqq.models.hf.base import AutoHQQHFModel
-
-try:
-    model = HQQModelForCausalLM.from_quantized("PrunaAI/beomi-Llama-3-Open-Ko-8B-HQQ-4bit-smashed", device_map='auto')
-except:
-    model = AutoHQQHFModel.from_quantized("PrunaAI/beomi-Llama-3-Open-Ko-8B-HQQ-4bit-smashed")
-tokenizer = AutoTokenizer.from_pretrained("beomi/Llama-3-Open-Ko-8B")
+IMPORTS
+
+MODEL_LOAD
+tokenizer = AutoTokenizer.from_pretrained("PrunaAI/beomi-Llama-3-Open-Ko-8B-HQQ-4bit-smashed")

 input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]

@@ -81,7 +77,7 @@ The configuration info are in `smash_config.json`.

 ## Credits & License

-The license of the smashed model follows the license of the original model. Please check the license of the original model beomi
+The license of the smashed model follows the license of the original model. Please check the license of the original model PrunaAI/beomi-Llama-3-Open-Ko-8B-HQQ-4bit-smashed before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

 ## Want to compress other models?

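Note that `IMPORTS` and `MODEL_LOAD` in the new README are unfilled template placeholders. Based on the code this diff removes (an `HQQModelForCausalLM.from_quantized` load with an `AutoHQQHFModel.from_quantized` fallback), a minimal sketch of what they plausibly expand to, assuming the `hqq` package is installed; the `repo` variable and the generation parameters below are illustrative, not from the repo:

```python
# Sketch only: fills in the IMPORTS / MODEL_LOAD placeholders following the
# code that this diff removed from the README.
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

repo = "PrunaAI/beomi-Llama-3-Open-Ko-8B-HQQ-4bit-smashed"

try:
    # Loader for repos saved through hqq's transformers engine wrapper.
    model = HQQModelForCausalLM.from_quantized(repo, device_map='auto')
except Exception:
    # Fallback for repos saved with hqq's lower-level model API.
    model = AutoHQQHFModel.from_quantized(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=64)  # illustrative length
print(tokenizer.decode(outputs[0]))
```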
results.json CHANGED

@@ -1,32 +1,17 @@
 {
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "
-    "smashed_current_gpu_type": "NVIDIA A100-PCIE-40GB",
-    "smashed_current_gpu_total_memory": 40339.3125,
-    "smashed_perplexity": 11.702524185180664,
-    "smashed_token_generation_latency_sync": 166.81119079589843,
-    "smashed_token_generation_latency_async": 166.94455239921808,
-    "smashed_token_generation_throughput_sync": 0.005994801639079169,
-    "smashed_token_generation_throughput_async": 0.0059900127654880205,
-    "smashed_token_generation_CO2_emissions": null,
-    "smashed_token_generation_energy_consumption": null,
-    "smashed_inference_latency_sync": 265.7383438110352,
-    "smashed_inference_latency_async": 196.71142101287842,
-    "smashed_inference_throughput_sync": 0.003763100144520708,
-    "smashed_inference_throughput_async": 0.005083588918482427,
-    "smashed_inference_CO2_emissions": null,
-    "smashed_inference_energy_consumption": null
+    "current_gpu_type": "NVIDIA A100-PCIE-40GB",
+    "current_gpu_total_memory": 40339.3125,
+    "perplexity": 11.702524185180664,
+    "token_generation_latency_sync": 164.7708755493164,
+    "token_generation_latency_async": 165.18039368093014,
+    "token_generation_throughput_sync": 0.00606903372131865,
+    "token_generation_throughput_async": 0.006053987266380082,
+    "token_generation_CO2_emissions": null,
+    "token_generation_energy_consumption": null,
+    "inference_latency_sync": 264.18524169921875,
+    "inference_latency_async": 195.43848037719727,
+    "inference_throughput_sync": 0.0037852227988515877,
+    "inference_throughput_async": 0.005116699628803882,
+    "inference_CO2_emissions": null,
+    "inference_energy_consumption": null
 }
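A consistency note on the renamed metrics: in both the old and new results.json, each `*_throughput_*` value is the reciprocal of the matching `*_latency_*` value (the latencies read as milliseconds, so the throughputs are per-millisecond rates). A quick check, assuming a local copy of the new results.json in the working directory:

```python
import json

# Verify that each throughput equals 1 / latency in the committed results.json.
with open("results.json") as f:  # assumed path; adjust to where the file lives
    results = json.load(f)

for kind in ("token_generation", "inference"):
    for mode in ("sync", "async"):
        latency_ms = results[f"{kind}_latency_{mode}"]
        throughput = results[f"{kind}_throughput_{mode}"]
        assert abs(throughput - 1.0 / latency_ms) < 1e-9
        print(f"{kind}/{mode}: {latency_ms:.2f} ms <-> {throughput:.6g} per ms")
```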