---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback and suggestions or to get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with bitsandbytes (see the loading sketch after this list).
- ***How does the model quality change?*** The quality of the model output may slightly degrade.
- ***What is the model format?*** We use the standard safetensors format.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
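This card does not state the exact bitsandbytes settings used, so the configuration below (4-bit NF4 with bfloat16 compute) is only an illustrative assumption of how such a quantization is typically set up with `transformers`:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed settings for illustration only: the card does not say whether the
# model was quantized to 8-bit or 4-bit, nor which quantization type was used.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize the base model on the fly with bitsandbytes while loading it.
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-instruct-3b",
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
```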
## Usage
Here's how you can run the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in bfloat16, then move the model to the GPU.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-instruct-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("stabilityai/stable-code-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.eval()
model = model.cuda()

messages = [
    {
        "role": "system",
        "content": "You are a helpful and polite assistant",
    },
    {
        "role": "user",
        "content": "Write a simple website in HTML. When a user clicks the button, it shows a random joke from a list of 4 jokes."
    },
]

# Render the chat messages with the model's chat template, then tokenize.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Sample up to 1024 new tokens.
tokens = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.5,
    top_p=0.95,
    top_k=100,
    do_sample=True,
    use_cache=True
)

# Decode only the newly generated tokens, excluding the prompt.
output = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
```
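This card also reports inference latency and throughput metrics. Pruna's own benchmarking setup is not described here, but as a rough, minimal sketch, one could time generation like this, reusing `model`, `tokenizer`, and `inputs` from the example above:

```python
import time
import torch

# Warm-up run so CUDA kernels and caches are initialized before timing.
model.generate(**inputs, max_new_tokens=8, do_sample=False, use_cache=True)

# Synchronize around the timed region so GPU work is fully counted.
torch.cuda.synchronize()
start = time.perf_counter()
tokens = model.generate(**inputs, max_new_tokens=128, do_sample=False, use_cache=True)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

# Throughput counts only newly generated tokens, not the prompt.
new_tokens = tokens.shape[-1] - inputs.input_ids.shape[-1]
print(f"latency: {elapsed:.2f} s, throughput: {new_tokens / elapsed:.1f} tokens/s")
```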
## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, stabilityai/stable-code-instruct-3b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).