|
--- |
|
library_name: pruna-engine |
|
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" |
|
metrics: |
|
- memory_disk |
|
- memory_inference |
|
- inference_latency |
|
- inference_throughput |
|
- inference_CO2_emissions |
|
- inference_energy_consumption |
|
--- |
|
<!-- header start --> |
|
|
<div style="width: auto; margin-left: auto; margin-right: auto"> |
|
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> |
|
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</a> |
|
</div> |
|
<!-- header end --> |
|
|
|
[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) |
|
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) |
|
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) |
|
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) |
|
|
|
# Simply make AI models cheaper, smaller, faster, and greener! |
|
|
|
- Give a thumbs up if you like this model! |
|
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). |
|
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
|
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
|
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help. |
|
|
|
**Frequently Asked Questions** |
|
- ***How does the compression work?*** The model is compressed with bitsandbytes 4-bit quantization (see the sketch after this list).
|
- ***How does the model quality change?*** The quality of the model output may slightly degrade compared to the base model.
|
- ***What is the model format?*** We use the standard safetensors format.
|
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
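
Below is a minimal sketch of how bitsandbytes 4-bit quantization is typically applied through `transformers`. This repository already ships pre-quantized weights, so you do not need to run this yourself; it is illustrative only, and the exact settings Pruna used may differ.

```python
# Hedged sketch: typical bitsandbytes 4-bit quantization via transformers.
# The settings shown are common defaults, not necessarily Pruna's.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",             # the original base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```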
|
|
|
## Usage

### Quickstart Guide
|
|
|
Getting started with DBRX models is easy with the `transformers` library. The original bfloat16 model requires ~264GB of RAM; this 4-bit quantized version needs roughly a quarter of that. You will also need the following packages:
|
|
|
```bash |
|
pip install "torch==2.4.0" "transformers>=4.39.2" "tiktoken>=0.6.0" "bitsandbytes" |
|
``` |
|
|
|
If you'd like to speed up download time, you can use the `hf_transfer` package as described by Hugging Face [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads):
|
```bash |
|
pip install hf_transfer |
|
export HF_HUB_ENABLE_HF_TRANSFER=1 |
|
``` |
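
Optionally, you can pre-fetch the weights into your local cache with `huggingface_hub` (a small sketch; `snapshot_download` honors the `HF_HUB_ENABLE_HF_TRANSFER` setting above):

```python
# Pre-download the repository into the local Hugging Face cache so that
# from_pretrained later loads from disk instead of the network.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="PrunaAI/dbrx-instruct-bnb-4bit", token="hf_YOUR_TOKEN")
```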
|
|
|
You will need to request access to this repository to download the model. Once access is granted, [obtain an access token](https://huggingface.co/docs/hub/en/security-tokens) with `read` permission and supply it below.
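
Instead of passing `token=` to every call, you can authenticate once per environment; a minimal sketch using `huggingface_hub`:

```python
# One-time authentication; later from_pretrained calls can omit token=.
from huggingface_hub import login

login(token="hf_YOUR_TOKEN")  # use your own read-scoped token
```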
|
|
|
### Run the model on multiple GPUs: |
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# device_map="auto" shards the layers across all visible GPUs; the 4-bit
# quantization settings are picked up from the repository's config.
tokenizer = AutoTokenizer.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", trust_remote_code=True, token="hf_YOUR_TOKEN")
model = AutoModelForCausalLM.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN")

# Build a chat-formatted prompt and move the input tensors to the GPU.
input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")

# Generate up to 200 new tokens and decode the full sequence.
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```
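
To check how the model was sharded across your GPUs and roughly how much memory it occupies, you can inspect two standard `transformers` attributes (a small sketch, assuming the model was loaded with `device_map` as above):

```python
# Per-module device placement chosen by device_map="auto".
print(model.hf_device_map)

# Approximate in-memory size of the loaded (quantized) weights, in GB.
print(f"{model.get_memory_footprint() / 1e9:.1f} GB")
```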
|
|
|
## Credits & License |
|
|
|
The license of the smashed model follows the license of the original model. Please check the license of the original model, [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct), which provided the base model, before using this model. The license of the `pruna-engine` package is available [here](https://pypi.org/project/pruna-engine/) on PyPI.
|
|
|
## Want to compress other models? |
|
|
|
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). |
|
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |