---
license: llama2
---
## Installation from source
```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```
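After installing, a quick import check confirms the package resolved correctly (the package name `fms_extras` is assumed from the repository layout):
```python
# sanity check: the editable install should be importable (package name assumed)
import fms_extras
print(fms_extras.__file__)
```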
## Description
This model is intended to be used as an accelerator for [llama 13B (chat)](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and the token sampled
at the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is lightweight and can be completed in only a few days, depending on the size and speed of the base model.
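To make the stage-wise conditioning concrete, below is a minimal PyTorch sketch of a single speculator stage. The layer shapes, `GELU` activation, and concatenation-based fusion are illustrative assumptions, not the exact architecture of this checkpoint.
```python
# Illustrative sketch of one MLP speculator stage (assumed shapes/activation;
# not the exact architecture shipped in this repository).
import torch
import torch.nn as nn

class SpeculatorStage(nn.Module):
    def __init__(self, hidden_dim: int, emb_dim: int, vocab_size: int):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, emb_dim)       # embed the prior sampled token
        self.proj = nn.Linear(hidden_dim + emb_dim, hidden_dim)  # fuse state vector + token embedding
        self.act = nn.GELU()
        self.head = nn.Linear(hidden_dim, vocab_size)            # per-stage LM head

    def forward(self, state: torch.Tensor, prev_token: torch.Tensor):
        fused = torch.cat([state, self.token_emb(prev_token)], dim=-1)
        new_state = self.act(self.proj(fused))   # updated state, passed to the next stage
        return self.head(new_state), new_state  # logits for one draft token

# Chaining n such stages (stage 0 = the base model) yields an n-token draft,
# which the base model then verifies in a single forward pass.
```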
## Repository Links
1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)
## Samples
_Note: For all samples, your environment must have access to CUDA._
### Use in IBM Production TGIS
*To try this out in a production-like environment, please use the pre-built Docker image:*
#### Setup
```bash
HF_HUB_CACHE=/hf_hub_cache
mkdir -p $HF_HUB_CACHE
chmod a+w $HF_HUB_CACHE
HF_HUB_TOKEN="your huggingface hub token"
TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ddc56ee
docker pull $TGIS_IMAGE
# optionally download llama-2-13b-chat if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
meta-llama/Llama-2-13b-chat-hf \
--token $HF_HUB_TOKEN
# optionally download the speculator model if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
ibm-fms/llama-13b-accelerator \
--token $HF_HUB_TOKEN
# note: if the weights were downloaded separately (not with the above commands), please place them in the HF_HUB_CACHE directory and refer to them with /models/<model_name>
docker run -d --rm --gpus all \
--name my-tgis-server \
-p 8033:8033 \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
-e MODEL_NAME=meta-llama/Llama-2-13b-chat-hf \
-e SPECULATOR_NAME=ibm-fms/llama-13b-accelerator \
-e FLASH_ATTENTION=true \
-e PAGED_ATTENTION=true \
-e DTYPE=float16 \
$TGIS_IMAGE
# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f
# get the client sample
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```
#### Run Sample
```bash
python sample_client.py
```
_Note: first prompt may be slower as there is a slight warmup time_
### Use in Hugging Face TGI
#### Start the Server
```bash
model=ibm-fms/llama-13b-accelerator
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 \
    -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id $model
```
_Note: for tensor parallelism, add `--num-shard <n>`._
#### Make a Request
```bash
curl 127.0.0.1:8080/generate_stream \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
```
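The same streaming endpoint can also be called from Python. Here is a minimal sketch using `requests`; the address and payload mirror the curl call above, and the server-sent-event parsing is deliberately simplified:
```python
# Minimal Python counterpart of the curl request above (assumes `pip install requests`).
import json
import requests

resp = requests.post(
    "http://127.0.0.1:8080/generate_stream",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
    stream=True,
)
for line in resp.iter_lines():
    # TGI streams server-sent events; token payloads arrive as "data: {...}" lines
    if line and line.startswith(b"data:"):
        event = json.loads(line[len(b"data:"):])
        print(event["token"]["text"], end="", flush=True)
print()
```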
### Minimal Sample
*To try this out with the fms-native compiled model, please execute the following:*
#### Install
```bash
git clone https://github.com/foundation-model-stack/fms-extras
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```
#### Run Sample
##### batch_size=1 (compile + cudagraphs)
```bash
MODEL_PATH=/path/to/llama/hf/13B-F
python fms-extras/scripts/paged_speculative_inference.py \
--variant=13b \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm-fms/llama-13b-accelerator \
--speculator_source=hf \
--speculator_variant=840m \
--compile \
--compile_mode=reduce-overhead
```
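For reference, `--compile_mode=reduce-overhead` corresponds to PyTorch's CUDA-graph-backed compile mode; a plain-PyTorch sketch of the equivalent call is below (the stand-in module is hypothetical):
```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16).cuda()  # stand-in module for illustration
# mode="reduce-overhead" captures execution in CUDA graphs to amortize
# kernel-launch overhead, which pays off most at batch_size=1 decoding
compiled = torch.compile(model, mode="reduce-overhead")
out = compiled(torch.randn(1, 16, device="cuda"))
```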
##### batch_size=1 (compile)
```bash
MODEL_PATH=/path/to/llama/hf/13B-F
python fms-extras/scripts/paged_speculative_inference.py \
--variant=13b \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm-fms/llama-13b-accelerator \
--speculator_source=hf \
--speculator_variant=840m \
--compile
```
##### batch_size=4 (compile)
```bash
MODEL_PATH=/path/to/llama/hf/13B-F
python fms-extras/scripts/paged_speculative_inference.py \
--variant=13b \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm-fms/llama-13b-accelerator \
--speculator_source=hf \
--speculator_variant=840m \
--batch_input \
--compile
```