
Create your own chatbot with llama-2-13B on AWS Inferentia

There is a notebook version of this tutorial here.

This guide will detail how to export, deploy and run a Llama-2 13B chat model on AWS Inferentia.

You will learn how to:

  • export the Llama-2 model to the Neuron format,
  • push the exported model to the Hugging Face Hub,
  • deploy the model and use it in a chat application.

Note: This tutorial was created on an inf2.48xlarge AWS EC2 instance.

1. Export the Llama 2 model to Neuron

For this guide, we will use the non-gated NousResearch/Llama-2-13b-chat-hf model, which is functionally equivalent to the original meta-llama/Llama-2-13b-chat-hf.

This model is part of the Llama 2 family of models, and has been tuned to recognize chat interactions between a user and an assistant (more on that later).

As explained in the optimum-neuron documentation, models need to be compiled and exported to a serialized format before they can run on Neuron devices.

Fortunately, 🤗 optimum-neuron offers a very simple API to export standard 🤗 transformers models to the Neuron format.

When exporting the model, we will specify two sets of parameters:

  • using compiler_args, we specify how many cores the model should be deployed on (each Neuron device has two cores) and with which precision (here float16),
  • using input_shapes, we set the static input and output dimensions of the model. All model compilers require static shapes, and Neuron is no exception. Note that the sequence_length not only constrains the length of the input context, but also the length of the Key/Value cache, and thus the output length.

Depending on your choice of parameters and inferentia host, this may take from a few minutes to more than an hour.

For your convenience, we host a pre-compiled version of that model on the Hugging Face hub, so you can skip the export and start using the model immediately in section 2.

from optimum.neuron import NeuronModelForCausalLM

# Deploy on 24 NeuronCores (inf2.48xlarge) with float16 weights
compiler_args = {"num_cores": 24, "auto_cast_type": 'fp16'}
# Static shapes: one prompt at a time, up to 2048 tokens (prompt + generation)
input_shapes = {"batch_size": 1, "sequence_length": 2048}
model = NeuronModelForCausalLM.from_pretrained(
        "NousResearch/Llama-2-13b-chat-hf",
        export=True,
        **compiler_args,
        **input_shapes)

This will probably take a while.

Fortunately, you will need to do this only once because you can save your model and reload it later.

model.save_pretrained("llama-2-13b-chat-neuron")
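
You can then reload the exported model from that directory at any time without recompiling it:

from optimum.neuron import NeuronModelForCausalLM

# Reload the precompiled artifacts directly: no export=True is needed this time
model = NeuronModelForCausalLM.from_pretrained("llama-2-13b-chat-neuron")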

Even better, you can push it to the Hugging Face hub.

For that, you need to be logged in to a Hugging Face account.

In the terminal, just type the following command and paste your Hugging Face token when requested:

huggingface-cli login
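
If you are working in a notebook instead of a terminal, you can log in programmatically with the huggingface_hub helpers; a minimal equivalent:

from huggingface_hub import login

# Prompts for a Hugging Face token with write access, just like huggingface-cli login
login()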

By default, the model will be uploaded to your account (i.e. the organization is your user name).

Feel free to edit the code below if you want to upload the model to a specific Hugging Face organization.

from huggingface_hub import whoami

org = whoami()['name']

repo_id = f"{org}/llama-2-13b-chat-neuron"

model.push_to_hub("llama-2-13b-chat-neuron", repository_id=repo_id)

A few more words about export parameters.

The minimum memory required to load a model can be computed with:

   memory = bytes per parameter * number of parameters

The Llama 2 13B model uses float16 weights (stored on 2 bytes) and has 13 billion parameters, which means it requires at least 2 * 13B or ~26GB of memory to store its weights.

Each NeuronCore has 16GB of memory, which means that a 26GB model cannot fit on a single NeuronCore.
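
As a sanity check, this arithmetic can be reproduced in a couple of lines (a rough estimate that only counts the weights, not the KV cache or intermediate buffers):

import math

num_parameters = 13e9        # Llama 2 13B
bytes_per_parameter = 2      # float16 weights
core_memory = 16e9           # memory of a single NeuronCore

weights_bytes = num_parameters * bytes_per_parameter   # ~26GB
min_cores = math.ceil(weights_bytes / core_memory)     # at least 2 cores
print(f"Weights: ~{weights_bytes / 1e9:.0f} GB, minimum NeuronCores: {min_cores}")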

In reality, the total space required is much greater than just the number of parameters due to caching attention layer projections (KV caching). This caching mechanism grows memory allocations linearly with sequence length and batch size.

Here we set the batch_size to 1, meaning that we can only process one input prompt in parallel. We set the sequence_length to 2048, which corresponds to half the model maximum capacity (4096).

The formula to evaluate the size of the KV cache is more involved as it also depends on parameters related to the model architecture, such as the width of the embeddings and the number of decoder blocks.
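
As an illustration, here is a rough back-of-the-envelope estimate, assuming the usual per-layer key/value layout and the Llama 2 13B configuration (40 decoder blocks, hidden size 5120):

batch_size = 1
sequence_length = 2048
num_layers = 40      # decoder blocks in Llama 2 13B
hidden_size = 5120   # embedding width of Llama 2 13B
bytes_per_value = 2  # float16

# Two cached tensors (key and value) per decoder block
kv_cache_bytes = 2 * num_layers * batch_size * sequence_length * hidden_size * bytes_per_value
print(f"KV cache: ~{kv_cache_bytes / 1e9:.1f} GB")  # ~1.7 GB on top of the ~26GB of weights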

The bottom line is that, to fit very large language models, tensor parallelism is used to split weights, data, and compute across multiple NeuronCores, keeping in mind that the memory on each core cannot exceed 16GB.

Note that increasing the number of cores beyond the minimum requirement almost always results in a faster model: increasing the tensor parallelism degree improves the available memory bandwidth, which improves model performance.

To optimize performance, it is recommended to use all the cores available on the instance.

In this guide we use all 24 cores of the inf2.48xlarge, but this should be changed to 12 if you are using an inf2.24xlarge instance.
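
For example, on an inf2.24xlarge only the compiler_args of the export call shown above would change (the model name and input_shapes stay the same):

# 12 NeuronCores are available on an inf2.24xlarge
compiler_args = {"num_cores": 12, "auto_cast_type": 'fp16'}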

2. Generate text using Llama 2 on AWS Inferentia2

Once your model has been exported, you can generate text using the transformers library, as described in detail in this post.

If, as suggested, you skipped the first section, don’t worry: we will use a precompiled model already present on the hub instead.

from optimum.neuron import NeuronModelForCausalLM

try:
    model
except NameError:
    # No model was exported in section 1: load a pre-compiled one from the Hub.
    # Edit this to use another base model
    model = NeuronModelForCausalLM.from_pretrained('aws-neuron/Llama-2-13b-chat-hf-neuron-latency')

We will need a Llama 2 tokenizer to convert the prompt strings to text tokens.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-13b-chat-hf")

The following generation strategies are supported:

  • greedy search,
  • multinomial sampling with top-k and top-p (with temperature).

Most logits pre-processing/filters (such as repetition penalty) are supported.

inputs = tokenizer("What is deep-learning ?", return_tensors="pt")
outputs = model.generate(**inputs,
                         max_new_tokens=128,
                         do_sample=True,
                         temperature=0.9,
                         top_k=50,
                         top_p=0.9)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
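
The call above uses multinomial sampling; greedy search, the other supported strategy, can be selected by simply disabling sampling:

# Greedy search: deterministic decoding, always picking the most likely next token
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
tokenizer.batch_decode(outputs, skip_special_tokens=True)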

3. Create a chat application using llama on AWS Inferentia2

We specifically selected a Llama 2 chat variant to illustrate the excellent behaviour of the exported model when the length of the encoding context grows.

The model expects the prompts to be formatted following a specific template corresponding to the interactions between a user role and an assistant role.

Each chat model has its own convention for encoding such contents, and we will not go into too much detail in this guide, because we will directly use the Hugging Face chat templates corresponding to our model.

The utility function below converts a list of exchanges between the user and the model into a well-formatted chat prompt.

def format_chat_prompt(message, history, max_tokens):
    """Convert a history of messages to a chat prompt

    Args:
        message (str): the new user message.
        history (List[List[str]]): the list of [user message, assistant response] pairs.
        max_tokens (int): the maximum number of input tokens accepted by the model.

    Returns:
        a `str` prompt.
    """
    chat = []
    # Convert all messages in history to chat interactions
    for interaction in history:
        chat.append({"role": "user", "content" : interaction[0]})
        chat.append({"role": "assistant", "content" : interaction[1]})
    # Add the new message
    chat.append({"role": "user", "content" : message})
    # Generate the prompt, verifying that we don't go beyond the maximum number of tokens
    for i in range(0, len(chat), 2):
        # Generate candidate prompt with the last n-i entries
        prompt = tokenizer.apply_chat_template(chat[i:], tokenize=False)
        # Tokenize to check if we're over the limit
        tokens = tokenizer(prompt)
        if len(tokens.input_ids) <= max_tokens:
            # We're good, stop here
            return prompt
    # We shall never reach this line
    raise SystemError
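
To see what a formatted prompt looks like, you can call the helper on a small, made-up history (the messages below are purely illustrative):

# Hypothetical toy history: a list of [user message, assistant response] pairs
example_history = [["My favorite color is blue.", "Blue is a great choice!"]]
print(format_chat_prompt("What is my favorite color?", example_history, max_tokens=512))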

We are now equipped to build a simplistic chat application.

We simply store the interactions between the user and the assistant in a list that we use to generate the input prompt.

history = []
max_tokens = 1024

def chat(message, history, max_tokens):
    prompt = format_chat_prompt(message, history, max_tokens)
    # Uncomment the line below to see what the formatted prompt looks like
    #print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs,
                             max_length=2048,
                             do_sample=True,
                             temperature=0.9,
                             top_k=50,
                             repetition_penalty=1.2)
    # Do not include the input tokens
    outputs = outputs[0, inputs.input_ids.size(-1):]
    response = tokenizer.decode(outputs, skip_special_tokens=True)
    history.append([message, response])
    return response

To test the chat application, you can for instance use the following sequence of prompts:

print(chat("My favorite color is blue. My favorite fruit is strawberry.", history, max_tokens))
print(chat("Name a fruit that is on my favorite colour.", history, max_tokens))
print(chat("What is the colour of my favorite fruit ?", history, max_tokens))
<Warning>

While very powerful, large language models can sometimes hallucinate: we call hallucinations generated content that is irrelevant or made-up, but presented by the model as if it were accurate. This is a flaw of LLMs, not a side effect of running them on Trainium / Inferentia.

</Warning>