---

license: apache-2.0
inference: false
---


# MegaBeam-Mistral-7B-300k Model

MegaBeam-Mistral-7B-300k is a fine-tuned [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) language model that supports input contexts of up to 320k tokens. MegaBeam-Mistral-7B-300k can be deployed on a single AWS `g5.48xlarge` instance using serving frameworks such as [vLLM](https://github.com/vllm-project/vllm), a SageMaker [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) endpoint, and others. Similarities and differences between MegaBeam-Mistral-7B-300k and [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) are summarized below:


|Model|Max context length| rope_theta| prompt template|
|----------|-------------:|------------:|------------:|
| [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 32K | 1e6 | [instruction format](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2#instruction-format)|
| MegaBeam-Mistral-7B-300k | 320K | 25e6 | AS ABOVE|
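
MegaBeam-Mistral-7B-300k therefore expects prompts in the same `[INST] ... [/INST]` instruction format as the base model. As a minimal sketch (assuming the tokenizer shipped with this checkpoint carries the same chat template as Mistral-7B-Instruct-v0.2), a prompt can be built with `transformers`:

```python
# Minimal sketch: build a prompt in the Mistral instruction format.
# Assumes the checkpoint ships the same chat template as Mistral-7B-Instruct-v0.2.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("amazon/MegaBeam-Mistral-7B-300k")

messages = [
    {"role": "user", "content": "Summarise the following report:\n<your long context here>"},
]

# Produces a string such as "<s>[INST] ... [/INST]" that the model expects.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```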



## Evaluations



**[InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens](https://github.com/OpenBMB/InfiniteBench)**



InfiniteBench is a benchmark tailored for evaluating the ability of language models to process, understand, and reason over super long contexts (100k+ tokens). We evaluated MegaBeam-Mistral-7B-300k, [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), [Llama3-8B-1M](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k), and [Llama3-70B-1M](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k) on InfiniteBench. The InfiniteBench authors also evaluated SOTA proprietary and open-source LLMs on the benchmark, so we combine both sets of results in the table below.



| Task Name        | MegaBeam-Mistral<br>-7B-300k         | [Mistral-7B<br>-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)         | [Llama-3-8B<br>-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k)         | [Llama3-<br>70B-1M](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k)         | GPT-4  | YaRN-<br>Mistral-7B | Kimi-Chat | Claude 2 | Yi-6B<br>-200K |  Yi-34B<br>-200K |  Chatglm3-6B<br>-128K |
| ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ------ | --------------- | --------- | -------- | -----------|  -----------| -----------|
| Retrieve.PassKey | 100%             | 75.76%           | 98.30%             | 81.35%             | 100%   | 92.71%          | 98.14%    | 97.80%   | 100.00%    | 100.00%    | 92.20%       |
| Retrieve.Number  | 96.10%           | 25.25%           | 97.79%             | 97.62%             | 100%   | 56.61%          | 95.42%    | 98.14%   | 94.92%     | 100.00%    | 80.68%      |
| Retrieve.KV      | 0%               | 0%               | 3.40%              | 3%             | 89.00% | < 5%            | 53.60%    | 65.40%   | < 5%       | < 5%       | < 5%       |
| En.Sum           | 29.39%           | 22.13%           | 16.40%             | 20.72%             | 14.73% | 9.09%           | 17.93%    | 14.45%   | < 5%       | < 5%       |< 5%       |
| En.QA            | 14.93%           | 4.93%            | 13.20%             | 16.52%             | 22.22% | 9.55%          | 16.52%    | 11.97%    |      9.20% |     12.17% |< 5%       |
| En.MC            | 51.52%           | 7.80%            | 50.65%             | 62%             | 67.25% | 27.95%          | 72.49%    | 62.88%   | 36.68%     |38.43%     |10.48%       |
| En.Dia           | 9.50%            | 3.50%            | 1%                 | 12.50%             | 8.50%  | 7.50%           | 11.50%    | 46.50%   | < 5%       |< 5%       |< 5%       |
| Zh.QA            | 10.71%           | 3.43%            | 19.02%             | 26%             | 25.96% | 14.43%          | 17.93%    | 9.64%    | 15.07%     |13.61%       |< 5%       |
| Code.Debug       | 27.41%           | 11.60%           | 22.08%             | 23.85%             | 39.59% | < 5%            | 18.02%    | < 5%     | < 5%       |< 5%       |< 5%       |
| Code.Run         | 1.75%            | 0.25%            | 0%                 | 0%             | 23.25% | < 5%            | < 5%      | < 5%     | < 5%       |< 5%       |< 5%       |
| Math.Calc        | 0%               | 0%               | 0%                 | 0%             | < 5%   | < 5%            | < 5%      | < 5%     | < 5%       |< 5%       |< 5%       |
| Math.Find        | 24.28%           | 26.28%           | 15.40%             | 30%             | 60.00% | 17.14%          | 12.57%    | 32.29%   | < 5%       |25.71%       |7.71%       |
| Average          | 30.70%           | 15.08%           | 28.10%             | 31.13%             | 46.08% | 20.41%          | 34.93%    | 37.21%   | 22.78%       |25.41%       |17.59%       |



The 12 tasks evaluated in InfiniteBench are summarized below:

| Task Name            | Context       | # Examples | Avg Input Tokens | Avg Output Tokens | Description                                                                                 |
| -------------------- | ------------- | ---------- | ---------------- | ----------------- | ------------------------------------------------------------------------------------------- |
| En.Sum               | Fake Book     | 103        | 171.5k           | 1.1k              | Summarization of a fake book created with core entity substitution.                         |
| En.QA                | Fake Book     | 351        | 192.6k           | 4.8               | Free-form question answering based on the fake book.                                        |
| En.MC                | Fake Book     | 229        | 184.4k           | 5.3               | Multiple choice questions derived from the fake book.                                       |
| En.Dia               | Script        | 200        | 103.6k           | 3.4               | Identification of talkers in partially anonymized scripts.                                  |
| Zh.QA                | New Book      | 175        | 2068.6k          | 6.3               | Question answering on a set of newly collected books.                                       |
| Code.Debug           | Code Document | 394        | 114.7k           | 4.8               | Finding which function in a code repo contains a crashing error (in multiple choice form).  |
| Code.Run             | Synthetic     | 400        | 75.2k            | 1.3               | Simulating execution of multiple simple, synthetic functions.                               |
| Math.Calc            | Synthetic     | 50         | 43.9k            | 43.9k             | Calculations involving super-long arithmetic equations.                                     |
| Math.Find            | Synthetic     | 350        | 87.9k            | 1.3               | Finding special integers in a lengthy list.                                                 |
| Retrieve.PassKey     | Synthetic     | 590        | 122.4k           | 2.0               | Retrieving hidden keys in a noisy long context.                                             |
| Retrieve.Number      | Synthetic     | 590        | 122.4k           | 4.0               | Locating repeated hidden numbers in a noisy long context.                                   |
| Retrieve.KV          | Synthetic     | 500        | 89.9k            | 22.7              | Finding the corresponding value from a dictionary and a key.                                |





## How to Serve MegaBeam-Mistral-7B-300k on vLLM ##

On an AWS `g5.48xlarge` instance, install or upgrade vLLM to the latest version following the [vLLM documentation](https://vllm.readthedocs.io/en/latest/).
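
For example (a minimal sketch; the exact command may vary with your Python and CUDA setup):

```shell
# Upgrade vLLM in the current Python environment (assumes a CUDA-capable setup)
pip install -U vllm
```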



### Start the server

```shell
python3 -m vllm.entrypoints.openai.api_server --model amazon/MegaBeam-Mistral-7B-300k --tensor-parallel-size 8
```

Note that we have set `max_position_embeddings` in the [`config.json`](config.json) to 288,800 so that the model's KV cache fits on a single `g5.48xlarge` instance, which has 8 x A10G GPUs (24 GB of memory per GPU).
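
You can confirm these settings from the published config; the following is a quick sketch using `transformers` (assumes access to the Hugging Face Hub):

```python
# Inspect the long-context settings from the checkpoint's config.json
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("amazon/MegaBeam-Mistral-7B-300k")
print(cfg.max_position_embeddings)  # 288800, per this model card
print(cfg.rope_theta)               # 25e6, per the table above
```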



On an instance with more GPU memory (e.g. `p4d.24xlarge`), feel free to increase `max_position_embeddings` (e.g. to 350K); the model should still be able to process contexts of that length.
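
One way to do this is to work from a local copy of the model and edit its config before serving; the sketch below is illustrative only and assumes a recent `huggingface_hub` CLI:

```shell
# Example only: raise the context window on an instance with more GPU memory.
# 1) Download a local copy of the model (assumes huggingface-cli with --local-dir support)
huggingface-cli download amazon/MegaBeam-Mistral-7B-300k --local-dir ./MegaBeam-Mistral-7B-300k
# 2) Edit ./MegaBeam-Mistral-7B-300k/config.json and set "max_position_embeddings": 350000
# 3) Serve from the local path
python3 -m vllm.entrypoints.openai.api_server --model ./MegaBeam-Mistral-7B-300k --tensor-parallel-size 8
```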



### Run the client

```python
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "What is your favourite condiment?"},  # insert your long context here
        {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
        {"role": "user", "content": "Do you have mayonnaise recipes?"},  # insert your long context here
    ],
    model=model,
)

print("Chat completion results:")
print(chat_completion)
```
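
To exercise the long context in practice, a minimal sketch (the file path and question below are placeholders) is to read a long document and pass it in the user turn, reusing the `client` and `model` from above:

```python
# Minimal sketch: question answering over a long document (path and question are placeholders)
with open("long_report.txt") as f:
    long_context = f.read()

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "user", "content": f"{long_context}\n\nBased on the report above, list the key findings."},
    ],
    model=model,
    max_tokens=512,
)
print(chat_completion.choices[0].message.content)
```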

### Deploy the Model as a SageMaker Endpoint ###
To deploy MegaBeam-Mistral-7B-300k on a SageMaker endpoint, please follow the example code below.

```shell
# Requires sagemaker (https://pypi.org/project/sagemaker/) 2.192.1 or later.
pip install -U sagemaker
```

```python
import time

import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

image_uri = get_huggingface_llm_image_uri(
    backend="huggingface",  # or lmi
    region=region,
)

model_name = "MegaBeam-Mistral-7B-300k-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

# TGI container configuration: shard the model across 8 GPUs and allow long input/total token budgets
hub = {
    'HF_MODEL_ID': 'amazon/MegaBeam-Mistral-7B-300k',
    'HF_TASK': 'text-generation',
    'SM_NUM_GPUS': '8',
    'MAX_INPUT_LENGTH': '288416',
    'MAX_TOTAL_TOKENS': '288800',
    'MAX_BATCH_PREFILL_TOKENS': '288800',
    'MAX_BATCH_TOTAL_TOKENS': '288800',
}

model = HuggingFaceModel(
    name=model_name,
    env=hub,
    role=role,
    image_uri=image_uri,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.48xlarge",
    endpoint_name=model_name,
)
```
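
Once the endpoint is in service, a minimal invocation sketch is shown below (the prompt is a placeholder; wrap your own long context in the Mistral instruction format, and note that the TGI container expects an `inputs`/`parameters` payload):

```python
# Minimal sketch: invoke the endpoint through the SageMaker predictor (prompt is a placeholder)
response = predictor.predict({
    "inputs": "[INST] Do you have mayonnaise recipes? [/INST]",
    "parameters": {"max_new_tokens": 256},
})
print(response)
```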

## Limitations ##
Before using the MegaBeam-Mistral-7B-300k model, it is important to perform your own independent assessment and to take measures to ensure that your use would comply with your own specific quality control practices and standards, and with the local rules, laws, regulations, licenses, and terms that apply to you and your content.