jburtoft committed c4eaa0c (parent: b1fbbe5): Update README.md

Files changed (1): README.md (+92, -0)
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- mistral
- inferentia2
- neuron
---

# Neuronx model for [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

This repository contains [**AWS Inferentia2**](https://aws.amazon.com/ec2/instance-types/inf2/) and [`neuronx`](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) compatible checkpoints for [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
You can find detailed information about the base model on its [Model Card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

This model has been exported to the `neuron` format using the specific `input_shapes` and `compiler` parameters detailed in the sections below.

Please refer to the 🤗 `optimum-neuron` [documentation](https://huggingface.co/docs/optimum-neuron/main/en/guides/models#configuring-the-export-of-a-generative-model) for an explanation of these parameters.


## Usage with 🤗 `optimum-neuron`

```python
>>> from optimum.neuron import pipeline

>>> p = pipeline('text-generation', 'aws-neuron/Mistral-7B-Instruct-v0.2-Neuron-inf2.8xlarge')
>>> p("<s>[INST] Tell me something interesting about AWS. [/INST]", max_new_tokens=64, do_sample=True, top_k=50)
[{'generated_text': "<s>[INST] Tell me something interesting about AWS. [/INST] I'd be happy to tell you something interesting about Amazon Web Services (AWS). AWS is the world's most extensive and rapidly expanding cloud computing platform, offering over 200 fully featured services from data centers globally. It is used by millions of customers, including the largest enterprises and the h"}]
```
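
If you prefer not to hand-write the `[INST]` tags, the tokenizer's chat template can build the prompt for you. A minimal sketch, assuming the pipeline's tokenizer carries the base model's chat template (Mistral-7B-Instruct-v0.2 ships one):

```python
from optimum.neuron import pipeline

p = pipeline('text-generation', 'aws-neuron/Mistral-7B-Instruct-v0.2-Neuron-inf2.8xlarge')

# Build the "<s>[INST] ... [/INST]" prompt from the chat template instead of
# writing the special tokens by hand.
messages = [{"role": "user", "content": "Tell me something interesting about AWS."}]
prompt = p.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

p(prompt, max_new_tokens=64, do_sample=True, top_k=50)
```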


## Compilation of your own version

Deploy an AWS inf2.8xlarge or larger instance, using the Hugging Face Deep Learning AMI so that all of the required software is preinstalled. This model was compiled and tested on version 20240123.

Download a copy of the base model locally so that you can **edit the config.json file to set the sliding_window value to 4096 (instead of null)**.

(See https://github.com/aws-neuron/transformers-neuronx/issues/71 for the reason why.)

```
git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
```

Then edit `config.json` to change `sliding_window` to 4096; a short script that makes this edit is sketched below.
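
For reference, a minimal sketch of that edit as a script (equivalent to changing the value by hand in an editor), assuming the clone above landed in `./Mistral-7B-Instruct-v0.2`:

```python
import json

# Load the config from the cloned checkpoint, set the sliding window, write it back.
path = "Mistral-7B-Instruct-v0.2/config.json"
with open(path) as f:
    config = json.load(f)

config["sliding_window"] = 4096  # was null (None)

with open(path, "w") as f:
    json.dump(config, f, indent=2)
```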

After that, the standard compilation process will work; adjust the arguments to match your instance and workload.

```python
from optimum.neuron import NeuronModelForCausalLM, pipeline
from transformers import AutoTokenizer

model_to_test = "Mistral-7B-Instruct-v0.2"

# num_cores should be changed based on the instance: an inf2.24xlarge has 6
# Neuron processors (two cores each), so 12 cores total. Larger models will
# need more cores. You can make your model smaller by changing fp16 to f8.
# Some models may require num_cores to be a power of 2.
compiler_args = {"num_cores": 2, "auto_cast_type": 'fp16'}
input_shapes = {"batch_size": 1, "sequence_length": 2048}

# Export (compile) the model for Inferentia2
model = NeuronModelForCausalLM.from_pretrained(model_to_test, export=True, **compiler_args, **input_shapes)

# Sanity-check the compiled model with a quick generation
tokenizer = AutoTokenizer.from_pretrained(model_to_test)
p = pipeline('text-generation', model, tokenizer=tokenizer)
p("<s>[INST] Tell me something interesting about AWS. [/INST]", max_new_tokens=64, do_sample=True, top_k=50)

# Save the compiled artifacts
model.save_pretrained("Mistral-7B-Instruct-v0.2-Neuron-inf2.8xlarge")
```
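
If you plan to share or redeploy the compiled model, it is convenient to store the tokenizer next to the compiled artifacts so the output directory loads as a self-contained checkpoint. A small optional addition after the code above:

```python
# Optional: keep the tokenizer with the compiled model so the directory is
# self-contained (same directory as the save_pretrained call above).
tokenizer.save_pretrained("Mistral-7B-Instruct-v0.2-Neuron-inf2.8xlarge")
```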


## Arguments passed during export

**input_shapes**

```json
{
  "batch_size": 1,
  "sequence_length": 2048
}
```

**compiler_args**

```json
{
  "auto_cast_type": "bf16",
  "num_cores": 2
}
```