danielhanchen committed
Commit c188a08
1 Parent(s): e5e5a05

Update README.md

Files changed (1): README.md (+349, -179)

---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---

# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing

## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face.

| Unsloth supports | Free Notebooks | Performance | Memory use |
|------------------|----------------|-------------|------------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |

## Llama 3.1 Storm

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg)

Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)

**🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b

**🚀 Ollama:** `ollama run ajindal/llama3.1-storm:8b`

## TL;DR

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/mDtDeiHwnBupw1k_n99Lf.png)

We present [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B), a model that significantly outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) across diverse benchmarks, as shown in the performance comparison plots in the next section. Our approach consists of three key steps:
1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).**
2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR) and freezing the remaining modules; in our work, 50% of the layers were frozen.
3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using the [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method (a minimal sketch of the interpolation follows this list). Merging produces a blended model whose characteristics are smoothly interpolated from both parent models, so the result captures the essence of each parent. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves on Llama-3.1-8B-Instruct across 10 diverse benchmarks covering instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
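For intuition on step 3, the spherical interpolation behind SLERP can be sketched per weight tensor in a few lines of PyTorch. This is a minimal illustrative sketch, not the actual merge recipe used to produce Llama-3.1-Storm-8B; the interpolation factor `t = 0.5` and the per-tensor application are assumptions for the example:

```python
import torch

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors, clamped for numerical safety.
    omega = torch.acos((a_unit * b_unit).sum().clamp(-1 + 1e-7, 1 - 1e-7))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel weights: fall back to plain linear interpolation
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

# Tiny demo on random tensors standing in for one matching parameter from each parent model.
a, b = torch.randn(4, 4), torch.randn(4, 4)
merged = slerp(0.5, a, b)
print(merged.shape)  # torch.Size([4, 4])
```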
 
## 🏆 Introducing Llama-3.1-Storm-8B
[**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.

As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves on Meta-Llama-3.1-8B-Instruct across various benchmarks - Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.

We also benchmarked our model against the recently published [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B), which is built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark.

## Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.

<table>
  <tr>
    <td><strong>Model Strength</strong></td>
    <td><strong>Relevant Benchmarks</strong></td>
  </tr>
  <tr>
    <td>🎯 Improved Instruction Following</td>
    <td>IFEval Strict (+3.93%)</td>
  </tr>
  <tr>
    <td>🌐 Enhanced Knowledge Driven Question Answering</td>
    <td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%)</td>
  </tr>
  <tr>
    <td>🧠 Better Reasoning</td>
    <td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%)</td>
  </tr>
  <tr>
    <td>🤖 Superior Agentic Capabilities</td>
    <td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%)</td>
  </tr>
  <tr>
    <td>🚫 Reduced Hallucinations</td>
    <td>TruthfulQA (+9%)</td>
  </tr>
</table>

**Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.

## Llama-3.1-Storm-8B Models
1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF) (see the sketch after this list for one way to run it locally)
4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`

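For the GGUF build, one possible way to run it locally is through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). This is a hedged sketch rather than a recipe from the original card: the quantization pattern passed to `filename` is a hypothetical choice, so pick whichever `.gguf` file in the repo fits your hardware.

```python
# Sketch only: requires `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="akjindal53244/Llama-3.1-Storm-8B-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant choice; any GGUF file in the repo works
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2+2?"},
    ],
    max_tokens=128,
    temperature=0.01,
)
print(response["choices"][0]["message"]["content"])
```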

## 💻 How to Use the Model
The [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint stores its weights in `bfloat16`, so loading it in `bfloat16` (via `torch_dtype=torch.bfloat16` or `torch_dtype="auto"`, as in the examples below) is the recommended way to run it for best results.

### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```
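
As a quick sanity check after installation (a minimal sketch, not from the original card), loading with `torch_dtype="auto"` picks up the `bfloat16` dtype recorded in the checkpoint's config:

```python
from transformers import AutoModelForCausalLM

model_id = "akjindal53244/Llama-3.1-Storm-8B"
# "auto" reads the dtype stored in the checkpoint config, which is bfloat16 for this model.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
print(model.dtype)  # expected: torch.bfloat16
```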

Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:

### Conversational Use-case
#### Use with [🤗 Transformers](https://github.com/huggingface/transformers)
##### Using `transformers.pipeline()` API
```python
import transformers
import torch

model_id = "akjindal53244/Llama-3.1-Storm-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]

outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])  # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
```

##### Using `model.generate()` API
```bash
pip install flash_attn==2.6.3
```

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Apply Llama3.1 chat-template
def format_prompt(user_query):
    template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
    return template.format(user_query)


model_id = 'akjindal53244/Llama-3.1-Storm-8B'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=False,
    use_flash_attention_2=True
)

# Build final input prompt after applying chat-template
prompt = format_prompt("What is 2+2?")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)  # Expected Output: '2 + 2 = 4'
```
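
The manual `format_prompt` helper above hard-codes the Llama-3.1 chat format; an equivalent prompt can also be built with the tokenizer's bundled chat template. A minimal sketch (the exact header tokens come from whatever template ships with the tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("akjindal53244/Llama-3.1-Storm-8B")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"},
]
# add_generation_prompt=True appends the assistant header so the model answers as the assistant.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
```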

#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: 2 + 2 = 4
```

#### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)
```bash
pip install 'litgpt[all]'
litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B
```

```python
from litgpt import LLM

llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B")
llm.generate("What do Llamas eat?")
```

### Function Calling Use-case

[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct, as demonstrated by the BFCL benchmark.

#### Prompt Format for Function Calling
Llama-3.1-Storm-8B was trained with a specific system prompt for function calling:
```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.

Here are the available functions:
<tools>LIST_OF_TOOLS</tools>

For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```
Use this system prompt with `LIST_OF_TOOLS` replaced by the JSON list of available tools, as shown in the example below.

#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)


def create_system_prompt(tools_list):
    # Literal braces in the <tool_call> example are doubled ({{ }}) so that str.format()
    # leaves them intact and only substitutes the tools list into <tools>{}</tools>.
    system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.

Here are the available functions:
<tools>{}</tools>

For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""

    # Convert the tools list to a string representation
    tools_str = json.dumps(tools_list, ensure_ascii=False)
    # Format the system prompt with the tools list
    system_prompt = system_prompt_format.format(tools_str)
    return system_prompt


# Example tools list
tools_list = [
    {
        "name": "peers",
        "description": "Retrieves a list of company peers given a stock symbol.",
        "parameters": {
            "symbol": {
                "description": "The stock symbol for the company.",
                "type": "str",
                "default": ""
            }
        }
    },
    {
        "name": "web_chain_details",
        "description": "python",
        "parameters": {
            "chain_slug": {
                "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
                "type": "str",
                "default": "ethereum"
            }
        }
    }
]

# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```
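
The model returns its call inside `<tool_call>...</tool_call>` tags, as in the expected output above. A minimal sketch (not part of the original card) for extracting that call from the generated text; the regex and the `ast.literal_eval` fallback for single-quoted dicts are illustrative choices:

```python
import ast
import json
import re


def parse_tool_calls(text):
    """Extract <tool_call>...</tool_call> blocks and parse each into a dict."""
    calls = []
    for raw in re.findall(r"<tool_call>(.*?)</tool_call>", text, flags=re.DOTALL):
        raw = raw.strip()
        try:
            calls.append(json.loads(raw))        # strict JSON (double quotes)
        except json.JSONDecodeError:
            calls.append(ast.literal_eval(raw))  # Python-style dict with single quotes
    return calls


generated = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
for call in parse_tool_calls(generated):
    print(call["tool_name"], call["tool_arguments"])  # web_chain_details {'chain_slug': 'ethereum'}
```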

#### Use with [Ollama](https://ollama.com/)
```python
import ollama

tools = [{
    'type': 'function',
    'function': {
        'name': 'get_current_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
{
    'type': 'function',
    'function': {
        'name': 'get_places_to_visit',
        'description': 'Get places to visit in a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
]

response = ollama.chat(
    model='ajindal/llama3.1-storm:8b',
    messages=[
        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
    ],
    tools=tools
)

print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```


## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.

## Cite Our Work
```bibtex
@misc{ashvini_kumar_jindal_2024,
    author    = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
    title     = { Llama-3.1-Storm-8B },
    year      = 2024,
    url       = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
    doi       = { 10.57967/hf/2902 },
    publisher = { Hugging Face }
}
```

## Support Our Work
With 3 team members spread across 3 time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and 4 other competitions in the finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).

**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**