andrijdavid commited on
Commit
f65b823
1 Parent(s): c8b3b6b

Upload folder using huggingface_hub

Browse files
Files changed (1) hide show
  1. README.md +498 -0
README.md ADDED
@@ -0,0 +1,498 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ ---
4
+ language:
5
+ - en
6
+ license: apache-2.0
7
+ tags:
8
+ - Llama-3
9
+ - instruct
10
+ - finetune
11
+ - chatml
12
+ - DPO
13
+ - RLHF
14
+ - gpt4
15
+ - synthetic data
16
+ - distillation
17
+ - function calling
18
+ - json mode
19
+ - axolotl
20
+ - merges
21
+ - GGUF
22
+ base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
23
+ datasets:
24
+ - teknium/OpenHermes-2.5
25
+ widget:
26
+ - example_title: Hermes 2 Pro Llama-3 Instruct Merge
27
+ messages:
28
+ - role: system
29
+ content: You are a sentient, superintelligent artificial general intelligence,
30
+ here to teach and assist me.
31
+ - role: user
32
+ content: Write a short story about Goku discovering kirby has teamed up with Majin
33
+ Buu to destroy the world.
34
+ model-index:
35
+ - name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
36
+ results: []
37
+ quantized_by: andrijdavid
38
+ ---
39
+ # Hermes-2-Theta-Llama-3-8B-GGUF
40
+ - Original model: [Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
41
+
42
+ <!-- description start -->
43
+ ## Description
44
+
45
+ This repo contains GGUF format model files for [Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B).
46
+
47
+ <!-- description end -->
48
+ <!-- README_GGUF.md-about-gguf start -->
49
+ ### About GGUF
50
+ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
51
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
52
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
53
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
54
+ * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​
55
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
56
+ * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
57
+ * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
58
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
59
+ * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
60
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
61
+ * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
62
+ * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
63
+ * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
64
+ <!-- README_GGUF.md-about-gguf end -->
65
+
66
+ <!-- compatibility_gguf start -->
67
+ ## Explanation of quantisation methods
68
+ <details>
69
+ <summary>Click to see details</summary>
70
+ The new methods available are:
71
+
72
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
73
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
74
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
75
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
76
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
77
+ </details>
78
+ <!-- compatibility_gguf end -->
79
+
80
+ <!-- README_GGUF.md-how-to-download start -->
81
+ ## How to download GGUF files
82
+
83
+ **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
84
+
85
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
86
+
87
+ * LM Studio
88
+ * LoLLMS Web UI
89
+ * Faraday.dev
90
+
91
+ ### In `text-generation-webui`
92
+
93
+ Under Download Model, you can enter the model repo: LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
94
+
95
+ Then click Download.
96
+
97
+ ### On the command line, including multiple files at once
98
+
99
+ I recommend using the `huggingface-hub` Python library:
100
+
101
+ ```shell
102
+ pip3 install huggingface-hub
103
+ ```
104
+
105
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
106
+
107
+ ```shell
108
+ huggingface-cli download LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
109
+ ```
110
+
111
+ <details>
112
+ <summary>More advanced huggingface-cli download usage (click to read)</summary>
113
+
114
+ You can also download multiple files at once with a pattern:
115
+
116
+ ```shell
117
+ huggingface-cli download LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
118
+ ```
119
+
120
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
121
+
122
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
123
+
124
+ ```shell
125
+ pip3 install huggingface_hub[hf_transfer]
126
+ ```
127
+
128
+ And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
129
+
130
+ ```shell
131
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
132
+ ```
133
+
134
+ Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
135
+ </details>
136
+ <!-- README_GGUF.md-how-to-download end -->
137
+ <!-- README_GGUF.md-how-to-run start -->
138
+ ## Example `llama.cpp` command
139
+
140
+ Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
141
+
142
+ ```shell
143
+ ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
144
+ ```
145
+
146
+ Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
147
+
148
+ Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
149
+
150
+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
151
+
152
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
153
+
154
+ ## How to run in `text-generation-webui`
155
+
156
+ Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
157
+
158
+ ## How to run from Python code
159
+
160
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
161
+
162
+ ### How to load this model in Python code, using llama-cpp-python
163
+
164
+ For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
165
+
166
+ #### First install the package
167
+
168
+ Run one of the following commands, according to your system:
169
+
170
+ ```shell
171
+ # Base ctransformers with no GPU acceleration
172
+ pip install llama-cpp-python
173
+ # With NVidia CUDA acceleration
174
+ CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
175
+ # Or with OpenBLAS acceleration
176
+ CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
177
+ # Or with CLBLast acceleration
178
+ CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
179
+ # Or with AMD ROCm GPU acceleration (Linux only)
180
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
181
+ # Or with Metal GPU acceleration for macOS systems only
182
+ CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
183
+ # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
184
+ $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
185
+ pip install llama-cpp-python
186
+ ```
187
+
188
+ #### Simple llama-cpp-python example code
189
+
190
+ ```python
191
+ from llama_cpp import Llama
192
+ # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
193
+ llm = Llama(
194
+ model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
195
+ n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
196
+ n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
197
+ n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
198
+ )
199
+ # Simple inference example
200
+ output = llm(
201
+ "<PROMPT>", # Prompt
202
+ max_tokens=512, # Generate up to 512 tokens
203
+ stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
204
+ echo=True # Whether to echo the prompt
205
+ )
206
+ # Chat Completion API
207
+ llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
208
+ llm.create_chat_completion(
209
+ messages = [
210
+ {"role": "system", "content": "You are a story writing assistant."},
211
+ {
212
+ "role": "user",
213
+ "content": "Write a story about llamas."
214
+ }
215
+ ]
216
+ )
217
+ ```
218
+
219
+ ## How to use with LangChain
220
+
221
+ Here are guides on using llama-cpp-python and ctransformers with LangChain:
222
+
223
+ * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
224
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
225
+
226
+ <!-- README_GGUF.md-how-to-run end -->
227
+
228
+ <!-- footer end -->
229
+
230
+ <!-- original-model-card start -->
231
+ # Original model card: Hermes-2-Theta-Llama-3-8B
232
+
233
+ # - Hermes-2 Θ Llama-3 8B
234
+
235
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png)
236
+
237
+ ## Model Description
238
+
239
+ Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
240
+
241
+ Hermes-2 Θ is a merged and then further RLHF'ed version our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 Θ, combining the best of both worlds of each model.
242
+
243
+ ## Example Outputs
244
+
245
+ ### Create New Mythos:
246
+
247
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png)
248
+
249
+ ### Chat with a Meta-Cognitive Entity
250
+
251
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png)
252
+
253
+ ### Ask for a structured JSON output:
254
+
255
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png)
256
+
257
+
258
+ # Prompt Format
259
+
260
+ Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
261
+
262
+ System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
263
+
264
+ This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
265
+
266
+ This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
267
+
268
+ Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
269
+ ```
270
+ <|im_start|>system
271
+ You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
272
+ <|im_start|>user
273
+ Hello, who are you?<|im_end|>
274
+ <|im_start|>assistant
275
+ Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
276
+ ```
277
+
278
+ This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
279
+ `tokenizer.apply_chat_template()` method:
280
+
281
+ ```python
282
+ messages = [
283
+ {"role": "system", "content": "You are Hermes 2."},
284
+ {"role": "user", "content": "Hello, who are you?"}
285
+ ]
286
+ gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
287
+ model.generate(**gen_input)
288
+ ```
289
+
290
+ When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
291
+ that the model continues with an assistant response.
292
+
293
+ To utilize the prompt format without a system prompt, simply leave the line out.
294
+
295
+ ## Prompt Format for Function Calling
296
+
297
+ Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
298
+
299
+ You should use the system role with this message, followed by a function signature json as this example shows here.
300
+ ```
301
+ <|im_start|>system
302
+ You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
303
+ <tool_call>
304
+ {"arguments": <args-dict>, "name": <function-name>}
305
+ </tool_call><|im_end|>
306
+ ```
307
+
308
+ To complete the function call, create a user prompt that follows the above system prompt, like so:
309
+ ```
310
+ <|im_start|>user
311
+ Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
312
+ ```
313
+
314
+ The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
315
+ ```
316
+ <|im_start|>assistant
317
+ <tool_call>
318
+ {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
319
+ </tool_call><|im_end|>
320
+ ```
321
+
322
+ Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
323
+ ```
324
+ <|im_start|>tool
325
+ <tool_response>
326
+ {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
327
+ </tool_response>
328
+ <|im_end|>
329
+ ```
330
+
331
+ The assistant will then read in that data from the function's response, and generate a natural language response:
332
+ ```
333
+ <|im_start|>assistant
334
+ The stock fundamentals data for Tesla (TSLA) are as follows:
335
+ - **Symbol**: TSLA
336
+ - **Company Name**: Tesla, Inc.
337
+ - **Sector**: Consumer Cyclical
338
+ - **Industry**: Auto Manufacturers
339
+ - **Market Capitalization**: $566,160,130,480
340
+ - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
341
+ - **Price-to-Book Ratio (PB Ratio)**: 9.04
342
+ - **Dividend Yield**: N/A
343
+ - **Trailing Earnings Per Share (EPS)**: $4.3
344
+ - **Beta Value of the Stock**: 2.42
345
+ - **52-Week High Price of the Stock**: $299.29
346
+ - **52-Week Low Price of the Stock**: $152.37
347
+
348
+ This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
349
+ ```
350
+
351
+ ## Prompt Format for JSON Mode / Structured Outputs
352
+
353
+ Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
354
+
355
+ Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
356
+
357
+ ```
358
+ <|im_start|>system
359
+ You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
360
+ ```
361
+
362
+ Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
363
+
364
+
365
+ # Benchmarks
366
+
367
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png)
368
+
369
+ ## GPT4All:
370
+ ```
371
+
372
+ |    Task     |Version| Metric |Value |   |Stderr|
373
+
374
+ |-|:|--||--:|--:|
375
+ |agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270|
376
+ |                              |       |acc_norm|0.2441|±  |0.0270|
377
+ |agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189|
378
+ |                              |       |acc_norm|0.3840|±  |0.0191|
379
+ |agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278|
380
+ |                              |       |acc_norm|0.2174|±  |0.0273|
381
+ |agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221|
382
+ |                              |       |acc_norm|0.5373|±  |0.0221|
383
+ |agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289|
384
+ |                              |       |acc_norm|0.6357|±  |0.0294|
385
+ |agieval_sat_en                |      0|acc     |0.7670|±  |0.0295|
386
+ |                              |       |acc_norm|0.7379|±  |0.0307|
387
+ |agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347|
388
+ |                              |       |acc_norm|0.4223|±  |0.0345|
389
+ |agieval_sat_math              |      0|acc     |0.4000|±  |0.0331|
390
+ |                              |       |acc_norm|0.3455|±  |0.0321|
391
+ ```
392
+
393
+ Average: 44.05
394
+
395
+ ## BigBench:
396
+
397
+ ```
398
+
399
+ |                      Task                      |Version|       Metric        |Value |   |Stderr|
400
+ ||:|--:|--:|
401
+ |bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356|
402
+ |bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247|
403
+ |bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290|
404
+ |bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224|
405
+ |                                                |       |exact_str_match      |0.0000|±  |0.0000|
406
+ |bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205|
407
+ |bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153|
408
+ |bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  |0.0288|
409
+ |bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220|
410
+ |bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158|
411
+ |bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102|
412
+ |bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236|
413
+ |bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133|
414
+ |bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345|
415
+ |bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159|
416
+ |bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153|
417
+ |bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118|
418
+ |bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088|
419
+ |bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288|
420
+ ```
421
+
422
+ Average: 44.13
423
+
424
+ **IFEval**: 72.64
425
+
426
+ **MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
427
+
428
+ # Inference Code
429
+
430
+ Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
431
+
432
+ Note: To use function calling, you should see the github repo above.
433
+
434
+ ```python
435
+ # Code to inference Hermes with HF Transformers
436
+ # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
437
+
438
+ import torch
439
+ from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
440
+ import bitsandbytes, flash_attn
441
+
442
+ tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
443
+ model = LlamaForCausalLM.from_pretrained(
444
+ "NousResearch/Hermes-2-Theta-Llama-3-8B",
445
+ torch_dtype=torch.float16,
446
+ device_map="auto",
447
+ load_in_8bit=False,
448
+ load_in_4bit=True,
449
+ use_flash_attention_2=True
450
+ )
451
+
452
+ prompts = [
453
+ """<|im_start|>system
454
+ You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
455
+ <|im_start|>user
456
+ Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
457
+ <|im_start|>assistant""",
458
+ ]
459
+
460
+ for chat in prompts:
461
+ print(chat)
462
+ input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
463
+ generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
464
+ response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
465
+ print(f"Response: {response}")
466
+ ```
467
+
468
+
469
+ ## Inference Code for Function Calling:
470
+
471
+ All code for utilizing, parsing, and building function calling templates is available on our github:
472
+ [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)
473
+
474
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)
475
+
476
+ # Chat Interfaces
477
+
478
+ When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
479
+ In LM-Studio, simply select the ChatML Prefix on the settings side pane:
480
+
481
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)
482
+
483
+
484
+ ## Quantized Versions:
485
+
486
+ GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
487
+
488
+ # How to cite:
489
+
490
+ ```bibtext
491
+ @misc{Hermes-2-Theta-Llama-3-8B,
492
+ url={[https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B][NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B))},
493
+ title={Hermes-2-Theta-Llama-3-8B},
494
+ author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
495
+ }
496
+ ```
497
+
498
+ <!-- original-model-card end -->