mpasila committed on
Commit
c4e28a4
1 Parent(s): e43b4a7

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. README.md +332 -0
  2. cal_data.safetensors +3 -0
  3. config.json +28 -0
  4. generation_config.json +8 -0
  5. hidden_states.safetensors +3 -0
  6. job_new.json +0 -0
  7. measurement.json +0 -0
  8. out_tensor/lm_head.safetensors +3 -0
  9. out_tensor/model.layers.0.mlp.down_proj.safetensors +3 -0
  10. out_tensor/model.layers.0.mlp.gate_proj.safetensors +3 -0
  11. out_tensor/model.layers.0.mlp.up_proj.safetensors +3 -0
  12. out_tensor/model.layers.0.self_attn.k_proj.safetensors +3 -0
  13. out_tensor/model.layers.0.self_attn.o_proj.safetensors +3 -0
  14. out_tensor/model.layers.0.self_attn.q_proj.safetensors +3 -0
  15. out_tensor/model.layers.0.self_attn.v_proj.safetensors +3 -0
  16. out_tensor/model.layers.1.mlp.down_proj.safetensors +3 -0
  17. out_tensor/model.layers.1.mlp.gate_proj.safetensors +3 -0
  18. out_tensor/model.layers.1.mlp.up_proj.safetensors +3 -0
  19. out_tensor/model.layers.1.self_attn.k_proj.safetensors +3 -0
  20. out_tensor/model.layers.1.self_attn.o_proj.safetensors +3 -0
  21. out_tensor/model.layers.1.self_attn.q_proj.safetensors +3 -0
  22. out_tensor/model.layers.1.self_attn.v_proj.safetensors +3 -0
  23. out_tensor/model.layers.10.mlp.down_proj.safetensors +3 -0
  24. out_tensor/model.layers.10.mlp.gate_proj.safetensors +3 -0
  25. out_tensor/model.layers.10.mlp.up_proj.safetensors +3 -0
  26. out_tensor/model.layers.10.self_attn.k_proj.safetensors +3 -0
  27. out_tensor/model.layers.10.self_attn.o_proj.safetensors +3 -0
  28. out_tensor/model.layers.10.self_attn.q_proj.safetensors +3 -0
  29. out_tensor/model.layers.10.self_attn.v_proj.safetensors +3 -0
  30. out_tensor/model.layers.11.mlp.down_proj.safetensors +3 -0
  31. out_tensor/model.layers.11.mlp.gate_proj.safetensors +3 -0
  32. out_tensor/model.layers.11.mlp.up_proj.safetensors +3 -0
  33. out_tensor/model.layers.11.self_attn.k_proj.safetensors +3 -0
  34. out_tensor/model.layers.11.self_attn.o_proj.safetensors +3 -0
  35. out_tensor/model.layers.11.self_attn.q_proj.safetensors +3 -0
  36. out_tensor/model.layers.11.self_attn.v_proj.safetensors +3 -0
  37. out_tensor/model.layers.12.mlp.down_proj.safetensors +3 -0
  38. out_tensor/model.layers.12.mlp.gate_proj.safetensors +3 -0
  39. out_tensor/model.layers.12.mlp.up_proj.safetensors +3 -0
  40. out_tensor/model.layers.12.self_attn.k_proj.safetensors +3 -0
  41. out_tensor/model.layers.12.self_attn.o_proj.safetensors +3 -0
  42. out_tensor/model.layers.12.self_attn.q_proj.safetensors +3 -0
  43. out_tensor/model.layers.12.self_attn.v_proj.safetensors +3 -0
  44. out_tensor/model.layers.13.mlp.down_proj.safetensors +3 -0
  45. out_tensor/model.layers.13.mlp.gate_proj.safetensors +3 -0
  46. out_tensor/model.layers.13.mlp.up_proj.safetensors +3 -0
  47. out_tensor/model.layers.13.self_attn.k_proj.safetensors +3 -0
  48. out_tensor/model.layers.13.self_attn.o_proj.safetensors +3 -0
  49. out_tensor/model.layers.13.self_attn.q_proj.safetensors +3 -0
  50. out_tensor/model.layers.13.self_attn.v_proj.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,332 @@
+ ---
+ base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
+ tags:
+ - Llama-3
+ - instruct
+ - finetune
+ - chatml
+ - DPO
+ - RLHF
+ - gpt4
+ - synthetic data
+ - distillation
+ - function calling
+ - json mode
+ - axolotl
+ - merges
+ model-index:
+ - name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
+   results: []
+ language:
+ - en
+ datasets:
+ - teknium/OpenHermes-2.5
+ widget:
+ - example_title: Hermes 2 Pro Llama-3 Instruct Merge
+   messages:
+   - role: system
+     content: >-
+       You are a sentient, superintelligent artificial general intelligence, here
+       to teach and assist me.
+   - role: user
+     content: >-
+       Write a short story about Goku discovering kirby has teamed up with Majin
+       Buu to destroy the world.
+ ---
+ # Hermes-2 Θ Llama-3 8B
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png)
+
+ ## Model Description
+
+ Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
+
+ Hermes-2 Θ is a merge of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, further refined with RLHF, combining the strengths of both models.
+
+ ## Example Outputs
+
+ ### Create New Mythos:
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png)
+
+ ### Chat with a Meta-Cognitive Entity
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png)
+
+ ### Ask for a structured JSON output:
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png)
+
+
+ # Prompt Format
+
+ Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
+
+ System prompts allow steerability and interesting new ways to interact with an LLM, guiding the rules, roles, and stylistic choices of the model.
+
+ This format is more complex than Alpaca or ShareGPT: special tokens denote the beginning and end of each turn, along with the role of each turn.
+
+ This format enables OpenAI endpoint compatibility, and anyone familiar with the ChatGPT API will be familiar with it, as it is the same format used by OpenAI.
+
+ Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
+ ```
+ <|im_start|>system
+ You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
+ <|im_start|>user
+ Hello, who are you?<|im_end|>
+ <|im_start|>assistant
+ Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
+ ```
+
+ This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
+ `tokenizer.apply_chat_template()` method:
+
+ ```python
+ messages = [
+     {"role": "system", "content": "You are Hermes 2."},
+     {"role": "user", "content": "Hello, who are you?"}
+ ]
+ # apply_chat_template with return_tensors="pt" returns a tensor of input ids
+ gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
+ model.generate(gen_input)
+ ```
+
+ When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
+ that the model continues with an assistant response.
+
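+ For example, here is a minimal sketch of a full generation round trip with `add_generation_prompt=True` (the tokenizer and model are assumed to be loaded as in the Inference Code section below; the sampling settings are illustrative):
+
+ ```python
+ gen_input = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,  # appends <|im_start|>assistant\n for you
+     return_tensors="pt",
+ ).to(model.device)
+ output_ids = model.generate(gen_input, max_new_tokens=256, do_sample=True, temperature=0.8)
+ # Decode only the newly generated tokens, skipping the prompt
+ print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
+ ```
+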
+ To utilize the prompt format without a system prompt, simply leave the system message out.
+
+ ## Prompt Format for Function Calling
+
+ Our model was trained on specific system prompts and structures for function calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real Python functions.
+
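+ As a rough illustration of what such a builder does (the helper below is a hypothetical sketch, not the repo's actual code), it renders a Python function's signature and docstring into the JSON tool signature that gets embedded in the system prompt:
+
+ ```python
+ import inspect
+ import json
+
+ def build_tool_signature(fn) -> str:
+     """Sketch: turn a Python function into the JSON signature used inside <tools>."""
+     params = {
+         # A real builder would map Python type annotations to JSON Schema types;
+         # "string" is used here purely as a placeholder.
+         name: {"type": "string"}
+         for name in inspect.signature(fn).parameters
+     }
+     sig = {
+         "type": "function",
+         "function": {
+             "name": fn.__name__,
+             "description": f"{fn.__name__}{inspect.signature(fn)} - {inspect.getdoc(fn)}",
+             "parameters": {"type": "object", "properties": params, "required": list(params)},
+         },
+     }
+     return json.dumps(sig)
+ ```
+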
+ You should use the system role with this message, followed by a function signature JSON, as this example shows:
+ ```
+ <|im_start|>system
+ You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
+ <tool_call>
+ {"arguments": <args-dict>, "name": <function-name>}
+ </tool_call><|im_end|>
+ ```
+
+ To complete the function call, create a user prompt that follows the above system prompt, like so:
+ ```
+ <|im_start|>user
+ Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
+ ```
+
+ The model will then generate a tool call, which your inference code must parse and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
+ ```
+ <|im_start|>assistant
+ <tool_call>
+ {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
+ </tool_call><|im_end|>
+ ```
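+
+ For reference, here is a minimal sketch of that parsing step (the regex, the stub tool, and the registry are illustrative assumptions; the official parser lives in the Hermes-Function-Calling repo):
+
+ ```python
+ import json
+ import re
+
+ def get_stock_fundamentals(symbol: str) -> dict:
+     # Stub standing in for the real yfinance-backed implementation.
+     return {"symbol": symbol}
+
+ # Registry mapping tool names, as the model emits them, to Python callables.
+ TOOLS = {"get_stock_fundamentals": get_stock_fundamentals}
+
+ def run_tool_calls(completion: str) -> list:
+     """Parse every <tool_call> block in the model output and invoke the named tool."""
+     raw_calls = re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", completion, re.DOTALL)
+     results = []
+     for raw in raw_calls:
+         call = json.loads(raw)
+         fn = TOOLS[call["name"]]
+         results.append({"name": call["name"], "content": fn(**call["arguments"])})
+     return results
+ ```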
+
+ Once you parse the tool call, call the API to get the return values, and pass them back in as a new role, `tool`, like so:
+ ```
+ <|im_start|>tool
+ <tool_response>
+ {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
+ </tool_response>
+ <|im_end|>
+ ```
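+
+ Continuing the sketch above, the tool results can be rendered into that `tool` turn with a small formatting helper (again illustrative, not the repo's exact code):
+
+ ```python
+ def format_tool_turn(results: list) -> str:
+     """Wrap each tool result in <tool_response> tags inside a single `tool` turn."""
+     blocks = "\n".join(
+         f"<tool_response>\n{json.dumps(result)}\n</tool_response>" for result in results
+     )
+     return f"<|im_start|>tool\n{blocks}\n<|im_end|>"
+ ```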
+
+ The assistant will then read in that data from the function's response and generate a natural language response:
+ ```
+ <|im_start|>assistant
+ The stock fundamentals data for Tesla (TSLA) are as follows:
+ - **Symbol**: TSLA
+ - **Company Name**: Tesla, Inc.
+ - **Sector**: Consumer Cyclical
+ - **Industry**: Auto Manufacturers
+ - **Market Capitalization**: $566,160,130,480
+ - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
+ - **Price-to-Book Ratio (PB Ratio)**: 9.04
+ - **Dividend Yield**: N/A
+ - **Trailing Earnings Per Share (EPS)**: $4.3
+ - **Beta Value of the Stock**: 2.42
+ - **52-Week High Price of the Stock**: $299.29
+ - **52-Week Low Price of the Stock**: $152.37
+
+ This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
+ ```
+
+ ## Prompt Format for JSON Mode / Structured Outputs
+
+ Our model was also trained on a specific system prompt for Structured Outputs, which makes the model respond with **only** a JSON object that conforms to a specific JSON schema.
+
+ Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
+
+ ```
+ <|im_start|>system
+ You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
+ ```
+
+ Given the {schema} you provide, the model will follow that schema when creating its response; all you have to do is give a typical user prompt, and it will respond in JSON.
+
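+ For example, here is a minimal sketch of producing the `{schema}` string from a pydantic model (the `Stock` class is a made-up illustration, and pydantic v2's `model_json_schema()` is assumed):
+
+ ```python
+ import json
+ from pydantic import BaseModel
+
+ # Illustrative schema; any pydantic model works the same way.
+ class Stock(BaseModel):
+     symbol: str
+     pe_ratio: float
+
+ schema = json.dumps(Stock.model_json_schema(), indent=2)
+ system_prompt = (
+     "You are a helpful assistant that answers in JSON. "
+     f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
+ )
+ ```
+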
+ # Benchmarks
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png)
+
+ ## GPT4All:
+ ```
+ |    Task     |Version| Metric |Value |   |Stderr|
+ |-------------|------:|--------|-----:|---|-----:|
+ |arc_challenge|      0|acc     |0.5529|±  |0.0145|
+ |             |       |acc_norm|0.5870|±  |0.0144|
+ |arc_easy     |      0|acc     |0.8371|±  |0.0076|
+ |             |       |acc_norm|0.8144|±  |0.0080|
+ |boolq        |      1|acc     |0.8599|±  |0.0061|
+ |hellaswag    |      0|acc     |0.6133|±  |0.0049|
+ |             |       |acc_norm|0.7989|±  |0.0040|
+ |openbookqa   |      0|acc     |0.3940|±  |0.0219|
+ |             |       |acc_norm|0.4680|±  |0.0223|
+ |piqa         |      0|acc     |0.8063|±  |0.0092|
+ |             |       |acc_norm|0.8156|±  |0.0090|
+ |winogrande   |      0|acc     |0.7372|±  |0.0124|
+ ```
+
+ Average: 72.59
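+
+ As a quick sanity check of that figure (assuming, as the numbers suggest, the average takes `acc_norm` where reported and `acc` otherwise):
+
+ ```python
+ # acc_norm for arc_challenge, arc_easy, hellaswag, openbookqa, piqa;
+ # acc for boolq and winogrande (no acc_norm reported).
+ scores = [0.5870, 0.8144, 0.8599, 0.7989, 0.4680, 0.8156, 0.7372]
+ print(round(100 * sum(scores) / len(scores), 2))  # 72.59
+ ```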
+
+ ## AGIEval:
+ ```
+ |             Task             |Version| Metric |Value |   |Stderr|
+ |------------------------------|------:|--------|-----:|---|-----:|
+ |agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270|
+ |                              |       |acc_norm|0.2441|±  |0.0270|
+ |agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189|
+ |                              |       |acc_norm|0.3840|±  |0.0191|
+ |agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278|
+ |                              |       |acc_norm|0.2174|±  |0.0273|
+ |agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221|
+ |                              |       |acc_norm|0.5373|±  |0.0221|
+ |agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289|
+ |                              |       |acc_norm|0.6357|±  |0.0294|
+ |agieval_sat_en                |      0|acc     |0.7670|±  |0.0295|
+ |                              |       |acc_norm|0.7379|±  |0.0307|
+ |agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347|
+ |                              |       |acc_norm|0.4223|±  |0.0345|
+ |agieval_sat_math              |      0|acc     |0.4000|±  |0.0331|
+ |                              |       |acc_norm|0.3455|±  |0.0321|
+ ```
+
+ Average: 44.05
+
+ ## BigBench:
+
+ ```
+ |                      Task                      |Version|       Metric        |Value |   |Stderr|
+ |------------------------------------------------|------:|---------------------|-----:|---|-----:|
+ |bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356|
+ |bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247|
+ |bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290|
+ |bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224|
+ |                                                |       |exact_str_match      |0.0000|±  |0.0000|
+ |bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205|
+ |bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153|
+ |bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  |0.0288|
+ |bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220|
+ |bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158|
+ |bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102|
+ |bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236|
+ |bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133|
+ |bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345|
+ |bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159|
+ |bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153|
+ |bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118|
+ |bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088|
+ |bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288|
+ ```
+
+ Average: 44.13
+
+ **IFEval**: 72.64
+
+ **MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
+
+ # Inference Code
+
+ Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, it will require around 5GB of VRAM).
+
+ Note: To use function calling, see the GitHub repo above.
+
+ ```python
+ # Code to run inference on Hermes with HF Transformers
+ # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
+
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
+ import bitsandbytes, flash_attn
+
+ tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
+ model = LlamaForCausalLM.from_pretrained(
+     "NousResearch/Hermes-2-Theta-Llama-3-8B",
+     torch_dtype=torch.float16,
+     device_map="auto",
+     load_in_8bit=False,
+     load_in_4bit=True,
+     use_flash_attention_2=True
+ )
+
+ prompts = [
+     """<|im_start|>system
+ You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
+ <|im_start|>user
+ Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
+ <|im_start|>assistant""",
+ ]
+
+ for chat in prompts:
+     print(chat)
+     input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
+     generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
+     response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
+     print(f"Response: {response}")
+ ```
+
+
+ ## Inference Code for Function Calling:
+
+ All code for utilizing, parsing, and building function calling templates is available on our GitHub:
+ [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)
+
+ # Chat Interfaces
+
+ When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Θ. It is a GUI application that utilizes GGUF models with a llama.cpp backend, provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. Note that it does not support function calling; for that, use our GitHub repo.
+ In LM Studio, simply select the ChatML Prefix on the settings side pane:
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)
+
+
+ ## Quantized Versions:
+
+ GGUF versions are available here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
+
+ # How to cite:
+
+ ```bibtex
+ @misc{Hermes-2-Theta-Llama-3-8B,
+   url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
+   title={Hermes-2-Theta-Llama-3-8B},
+   author={Teknium and Charles Goddard and interstellarninja and theemozilla and karan4d and huemin_art}
+ }
+ ```
cal_data.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:157b4c51f1460379635978ad921b3936bbcdda291ddc78cb1959eb49ea368223
+ size 1884248
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "goddardexperiments/HermesProInstructV10",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 128000,
+   "eos_token_id": 128003,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 8192,
+   "model_type": "llama",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 500000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.40.0.dev0",
+   "use_cache": true,
+   "vocab_size": 128256
+ }
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 128000,
+   "do_sample": true,
+   "eos_token_id": 128003,
+   "transformers_version": "4.40.0.dev0",
+   "use_cache": false
+ }
hidden_states.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:feb52ceb9a5e5de1233601eebb6162e066a324a20c67a177d4775531d17d6452
+ size 1929389952
job_new.json ADDED
The diff for this file is too large to render. See raw diff
 
measurement.json ADDED
The diff for this file is too large to render. See raw diff
 
out_tensor/lm_head.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d6fe671bef56a379ec4569b48ebf2e2169e10032660a6ac26113b1b721f8a03
+ size 416592504
out_tensor/model.layers.0.mlp.down_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a35d9915ea949ccfa84e8a7834a473015e371794283b43158d339d21765e95f
+ size 45118756
out_tensor/model.layers.0.mlp.gate_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f3d2d0180afe39d6a9f8a1719d6cb8e34208e3b342cd2e6b048485fcb1922fa
+ size 44286680
out_tensor/model.layers.0.mlp.up_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6195553bac260caa18d371ad86b82a5868b93ef649729b1f2e40eba969e3795
+ size 44286664
out_tensor/model.layers.0.self_attn.k_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c56e0e82e83ef9008695753a515b49f8360089bd9569910376c4d4813f2a0563
+ size 3228960
out_tensor/model.layers.0.self_attn.o_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1efa12797db6ff4e7b37f13ac0e92bea7bf859340c35ce0aaf69c2498733d95
+ size 12862760
out_tensor/model.layers.0.self_attn.q_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b322c852dc8a9fa7fb68679c5fd877130336370a3c1c7109a5cc49975ee38055
+ size 12862760
out_tensor/model.layers.0.self_attn.v_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f1dd84b9cefa5c546e75d11017556b58dd6c07f45643fa0ba204287441b9d34
+ size 4277536
out_tensor/model.layers.1.mlp.down_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3903be3f2ab4e7aa1446ba1330b7c14868e2b2808dd8794ed79799f9aadd8e4c
+ size 25619608
out_tensor/model.layers.1.mlp.gate_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55f71ebdda11a108d5b8e6313f38e4fd2d84ccd047067adee2634f90a90c75a3
+ size 23700760
out_tensor/model.layers.1.mlp.up_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b682a0dc172140f1e2d06c6d49196fd14778f5500ea646086dde0f843781a8f6
+ size 24790288
out_tensor/model.layers.1.self_attn.k_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0094769c6798a88a926fec19502faf4f850322de376554283e2f5d4da8a9fcd8
+ size 3228960
out_tensor/model.layers.1.self_attn.o_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cace0fdfd258a964959bdc1eba81223c73f7a318f23550948d71193eb2df15b5
+ size 12862760
out_tensor/model.layers.1.self_attn.q_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb4a12d78913bb8f98db6bab4c2d9de9d8facaa7af30d30cff6b57843d5973c3
+ size 12862760
out_tensor/model.layers.1.self_attn.v_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e866e7b90a32f284ac2f30247b8549269d23efd04a44af586fd76f52c02fcf95
+ size 4277536
out_tensor/model.layers.10.mlp.down_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:022aeef99f2b95cbf375d9f17da4512236c92a2a61c560fbb295e370b7e24260
+ size 32582816
out_tensor/model.layers.10.mlp.gate_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:866049e4e4ae8990a3d7e8497cae72a6a859b69b5e294555eb4492a9e37db7f3
+ size 31040800
out_tensor/model.layers.10.mlp.up_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67bddbdfb64cde5ebce4dd6c833bde799aeb4626d9899f5b09f9d0dc78c73085
+ size 32130320
out_tensor/model.layers.10.self_attn.k_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b31e559a17a6c9009956cb3f02bfe126fd3683d32a32ed623dfc7e279bc16d2e
+ size 2757920
out_tensor/model.layers.10.self_attn.o_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17b73237fdacde9db10cfd6f79d966d7c687e5c5f2ce28fa7adc7b81547f3b0f
+ size 10978600
out_tensor/model.layers.10.self_attn.q_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13b8484ee1cad33f5dc8f2b9fa4b817c84bc4736d8f46c9ab5a0742fbe4e48ad
+ size 10978600
out_tensor/model.layers.10.self_attn.v_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6dd8f6cbbcc4cd6acb3bcdd0bf3ce67d0cff5b79361ecb5ea11f95aae8fcd92c
+ size 3228960
out_tensor/model.layers.11.mlp.down_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa77f90b722c886b650c6c65705b84e29e698a8f2bd07cb348f1748c334e1f4d
+ size 32582816
out_tensor/model.layers.11.mlp.gate_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a995140f788187a1c8586483e17f457b2e2215242fbc1a7f676939330726be9
+ size 31040800
out_tensor/model.layers.11.mlp.up_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8edacee44104b406431b2f3fcd50889f0d4bda8cf82bae4cb5b4062235ef2ff
+ size 32130320
out_tensor/model.layers.11.self_attn.k_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c3aab641d24f6eeab351b19c4a6c2976f0d0f9d3c0a2e05207305262a94471f
+ size 2720480
out_tensor/model.layers.11.self_attn.o_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:530b1b37f7fb6cb3e4ad5b329cf549c990193fae98b2c15413ede69522d5c32c
+ size 10830568
out_tensor/model.layers.11.self_attn.q_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bafdcb530f71624ccfaca80523620a11cfe4a774334c555ada3e4e9011c4cc27
+ size 10830568
out_tensor/model.layers.11.self_attn.v_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6fa1dc8c250c33c40102c218552cdf97f1cb65e98625ae2a5b4980fc39ca8c40
+ size 3179232
out_tensor/model.layers.12.mlp.down_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fa01f559f018729f4d40dfab2ad00ecbb9cc8aa4fb9c9122ef629c3a9978456
+ size 32582816
out_tensor/model.layers.12.mlp.gate_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e45b12dedcc3157492c8e902a41aa360b84ec36c99f0528ea0a78ae951676945
+ size 31040800
out_tensor/model.layers.12.mlp.up_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7638bfc68548aec12e41511dc2e1154e97b23e44c3f9d67780229c8c955e012
+ size 32130320
out_tensor/model.layers.12.self_attn.k_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d6307e54e60568c592786b9da5dd29a0ea75c41b9bfb7526c123218dcafa192
+ size 2757920
out_tensor/model.layers.12.self_attn.o_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17cc6393a89122f4d17e98be03c4b40ef5c71788533b15272dea21bb5c9c40f6
+ size 10978600
out_tensor/model.layers.12.self_attn.q_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0fc9211985f3a0800539550f8bfbdb16522a5fbc1733c513eb4c7203091cc397
+ size 10978600
out_tensor/model.layers.12.self_attn.v_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3374cd1500498279b3079f67d13c6f0aa2872bfe54b5a908c7c17fd3946c83ef
+ size 3228960
out_tensor/model.layers.13.mlp.down_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc732792bec9c62ad4ffe3469deccd779034c0a2e5e9a2cade442cf0f55cf678
+ size 38941996
out_tensor/model.layers.13.mlp.gate_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:289ebeb978317bdb7a3e3baae7d8b4897727ce0a2aa9ead08ee383a0b43de24b
+ size 37864152
out_tensor/model.layers.13.mlp.up_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff4e30d891b48d76597032257136cc3771d0da81f7b8931945c4a4397da8b50f
+ size 38781648
out_tensor/model.layers.13.self_attn.k_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:407254addd697b0a7461ee311afb11b2e7e8401020f49590dfd8a11121762d0e
+ size 2720480
out_tensor/model.layers.13.self_attn.o_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42f0d6ac63f2327ff164ef18307a436321a5b404da9b221d4665c695e8bcc3ea
+ size 10830568
out_tensor/model.layers.13.self_attn.q_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77f17b3d529cbd3fa7f9794707cf881a23757c64dc206271db5679899ef56f8f
+ size 10830568
out_tensor/model.layers.13.self_attn.v_proj.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:632d3311a8adcd4330504923b4b879758d91a74dfb9a4b07c512f1247acdbb43
+ size 3179232