Text Generation
Transformers
English
code
chemistry
medical
Inference Endpoints
bartowski committed on
Commit
ad21eb5
1 Parent(s): c873d42

Quant for 5.0

README.md CHANGED
@@ -10,63 +10,56 @@ datasets:
   - argilla/distilabel-capybara-dpo-7k-binarized
  language:
  - en
- quantized_by: bartowski
- pipeline_tag: text-generation
  ---
- ## Exllama v2 Quantizations of NeuralHyperion-2.0-Mistral-7B
-
- Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.
-
- <b>The "main" branch contains only the measurement.json; download one of the other branches for the model (see below).</b>
-
- Each branch contains an individual bits-per-weight quantization; the main branch holds only the measurement.json for further conversions.
-
- Original model: https://huggingface.co/Locutusque/NeuralHyperion-2.0-Mistral-7B
-
- | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
- | ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
- | [8_0](https://huggingface.co/bartowski/NeuralHyperion-2.0-Mistral-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
- | [6_5](https://huggingface.co/bartowski/NeuralHyperion-2.0-Mistral-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
- | [5_0](https://huggingface.co/bartowski/NeuralHyperion-2.0-Mistral-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards. |
- | [4_25](https://huggingface.co/bartowski/NeuralHyperion-2.0-Mistral-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
- | [3_5](https://huggingface.co/bartowski/NeuralHyperion-2.0-Mistral-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
-
- ## Download instructions
-
- With git:
-
- ```shell
- git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/NeuralHyperion-2.0-Mistral-7B-exl2 NeuralHyperion-2.0-Mistral-7B-exl2-6_5
- ```
-
- With the huggingface-hub CLI (credit to TheBloke for the instructions):
-
- ```shell
- pip3 install huggingface-hub
  ```
-
- To download the `main` branch (only useful if you only care about measurement.json) to a folder called `NeuralHyperion-2.0-Mistral-7B-exl2`:
-
- ```shell
- mkdir NeuralHyperion-2.0-Mistral-7B-exl2
- huggingface-cli download bartowski/NeuralHyperion-2.0-Mistral-7B-exl2 --local-dir NeuralHyperion-2.0-Mistral-7B-exl2 --local-dir-use-symlinks False
- ```
-
- To download from a different branch, add the `--revision` parameter:
-
- Linux:
-
- ```shell
- mkdir NeuralHyperion-2.0-Mistral-7B-exl2-6_5
- huggingface-cli download bartowski/NeuralHyperion-2.0-Mistral-7B-exl2 --revision 6_5 --local-dir NeuralHyperion-2.0-Mistral-7B-exl2-6_5 --local-dir-use-symlinks False
- ```
-
- Windows (which sometimes does not handle `_` in folder names well):
-
- ```shell
- mkdir NeuralHyperion-2.0-Mistral-7B-exl2-6.5
- huggingface-cli download bartowski/NeuralHyperion-2.0-Mistral-7B-exl2 --revision 6_5 --local-dir NeuralHyperion-2.0-Mistral-7B-exl2-6.5 --local-dir-use-symlinks False
- ```
-
- Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
 
   - argilla/distilabel-capybara-dpo-7k-binarized
  language:
  - en
  ---
+ # NeuralHyperion-2.0-Mistral-7B
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/9BU30Mh9bOkO2HRBDF8EE.png)
+
+ ## Model Details
+ - **Model Name**: Locutusque/NeuralHyperion-2.0-Mistral-7B
+ - **Base Model**: mistralai/Mistral-7B-v0.1
+ - **Publisher**: Locutusque
+ - **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
+ - **Language**: English, multi-domain.
+ - **License**: Apache-2.0
+
+ ## Model Description
+ `Locutusque/NeuralHyperion-2.0-Mistral-7B` is a language model fine-tuned on the Hyperion-v2.0 and distilabel-capybara datasets for advanced reasoning across scientific domains. It is designed to handle complex inquiries and instructions, leveraging the diverse and rich information in the Hyperion dataset. Its primary use cases include, but are not limited to, complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
+
+ ## Intended Use
+ This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
+ - AI-driven tutoring systems for science, medicine, mathematics, and computer science.
+ - Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
+ - Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
+ - Automation of code generation and understanding of complex programming contexts.
+
+ ## Training Data
+ The `Locutusque/NeuralHyperion-2.0-Mistral-7B` model was fine-tuned on 1,550,000 examples from the Hyperion-v2.0 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks. It was then further fine-tuned on the Capybara preference data using DPO.
+
+ ## Evaluation Results
+ Coming soon.
+
+ ## How to Use
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Locutusque/NeuralHyperion-2.0-Mistral-7B"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # For a text generation task
+ input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
+ input_ids = tokenizer.encode(input_text, return_tensors="pt")
+
+ # Generate a response
+ outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
+
+ ## Known Limitations
+ The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
+
+ ## Licensing Information
+ This model is released under the Apache-2.0 license.
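
The usage example in the new README hard-codes a ChatML-style prompt string; the same string can be assembled from a list of messages. A minimal sketch, assuming the ChatML format shown in the card (the `build_chatml_prompt` helper is ours, not part of the model card):

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML-style prompt like the one in the usage example.

    messages: list of {"role": ..., "content": ...} dicts. The assistant
    header is appended at the end so the model generates the reply after it.
    """
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"


prompt = build_chatml_prompt(
    [{"role": "user", "content": "What are the implications of Einstein's theory of relativity in modern physics?"}]
)
# Produces the same string as the input_text in the README example.
```

The returned string can be passed directly to `tokenizer.encode` as in the README's generation snippet.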
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "Locutusque/Hyperion-2.2.1-Mistral-7B",
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 10000.0,
+   "sliding_window": 4096,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.38.2",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
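
The config values above imply a per-token KV-cache footprint, which is what drives the VRAM-vs-context numbers in the quant table. A back-of-the-envelope sketch (this estimate is our own, derived from the config; it is not stated anywhere in the repo):

```python
# Values taken from the config.json above.
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 8                         # grouped-query attention
num_hidden_layers = 32
head_dim = hidden_size // num_attention_heads   # 128
bytes_per_value = 2                             # fp16/bf16 cache entries

# K and V are cached per layer, per KV head, per token.
kv_bytes_per_token = (
    2 * num_hidden_layers * num_key_value_heads * head_dim * bytes_per_value
)
print(kv_bytes_per_token)  # 131072 bytes, i.e. 128 KiB per token
```

At 128 KiB per token, a 4k context costs roughly 512 MiB of cache, which is consistent in scale with the VRAM deltas between the 4k/16k/32k columns in the branch table.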
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.38.2"
+ }
model.safetensors.index.json ADDED
@@ -0,0 +1,298 @@
+ {
+   "metadata": {
+     "total_size": 14483464192
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00008-of-00008.safetensors",
+     "model.embed_tokens.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.13.input_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.input_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.input_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.input_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.17.input_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.17.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.17.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.17.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
+     "model.layers.18.input_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.input_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.20.input_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.21.input_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.21.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.21.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.21.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
+     "model.layers.22.input_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.input_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.input_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.input_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.26.input_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.26.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.26.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.26.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
+     "model.layers.27.input_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.input_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.28.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.input_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.29.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.3.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.3.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
+     "model.layers.30.input_layernorm.weight": "model-00008-of-00008.safetensors",
+     "model.layers.30.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.30.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.30.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.30.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
+     "model.layers.30.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.30.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.30.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.30.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
+     "model.layers.31.input_layernorm.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.mlp.gate_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.self_attn.k_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.self_attn.o_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.self_attn.q_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.31.self_attn.v_proj.weight": "model-00008-of-00008.safetensors",
+     "model.layers.4.input_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.input_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.input_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.input_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.8.input_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.8.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.8.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.8.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
+     "model.layers.9.input_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
+     "model.norm.weight": "model-00008-of-00008.safetensors"
+   }
+ }
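
The weight map above is what loaders use to decide which shard file to read for each tensor. A small sketch of that lookup over an abbreviated copy of the index (the dictionary below reproduces only two entries from the full file):

```python
import json

# Abbreviated copy of model.safetensors.index.json: only two entries are
# shown here; the real file maps every tensor name to one of eight shards.
index_json = """
{
  "metadata": {"total_size": 14483464192},
  "weight_map": {
    "lm_head.weight": "model-00008-of-00008.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00008.safetensors"
  }
}
"""
index = json.loads(index_json)


def shard_for(tensor_name):
    """Return the shard file that stores a given tensor, per the weight map."""
    return index["weight_map"][tensor_name]


print(shard_for("lm_head.weight"))  # model-00008-of-00008.safetensors
```

Grouping consecutive layers into the same shard, as the full map does, lets a loader open each shard once and read its tensors sequentially.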
original_repo_url.txt ADDED
@@ -0,0 +1 @@
+ https://huggingface.co/Locutusque/NeuralHyperion-2.0-Mistral-7B
output.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee21d43ad009afe974aefb8f830b9b29d02ca330d04f7e574582586992688178
+ size 4727480184
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,49 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "max_length": 512,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_to_multiple_of": null,
+   "pad_token": "</s>",
+   "pad_token_type_id": 0,
+   "padding_side": "left",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "stride": 0,
+   "tokenizer_class": "LlamaTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }