bartowski committed
Commit
07e300f
1 Parent(s): 0836b57

Quant for 5.0

README.md CHANGED
@@ -9,63 +9,85 @@ datasets:
  - Locutusque/hyperion-v2.0
  language:
  - en
- quantized_by: bartowski
- pipeline_tag: text-generation
  ---
-
- ## Exllama v2 Quantizations of Hyperion-2.0-Mistral-7B
-
- Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.
-
- <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>
-
- Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
-
- Original model: https://huggingface.co/Locutusque/Hyperion-2.0-Mistral-7B
-
- | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
- | ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
- | [8_0](https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
- | [6_5](https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
- | [5_0](https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
- | [4_25](https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
- | [3_5](https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
-
- ## Download instructions
-
- With git:
-
- ```shell
- git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-exl2 Hyperion-2.0-Mistral-7B-exl2-6_5
  ```

- With huggingface hub (credit to TheBloke for instructions):

- ```shell
- pip3 install huggingface-hub
- ```

- To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `Hyperion-2.0-Mistral-7B-exl2`:
-
- ```shell
- mkdir Hyperion-2.0-Mistral-7B-exl2
- huggingface-cli download bartowski/Hyperion-2.0-Mistral-7B-exl2 --local-dir Hyperion-2.0-Mistral-7B-exl2 --local-dir-use-symlinks False
- ```
-
- To download from a different branch, add the `--revision` parameter:
-
- Linux:
-
- ```shell
- mkdir Hyperion-2.0-Mistral-7B-exl2-6_5
- huggingface-cli download bartowski/Hyperion-2.0-Mistral-7B-exl2 --revision 6_5 --local-dir Hyperion-2.0-Mistral-7B-exl2-6_5 --local-dir-use-symlinks False
- ```
-
- Windows (which apparently doesn't like _ in folders sometimes?):
-
- ```shell
- mkdir Hyperion-2.0-Mistral-7B-exl2-6.5
- huggingface-cli download bartowski/Hyperion-2.0-Mistral-7B-exl2 --revision 6_5 --local-dir Hyperion-2.0-Mistral-7B-exl2-6.5 --local-dir-use-symlinks False
- ```

- Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
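For reference, the same per-branch download can be scripted with the `huggingface_hub` Python API instead of the CLI. This is an illustrative sketch rather than part of the original instructions; the target folder name is arbitrary:

```python
# Sketch: fetch one exl2 branch (bits-per-weight variant) via the huggingface_hub API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Hyperion-2.0-Mistral-7B-exl2",
    revision="6_5",                                 # branch name from the table above
    local_dir="Hyperion-2.0-Mistral-7B-exl2-6_5",   # arbitrary output folder
    local_dir_use_symlinks=False,
)
```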
 
+ # Hyperion-2.0-Mistral-7B
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/9BU30Mh9bOkO2HRBDF8EE.png)
+
+ ## Model Details
+ - **Model Name**: Locutusque/Hyperion-2.0-Mistral-7B
+ - **Base Model**: mistralai/Mistral-7B-v0.1
+ - **Publisher**: Locutusque
+ - **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
+ - **Language**: English (multi-domain).
+ - **License**: Apache-2.0
+
+ ## Model Description
+ `Locutusque/Hyperion-2.0-Mistral-7B` is a state-of-the-art language model fine-tuned on the Hyperion-v2.0 dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
+
+ ## Intended Use
+ This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
+ - AI-driven tutoring systems for science, medicine, mathematics, and computer science.
+ - Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
+ - Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
+ - Automation in code generation and understanding of complex programming contexts.
+
+ ## Training Data
+ The `Locutusque/Hyperion-2.0-Mistral-7B` model was fine-tuned on the Hyperion-v2.0 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks.
+
+ ## Evaluation Results
+ 0-shot AGIEval
+ | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
+ |---------------------------------|-------|------|-----:|--------|-----:|---|-----:|
+ |agieval_nous |N/A |none | 0|acc |0.3602|± |0.0929|
+ | | |none | 0|acc_norm|0.3342|± |0.0764|
+ | - agieval_aqua_rat | 1|none | 0|acc |0.2402|± |0.0269|
+ | | |none | 0|acc_norm|0.2441|± |0.0270|
+ | - agieval_logiqa_en | 1|none | 0|acc |0.2965|± |0.0179|
+ | | |none | 0|acc_norm|0.3226|± |0.0183|
+ | - agieval_lsat_ar | 1|none | 0|acc |0.2348|± |0.0280|
+ | | |none | 0|acc_norm|0.2000|± |0.0264|
+ | - agieval_lsat_lr | 1|none | 0|acc |0.3667|± |0.0214|
+ | | |none | 0|acc_norm|0.3373|± |0.0210|
+ | - agieval_lsat_rc | 1|none | 0|acc |0.4981|± |0.0305|
+ | | |none | 0|acc_norm|0.4089|± |0.0300|
+ | - agieval_sat_en | 1|none | 0|acc |0.6359|± |0.0336|
+ | | |none | 0|acc_norm|0.5777|± |0.0345|
+ | - agieval_sat_en_without_passage| 1|none | 0|acc |0.3883|± |0.0340|
+ | | |none | 0|acc_norm|0.3544|± |0.0334|
+ | - agieval_sat_math | 1|none | 0|acc |0.3500|± |0.0322|
+ | | |none | 0|acc_norm|0.2682|± |0.0299|
+
+ | Groups |Version|Filter|n-shot| Metric |Value | |Stderr|
+ |------------|-------|------|-----:|--------|-----:|---|-----:|
+ |agieval_nous|N/A |none | 0|acc |0.3602|± |0.0929|
+ | | |none | 0|acc_norm|0.3342|± |0.0764|
+
+ 5-shot AGIEval coming soon.
+
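These tables are in the output format of EleutherAI's lm-evaluation-harness. As an editorial aside, a comparable 0-shot run could be launched roughly as shown below; the exact flags and task names depend on the installed harness version, so treat this as a sketch rather than the card author's exact command:

```shell
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=Locutusque/Hyperion-2.0-Mistral-7B,dtype=bfloat16 \
  --tasks agieval_nous \
  --num_fewshot 0 \
  --batch_size 8
```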
+ ## How to Use
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Locutusque/Hyperion-2.0-Mistral-7B"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # For a text generation task (ChatML-style prompt)
+ input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
+ input_ids = tokenizer.encode(input_text, return_tensors="pt")
+
+ # Generate a response (do_sample=True so temperature/top_p/top_k take effect)
+ outputs = model.generate(input_ids, max_length=200, do_sample=True, num_return_sequences=1, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
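Since this repository hosts the exl2 quantizations rather than the full-precision weights, it may also help to see how a downloaded branch can be run with the exllamav2 Python API. The sketch below follows the example scripts shipped with ExLlamaV2 around v0.0.15; the class and method names are assumptions that can differ between versions, and the folder path comes from the download instructions above:

```python
# Rough sketch (not from the original card): run a downloaded exl2 branch with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Hyperion-2.0-Mistral-7B-exl2-6_5"  # folder from the download step
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                 # split layers across the available GPU(s)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_k = 40
settings.top_p = 0.95
settings.token_repetition_penalty = 1.1

prompt = ("<|im_start|>user\nWhat are the implications of Einstein's theory of "
          "relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n")
print(generator.generate_simple(prompt, settings, 200))
```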
+ ## Known Limitations
+
+ The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
+
+ This model is also very compliant: it will respond to any request. Please make sure to build upon this model with DPO if you plan on using it for enterprise-level deployment.
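As an editorial illustration of that recommendation, a minimal DPO pass with Hugging Face `trl` could look roughly like the sketch below. The dataset id is a placeholder and the `DPOTrainer` signature varies between `trl` versions, so this is an assumption-laden outline rather than a recipe from the card:

```python
# Illustrative only: minimal DPO fine-tuning sketch with trl (placeholder dataset id).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "Locutusque/Hyperion-2.0-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Any preference dataset with "prompt", "chosen" and "rejected" columns (placeholder id).
dataset = load_dataset("your-org/your-preference-dataset", split="train")

args = TrainingArguments(output_dir="hyperion-2.0-dpo", per_device_train_batch_size=1,
                         gradient_accumulation_steps=8, num_train_epochs=1, bf16=True)

trainer = DPOTrainer(model=model, ref_model=None, beta=0.1, args=args,
                     train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```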
+ ## Licensing Information
+
+ This model is released under the Apache-2.0 license.
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "mistralai/Mistral-7B-v0.1",
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 10000.0,
+   "sliding_window": 4096,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.38.2",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
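A brief editorial aside on what these values imply: 32 attention heads over 8 key/value heads means 4:1 grouped-query attention, and the 4096 sliding window with 32768 max positions is the standard Mistral-7B attention setup. The fields can be inspected without downloading the weights:

```python
# Illustrative check of the architecture fields above via transformers.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Locutusque/Hyperion-2.0-Mistral-7B")
head_dim = cfg.hidden_size // cfg.num_attention_heads            # 4096 / 32 = 128
gqa_groups = cfg.num_attention_heads // cfg.num_key_value_heads  # 32 / 8 = 4 query heads per KV head
print(head_dim, gqa_groups, cfg.sliding_window, cfg.max_position_embeddings)
```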
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.38.2"
+ }
model.safetensors.index.json ADDED
@@ -0,0 +1,298 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 14483464192
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00008-of-00008.safetensors",
7
+ "model.embed_tokens.weight": "model-00001-of-00008.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00008.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
13
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
14
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
15
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
16
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
17
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00008.safetensors",
18
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
19
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
20
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
21
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
22
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
23
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
24
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
25
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
26
+ "model.layers.10.input_layernorm.weight": "model-00003-of-00008.safetensors",
27
+ "model.layers.10.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
28
+ "model.layers.10.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
29
+ "model.layers.10.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
30
+ "model.layers.10.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
31
+ "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
32
+ "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
33
+ "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
34
+ "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
35
+ "model.layers.11.input_layernorm.weight": "model-00003-of-00008.safetensors",
36
+ "model.layers.11.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
37
+ "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
38
+ "model.layers.11.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
39
+ "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
40
+ "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
41
+ "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
42
+ "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
43
+ "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
44
+ "model.layers.12.input_layernorm.weight": "model-00004-of-00008.safetensors",
45
+ "model.layers.12.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
46
+ "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
47
+ "model.layers.12.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
48
+ "model.layers.12.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
49
+ "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
50
+ "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
51
+ "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
52
+ "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
53
+ "model.layers.13.input_layernorm.weight": "model-00004-of-00008.safetensors",
54
+ "model.layers.13.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
55
+ "model.layers.13.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
56
+ "model.layers.13.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
57
+ "model.layers.13.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
58
+ "model.layers.13.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
59
+ "model.layers.13.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
60
+ "model.layers.13.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
61
+ "model.layers.13.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
62
+ "model.layers.14.input_layernorm.weight": "model-00004-of-00008.safetensors",
63
+ "model.layers.14.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
64
+ "model.layers.14.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
65
+ "model.layers.14.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
66
+ "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
67
+ "model.layers.14.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
68
+ "model.layers.14.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
69
+ "model.layers.14.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
70
+ "model.layers.14.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
71
+ "model.layers.15.input_layernorm.weight": "model-00004-of-00008.safetensors",
72
+ "model.layers.15.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
73
+ "model.layers.15.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
74
+ "model.layers.15.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
75
+ "model.layers.15.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
76
+ "model.layers.15.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
77
+ "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
78
+ "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
79
+ "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
80
+ "model.layers.16.input_layernorm.weight": "model-00004-of-00008.safetensors",
81
+ "model.layers.16.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
82
+ "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
83
+ "model.layers.16.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
84
+ "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
85
+ "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
86
+ "model.layers.16.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
87
+ "model.layers.16.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
88
+ "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
89
+ "model.layers.17.input_layernorm.weight": "model-00005-of-00008.safetensors",
90
+ "model.layers.17.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
91
+ "model.layers.17.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
92
+ "model.layers.17.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
93
+ "model.layers.17.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
94
+ "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
95
+ "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
96
+ "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
97
+ "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
98
+ "model.layers.18.input_layernorm.weight": "model-00005-of-00008.safetensors",
99
+ "model.layers.18.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
100
+ "model.layers.18.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
101
+ "model.layers.18.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
102
+ "model.layers.18.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
103
+ "model.layers.18.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
104
+ "model.layers.18.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
105
+ "model.layers.18.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
106
+ "model.layers.18.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
107
+ "model.layers.19.input_layernorm.weight": "model-00005-of-00008.safetensors",
108
+ "model.layers.19.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
109
+ "model.layers.19.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
110
+ "model.layers.19.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
111
+ "model.layers.19.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
112
+ "model.layers.19.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
113
+ "model.layers.19.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
114
+ "model.layers.19.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
115
+ "model.layers.19.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
116
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00008.safetensors",
117
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
118
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
119
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
120
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
121
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
122
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
123
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
124
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
125
+ "model.layers.20.input_layernorm.weight": "model-00005-of-00008.safetensors",
126
+ "model.layers.20.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
127
+ "model.layers.20.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
128
+ "model.layers.20.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
129
+ "model.layers.20.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
130
+ "model.layers.20.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
131
+ "model.layers.20.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
132
+ "model.layers.20.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
133
+ "model.layers.20.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
134
+ "model.layers.21.input_layernorm.weight": "model-00006-of-00008.safetensors",
135
+ "model.layers.21.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
136
+ "model.layers.21.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
137
+ "model.layers.21.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
138
+ "model.layers.21.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
139
+ "model.layers.21.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
140
+ "model.layers.21.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
141
+ "model.layers.21.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
142
+ "model.layers.21.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
143
+ "model.layers.22.input_layernorm.weight": "model-00006-of-00008.safetensors",
144
+ "model.layers.22.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
145
+ "model.layers.22.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
146
+ "model.layers.22.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
147
+ "model.layers.22.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
148
+ "model.layers.22.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
149
+ "model.layers.22.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
150
+ "model.layers.22.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
151
+ "model.layers.22.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
152
+ "model.layers.23.input_layernorm.weight": "model-00006-of-00008.safetensors",
153
+ "model.layers.23.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
154
+ "model.layers.23.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
155
+ "model.layers.23.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
156
+ "model.layers.23.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
157
+ "model.layers.23.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
158
+ "model.layers.23.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
159
+ "model.layers.23.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
160
+ "model.layers.23.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
161
+ "model.layers.24.input_layernorm.weight": "model-00006-of-00008.safetensors",
162
+ "model.layers.24.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
163
+ "model.layers.24.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
164
+ "model.layers.24.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
165
+ "model.layers.24.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
166
+ "model.layers.24.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
167
+ "model.layers.24.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
168
+ "model.layers.24.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
169
+ "model.layers.24.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
170
+ "model.layers.25.input_layernorm.weight": "model-00006-of-00008.safetensors",
171
+ "model.layers.25.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
172
+ "model.layers.25.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
173
+ "model.layers.25.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
174
+ "model.layers.25.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
175
+ "model.layers.25.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
176
+ "model.layers.25.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
177
+ "model.layers.25.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
178
+ "model.layers.25.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
179
+ "model.layers.26.input_layernorm.weight": "model-00007-of-00008.safetensors",
180
+ "model.layers.26.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
181
+ "model.layers.26.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
182
+ "model.layers.26.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
183
+ "model.layers.26.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
184
+ "model.layers.26.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
185
+ "model.layers.26.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
186
+ "model.layers.26.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
187
+ "model.layers.26.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
188
+ "model.layers.27.input_layernorm.weight": "model-00007-of-00008.safetensors",
189
+ "model.layers.27.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
190
+ "model.layers.27.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
191
+ "model.layers.27.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
192
+ "model.layers.27.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
193
+ "model.layers.27.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
194
+ "model.layers.27.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
195
+ "model.layers.27.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
196
+ "model.layers.27.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
197
+ "model.layers.28.input_layernorm.weight": "model-00007-of-00008.safetensors",
198
+ "model.layers.28.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
199
+ "model.layers.28.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
200
+ "model.layers.28.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
201
+ "model.layers.28.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
202
+ "model.layers.28.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
203
+ "model.layers.28.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
204
+ "model.layers.28.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
205
+ "model.layers.28.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
206
+ "model.layers.29.input_layernorm.weight": "model-00007-of-00008.safetensors",
207
+ "model.layers.29.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
208
+ "model.layers.29.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
209
+ "model.layers.29.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
210
+ "model.layers.29.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
211
+ "model.layers.29.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
212
+ "model.layers.29.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
213
+ "model.layers.29.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
214
+ "model.layers.29.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
215
+ "model.layers.3.input_layernorm.weight": "model-00002-of-00008.safetensors",
216
+ "model.layers.3.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
217
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
218
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
219
+ "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
220
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
221
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
222
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
223
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
224
+ "model.layers.30.input_layernorm.weight": "model-00008-of-00008.safetensors",
225
+ "model.layers.30.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
226
+ "model.layers.30.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
227
+ "model.layers.30.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
228
+ "model.layers.30.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
229
+ "model.layers.30.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
230
+ "model.layers.30.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
231
+ "model.layers.30.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
232
+ "model.layers.30.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
233
+ "model.layers.31.input_layernorm.weight": "model-00008-of-00008.safetensors",
234
+ "model.layers.31.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
235
+ "model.layers.31.mlp.gate_proj.weight": "model-00008-of-00008.safetensors",
236
+ "model.layers.31.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
237
+ "model.layers.31.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
238
+ "model.layers.31.self_attn.k_proj.weight": "model-00008-of-00008.safetensors",
239
+ "model.layers.31.self_attn.o_proj.weight": "model-00008-of-00008.safetensors",
240
+ "model.layers.31.self_attn.q_proj.weight": "model-00008-of-00008.safetensors",
241
+ "model.layers.31.self_attn.v_proj.weight": "model-00008-of-00008.safetensors",
242
+ "model.layers.4.input_layernorm.weight": "model-00002-of-00008.safetensors",
243
+ "model.layers.4.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
244
+ "model.layers.4.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
245
+ "model.layers.4.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
246
+ "model.layers.4.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
247
+ "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
248
+ "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
249
+ "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
250
+ "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
251
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00008.safetensors",
252
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
253
+ "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
254
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
255
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
256
+ "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
257
+ "model.layers.5.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
258
+ "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
259
+ "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
260
+ "model.layers.6.input_layernorm.weight": "model-00002-of-00008.safetensors",
261
+ "model.layers.6.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
262
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
263
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
264
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
265
+ "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
266
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
267
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
268
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
269
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00008.safetensors",
270
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
271
+ "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
272
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
273
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
274
+ "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
275
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
276
+ "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
277
+ "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
278
+ "model.layers.8.input_layernorm.weight": "model-00003-of-00008.safetensors",
279
+ "model.layers.8.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
280
+ "model.layers.8.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
281
+ "model.layers.8.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
282
+ "model.layers.8.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
283
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
284
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
285
+ "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
286
+ "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
287
+ "model.layers.9.input_layernorm.weight": "model-00003-of-00008.safetensors",
288
+ "model.layers.9.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
289
+ "model.layers.9.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
290
+ "model.layers.9.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
291
+ "model.layers.9.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
292
+ "model.layers.9.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
293
+ "model.layers.9.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
294
+ "model.layers.9.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
295
+ "model.layers.9.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
296
+ "model.norm.weight": "model-00008-of-00008.safetensors"
297
+ }
298
+ }
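As a quick editorial sanity check on the metadata above: the index reports 14,483,464,192 bytes across the eight bfloat16 shards, i.e. roughly 7.24 billion parameters, consistent with a Mistral-7B-class model.

```python
# Editorial sanity check: bfloat16 stores 2 bytes per parameter.
total_size_bytes = 14_483_464_192
print(total_size_bytes / 2 / 1e9)  # ≈ 7.24 (billion parameters)
```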
original_repo_url.txt ADDED
@@ -0,0 +1 @@
+ https://huggingface.co/Locutusque/Hyperion-2.0-Mistral-7B
output.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ee4a7f7536f0caab982cf499ca438834f49c8fdfced851719bafa384ef761ae
+ size 4727327172
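The three lines above are a Git LFS pointer rather than the weights themselves: the `oid` is the SHA-256 of the actual ~4.7 GB `output.safetensors`. After a full (non-pointer) download, the file can be verified against it, for example:

```shell
# Editorial example: compare the local hash with the oid in the LFS pointer above.
sha256sum output.safetensors
# expected: 0ee4a7f7536f0caab982cf499ca438834f49c8fdfced851719bafa384ef761ae
```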
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
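Two editorial observations on this tokenizer configuration: only `<unk>`, `<s>` and `</s>` are registered as special tokens, so the ChatML markers used in the usage example (`<|im_start|>`, `<|im_end|>`) are tokenized as ordinary text, and no `chat_template` is defined, so prompts must be formatted manually as shown earlier. A small check, assuming the full-precision repo is used:

```python
# Illustrative: the ChatML markers are split into ordinary sub-tokens by this tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Locutusque/Hyperion-2.0-Mistral-7B")
ids = tok("<|im_start|>user\nHello<|im_end|>")["input_ids"]
print(ids[0] == tok.bos_token_id)          # True: add_bos_token is enabled
print(tok.convert_ids_to_tokens(ids)[:8])  # no single <|im_start|> token appears
```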