LeroyDyer committed
Commit 69273b9
1 Parent(s): 1771ca3

Update README.md

Files changed (1):
  1. README.md +125 -14
README.md CHANGED
@@ -6,9 +6,30 @@ library_name: transformers
  tags:
  - mergekit
  - merge
-
+ - 128k_Context
+ previous_Merges:
+ - rvv-karma/BASH-Coder-Mistral-7B
+ - Locutusque/Hercules-3.1-Mistral-7B
+ - KoboldAI/Mistral-7B-Erebus-v3 - NSFW
+ - Locutusque/Hyperion-2.1-Mistral-7B
+ - Severian/Nexus-IKM-Mistral-7B-Pytorch
+ - NousResearch/Hermes-2-Pro-Mistral-7B
+ - mistralai/Mistral-7B-Instruct-v0.2
+ - Nitral-AI/ProdigyXBioMistral_7B
+ - Nitral-AI/Infinite-Mika-7b
+ - Nous-Yarn-Mistral-7b-128k
+ - yanismiraoui/Yarn-Mistral-7b-128k-sharded
+ license: apache-2.0
+ language:
+ - en
+ metrics:
+ - accuracy
+ - code_eval
+ - bleu
+ - brier_score
  ---
- # MODEL_NAME
+
+ # LeroyDyer/Mixtral_AI_128K_B_7b

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
@@ -19,24 +40,114 @@ This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge

  ### Models Merged

+ By re-aligning the LLM with the base model (it does not seem to merge cleanly with the original Mistral model), I have discovered that it pays to make a new base model first: each model you merge should then be merged with YOUR NEW base model. Keep these individual merges; they are all good merge candidates for the super model. This also makes it easier to trace offensive or corrupt responses back to whichever misaligned model introduced them. A minimal sketch of this merge-against-the-new-base workflow follows.
+
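+ The sketch below is illustrative only: the candidate model name and the weights are placeholders, and it assumes mergekit's `mergekit-yaml` command-line entry point is installed.
+
+ ```python
+ # Sketch: merge candidates into the new base one at a time, so a bad
+ # merge can be traced back to the model that introduced it.
+ import subprocess
+ import yaml
+
+ candidates = ["some-org/candidate-mistral-7b"]  # hypothetical candidate list
+ for i, candidate in enumerate(candidates):
+     config = {
+         "models": [
+             {"model": "LeroyDyer/Mixtral_Base", "parameters": {"weight": 0.7}},
+             {"model": candidate, "parameters": {"weight": 0.3}},
+         ],
+         "merge_method": "linear",
+         "dtype": "float16",
+     }
+     with open("merge.yml", "w") as f:
+         yaml.safe_dump(config, f)
+     # mergekit-yaml <config> <output_dir> writes the merged model to disk
+     subprocess.run(["mergekit-yaml", "merge.yml", f"./merge_stage_{i}"], check=True)
+ ```
+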
+ The components learned from each model can often be traced back to that model's training process, e.g.:
+
+ YaRN: https://github.com/jquesnelle/yarn (used to extend the context length)
+
+ FUNCTION CALLING: https://github.com/NousResearch/Hermes-Function-Calling/tree/main/chat_templates
+
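+ As a rough illustration of that function-calling setup (assuming a tokenizer whose chat template accepts a `tools` argument; see the Hermes-Function-Calling repo above for the exact templates), one might render a tool-aware prompt like this:
+
+ ```python
+ # Sketch: render a tool-aware prompt with a chat template.
+ # The tool schema below is a toy example, not one shipped with the model.
+ from transformers import AutoTokenizer
+
+ weather_tool = {
+     "type": "function",
+     "function": {
+         "name": "get_weather",
+         "description": "Return the current weather for a city.",
+         "parameters": {
+             "type": "object",
+             "properties": {"city": {"type": "string"}},
+             "required": ["city"],
+         },
+     },
+ }
+
+ tok = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
+ prompt = tok.apply_chat_template(
+     [{"role": "user", "content": "What is the weather in Paris?"}],
+     tools=[weather_tool],
+     add_generation_prompt=True,
+     tokenize=False,
+ )
+ print(prompt)  # the model replies with a <tool_call> block to parse and execute
+ ```
+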
+ # KEY MERGES
+
+ ## Nous-Yarn-Mistral-7b-128k
+ A state-of-the-art language model for long context, further pretrained on long-context data for 1500 steps using the YaRN extension method. It is an extension of Mistral-7B-v0.1 and supports a 128k token context window.
+
+ ## Severian/Nexus-IKM-Mistral-7B-Pytorch
+ Fine-tuned until convergence using a novel Phased Training approach on a unique dataset, which resulted in the model demonstrating a greater capability for producing insights and solving problems in complex, multi-disciplinary settings. This includes an improved ability to draw links between different pieces of knowledge, reason through complex scenarios, and propose innovative solutions that cut across domains such as science, technology, environmental studies, and the humanities.
+
  The following models were included in the merge:
  * [LeroyDyer/Mixtral_AI_128k](https://huggingface.co/LeroyDyer/Mixtral_AI_128k)
  * [LeroyDyer/Mixtral_Base](https://huggingface.co/LeroyDyer/Mixtral_Base)

- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: LeroyDyer/Mixtral_AI_128k
-     parameters:
-       weight: 0.789
-   - model: LeroyDyer/Mixtral_Base
-     parameters:
-       weight: 0.2312
- merge_method: linear
- dtype: float16
- ```
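+
+ For intuition, the removed linear configuration above amounts to a weighted average of the two models' parameters. Below is a minimal PyTorch sketch of that idea (an illustration, not mergekit's actual implementation; it assumes both models fit in CPU memory and that mergekit's default weight normalization is in effect):
+
+ ```python
+ # Linear merge as a weighted average of two state dicts.
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ a = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_AI_128k", torch_dtype=torch.float16)
+ b = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_Base", torch_dtype=torch.float16)
+
+ w_a, w_b = 0.789, 0.2312   # weights from the YAML configuration above
+ scale = w_a + w_b          # normalize, as mergekit's linear method does by default
+
+ merged = a.state_dict()
+ for name, tensor in b.state_dict().items():
+     avg = (w_a * merged[name].float() + w_b * tensor.float()) / scale
+     merged[name] = avg.to(torch.float16)
+
+ a.load_state_dict(merged)
+ a.save_pretrained("./linear-merged-model")
+ ```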
+ # LOAD MODEL
+
+ ```python
+ %pip install llama-index-embeddings-huggingface
+ %pip install llama-index-llms-llama-cpp
+ !pip install llama-index
+
+ from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
+ from llama_index.llms.llama_cpp import LlamaCPP
+ from llama_index.llms.llama_cpp.llama_utils import (
+     messages_to_prompt,
+     completion_to_prompt,
+ )
+
+ # direct download link (resolve/, not blob/) to the quantized GGUF file
+ model_url = "https://huggingface.co/LeroyDyer/Mixtral_AI_128k_7b/resolve/main/Mixtral_AI_128k_7b_q8_0.gguf"
+
+ llm = LlamaCPP(
+     # you can pass in the URL to a GGUF model to download it automatically
+     model_url=model_url,
+     # optionally, you can set the path to a pre-downloaded model instead of model_url
+     model_path=None,
+     temperature=0.1,
+     max_new_tokens=256,
+     # set the context window below the model's limit to allow some wiggle room
+     context_window=3900,
+     # kwargs to pass to __call__()
+     generate_kwargs={},
+     # kwargs to pass to __init__(); set n_gpu_layers to at least 1 to use the GPU
+     model_kwargs={"n_gpu_layers": 1},
+     # transform inputs into the model's prompt format
+     messages_to_prompt=messages_to_prompt,
+     completion_to_prompt=completion_to_prompt,
+     verbose=True,
+ )
+
+ prompt = input("Enter your prompt: ")
+ response = llm.complete(prompt)
+ print(response.text)
+ ```
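+
+ To stream tokens as they are generated instead of waiting for the full completion, LlamaCPP also exposes a streaming interface (a brief sketch reusing the `llm` object above):
+
+ ```python
+ # Stream the completion token by token.
+ for chunk in llm.stream_complete("Summarize the YaRN context-extension method."):
+     print(chunk.delta, end="", flush=True)
+ ```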
+
+ # 1. Method1
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_128K_B_7b", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_AI_128K_B_7b", trust_remote_code=True)
+ ```
+
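+ A quick generation check for the model loaded above:
+
+ ```python
+ # Generate a short continuation to verify the model loaded correctly.
+ inputs = tokenizer("The key to merging language models is", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+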
+ # 2. Method2
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # the loading options below apply to the model, not the tokenizer
+ tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_128k_7b-GGUF",
+                                           trust_remote_code=True)
+
+ model = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_AI_128k_7b-GGUF",
+                                              use_flash_attention_2=True,
+                                              torch_dtype=torch.bfloat16,
+                                              device_map="auto",
+                                              trust_remote_code=True)
+ ```
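+
+ Method2 points at a GGUF repository, while plain `from_pretrained` expects standard (e.g. safetensors) weights. If the repo only holds GGUF files, recent transformers releases can instead dequantize them via the `gguf_file` argument. A sketch, assuming the filename from the `model_url` above exists in the repo:
+
+ ```python
+ # Load GGUF weights by dequantizing them into a regular transformers model.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo = "LeroyDyer/Mixtral_AI_128k_7b"
+ gguf = "Mixtral_AI_128k_7b_q8_0.gguf"  # assumed filename, taken from the URL above
+
+ tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf)
+ model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf)
+ ```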