---
license: apache-2.0
---

# HelixNet-LMoE

HelixNet-LMoE is a simple LoRA-based Mixture of Experts version of the [HelixNet](https://huggingface.co/migtissera/HelixNet) 3-model system by [Migel Tissera](https://huggingface.co/migtissera). \
It is a 6bpw multi-lora exl2 model for use with ExLlamaV2.

For each HelixNet model, a separate LoRA adapter was extracted:
* lora-actor
* lora-critic
* lora-regenerator

These are then loaded together with the base [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) model, which has been quantized to 6bpw-exl2, to give the combined LMoE model.

As HelixNet processes its inputs through the actor, critic and regenerator stages, the corresponding LoRA adapter is dynamically enabled as required.
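
Switching experts is therefore just a matter of passing a different adapter per generation call (a two-line sketch; `generator`, `settings` and `lora_actor` are set up in the full example below):

```python
# Only the chosen adapter is applied to the shared base weights for this call
response = generator.generate_simple(prompt, settings, max_new_tokens, loras=lora_actor)
```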

It is similar in approach to [Airoboros's LMoE implementation](https://github.com/jondurbin/airoboros/tree/main#lmoe), allowing GPU memory requirements in this (6bpw-quantized) instance to be reduced from 20GB for the 3 separate 6bpw models to 8GB for the 6bpw multi-lora model.
The LoRAs were extracted following the process given in [https://github.com/uukuguy/multi_loras](https://github.com/uukuguy/multi_loras), with a Rank of 64 and an Alpha of 128.
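
Conceptually, the extraction approximates the difference between each fine-tuned model's weights and the base weights with a rank-64 product. A rough illustrative sketch of that idea (the actual tooling is the multi_loras repository linked above):

```python
# Illustrative only: approximate delta_W = W_tuned - W_base with a rank-r
# factorisation B @ A, which is what a LoRA adapter stores.
import numpy as np

rank, alpha = 64, 128

def extract_lora(w_tuned: np.ndarray, w_base: np.ndarray, r: int = rank):
    delta = w_tuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    lora_b = u[:, :r] * s[:r]   # (out_features, r)
    lora_a = vt[:r, :]          # (r, in_features)
    return lora_a, lora_b

# At inference the adapter adds (alpha / rank) * (lora_b @ lora_a) to the
# frozen base weight, recovering most of the fine-tune's behaviour.
```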

# Performance:
Testing on an RTX-4090 to compare with the separate 6bpw exl2 models from [https://huggingface.co/LoneStriker?search_models=helixnet](https://huggingface.co/LoneStriker?search_models=helixnet) gives:

**3 separate models:** 120 tokens / second, using 20GB GPU\
**LMoE combined model:** 91 tokens / second, using 8GB GPU

# Prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage

The following code example shows how to use HelixNet-LMoE. No special system-context messages are needed for the `critic` and the `regenerator`. \
At the **You:** prompt, enter a question such as _What is the relationship between Earth's atmosphere, magnetic field and gravity?_

```python
import time
import sys, os

# Allow running from within the ExLlamaV2 examples folder
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from exllamav2 import (
    ExLlamaV2,
    ExLlamaV2Config,
    ExLlamaV2Cache,
    ExLlamaV2Tokenizer,
    ExLlamaV2Lora,
)

from exllamav2.generator import (
    ExLlamaV2BaseGenerator,
    ExLlamaV2Sampler
)


class ModelClass:
    def __init__(self, generator, tokenizer, model):
        self.generator = generator
        self.tokenizer = tokenizer
        self.model = model

DEBUG = bool(os.environ.get("DEBUG"))

# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
    """
    Load a model from a directory and return a ModelClass holding the
    generator, tokenizer and model.
    """
    config = ExLlamaV2Config()
    config.model_dir = model_directory
    config.max_seq_len = max_seq_len
    config.prepare()

    model = ExLlamaV2(config)
    print("Loading model: " + model_directory)

    cache = ExLlamaV2Cache(model, lazy=True, max_seq_len=max_seq_len)
    model.load_autosplit(cache)

    tokenizer = ExLlamaV2Tokenizer(config)
    generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
    model = ModelClass(generator=generator, tokenizer=tokenizer, model=model)
    generator.warmup()
    return model

def generate_text(prompt, lora, settings, max_new_tokens):
    # Generate with only the given LoRA adapter enabled, and report throughput
    time_begin = time.time()
    response = base_model.generator.generate_simple(prompt, settings, max_new_tokens, loras=lora)
    response = response[len(prompt):]
    time_end = time.time()
    time_total = time_end - time_begin
    tokens = base_model.tokenizer.encode(response)
    count = tokens.shape[-1]
    print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
    return response

# Load the quantized base model once, then attach the three adapters to it
base_model = load_model("models/HelixNet-LMoE-6.0bpw-h6-exl2")
lora_actor = ExLlamaV2Lora.from_directory(base_model.model, "models/HelixNet-LMoE-6.0bpw-h6-exl2/lora-actor")
lora_critic = ExLlamaV2Lora.from_directory(base_model.model, "models/HelixNet-LMoE-6.0bpw-h6-exl2/lora-critic")
lora_regenerator = ExLlamaV2Lora.from_directory(base_model.model, "models/HelixNet-LMoE-6.0bpw-h6-exl2/lora-regenerator")

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000

system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")

    # Stage 1: the actor drafts an initial answer
    prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
    if DEBUG: print(f"{prompt_actor}\n\n")
    print("ACTOR:")
    response_actor = generate_text(prompt_actor, lora_actor, settings, max_new_tokens)
    if DEBUG: print(f"{response_actor}\n\n")
    print("="*132)

    # Stage 2: the critic reviews the actor's answer
    prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
    if DEBUG: print(f"{prompt_critic}\n\n")
    print("CRITIQUE:")
    response_critic = generate_text(prompt_critic, lora_critic, settings, max_new_tokens)
    if DEBUG: print(f"{response_critic}\n\n")
    print("="*132)

    # Stage 3: the regenerator rewrites the answer using the critique
    prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
    if DEBUG: print(f"{prompt_regenerator}\n\n")
    print("REGENERATION:")
    response_regenerator = generate_text(prompt_regenerator, lora_regenerator, settings, max_new_tokens)
    print("="*132)
    conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
    print(conversation)
```
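
Setting the `DEBUG` environment variable to any non-empty value (e.g. `DEBUG=1 python helixnet_lmoe.py`, script name hypothetical) additionally prints each full prompt and intermediate response as the pipeline runs.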

# LLM Evaluation

Evaluation of a merged version of each base+LoRA model has yet to be done on the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how it compares to the equivalent full HelixNet model.
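
For reference, a merged checkpoint for one stage could be produced with PEFT along these lines (a hedged sketch: it assumes the extracted adapters are standard PEFT-format LoRAs applied to the unquantized base model, and the paths are placeholders):

```python
# Sketch: merge one extracted adapter back into the base model so the result
# can be evaluated as a single standalone checkpoint.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
merged = PeftModel.from_pretrained(base, "lora-actor").merge_and_unload()
merged.save_pretrained("HelixNet-LMoE-actor-merged")
```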

# HelixNet Details

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the response to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
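
Schematically, the three stages compose as below (a conceptual sketch only; the stage functions stand in for the adapter-specific generation calls shown in the example above):

```python
from typing import Callable

Stage = Callable[..., str]

def helixnet(actor: Stage, critic: Stage, regenerator: Stage,
             system_context: str, question: str) -> str:
    """Compose the three HelixNet stages: draft, critique, regenerate."""
    response = actor(system_context, question)
    critique = critic(system_context, question, response)
    return regenerator(system_context, question, response, critique)
```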

HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.

HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1000 samples, similar to Meta's LIMA. The actor network was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.

Full details on how HelixNet was trained and evaluated are available at [https://huggingface.co/migtissera/HelixNet](https://huggingface.co/migtissera/HelixNet) \
The 6bpw separate models for HelixNet are available at [https://huggingface.co/LoneStriker?search_models=helixnet](https://huggingface.co/LoneStriker?search_models=helixnet)