---
license: apache-2.0
---

# HelixNet-LMoE

HelixNet-LMoE is a simple LoRA-based Mixture of Experts version of the [HelixNet](https://huggingface.co/migtissera/HelixNet) 3-model system by [Migel Tissera](https://huggingface.co/migtissera).

For each HelixNet model, a separate LoRA adapter was extracted:
* [HelixNet-LMoE-Actor](https://huggingface.co/rhysjones/HelixNet-LMoE-Actor)
* [HelixNet-LMoE-Critic](https://huggingface.co/rhysjones/HelixNet-LMoE-Critic)
* [HelixNet-LMoE-Regenerator](https://huggingface.co/rhysjones/HelixNet-LMoE-Regenerator)

These are then loaded together with the base [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) model to give the combined LMoE model.

As HelixNet processes its inputs through the actor, critic and regenerator stages, the corresponding LoRA adapter is dynamically enabled as required, as sketched below.
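
In PEFT terms, this switching amounts to attaching all three adapters to a single copy of the base weights and calling `set_adapter` before each stage. A minimal sketch (the full, runnable pipeline appears under Example Usage below):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# One copy of the base weights, with three named LoRA adapters attached
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "rhysjones/HelixNet-LMoE-Actor", adapter_name="actor")
model.load_adapter("rhysjones/HelixNet-LMoE-Critic", adapter_name="critic")
model.load_adapter("rhysjones/HelixNet-LMoE-Regenerator", adapter_name="regenerator")

# Only the currently selected adapter affects the forward pass
model.set_adapter("critic")
```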

It is similar in approach to [the LMoE implementation in Airoboros](https://github.com/jondurbin/airoboros/tree/main#lmoe), allowing GPU memory requirements in this (unquantized) instance to be reduced from 3 x 14GB (42GB) to 1 x 14GB + 3 x 320MB (roughly 15GB).

The LoRAs were extracted following the process described in [https://github.com/uukuguy/multi_loras](https://github.com/uukuguy/multi_loras), with a rank of 64 and an alpha of 128.
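
For illustration, the general idea behind such an extraction (this is a simplified sketch, not the multi_loras implementation itself) is to take the weight delta between the fine-tuned and base model for each target matrix and keep its top-rank singular components as the LoRA factors:

```python
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor,
                 rank: int = 64, alpha: int = 128):
    """Approximate (w_tuned - w_base) with a rank-`rank` product lora_b @ lora_a."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # LoRA applies its delta as (alpha / rank) * B @ A, so pre-scale the
    # factors by rank / alpha to compensate for that scaling.
    lora_a = (torch.diag(s[:rank]) @ vh[:rank]) * (rank / alpha)
    lora_b = u[:, :rank]
    return lora_a, lora_b
```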

# Prompt format

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage

The following code example shows how to use HelixNet-LMoE. No special system-context messages are needed for the `critic` and the `regenerator`. \
At the **You:** prompt, enter a question such as _What is the relationship between Earth's atmosphere, magnetic field and gravity?_

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

def load_model(model_path):
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.float16,
        device_map="cuda",
        load_in_4bit=False,
        trust_remote_code=True,
    )
    return model

def load_tokenizer(model_path):
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    return tokenizer

def generate_text(instruction, adapter):
    # Select the LoRA adapter required for this stage of the pipeline
    adapter_model.set_adapter(adapter)

    tokens = base_tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = adapter_model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
            pad_token_id=base_tokenizer.eos_token_id,
        )
    # Strip the prompt tokens, keeping only the newly generated text
    output = rest[0][length:]
    string = base_tokenizer.decode(output, skip_special_tokens=True)
    return string

# Load our base Mistral 7B model and tokenizer
base_model = load_model("mistralai/Mistral-7B-v0.1")
base_tokenizer = load_tokenizer("mistralai/Mistral-7B-v0.1")

# Load in our three different LoRA adapters for the actor, critic and regenerator
adapter_model = PeftModel.from_pretrained(base_model, "rhysjones/HelixNet-LMoE-Actor", adapter_name="actor")
adapter_model.load_adapter("rhysjones/HelixNet-LMoE-Critic", adapter_name="critic")
adapter_model.load_adapter("rhysjones/HelixNet-LMoE-Regenerator", adapter_name="regenerator")

system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")

    # Stage 1: the actor drafts an initial response
    prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
    actor_response = generate_text(prompt_actor, "actor")
    print(f"ACTOR: {actor_response}\n\n")

    # Stage 2: the critic critiques the actor's response
    prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
    critic_response = generate_text(prompt_critic, "critic")
    print(f"CRITIQUE: {critic_response}\n\n")

    # Stage 3: the regenerator rewrites the response using the critique
    prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
    regenerator_response = generate_text(prompt_regenerator, "regenerator")
    print(f"REGENERATION: {regenerator_response}")
```

# LLM Evaluation

Evaluation of a merged version of each base+LoRA model has yet to be done on the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), to see how it compares to the equivalent full HelixNet model.
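
Should that evaluation be run, merging an adapter back into the base weights could look something like the following sketch (the output path is illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Merge one adapter's weights into the base model to produce a standalone checkpoint
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "rhysjones/HelixNet-LMoE-Actor").merge_and_unload()
merged.save_pretrained("HelixNet-LMoE-Actor-merged")  # illustrative output path
```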

# HelixNet Details

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and a question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique based on the provided answer to the given system-context and the question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.

HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.

HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1,000 samples, similar to Meta's LIMA. The actor network was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.

Full details on how HelixNet was trained and evaluated are located at [https://huggingface.co/migtissera/HelixNet](https://huggingface.co/migtissera/HelixNet).