---
language:
- en
- de
---
![Kraken](https://vago-solutions.de/wp-content/uploads/2024/05/Kraken_Pic.png "Kraken-LoRA")

## Overview

The **Kraken-LoRA** model and the **Kraken** architecture are a **joint effort** between **Cognitive Computations**, **VAGO Solutions** and **Hyperspace.ai**.

Created by **Fernando Fernandes Neto**, **David Golchinfar**, **Lucas Atkins** and **Eric Hartford**.

Kraken-LoRA combines the best Python, SQL, function-calling, reasoning and German models available so far by applying LoRA adapters dynamically at runtime.

The Kraken architecture is a machine learning framework designed for dynamic text generation tasks. It uses the Hugging Face transformers library to orchestrate multiple causal language models (CLMs) and intelligently routes each input to the most suitable model based on its context and content. The architecture is powered by a custom configuration class (KrakenConfig) that facilitates the integration and management of components such as tokenizers, models and routing mechanisms.

## Features

- **Dynamic Model Routing:** uses a sequence classification model to route each input to the most suitable language model based on the input's characteristics (see the routing sketch below).
- **LoRA Adapters:** the experts are LoRA adapters built on top of the base model, applied dynamically at runtime once the router has made its choice.
- **Multiple Language Models:** supports the integration of various pre-trained causal language models, allowing for flexible, context-appropriate responses.
- **Customizable Templates:** supports input formatting through predefined chat templates, enhancing the model's adaptability to different conversational contexts.
- **Extensible Configuration:** leverages a custom configuration setup that can easily be extended and adapted to other use cases involving causal language modeling.

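The routing step can be pictured as follows. This is a minimal sketch of the idea only, with hypothetical names and paths; the actual logic lives in modeling_kraken.py:

```
# Sketch: a sequence-classification "router" scores the prompt, and the
# winning label decides which LoRA expert is activated on the base model.
# All names and paths here are illustrative, not the real Kraken internals.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def pick_expert(prompt: str, router_path: str, experts: list[str]) -> str:
    tokenizer = AutoTokenizer.from_pretrained(router_path)
    router = AutoModelForSequenceClassification.from_pretrained(router_path)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = router(**inputs).logits           # one score per expert
    return experts[int(logits.argmax(dim=-1))]     # e.g. "lora_expert4"

# pick_expert("Generate a SQL query ...", "./kraken_model/kraken_router",
#             ["lora_expert1", "lora_expert2", "lora_expert3", "lora_expert4", "lora_expert5"])
```
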
## Selected Models as Experts:
```
"Base Model": "meta-llama/Meta-Llama-3-8B-Instruct",
"Reasoning LoRA-Expert": "abacusai/Llama-3-Smaug-8B",
"Function Calling LoRA-Expert": "hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"Python LoRA-Expert": "rombodawg/Llama-3-8B-Instruct-Coder",
"SQL LoRA-Expert": "defog/llama-3-sqlcoder-8b",
"German LoRA-Expert": "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct"
```
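The experts listed above are full fine-tuned models, while the files Kraken-LoRA actually loads (the `*-adapter` entries in the config shown further down) are LoRA adapters, presumably derived from these fine-tunes against the base model. As a rough illustration only, a LoRA adapter can be approximated from a fine-tune by factoring each weight delta with a truncated SVD; this sketch shows the technique, not the tool that was actually used:

```
# Sketch: approximate the delta between a fine-tuned weight matrix and the
# corresponding base weight matrix with rank-r factors, as a LoRA adapter does.
import torch

def extract_lora(w_base: torch.Tensor, w_ft: torch.Tensor, rank: int = 16):
    """Return (A, B) such that B @ A approximates w_ft - w_base."""
    delta = (w_ft - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    root_s = s[:rank].sqrt()
    lora_a = root_s[:, None] * vh[:rank]    # (rank, in_features)
    lora_b = u[:, :rank] * root_s[None, :]  # (out_features, rank)
    return lora_a, lora_b
```
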

**How to load and call the Kraken-LoRA model:**
```
from transformers import AutoConfig, AutoModelForCausalLM
from configuration_kraken import KrakenConfig
from modeling_kraken import KrakenForCausalLM

# Register the custom Kraken classes with the Auto* factories
AutoConfig.register("kraken", KrakenConfig)
AutoModelForCausalLM.register(KrakenConfig, KrakenForCausalLM)

device = "cuda:0"  # use "cuda:0" on NVIDIA GPUs, "mps" on a Mac

# Load the config and the model:
config = AutoConfig.from_pretrained("./kraken_model")
model = AutoModelForCausalLM.from_pretrained("./kraken_model", config=config, trust_remote_code=True)
```
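Every expert example below repeats the same tokenize-generate-decode pattern, so it can be convenient to wrap it in a small helper. This helper is purely illustrative and not part of the model repository; it only reuses the calls shown in this card:

```
# Convenience wrapper around the recurring pattern from the examples below.
def chat(model, messages, device="cuda:0", **gen_kwargs):
    tokenizer = model.tokenizer
    input_text = tokenizer.apply_chat_template(messages, tokenize=False)
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
    output_ids = model.generate(input_ids, **gen_kwargs)
    # Decode with the tokenizer of the expert the router selected
    return model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True)
```
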

## Call the Reasoning LoRA-expert:
```
messages = [
    {'role': 'system', 'content': 'You are a helpful AI assistant.'},
    {'role': 'user', 'content': 'Find the mass percentage of Ba in BaO'}
]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, max_length=250)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```
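For reference, the expected answer can be checked directly from standard atomic weights, independently of the model:

```
# Mass percentage of Ba in BaO: m(Ba) / (m(Ba) + m(O))
ba, o = 137.327, 15.999
print(f"{100 * ba / (ba + o):.2f}%")  # ~89.57%
```
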

## Call the Function Calling LoRA-Expert:
```
functions_metadata = [
    {
        "type": "function",
        "function": {
            "name": "get_temperature",
            "description": "get temperature of a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "name"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

messages = [
    {"role": "system", "content": f"""You are a helpful assistant with access to the following functions: \n {str(functions_metadata)}\n\nTo use these functions respond with:\n<functioncall> {{ "name": "function_name", "arguments": {{ "arg_1": "value_1", "arg_2": "value_2", ... }} }} </functioncall>\n\nEdge cases you must handle:\n - If there are no functions that match the user request, you will respond politely that you cannot help."""},
    {"role": "user", "content": """<function_response> {"temperature": 12} </function_response>"""}
]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, temperature=0.1, do_sample=True, top_p=0.9, top_k=20, max_length=500)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```
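To close the loop on the <functioncall> protocol defined in the system prompt above, the application has to parse the model's call, execute the matching function, and feed the result back as a <function_response> message (as the user turn in the example already does). A hypothetical round-trip, not part of the model's code:

```
# Parse a <functioncall> emitted by the model and produce the
# <function_response> turn for the next generation call.
import json
import re

def run_functioncall(generated_text: str) -> str:
    match = re.search(r"<functioncall>(.*?)</functioncall>", generated_text, re.DOTALL)
    if match is None:
        return generated_text                  # no tool use, plain answer
    call = json.loads(match.group(1))
    if call["name"] == "get_temperature":
        result = {"temperature": 12}           # stand-in for a real lookup
    else:
        raise ValueError(f"unknown function: {call['name']}")
    return f'<function_response> {json.dumps(result)} </function_response>'
```
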

## Call the Python LoRA-Expert:
```
messages = [
    {'role': 'system', 'content': ''},
    {'role': 'user', 'content': """Create a python function to calculate the sum of a sequence of integers.
[1, 2, 3, 4, 5]"""}
]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, temperature=0.6, do_sample=True, top_p=0.9, top_k=20, max_length=400)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```

## Call the SQL LoRA-expert:
```
messages = [
    {'role': 'system', 'content': 'You are a helpful AI assistant.'},
    {'role': 'user', 'content': """Generate a SQL query to answer this question: What is the total volume of timber sold by each salesperson, sorted by salesperson?

DDL statements:
CREATE TABLE salesperson (salesperson_id INT, name TEXT, region TEXT); INSERT INTO salesperson (salesperson_id, name, region) VALUES (1, 'John Doe', 'North'), (2, 'Jane Smith', 'South'); CREATE TABLE timber_sales (sales_id INT, salesperson_id INT, volume REAL, sale_date DATE); INSERT INTO timber_sales (sales_id, salesperson_id, volume, sale_date) VALUES (1, 1, 120, '2021-01-01'), (2, 1, 150, '2021-02-01'), (3, 2, 180, '2021-01-01');"""}
]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, temperature=0.6, do_sample=True, top_p=0.9, top_k=20, max_length=500)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```
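Because the prompt includes the full DDL, a generated query can be sanity-checked against an in-memory SQLite database. The query string below is only an example of a correct answer, not actual model output:

```
# Illustrative check: run the DDL from the prompt plus a candidate query.
import sqlite3

ddl = """CREATE TABLE salesperson (salesperson_id INT, name TEXT, region TEXT);
INSERT INTO salesperson (salesperson_id, name, region) VALUES (1, 'John Doe', 'North'), (2, 'Jane Smith', 'South');
CREATE TABLE timber_sales (sales_id INT, salesperson_id INT, volume REAL, sale_date DATE);
INSERT INTO timber_sales (sales_id, salesperson_id, volume, sale_date) VALUES (1, 1, 120, '2021-01-01'), (2, 1, 150, '2021-02-01'), (3, 2, 180, '2021-01-01');"""

query = """SELECT s.name, SUM(t.volume) AS total_volume
FROM salesperson s JOIN timber_sales t ON s.salesperson_id = t.salesperson_id
GROUP BY s.name ORDER BY s.name;"""

con = sqlite3.connect(":memory:")
con.executescript(ddl)
print(con.execute(query).fetchall())  # [('Jane Smith', 180.0), ('John Doe', 270.0)]
```
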

## Call the German LoRA-expert:
```
messages = [
    {'role': 'system', 'content': 'Du bist ein freundlicher und hilfreicher deutscher KI-Assistent'},
    {'role': 'user', 'content': 'Ich hoffe es geht dir gut?'}
]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, max_length=150)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```

## Switch LoRA-Experts and/or quantization:
Edit the config file in the kraken_model folder:
```
{
  # Switch to LoRA adapters that fit your base model
  "lora_adapters": {
    "lora_expert1": "Llama-3-Smaug-8B-adapter",
    "lora_expert2": "Meta-Llama-3-8B-Instruct-function-calling-json-mode-adapter",
    "lora_expert3": "Llama-3-8B-Instruct-Coder-adapter",
    "lora_expert4": "llama-3-sqlcoder-8b-adapter",
    "lora_expert5": "Llama-3-SauerkrautLM-8b-Instruct-adapter"
  },
  "models": {
    "base": "meta-llama/Meta-Llama-3-8B-Instruct"
  },
  # Currently supported: "4bit" and "8bit"
  "quantization": {
    "base": null
  },
  "router": "../kraken/kraken_router",
  "tokenizers": {
    "lora_expert1": "Llama-3-Smaug-8B-adapter",
    "lora_expert2": "Meta-Llama-3-8B-Instruct-function-calling-json-mode-adapter",
    "lora_expert3": "Llama-3-8B-Instruct-Coder-adapter",
    "lora_expert4": "llama-3-sqlcoder-8b-adapter",
    "lora_expert5": "Llama-3-SauerkrautLM-8b-Instruct-adapter"
  },
  "model_type": "kraken",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.41.1"
}
```
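For example, to load the base model in 4-bit instead of full precision, change the quantization entry accordingly (per the comment above, "4bit" and "8bit" are the supported values):

```
"quantization": {
    "base": "4bit"
},
```
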

## Disclaimer
Despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who utilize our models.

## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.

## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.ai), [Hyperspace.computer](https://hyperspace.computer/) and [Cognitive Computations](https://erichartford.com/).

## Cite As
Fernando Fernandes Neto, David Golchinfar, Lucas Atkins, Eric Hartford - [Kraken: An OpenSource Collection of Experts Model, 2024](https://github.com/cognitivecomputations/kraken)