---
language:
- en
- de
- it
- fr
- da
- sv
- fi
- 'no'
---
![Kraken](https://vago-solutions.de/wp-content/uploads/2024/05/Kraken_Pic-multi.png "Kraken-Multilingual")


## Overview

The Kraken-Multilingual model and the **Kraken** architecture are a **joint effort** between **Cognitive Computations**, **VAGO Solutions**, and **Hyperspace.ai**.

Created by **Fernando Fernandes Neto**, **David Golchinfar**, **Lucas Atkins** and **Eric Hartford**.

The Kraken-Multilingual model supports the German, English, Italian, French, Swedish, Finnish, Danish, and Norwegian languages.

The Kraken Architecture is a sophisticated machine learning framework designed for dynamic text generation tasks. It utilizes the Hugging Face transformers library to orchestrate multiple causal language models (CLMs) and intelligently route input through different models based on the context and content of the input text. The architecture is powered by a custom configuration class (KrakenConfig) that facilitates the integration and management of various components such as tokenizers, models, and routing mechanisms.
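
The pieces fit together roughly as follows. The sketch below is illustrative only: Kraken performs this routing internally behind `AutoModelForCausalLM`, and the standalone loading of the router, the label-to-expert mapping, and the function name are our assumptions, not the actual implementation.

```
# Illustrative sketch of Kraken-style routing (not the actual internals).
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

ROUTER = "kraken_router"  # sequence classifier named in the Kraken config
EXPERTS = {               # label -> expert mapping assumed for illustration
    0: "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",  # German/English
    1: "mii-community/zefiro-7b-dpo-ITA",                 # Italian
    2: "paulml/Hermes-2-Pro-French",                      # French
    3: "norallm/normistral-7b-warm-instruct",             # Scandinavian
}

def route_and_generate(text, max_length=250):
    # 1) Classify the input to pick the most suitable expert.
    router_tok = AutoTokenizer.from_pretrained(ROUTER)
    router = AutoModelForSequenceClassification.from_pretrained(ROUTER)
    logits = router(**router_tok(text, return_tensors="pt")).logits
    expert_id = int(logits.argmax(dim=-1))
    # 2) Generate with that expert and its own tokenizer.
    tok = AutoTokenizer.from_pretrained(EXPERTS[expert_id])
    lm = AutoModelForCausalLM.from_pretrained(EXPERTS[expert_id])
    input_ids = tok(text, return_tensors="pt").input_ids
    output_ids = lm.generate(input_ids, max_length=max_length)
    return tok.decode(output_ids[0], skip_special_tokens=True)
```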

## Features

- **Dynamic Model Routing:** Uses a sequence classification model to route inputs to the most suitable language model based on the input's characteristics.
- **Multiple Language Models:** Supports integration of various pre-trained causal language models, allowing for flexible, context-appropriate responses.
- **Customizable Templates:** Includes support for input formatting using predefined templates, enhancing the model's adaptability to different conversational contexts.
- **Extensible Configuration:** Leverages a custom configuration setup (`KrakenConfig`) that can be easily extended and adapted for various use cases involving causal language modeling.

## Selected Models as Experts
```
      "German/English Expert": "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
      "Function Italian Expert": "mii-community/zefiro-7b-dpo-ITA",
      "French Expert": "paulml/Hermes-2-Pro-French",
      "Scandinavian Expert": "norallm/normistral-7b-warm-instruct",
```

**How to load and call the Kraken-Multilingual model:**
```
from transformers import AutoModelForCausalLM
device = "cuda:0"  # use "cuda:0" on an NVIDIA GPU, "mps" on a Mac

# Load the model and config:
model = AutoModelForCausalLM.from_pretrained("./kraken_model", trust_remote_code=True)
```
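
Note that `trust_remote_code=True` is required here because Kraken ships its own custom modeling and configuration code (including `KrakenConfig` and the router) rather than a stock `transformers` architecture.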

# Call the German expert:
```
messages = [
    {'role': 'system', 'content': 'Du bist ein freundlicher und hilfreicher deutscher KI-Assistent'},
    {'role': 'user', 'content': "Erzähle mir eine kurze Gute Nacht Geschichte in 2 Sätzen."}
    ]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, max_length=150)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```
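
The expert is not selected explicitly: as described in the overview, the router dispatches each request based on the input text itself, so a German prompt is routed to the German/English expert.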



# Call the English expert:
```
messages = [
    {'role': 'system', 'content': '"You are a helpful AI Assistant'},
    {'role': 'user', 'content': "Find the mass percentage of Ba in BaO"}
    ]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, max_length=250)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```

# Call the Italian expert:
```
messages = [
    {'role': 'system', 'content': 'Sei un utile assistente AI.'},
    {'role': 'user', 'content': 'Hai qualche idea su cosa potrei fare a Roma?'}
    ]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, temperature=0.6, do_sample=True, top_p=0.9, top_k=20, max_length=500)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```

# Call the French expert:
```
messages = [
    {'role': 'system', 'content': 'Vous êtes un assistant IA sympathique et serviable'},
    {'role': 'user', 'content': "J'aimerais faire du shopping à Paris. Que pouvez-vous recommander?"}
    ]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, temperature=0.6, do_sample=True, top_p=0.9, top_k=20, max_length=250)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```

# Call the Scandinavian expert:
```
messages = [
    {'role': 'system', 'content': 'Du är en hjälpsam AI-assistent'},
    {'role': 'user', 'content': 'Jag kommer från Tyskland och skulle vilja resa till Sverige. Är en färja över Danmark ett bra sätt att resa?'}
    ]

tokenizer = model.tokenizer
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, temperature=0.1, do_sample=True, top_p=0.9, top_k=20, max_length=250)
print(model.expert_tokenizer(text=input_text).decode(output_ids[0], skip_special_tokens=True))
```
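
The sampling parameters passed to `generate` (`temperature`, `do_sample`, `top_p`, `top_k`, `max_length`) can be tuned per call; lower temperatures, as in the Scandinavian example above, make the output more deterministic.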


# Switch experts and/or quantization:
Edit the `config.json` file in the `kraken_model` folder:
```
    "models": {
      "expert1": "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", # Switch to a german/english model of your choice
      "expert2": "mii-community/zefiro-7b-dpo-ITA",                # Switch to a italian model of your choice
      "expert3": "paulml/Hermes-2-Pro-French",                     # Switch to a french model of your choice
      "expert4": "norallm/normistral-7b-warm-instruct"             # Switch to a scandinavian model of your choice
    },
    # Currently supported: "4bit", "8bit" and "awq"
    "quantization": {
      "expert1": null,
      "expert2": null,
      "expert3": null,
      "expert4": null
    },
    "router": "kraken_router",
    # Adjust the tokenizers to match your selected models
    "tokenizers": {
      "expert1": "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
      "expert2": "mii-community/zefiro-7b-dpo-ITA",
      "expert3": "paulml/Hermes-2-Pro-French",
      "expert4": "norallm/normistral-7b-warm-instruct"
    }
  },
  "model_type": "kraken",
  "torch_dtype": "float32",
  "transformers_version": "4.41.0"
}
```
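
For example, to load the German/English expert (`expert1`) in 4-bit precision while leaving the others unquantized, change the `quantization` block accordingly (a minimal sketch; all other keys stay as they are):
```
    "quantization": {
      "expert1": "4bit",
      "expert2": null,
      "expert3": null,
      "expert4": null
    },
```
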
## Disclaimer
Despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who utilize our models.
 
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
 
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.ai), [Hyperspace.computer](https://hyperspace.computer/) or [Cognitive Computations](https://erichartford.com/).

## Cite As

Fernando Fernandes Neto, David Golchinfar, Lucas Atkins, Eric Hartford - [Kraken: An OpenSource Collection of Experts Model, 2024](https://github.com/cognitivecomputations/kraken)