---
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- Kukedlc/neuronal-7b-Mlab
- mlabonne/Monarch-7B
base_model:
- mistralai/Mistral-7B-v0.1
- Kukedlc/neuronal-7b-Mlab
- mlabonne/Monarch-7B
---

# Triunvirato-7b

Triunvirato-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [Kukedlc/neuronal-7b-Mlab](https://huggingface.co/Kukedlc/neuronal-7b-Mlab)
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: Kukedlc/neuronal-7b-Mlab
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
  - model: mlabonne/Monarch-7B
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
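
To reproduce the merge, the configuration above can be passed straight to [mergekit](https://github.com/arcee-ai/mergekit), which is what the LazyMergekit notebook drives under the hood. A minimal sketch in the same notebook style as the usage block below, assuming the YAML is saved as `config.yaml` and `merge` is a scratch output directory (both names are placeholders):

```python
# Sketch: run the TIES merge defined above with the mergekit CLI.
# "config.yaml" and "merge" are placeholder names, not fixed by this card.
!git clone https://github.com/arcee-ai/mergekit.git
!pip install -qe mergekit

# --copy-tokenizer copies the base model's tokenizer into the output;
# --lazy-unpickle reduces peak memory while loading checkpoints.
!mergekit-yaml config.yaml merge --copy-tokenizer --lazy-unpickle
```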

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/Triunvirato-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the tokenizer's chat template,
# then generate with a text-generation pipeline in float16.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
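
Because this merge is built on mistralai/Mistral-7B-v0.1, a base (non-instruct) model, the merged tokenizer may not ship a chat template, in which case `apply_chat_template` can fail or fall back to a generic format. A minimal fallback sketch, reusing the `pipeline` created above with a plain prompt string (the prompt is just an example):

```python
# Fallback sketch: if the tokenizer has no chat template,
# prompt the model with plain text instead of a templated conversation.
prompt = "What is a large language model?"
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```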