---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- abideen/NexoNimbus-7B
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
---

# MergeTrix-7B

MergeTrix-7B is a DARE-TIES merge of the following models, created with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) and using [udkai/Turdus](https://huggingface.co/udkai/Turdus) as the base model:
* [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)

## 🧩 Configuration

```yaml
models:
  - model: udkai/Turdus
    # No parameters necessary for base model
  - model: abideen/NexoNimbus-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: fblgit/UNA-TheBeagle-7b-v1
    parameters:
      density: 0.53
      weight: 0.3
  - model: argilla/distilabeled-Marcoro14-7B-slerp
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: udkai/Turdus
parameters:
  int8_mask: true
dtype: bfloat16
```
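
To reproduce the merge locally, the configuration above can be fed to mergekit (the library behind LazyMergekit) directly. A minimal sketch, assuming the config is saved as `config.yaml` — both that filename and the `./merged` output directory are placeholders, not part of this repository:

```python
# Install mergekit (assumes a notebook-style environment, like the usage example below).
!pip install -qU mergekit

# Run the merge described by config.yaml and write the result to ./merged.
# --copy-tokenizer carries the base model's tokenizer into the merged checkpoint.
!mergekit-yaml config.yaml ./merged --copy-tokenizer
```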

## 💻 Usage

```python
# Install dependencies (notebook-style shell command).
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/MergeTrix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline, sharding the model across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
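
If you prefer working with the model object directly rather than the `pipeline` helper, an equivalent sketch using the standard `transformers` generate API (same model id and sampling settings as above) is:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "CultriX/MergeTrix-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
# Tokenize the chat-formatted prompt and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```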