shism committed
Commit 0c0fe5c
Parent: 9dae01b

Upload folder using huggingface_hub

Files changed (1): README.md (new file, +67 −0)

---
license: apache-2.0
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---

# NeuralPipe-7B-slerp

NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
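
For intuition, here is a minimal NumPy sketch of what this recipe does. It is a conceptual illustration, not mergekit's actual implementation, and it assumes `t` interpolates from the base model (`t = 0`) toward the other parent (`t = 1`): SLERP moves each pair of parent weight tensors along the great circle between them, and each five-point `t` gradient is stretched linearly across the 32 layers, so self-attention and MLP weights mix in different proportions at different depths.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Spherical linear interpolation between two flattened weight tensors;
    # fall back to plain linear interpolation when they are nearly parallel.
    v0_u = v0 / (np.linalg.norm(v0) + eps)
    v1_u = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(v0_u, v1_u), -1.0, 1.0))
    if omega < eps:
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

# Stretch the five self_attn anchors over 32 layers: one t value per layer,
# starting at 0 (all base model) and ending at 1 (all NeuralHermes).
anchors = [0, 0.5, 0.3, 0.7, 1]
per_layer_t = np.interp(np.linspace(0, 1, 32), np.linspace(0, 1, len(anchors)), anchors)
print(per_layer_t.round(2))
```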

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "shism/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
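
If you would rather drive generation yourself than go through `pipeline`, the merged model can also be loaded directly. This is a standard `transformers` sketch, not part of the original card; the sampling settings mirror the example above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "shism/NeuralPipe-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
# apply_chat_template can tokenize directly and return a tensor of input ids.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True,
    temperature=0.7, top_k=50, top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```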