Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

# NeuralOmniWestBeaglake-7B - GGUF

- Model creator: https://huggingface.co/paulml/
- Original model: https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NeuralOmniWestBeaglake-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [NeuralOmniWestBeaglake-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [NeuralOmniWestBeaglake-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [NeuralOmniWestBeaglake-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [NeuralOmniWestBeaglake-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [NeuralOmniWestBeaglake-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [NeuralOmniWestBeaglake-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [NeuralOmniWestBeaglake-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [NeuralOmniWestBeaglake-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [NeuralOmniWestBeaglake-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [NeuralOmniWestBeaglake-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [NeuralOmniWestBeaglake-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [NeuralOmniWestBeaglake-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [NeuralOmniWestBeaglake-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [NeuralOmniWestBeaglake-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [NeuralOmniWestBeaglake-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [NeuralOmniWestBeaglake-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [NeuralOmniWestBeaglake-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [NeuralOmniWestBeaglake-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [NeuralOmniWestBeaglake-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [NeuralOmniWestBeaglake-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [NeuralOmniWestBeaglake-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf/blob/main/NeuralOmniWestBeaglake-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
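
These GGUF files are intended for llama.cpp-compatible runtimes. As a minimal, hypothetical sketch (not part of the original card), the example below downloads one of the files and runs it with `llama-cpp-python`; the quant choice, context length, and prompt are illustrative assumptions.

```python
# Minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`.
# The file choice (Q4_K_M) and the parameters below are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantized file from this repo; Q4_K_M is a common size/quality tradeoff.
model_path = hf_hub_download(
    repo_id="RichardErkhov/paulml_-_NeuralOmniWestBeaglake-7B-gguf",
    filename="NeuralOmniWestBeaglake-7B.Q4_K_M.gguf",
)

# Load the model; n_ctx sets the context window (an assumed value).
llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("What is a large language model?", max_tokens=256)
print(output["choices"][0]["text"])
```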

Original model description:

```yaml
---
tags:
- merge
- mergekit
- lazymergekit
- shadowml/WestBeagle-7B
- shadowml/Beaglake-7B
- mlabonne/NeuralOmniBeagle-7B
base_model:
- shadowml/WestBeagle-7B
- shadowml/Beaglake-7B
- mlabonne/NeuralOmniBeagle-7B
license: cc-by-nc-4.0
---
```
# NeuralOmniWestBeaglake-7B

NeuralOmniWestBeaglake-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [shadowml/WestBeagle-7B](https://huggingface.co/shadowml/WestBeagle-7B)
* [shadowml/Beaglake-7B](https://huggingface.co/shadowml/Beaglake-7B)
* [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B)

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: shadowml/WestBeagle-7B
    parameters:
      density: 0.65
      weight: 0.4
  - model: shadowml/Beaglake-7B
    parameters:
      density: 0.6
      weight: 0.35
  - model: mlabonne/NeuralOmniBeagle-7B
    parameters:
      density: 0.6
      weight: 0.45
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: float16
```
+
91
+ ## 💻 Usage
92
+
93
+ ```python
94
+ !pip install -qU transformers accelerate
95
+
96
+ from transformers import AutoTokenizer
97
+ import transformers
98
+ import torch
99
+
100
+ model = "paulml/NeuralOmniWestBeaglake-7B"
101
+ messages = [{"role": "user", "content": "What is a large language model?"}]
102
+
103
+ tokenizer = AutoTokenizer.from_pretrained(model)
104
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
105
+ pipeline = transformers.pipeline(
106
+ "text-generation",
107
+ model=model,
108
+ torch_dtype=torch.float16,
109
+ device_map="auto",
110
+ )
111
+
112
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
113
+ print(outputs[0]["generated_text"])
114
+ ```
115
+