---
tags:
- merge
- mergekit
- lazymergekit
- arcee-ai/Patent-Base-7b
- microsoft/Orca-2-7b
base_model:
- arcee-ai/Patent-Base-7b
- microsoft/Orca-2-7b
---

# LGU-Llama2-Merging

LGU-Llama2-Merging is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [arcee-ai/Patent-Base-7b](https://huggingface.co/arcee-ai/Patent-Base-7b)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)

## 🧩 Configuration

```yaml
models:
  - model: NousResearch/Llama-2-7b-hf
    # no parameters necessary for base model
  - model: arcee-ai/Patent-Base-7b
    parameters:
      density: 0.5
      weight: 0.5
  - model: microsoft/Orca-2-7b
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: NousResearch/Llama-2-7b-hf
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
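In the `ties` method, each fine-tuned model contributes a "task vector" (its weights minus the base model's), which is trimmed to the largest-magnitude `density` fraction of entries, sign-elected per parameter, and then summed. The following is a toy single-tensor sketch of that idea for illustration only, not mergekit's actual implementation; the `ties_merge` helper is hypothetical:

```python
import torch

def ties_merge(base, finetuned, density=0.5, weight=0.5):
    # Task vectors: each fine-tuned tensor minus the shared base.
    deltas = [ft - base for ft in finetuned]
    trimmed = []
    for d in deltas:
        # Trim: keep only the top `density` fraction of entries by magnitude.
        k = max(1, int(density * d.numel()))
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed) * weight
    # Elect a per-parameter sign from the summed trimmed deltas.
    elected = torch.sign(stacked.sum(dim=0))
    # Merge: sum only the entries that agree with the elected sign.
    agree = torch.sign(stacked) == elected
    return base + (stacked * agree).sum(dim=0)  # normalize: false -> plain sum

# Keeps the two largest-magnitude entries of the delta, scaled by `weight`.
print(ties_merge(torch.zeros(4), [torch.tensor([1.0, -2.0, 3.0, -4.0])]))
```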

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "wwhwwhwwh/LGU-Llama2-Merging"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```