automerger: mlabonne committed
Commit f44c857
1 Parent(s): 8a941fb

Update README.md (#1)


- Update README.md (a670fa8ddd399c15bc4761bf542c905c422c553f)


Co-authored-by: Maxime Labonne <mlabonne@users.noreply.huggingface.co>

Files changed (1)
1. README.md (+54 -57)
README.md CHANGED
@@ -1,60 +1,57 @@
- ---
- {}
- ---

---
license: apache-2.0
base_model:
- yam-peleg/Experiment29-7B
tags:
- merge
- mergekit
- lazymergekit
---

# Experiment28Experiment29-7B

Experiment28Experiment29-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following models:
* [yam-peleg/Experiment29-7B](https://huggingface.co/yam-peleg/Experiment29-7B)

## 🧩 Configuration

```yaml
models:
  - model: yam-peleg/Experiment28-7B
    # No parameters necessary for base model
  - model: yam-peleg/Experiment29-7B
    parameters:
      density: 0.53
      weight: 0.6
merge_method: dare_ties
base_model: yam-peleg/Experiment28-7B
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
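To reproduce the merge itself, a minimal sketch (not part of the original card) is to run the configuration through mergekit's command-line tool, assuming the YAML above is saved as `config.yaml` and the commands run in a notebook:

```python
# Install mergekit, then run the merge described by config.yaml.
# The merged weights are written to ./merge; --copy-tokenizer also
# copies the base model's tokenizer into the output directory.
!pip install -qU mergekit
!mergekit-yaml config.yaml merge --copy-tokenizer
```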

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/Experiment28Experiment29-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat with the model's template, then generate via a pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sampled generation; adjust temperature/top_k/top_p to taste.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
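For GPUs with less memory, one possible variant (not from the original card) is to load the merge in 4-bit instead of float16, assuming the bitsandbytes package is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "automerger/Experiment28Experiment29-7B"

# 4-bit quantization config; cuts memory use at a small quality cost.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```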