ALBADDAWI committed
Commit 6a687c0
1 Parent(s): d17c84e

Upload folder using huggingface_hub

Files changed (1): README.md (+48, -0)
 
---
{}
---

# DeepCode-7B-Aurora-v12

DeepCode-7B-Aurora-v12 is a merge of seven copies of DeepCode-7B-Aurora-v4, combined with the model_stock method using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing). The full merge configuration is given below.
## 🧩 Configuration

```yaml
models:
  - model: DeepCode-7B-Aurora-v4
  - model: DeepCode-7B-Aurora-v4
  - model: DeepCode-7B-Aurora-v4
  - model: DeepCode-7B-Aurora-v4
  - model: DeepCode-7B-Aurora-v4
  - model: DeepCode-7B-Aurora-v4
  - model: DeepCode-7B-Aurora-v4
merge_method: model_stock
base_model: DeepCode-7B-Aurora-v4
dtype: float16
```
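To reproduce the merge outside the Colab notebook, the same YAML can be passed to mergekit directly. The sketch below follows the Python entry point documented in the mergekit README; the config and output paths are hypothetical placeholders, and the option values are illustrative rather than settings taken from this repository.

```python
# Sketch: run the merge config above with mergekit (pip install mergekit).
# CONFIG_YML and OUTPUT_PATH are hypothetical paths, not part of this repo.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "config.yaml"  # the YAML block above, saved to disk
OUTPUT_PATH = "./merged"    # directory where the merged weights are written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # carry the base model's tokenizer over
    ),
)
```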
## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ALBADDAWI/DeepCode-7B-Aurora-v12"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the conversation with the model's chat template, leaving room
# for the assistant's reply.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens from the formatted prompt.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
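In float16, a 7B model needs roughly 14 GB for the weights alone. If that does not fit on the available GPU, a quantized load is a common fallback; the variant below is a sketch that assumes bitsandbytes is installed and has not been validated against this particular checkpoint.

```python
# Sketch: 4-bit quantized loading (pip install -qU bitsandbytes).
import transformers
from transformers import BitsAndBytesConfig

pipeline = transformers.pipeline(
    "text-generation",
    model="ALBADDAWI/DeepCode-7B-Aurora-v12",
    device_map="auto",
    # model_kwargs are forwarded to from_pretrained; this requests 4-bit weights.
    model_kwargs={"quantization_config": BitsAndBytesConfig(load_in_4bit=True)},
)
```

Generation then works exactly as in the float16 example above.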