Steelskull committed on
Commit bf8ca99 (1 parent: d757296)

Update README.md

Files changed (1): README.md (+46 −41)
README.md CHANGED
@@ -2,26 +2,52 @@
  tags:
  - merge
  - mergekit
- - lazymergekit
  - NousResearch/Meta-Llama-3-8B-Instruct
  base_model:
  - NousResearch/Meta-Llama-3-8B-Instruct
- - NousResearch/Meta-Llama-3-8B-Instruct
- - NousResearch/Meta-Llama-3-8B-Instruct
- - NousResearch/Meta-Llama-3-8B-Instruct
  ---
 
- # Aura-llama
-
- Aura-llama is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
- * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
-
- ## 🧩 Configuration
-
- ```yaml
  dtype: float16
  merge_method: passthrough
  slices:
@@ -37,29 +63,8 @@ slices:
  - sources:
  - layer_range: [24, 32]
  model: NousResearch/Meta-Llama-3-8B-Instruct
- ```
-
- ## 💻 Usage
-
- ```python
- !pip install -qU transformers accelerate
-
- from transformers import AutoTokenizer
- import transformers
- import torch
-
- model = "TheSkullery/Aura-llama"
- messages = [{"role": "user", "content": "What is a large language model?"}]
-
- tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
-
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
- print(outputs[0]["generated_text"])
- ```
 
  tags:
  - merge
  - mergekit
  - NousResearch/Meta-Llama-3-8B-Instruct
  base_model:
  - NousResearch/Meta-Llama-3-8B-Instruct
  ---
 
+ <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Aura-llama Data Card</title>
+ <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
+ <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; }
+ .container { width: 80%; max-width: 800px; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); }
+ .header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); }
+ .update-section { margin-top: 30px; } .update-section h2 { font-size: 24px; color: #88C0D0; }
+ .update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; }
+ .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; }
+ a { color: #88C0D0; text-decoration: none; }
+ a:hover { color: #A3BE8C; }
+ pre { background-color: rgba(255, 255, 255, 0.05); padding: 10px; border-radius: 5px; overflow-x: auto; }
+ code { font-family: 'Courier New', monospace; color: #A3BE8C; } </style> </head> <body> <div class="container">
+ <div class="header">
+ <h1>Aura-llama</h1> </div> <div class="info">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/QYpWMEXTe0_X3A7HyeBm0.webp" alt="Aura-llama image">
+ <p>Now that the cute anime girl has your attention.</p>
+ <p>UPDATE: The model has been fixed.</p>
+ <p>Aura-llama uses the depth up-scaling (DUS) methodology presented by SOLAR, which combines architectural modification with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and I plan to continue training the model in the future.</p>
+ <p>Aura-llama is a merge of the following models to create a base model to work from:</p>
+ <ul>
+ <li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a></li>
+ <li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a></li>
+ </ul>
+ </div>
+ <div class="update-section">
+ <h2>Merged Evals (Has Not Been Finetuned):</h2>
+ <p>Aura-llama</p>
+ <ul>
+ <li>Avg: ?</li>
+ <li>ARC: ?</li>
+ <li>HellaSwag: ?</li>
+ <li>MMLU: ?</li>
+ <li>T-QA: ?</li>
+ <li>Winogrande: ?</li>
+ <li>GSM8K: ?</li>
+ </ul>
+ </div>
+ <div class="update-section">
+ <h2>🧩 Configuration</h2>
+ <pre><code>
  dtype: float16
  merge_method: passthrough
  slices:
 
  - sources:
  - layer_range: [24, 32]
  model: NousResearch/Meta-Llama-3-8B-Instruct
+ </code></pre>
+ </div>
+ </div>
+ </body>
+ </html>
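The diff shows only the final slice of the passthrough configuration (`layer_range: [24, 32]`); the elided middle slices are not visible here. As a sketch of the SOLAR-style depth up-scaling the card describes, the layer ranges below are illustrative only (taken from the SOLAR paper's 32-layer example), not the actual Aura-llama config:

```python
# Sketch of SOLAR-style depth up-scaling (DUS) slice arithmetic.
# The layer ranges below are hypothetical; this diff does not show the
# full slice list of the Aura-llama passthrough config.
def upscaled_layers(slices):
    """Flatten half-open [start, end) passthrough ranges into the
    merged model's layer order."""
    layers = []
    for start, end in slices:
        layers.extend(range(start, end))
    return layers

# Duplicating the middle band of a 32-layer model, as in the SOLAR paper:
merged = upscaled_layers([(0, 24), (8, 32)])
print(len(merged))             # 48 layers in the up-scaled stack
print(merged[23], merged[24])  # 23 8 -> the second copy restarts at layer 8
```

The merged depth exceeds the base model's because the ranges overlap; the overlapping band (layers 8-23 here) appears twice in the new stack, which is what the continued pretraining mentioned above is meant to smooth out.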