update model
- README.md +29 -0
- consolidated_2layers.pth +3 -0
- params.json +11 -0
- tokenizer.model +3 -0
README.md CHANGED
@@ -1,3 +1,32 @@
---
license: apache-2.0
---

## 1. Introduction

This model contains only the first two layers of Meta-Llama-3-8B-Instruct. The full Llama3-8B model has 32 layers and cannot be loaded on CPU on a machine with 16 GB of RAM, so the first two layers were extracted from the 32 and saved as a new, much smaller model.

It is mainly intended for working through [llama3-from-scratch-zh](https://github.com/wdndev/llama3-from-scratch-zh) and [llama3-from-scratch](https://github.com/naklecha/llama3-from-scratch), so that laptop users can load the model quickly and follow along.
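
The extraction script itself is not included in this repo; the following is only a minimal sketch of how such a checkpoint could be produced, assuming Meta's reference checkpoint key layout (`tok_embeddings.weight`, `layers.0.attention.wq.weight`, `norm.weight`, `output.weight`) and placeholder paths:

```
# Minimal sketch (not the exact script used for this repo): keep the embeddings,
# final norm, output head, and the first two transformer layers of the full
# Meta checkpoint, assuming the reference "layers.N. ..." key prefixes.
import torch

full = torch.load("Meta-Llama-3-8B-Instruct/consolidated.00.pth", map_location="cpu")

keep_layers = 2
small = {
    k: v for k, v in full.items()
    if not k.startswith("layers.")            # tok_embeddings, norm, output head
    or int(k.split(".")[1]) < keep_layers     # plus layers 0 and 1
}

torch.save(small, "consolidated_2layers.pth")
```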

## 2. Model Download

Download the model with Git (Git LFS is needed to fetch the checkpoint and tokenizer files):

```
# Download the model via Git
git clone https://huggingface.co/wdndev/Meta-Llama-3-8B-Instruct-2layers
```
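
Once cloned, the checkpoint and its config load on CPU much as in the llama3-from-scratch notebooks; a minimal sketch, assuming the repo was cloned into `Meta-Llama-3-8B-Instruct-2layers/`:

```
# Minimal sketch: load the 2-layer checkpoint and its hyperparameters on CPU.
import json
import torch

model = torch.load(
    "Meta-Llama-3-8B-Instruct-2layers/consolidated_2layers.pth",
    map_location="cpu",
)
with open("Meta-Llama-3-8B-Instruct-2layers/params.json") as f:
    config = json.load(f)

print(config["n_layers"])        # 2
print(list(model.keys())[:3])    # e.g. tok_embeddings.weight, layers.0. ...
```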

consolidated_2layers.pth ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06002b4ed2ef0cb9742206b129ecdaa504431754b64a3eebaae10d1388f1836f
size 2973810206

params.json ADDED
@@ -0,0 +1,11 @@
{
  "dim": 4096,
  "n_layers": 2,
  "n_heads": 32,
  "n_kv_heads": 8,
  "vocab_size": 128256,
  "multiple_of": 1024,
  "ffn_dim_multiplier": 1.3,
  "norm_eps": 1e-05,
  "rope_theta": 500000.0
}
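
For reference, a short sketch of how these fields map to tensor sizes, using the FFN sizing formula from Meta's reference Llama implementation (an assumption here; the shapes can also be read directly from the checkpoint):

```
# Sketch: derive head and FFN dimensions from params.json. The FFN sizing
# follows Meta's reference Llama code (SwiGLU 2/3 scaling, ffn_dim_multiplier,
# then rounding up to a multiple of multiple_of).
import json

with open("params.json") as f:
    p = json.load(f)

head_dim = p["dim"] // p["n_heads"]              # 4096 // 32 = 128
gqa_groups = p["n_heads"] // p["n_kv_heads"]     # 32 // 8 = 4 query heads per KV head

hidden = int(2 * (4 * p["dim"]) / 3)
hidden = int(p["ffn_dim_multiplier"] * hidden)
hidden = p["multiple_of"] * ((hidden + p["multiple_of"] - 1) // p["multiple_of"])
print(head_dim, gqa_groups, hidden)              # 128 4 14336
```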

tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82e9d31979e92ab929cd544440f129d9ecd797b69e327f80f17e1c50d5551b55
size 2183982
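
tokenizer.model is a tiktoken BPE rank file and is loaded in the llama3-from-scratch notebooks roughly as below (a sketch; the path is a placeholder, and building the full `tiktoken.Encoding` with Llama 3's special tokens and regex pattern is omitted):

```
# Sketch: load the raw BPE merge ranks with tiktoken, as done in the
# llama3-from-scratch notebooks.
from tiktoken.load import load_tiktoken_bpe

ranks = load_tiktoken_bpe("Meta-Llama-3-8B-Instruct-2layers/tokenizer.model")
print(len(ranks))   # 128000 base tokens; the rest of the 128256 vocab are special tokens
```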