Update README.md
README.md

license: apache-2.0
language:
- 'pl'
---
## Why prune?
Falcon-11B is still undertrained, as can be seen in this graph:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab77dd6b7/QeaL9bOrPskustzFpjMUP.png)
This is why the choice was made to prune 50% of the layers.

Note that ~1B tokens of continued pre-training (~1M rows of 1k tokens) are still required to restore the perplexity of this model in the desired language.
I'm planning on doing that for certain languages, depending on how much compute will be available.
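
To make the pruning concrete: in mergekit, dropping layers can be expressed as a `passthrough` merge over layer slices. The config below is only a sketch, assuming Falcon-11B's 60 decoder layers; the layer ranges are hypothetical, not the ones used for this model.

```yaml
# Hypothetical slicing config: keeps 30 of 60 layers (50% pruned).
# The actual ranges chosen for this model are not shown in this card.
slices:
  - sources:
      - model: tiiuae/falcon-11B
        layer_range: [0, 15]   # keep the first 15 layers
  - sources:
      - model: tiiuae/falcon-11B
        layer_range: [45, 60]  # keep the last 15 layers
merge_method: passthrough
dtype: bfloat16
```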
# sliced
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
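
A config like the sketch above can be applied with mergekit's CLI, e.g. `mergekit-yaml config.yml ./output-model-directory`.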