---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---

# RWKV-4 3B

## Model Description

RWKV-4 3B is an L32-D2560 (32-layer, 2560-dimensional) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

**Note: it's a BF16 model, and it may overflow if you run it in FP16 (probably fixable by rescaling the weights).**
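
If you load the weights yourself, one safe option is to keep them in bfloat16, whose exponent range matches FP32. The snippet below is a minimal sketch, not the official loader; it assumes the .pth file is a plain PyTorch state dict, and the local path is illustrative.

```python
import torch

# Load the raw checkpoint (assumed to be a plain state dict) on the CPU.
state_dict = torch.load("RWKV-4-Pile-3B-20221110-ctx4096.pth", map_location="cpu")

# bfloat16 shares FP32's exponent range, so values that would
# overflow in FP16 remain representable.
state_dict = {k: v.bfloat16() for k, v in state_dict.items()}
```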

At the moment you have to use my GitHub code (https://github.com/BlinkDL/RWKV-LM) to run it.
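
Before running the full pipeline, you can sanity-check a downloaded checkpoint by reading its hyperparameters straight from the tensor shapes. The sketch below is hedged: the repo id BlinkDL/rwkv-4-pile-3b and the key names (emb.weight, blocks.N.*) are assumptions based on RWKV's usual conventions, so verify them against the actual files.

```python
from huggingface_hub import hf_hub_download
import torch

# Assumed repo id and filename; adjust to match the actual Hub repo.
path = hf_hub_download(
    repo_id="BlinkDL/rwkv-4-pile-3b",
    filename="RWKV-4-Pile-3B-20221110-ctx4096.pth",
)
sd = torch.load(path, map_location="cpu")

# Assumed RWKV naming: blocks live under "blocks.<i>.*" and the
# token embedding under "emb.weight".
n_layer = 1 + max(int(k.split(".")[1]) for k in sd if k.startswith("blocks."))
n_embd = sd["emb.weight"].shape[1]
print(n_layer, n_embd)  # expected: 32 2560
```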

New checkpoint: RWKV-4-Pile-3B-20221110-ctx4096.pth, fine-tuned to ctx_len = 4096:
* LAMBADA ppl 5.25, acc 63.96%
* PIQA acc 74.16%
* SC2016 acc 70.71%
* Hellaswag acc_norm 59.89%
* ctx_len = 4096, n_layer = 32, n_embd = 2560

Final checkpoint: RWKV-4-Pile-3B-20221008-8023.pth, trained on the Pile for 331B tokens:
* Pile loss 1.9469
* LAMBADA ppl 5.24, acc 63.94%
* PIQA acc 73.72%
* SC2016 acc 70.28%
* Hellaswag acc_norm 59.63%
* ctx_len = 1024, n_layer = 32, n_embd = 2560