---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- The Pile
---

# RWKV-4 1.5B

## Model Description
RWKV-4 1.5B is an L24-D2048 (24-layer, 2048-dimensional) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
**Note: it's a BF16 model, and it may overflow if you are using FP16 (probably fixable by rescaling the weights).**
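If you do need FP16, here is a minimal sanity-check sketch, assuming the .pth file is a plain PyTorch state dict of tensors (the file name is one of the checkpoints listed below). The screening logic is illustrative only, and note that overflow can also occur in activations at runtime, which a weight scan like this will not catch:

```python
import torch

# Load the raw BF16 state dict on CPU (assumes a plain state dict of tensors).
state = torch.load("RWKV-4-Pile-1B5-20220903-8040.pth", map_location="cpu")

fp16_max = torch.finfo(torch.float16).max  # 65504.0

# Screen for weights whose magnitude already exceeds the FP16 range;
# these would need rescaling before casting the model to FP16.
for name, w in state.items():
    max_abs = w.float().abs().max().item()
    if max_abs > fp16_max:
        print(f"{name}: max |w| = {max_abs:.3e} exceeds FP16 range")
```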
At the moment you have to use my GitHub code (https://github.com/BlinkDL/RWKV-LM) to run it.
```
ctx_len = 1024
n_layer = 24
n_embd = 2048
```
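Two of these hyperparameters can be read back from the checkpoint itself. A minimal sketch, assuming RWKV-LM's usual key layout (`emb.weight`, `blocks.<i>.*`); adjust the key names if your file differs:

```python
import torch

state = torch.load("RWKV-4-Pile-1B5-20220903-8040.pth", map_location="cpu")

# The embedding matrix is (vocab_size, n_embd); expect n_embd = 2048.
vocab_size, n_embd = state["emb.weight"].shape
print("vocab_size:", vocab_size, "n_embd:", n_embd)

# Keys look like "blocks.<i>.<...>"; the number of distinct block
# indices gives n_layer; expect 24.
n_layer = len({k.split(".")[1] for k in state if k.startswith("blocks.")})
print("n_layer:", n_layer)

# ctx_len is a training-time setting and (as far as I know) is not
# stored in the weights; take it from the checkpoint name or this card.
```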
New checkpoint: RWKV-4-Pile-1B5-20220929-ctx4096.pth, fine-tuned to ctx_len = 4096.
Final checkpoint: RWKV-4-Pile-1B5-20220903-8040.pth, trained on the Pile for 332B tokens:
- Pile loss 2.0415
- LAMBADA ppl 7.04, acc 56.43%
- PIQA acc 72.36%
- SC2016 acc 68.73%
- Hellaswag acc_norm 52.48%
Preview checkpoint: RWKV-4-Pile-1B5-20220822-5809.pth, trained on the Pile for 240B tokens:
- Pile loss 2.0518
- LAMBADA ppl 7.14, acc 56.36%
- PIQA acc 71.71%
- SC2016 acc 68.15%
- Hellaswag acc_norm 52.04%
Preview checkpoint: RWKV-4-Pile-1B5-20220814-4526.pth, trained on the Pile for 187B tokens:
- Pile loss 2.0635
- LAMBADA ppl 7.34, acc 55.64%
- PIQA acc 71.44%
- SC2016 acc 68.25%
- Hellaswag acc_norm 51.60%