---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 3B

## Model Description
RWKV-4 3B is an L32-D2560 (32 layers, 2560-dim embedding) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
**Note: this is a BF16 model, and it may overflow if you run it in FP16 (probably fixable by rescaling the weights).**
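As a quick illustration of the caveat above, here is a minimal sketch (not part of the official repo) that checks the downloaded weights against the FP16 range and keeps them in BF16; the file name assumes the 125B-token preview checkpoint listed below.

```python
import torch

# Minimal sketch (not the official loader): check whether any stored
# weights exceed the FP16 representable range, which is one way the
# overflow can show up. Path assumes the 125B-token preview checkpoint.
state_dict = torch.load("RWKV-4-Pile-3B-20220921-3047.pth", map_location="cpu")

fp16_max = torch.finfo(torch.float16).max  # 65504.0
for name, w in state_dict.items():
    peak = w.abs().max().item()
    if peak > fp16_max:
        print(f"{name}: max |w| = {peak:.1f} overflows FP16")

# Safest option: keep the weights in BF16, the format they were trained
# in. BF16 shares FP32's exponent range, trading away mantissa precision.
state_dict = {k: v.to(torch.bfloat16) for k, v in state_dict.items()}
```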
At the moment you have to use my GitHub code (https://github.com/BlinkDL/RWKV-LM) to run it.
```
ctx_len = 1024
n_layer = 32
n_embd = 2560
```
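If you want to confirm that a downloaded checkpoint matches this configuration, a short sanity check like the one below works. The parameter names (`emb.weight`, `blocks.<i>...`) follow the RWKV-4 releases and may change, so treat this as a sketch rather than a supported API.

```python
import torch

# Sanity-check sketch (assumes RWKV-4 release key names such as
# "emb.weight" and "blocks.<i>..."): verify the checkpoint matches
# the advertised L32-D2560 / ~3B-parameter configuration.
sd = torch.load("RWKV-4-Pile-3B-20220921-3047.pth", map_location="cpu")

n_params = sum(p.numel() for p in sd.values())
n_embd = sd["emb.weight"].shape[1]
n_layer = len({k.split(".")[1] for k in sd if k.startswith("blocks.")})

print(f"params={n_params/1e9:.2f}B, n_layer={n_layer}, n_embd={n_embd}")
# Expected: params ~3B, n_layer=32 (the L32), n_embd=2560 (the D2560)
```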
Preview checkpoint: RWKV-4-Pile-3B-20220921-3047.pth, trained on the Pile for 125B tokens:
- Pile loss 2.0026
- LAMBADA ppl 5.72, acc 61.36%
- PIQA acc 73.39%
- SC2016 acc 68.84%
- Hellaswag acc_norm 56.57%
Preview checkpoint: RWKV-4-Pile-3B-20220915-1207.pth, trained on the Pile for 50B tokens:
- Pile loss 2.0902
- LAMBADA ppl 7.01, acc 57.11%
- PIQA acc 72.52%
- SC2016 acc 68.36%
- Hellaswag acc_norm 52.17%