---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- The Pile

---

# RWKV-4 3B

## Model Description

RWKV-4 3B is an L32-D2560 (32 layers, 2560-dimensional embedding) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

**Note: it's a BF16 model, and it may overflow if you run it in FP16 (probably fixable by rescaling the weights).**
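
A minimal sketch of the safe route (assuming the .pth checkpoint is a plain PyTorch state dict; the path and the casting choice are illustrative, not the official loader):

```python
import torch

# Hypothetical local path to one of the checkpoints listed below.
state_dict = torch.load("RWKV-4-Pile-3B-20220921-3047.pth", map_location="cpu")

# Keep the weights in float32 (safe on any device) or bfloat16 (same exponent
# range as FP32, so no overflow) instead of casting down to float16.
for name, tensor in state_dict.items():
    state_dict[name] = tensor.float()  # or tensor.bfloat16()
```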

At the moment you have to use the code from my GitHub repo (https://github.com/BlinkDL/RWKV-LM) to run it.

```
ctx_len = 1024
n_layer = 32
n_embd = 2560
```
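
As a rough sanity check on the "3B" in the name, here is a back-of-the-envelope parameter count. It assumes the RWKV-4 block layout (four d×d time-mixing projections plus a channel-mix FFN with a 4d hidden size) and the Pile tokenizer's 50277-token vocabulary, and it ignores LayerNorms and the small mixing vectors:

```python
n_layer, n_embd, vocab = 32, 2560, 50277

time_mix = 4 * n_embd**2         # receptance, key, value, output projections
channel_mix = 9 * n_embd**2      # d->4d key, 4d->d value, d->d receptance
embeddings = 2 * vocab * n_embd  # input embedding + output head

total = n_layer * (time_mix + channel_mix) + embeddings
print(f"~{total / 1e9:.2f}B parameters")  # ~2.98B
```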

Preview checkpoint: RWKV-4-Pile-3B-20220921-3047.pth (trained on the Pile for 125B tokens):
* Pile loss 2.0026
* LAMBADA ppl 5.72, acc 61.36%
* PIQA acc 73.39%
* SC2016 (StoryCloze 2016) acc 68.84%
* Hellaswag acc_norm 56.57%

Preview checkpoint: RWKV-4-Pile-3B-20220915-1207.pth (trained on the Pile for 50B tokens):
* Pile loss 2.0902
* LAMBADA ppl 7.01, acc 57.11%
* PIQA acc 72.52%
* SC2016 acc 68.36%
* Hellaswag acc_norm 52.17%
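
For intuition, assuming the reported Pile loss is a natural-log cross-entropy (as PyTorch reports it), it converts to token-level perplexity via ppl = exp(loss):

```python
import math

for loss in (2.0026, 2.0902):
    print(f"Pile loss {loss} -> perplexity {math.exp(loss):.2f}")
# Pile loss 2.0026 -> perplexity 7.41
# Pile loss 2.0902 -> perplexity 8.09
```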