---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile

---

# RWKV-4 7B

## Model Description

RWKV-4 7B is an L32-D4096 (32-layer, 4096-dimensional) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

**Note: it's a BF16 model, and it may overflow if you run it in FP16 (probably fixable by rescaling the weights).**
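
For illustration, a minimal sketch of loading the checkpoint at its native precision with plain PyTorch (the file name is the final checkpoint listed below; keeping the weights in BF16, or upcasting to FP32, sidesteps the FP16 overflow):

```python
import torch

# Minimal sketch: load the checkpoint on CPU and keep every tensor in
# BF16, its native precision (a no-op for tensors already stored as BF16).
# Casting to FP16 risks overflow; BF16 or FP32 is the safe choice unless
# you rescale the weights first.
state_dict = torch.load("RWKV-4-Pile-7B-20221115-8047.pth", map_location="cpu")
state_dict = {k: v.bfloat16() for k, v in state_dict.items()}
```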

At the moment you have to use my GitHub code (https://github.com/BlinkDL/RWKV-LM) to run it.
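
If you prefer a packaged runtime, a community `rwkv` pip package can also load these checkpoints; this is a hedged sketch only, assuming that package's `RWKV`/`PIPELINE` interface and a local copy of the 20B tokenizer JSON (paths are placeholders):

```python
from rwkv.model import RWKV
from rwkv.utils import PIPELINE

# Sketch only: 'strategy' selects device/precision; 'cuda bf16' keeps the
# BF16 weights intact. The model path omits the .pth extension by the
# package's convention; both paths below are placeholders.
model = RWKV(model="RWKV-4-Pile-7B-20221115-8047", strategy="cuda bf16")
pipeline = PIPELINE(model, "20B_tokenizer.json")
print(pipeline.generate("\nIn a shocking finding,", token_count=100))
```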

```
ctx_len = 1024
n_layer = 32
n_embd = 4096
```

(There are also ctx_len 2048 and 4096 models, though they might be slightly weaker at generating short content.)
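
For reference, the same hyperparameters as a plain Python config; the attribute names here are illustrative and may not match the exact argument names the RWKV-LM scripts use:

```python
from types import SimpleNamespace

# Illustrative config mirroring the checkpoint's architecture (L32-D4096).
# Names are for exposition; check the RWKV-LM repo for the real arguments.
args = SimpleNamespace(
    ctx_len=1024,   # training context length
    n_layer=32,     # number of residual blocks ("L32")
    n_embd=4096,    # embedding / hidden size ("D4096")
)
```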

Final checkpoint: RWKV-4-Pile-7B-20221115-8047.pth, trained on the Pile for 332B tokens.
* Pile loss 1.8415
* LAMBADA ppl 4.38, acc 67.18%
* PIQA acc 76.06%
* SC2016 acc 73.44%
* Hellaswag acc_norm 65.51%
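
As a sanity check on the first number: assuming the reported Pile loss is a mean per-token cross-entropy in nats, the corresponding perplexity is simply its exponential:

```python
import math

# Assuming the reported Pile loss is per-token cross-entropy in nats,
# perplexity = exp(loss).
pile_loss = 1.8415
print(round(math.exp(pile_loss), 2))  # ~6.31
```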