jbloom committed on
Commit
44ce49c
1 Parent(s): d523c1f

Update Readme

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -1,3 +1,9 @@
  ---
  license: mit
+ datasets:
+ - Skylion007/openwebtext
+ language:
+ - en
  ---
+
+ We trained 12 Sparse Autoencoders on the residual stream of GPT2-small. Each contains ~25k features: we used an expansion factor of 32, and GPT2-small's residual stream has 768 dimensions (768 × 32 = 24,576). We trained with an L1 coefficient of 8e-5 and a learning rate of 4e-4 for 300 million tokens, storing a buffer of ~500k tokens' worth of activations from OpenWebText that is refilled and shuffled whenever 50% of its tokens have been used. To avoid dead neurons, we use ghost gradients. Our encoder/decoder weights are untied, but we do use a tied decoder bias initialized at the geometric median, per [Bricken et al.](https://transformer-circuits.pub/2023/monosemantic-features).
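For illustration, here is a minimal sketch of the autoencoder shape this description implies: untied encoder/decoder weights, a shared decoder bias `b_dec` that is subtracted from the input before encoding and added back after decoding, and an L1 penalty on the feature activations. This is not the repo's training code; the class and parameter names (`SparseAutoencoder`, `d_model`, `expansion`) are illustrative, ghost gradients and the geometric-median initialization are omitted, and PyTorch is assumed.

```python
# Minimal sketch of the described SAE (illustrative; not this repo's code).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, expansion: int = 32):
        super().__init__()
        d_sae = d_model * expansion  # 768 * 32 = 24_576 features
        self.W_enc = nn.Parameter(torch.empty(d_model, d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_model))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        # Tied decoder bias: subtracted before encoding, added back after
        # decoding. The README initializes it at the geometric median of
        # the activations (skipped here; zeros used instead).
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = feats @ self.W_dec + self.b_dec
        return recon, feats

L1_COEFF = 8e-5  # from the README

def loss_fn(x, recon, feats):
    # Reconstruction (MSE) plus L1 sparsity penalty on feature activations.
    mse = (recon - x).pow(2).mean()
    l1 = feats.abs().sum(dim=-1).mean()
    return mse + L1_COEFF * l1
```

The refill-and-shuffle buffer policy could look like the sketch below, again under assumed names (`ActivationBuffer`, `fill_fn`): once half the stored activations have been consumed, fresh activations replace the used ones and the whole buffer is reshuffled, so stale and fresh examples are mixed rather than served in dataset order.

```python
# Sketch of the buffer policy described above (hypothetical names).
import torch

class ActivationBuffer:
    def __init__(self, capacity: int, fill_fn):
        self.fill_fn = fill_fn             # yields fresh activations from OpenWebText
        self.storage = fill_fn(capacity)   # (capacity, d_model) tensor
        self.used = 0

    def next_batch(self, batch_size: int) -> torch.Tensor:
        # Refill and shuffle once 50% of the buffer has been consumed.
        if self.used >= self.storage.shape[0] // 2:
            fresh = self.fill_fn(self.used)
            self.storage = torch.cat([self.storage[self.used:], fresh])
            self.storage = self.storage[torch.randperm(self.storage.shape[0])]
            self.used = 0
        batch = self.storage[self.used : self.used + batch_size]
        self.used += batch_size
        return batch
```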