tim-lawson committed on
Commit ce887e4
1 Parent(s): b9c267d

Update README.md

Files changed (1):
  1. README.md +18 -5
README.md CHANGED
@@ -3,10 +3,23 @@ language: en
 library_name: mlsae
 license: mit
 tags:
-- model_hub_mixin
-- pytorch_model_hub_mixin
+- model_hub_mixin
+- pytorch_model_hub_mixin
+datasets:
+- monology/pile-uncopyrighted
 ---
 
-This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
-- Library: https://github.com/tslwn/mlsae
-- Docs: [More Information Needed]
+# mlsae-pythia-70m-deduped-x256-k32-tfm
+
+A Multi-Layer Sparse Autoencoder (MLSAE) trained on the residual stream
+activation vectors from every layer of
+[EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped)
+with an expansion factor of 256 and k = 32, over 1 billion tokens from
+[monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).
+This model includes the underlying transformer.
+
+For more details, see:
+
+- Paper: <https://arxiv.org/abs/2409.04185>
+- GitHub repository: <https://github.com/tim-lawson/mlsae>
+- Weights & Biases project: <https://wandb.ai/timlawson-/mlsae>