Model Card for tim-lawson/sae-pythia-160m-deduped-x64-k32-tfm-layers-1

A Multi-Layer Sparse Autoencoder (MLSAE) trained on the residual stream activation vectors from EleutherAI/pythia-160m-deduped with an expansion factor of R = 64 and sparsity k = 32, over 1 billion tokens from monology/pile-uncopyrighted.

This model is a PyTorch Lightning MLSAETransformer module, which includes the underlying transformer.
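The encoder described above projects each residual-stream vector up by the expansion factor R = 64 and keeps only the k = 32 largest pre-activations per token. A minimal NumPy sketch of this TopK forward pass, assuming Pythia-160m's hidden size of 768 and using random placeholder weights (the trained weights live in the released checkpoint, not here):

```python
import numpy as np

d_model = 768            # Pythia-160m residual-stream width (assumption for illustration)
R = 64                   # expansion factor
k = 32                   # sparsity: active latents per token
n_latents = R * d_model  # 49152 latents

rng = np.random.default_rng(0)
# Random placeholder weights; the actual checkpoint holds the trained parameters.
W_enc = rng.standard_normal((d_model, n_latents)) / np.sqrt(d_model)
W_dec = rng.standard_normal((n_latents, d_model)) / np.sqrt(n_latents)
b_pre = np.zeros(d_model)

def topk_sae(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Encode residual-stream vectors x of shape (batch, d_model) and reconstruct them."""
    pre = (x - b_pre) @ W_enc
    # Keep the k largest pre-activations in each row, zero out the rest.
    idx = np.argpartition(pre, -k, axis=-1)[:, -k:]
    latents = np.zeros_like(pre)
    np.put_along_axis(latents, idx, np.take_along_axis(pre, idx, axis=-1), axis=-1)
    recon = latents @ W_dec + b_pre
    return latents, recon

x = rng.standard_normal((4, d_model))
latents, recon = topk_sae(x)
print((latents != 0).sum(axis=-1))  # each of the 4 vectors has exactly 32 active latents
```

This is a sketch of the TopK mechanism only; the checkpoint itself is a PyTorch Lightning `MLSAETransformer` module that bundles these weights with the underlying transformer.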

Model Sources

Paper: Residual Stream Analysis with Multi-Layer SAEs (arXiv:2409.04185)

Citation

BibTeX:

@misc{lawson_residual_2024,
  title         = {Residual {{Stream Analysis}} with {{Multi-Layer SAEs}}},
  author        = {Lawson, Tim and Farnik, Lucy and Houghton, Conor and Aitchison, Laurence},
  year          = {2024},
  month         = oct,
  number        = {arXiv:2409.04185},
  eprint        = {2409.04185},
  primaryclass  = {cs},
  publisher     = {arXiv},
  doi           = {10.48550/arXiv.2409.04185},
  urldate       = {2024-10-08},
  archiveprefix = {arXiv}
}
Model size: 238M parameters (safetensors, F32)
