---
language: en
library_name: mlsae
license: mit
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
datasets:
- monology/pile-uncopyrighted
---

# mlsae-pythia-70m-deduped-x64-k256-tfm

A Multi-Layer Sparse Autoencoder (MLSAE) trained on the residual stream activation vectors from every layer of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped), with an expansion factor of 64 and k = 256, over 1 billion tokens from [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted). This model includes the underlying transformer.

For more details, see:

- Paper:
- GitHub repository:
- Weights & Biases project: