---
license: mit
datasets:
  - monology/pile-uncopyrighted
---

This repository contains the weights of a sparse autoencoder I trained on the residual-stream activations of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), using [The Pile (uncopyrighted)](https://huggingface.co/datasets/monology/pile-uncopyrighted) as the training data.

So far I have trained only a single SAE (on layer 16), though I may train more in the future.

The easiest way to use the model is with the [SAE Lens](https://github.com/jbloomAus/SAELens) library. The [training repo](https://github.com/tylercosgrove/sparse-autoencoder-mistral7b) is also available.
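
For reference, here is a minimal sketch of how loading the weights with SAE Lens might look, assuming the files are stored in SAE Lens's on-disk format. The repo id in the snippet is a placeholder (use the model id shown at the top of this page), and the exact file layout may differ; check the repo's file listing.

```python
import torch
from huggingface_hub import snapshot_download
from sae_lens import SAE

# Download the SAE weights from the Hugging Face Hub. The repo id is a
# placeholder -- substitute the actual model id from this page.
local_dir = snapshot_download(repo_id="<user>/<this-sae-repo>")

# Load the SAE from disk (assumes SAE Lens's on-disk format).
sae = SAE.load_from_disk(local_dir, device="cpu")  # or "cuda"

# Encode a batch of layer-16 residual-stream activations into sparse
# feature activations, then reconstruct them. Random inputs stand in
# for real Mistral-7B activations here.
acts = torch.randn(4, sae.cfg.d_in)
features = sae.encode(acts)   # [batch, n_features], mostly zeros
recon = sae.decode(features)  # [batch, d_in] reconstruction
```

In practice you would replace the random tensor with activations captured at the layer 16 residual stream of Mistral-7B-Instruct-v0.1, e.g. via a forward hook or a TransformerLens-style `hook_resid_pre` cache.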