tylercosgrove
committed on
update readme
README.md CHANGED
```diff
@@ -2,8 +2,9 @@
 license: mit
 datasets:
 - monology/pile-uncopyrighted
-library_name: transformers
 ---
 This contains the weights of a sparse autoencoder I trained on the residual activations of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). I used [The Pile (uncopyrighted)](https://huggingface.co/datasets/monology/pile-uncopyrighted) for the training data. As of right now, I have only trained a single SAE (on layer 16), though I may do more in the future.
 
-
+The easiest way to use the model is with the [SAE Lens](https://github.com/jbloomAus/SAELens) library.
+
+Here is the [training repo](https://github.com/tylercosgrove/sparse-autoencoder-mistral7b).
```
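For reference, loading a pretrained SAE with SAE Lens typically looks something like the sketch below. This is a minimal, unverified sketch: the `release` and `sae_id` strings are hypothetical placeholders (the actual identifiers for this repo aren't given in the commit), and it assumes the weights are stored in an SAE Lens-compatible format.

```python
# Minimal sketch of loading and running an SAE via SAE Lens.
# ASSUMPTIONS: the release/sae_id strings below are hypothetical
# placeholders, and the weights are in an SAE Lens-compatible format.
import torch
from sae_lens import SAE

sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="<this-repo-id>",           # placeholder, substitute the real id
    sae_id="blocks.16.hook_resid_pre",  # assumed hook point for layer 16
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# Encode residual-stream activations into sparse features, then reconstruct.
acts = torch.randn(4, sae.cfg.d_in, device=sae.device)  # dummy batch
feature_acts = sae.encode(acts)
reconstruction = sae.decode(feature_acts)
```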