---
license: mit
---

This directory contains sparse autoencoders trained on activations taken at various points within gpt2-small, using Neel Nanda's open-source code. Each autoencoder was trained on 1B tokens from OpenWebText. A demo Colab notebook is here.

The autoencoders are named "gpt2-small_{feature_dict_size}_{point}_{layer}.pt", where:

  • "feature_dict_size" is the number of hidden neurons in the autoencoder
  • "point" is either "mlp_out" or "resid_pre"
  • "layer" is an integer from 0,...,11.