
CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder (SAE) trained on CLIP's internal representations (layer 9 MLP output, CLS token only).

Model Details

Architecture

  • Layer: 9
  • Layer Type: hook_mlp_out
  • Model: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
  • Dictionary Size: 49152
  • Input Dimension: 768
  • Expansion Factor: 64
  • CLS Token Only: True
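The architecture above implies a standard one-hidden-layer SAE: a 768-dimensional input expanded by a factor of 64 into a 49,152-feature dictionary (768 × 64 = 49,152). A minimal sketch in PyTorch, assuming the common encoder/ReLU/decoder SAE layout (class and parameter names here are illustrative, not taken from the training code):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Illustrative sketch of the SAE shape described above."""

    def __init__(self, d_in=768, expansion_factor=64):
        super().__init__()
        d_sae = d_in * expansion_factor  # 768 * 64 = 49152 dictionary features
        self.W_enc = nn.Parameter(torch.randn(d_in, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_in) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x):
        # x: CLS-token activations from CLIP layer 9, hook_mlp_out
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts
```

Because only the CLS token is used, each image contributes a single 768-dimensional activation vector (hence the context size of 1 under Training).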

Training

  • Training Images: 122875904
  • Learning Rate: 0.0002
  • L1 Coefficient: 0.3000
  • Batch Size: 4096
  • Context Size: 1
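With an L1 coefficient of 0.3, the training objective is presumably the usual SAE loss: mean-squared reconstruction error plus an L1 sparsity penalty on the feature activations. A hedged sketch (the exact reduction and normalization used in training are not stated here):

```python
import torch

def sae_loss(x, recon, acts, l1_coefficient=0.3):
    """Sketch of a standard SAE objective: MSE reconstruction loss plus
    an L1 penalty on feature activations, weighted by l1_coefficient."""
    mse = ((recon - x) ** 2).mean()
    l1 = acts.abs().sum(dim=-1).mean()  # per-sample L1, averaged over the batch
    return mse + l1_coefficient * l1
```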

Performance Metrics

Sparsity

  • L0 (Active Features): 64
  • Dead Features: 0
  • Mean Log10 Feature Sparsity: -3.3096
  • Features Below 1e-5: 2
  • Features Below 1e-6: 0
  • Mean Passes Since Fired: 9.0920
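These sparsity metrics can be computed from a batch of SAE activations. A sketch of the common definitions, assuming L0 is the mean count of nonzero features per input and feature sparsity is each feature's firing frequency across the dataset (function name and the `eps` floor are illustrative):

```python
import torch

def sparsity_stats(acts, eps=1e-10):
    """Sketch of the sparsity metrics reported above.
    acts: (n_samples, n_features) tensor of SAE feature activations."""
    fired = acts > 0
    l0 = fired.float().sum(dim=-1).mean()         # mean active features per input
    feature_freq = fired.float().mean(dim=0)      # firing rate of each feature
    mean_log10 = torch.log10(feature_freq + eps).mean()
    dead = (feature_freq == 0).sum()              # features that never fired
    return l0, mean_log10, dead
```

An L0 of 64 on a 49,152-feature dictionary means roughly 0.13% of features are active on a typical input.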

Reconstruction

  • Explained Variance: 0.8400
  • Explained Variance Std: 0.0488
  • MSE Loss: 0.0006
  • L1 Loss: 0
  • Overall Loss: 0.0006
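Explained variance here is presumably the usual per-sample definition, 1 − Var(residual) / Var(input), averaged over the evaluation set (the exact variance convention used for these numbers is not stated). A sketch:

```python
import torch

def explained_variance(x, recon):
    """Sketch of per-sample explained variance: 1 - Var(x - recon) / Var(x).
    Returns the mean and standard deviation across the batch."""
    residual_var = (x - recon).var(dim=-1)
    total_var = x.var(dim=-1)
    ev = 1.0 - residual_var / total_var
    return ev.mean(), ev.std()
```

An explained variance of 0.84 means the SAE reconstruction captures about 84% of the variance in the CLS-token activations.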

Training Details

  • Training Duration: 17,913.7 seconds (≈5 hours)
  • Final Learning Rate: 0.0002
  • Warm Up Steps: 200
  • Gradient Clipping: 1

Additional Information