# CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder trained on CLIP's internal representations.

## Model Details

### Architecture
- **Layer**: 10
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: False

### Training
- **Training Images**: 648254
- **Learning Rate**: 0.0001
- **L1 Coefficient**: 0.0002
- **Batch Size**: 4096
- **Context Size**: 49

## Performance Metrics

### Sparsity
- **L0 (Active Features)**: 64
- **Dead Features**: 0
- **Mean Log10 Feature Sparsity**: -3.1727
- **Features Below 1e-5**: 0
- **Features Below 1e-6**: 0
- **Mean Passes Since Fired**: 0.2553

### Reconstruction
- **Explained Variance**: 0.8534
- **Explained Variance Std**: 0.0714
- **MSE Loss**: 0.0037
- **L1 Loss**: 0
- **Overall Loss**: 0.0037

## Training Details
- **Training Duration**: 1964 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 500
- **Gradient Clipping**: 1

## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/celeba_checkpoints_2/600b9388-tinyclip_sae_16_hyperparam_sweep_lr/n_images_648338.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/celeba-patches_remaining_layers/runs/czauy86z
- **Random Seed**: 42
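
## Usage Sketch

The card does not specify a loading API, so the snippet below is only a minimal PyTorch sketch of a sparse autoencoder with the dimensions reported above (768-d inputs, 49152 dictionary features, ReLU activations). The class name, parameter names (`W_enc`, `W_dec`, `b_enc`, `b_dec`), and the pre-encoder bias subtraction are assumptions, not guaranteed to match the keys stored in the released checkpoint.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Minimal SAE sketch matching the reported dimensions:
    768-d CLIP residual-stream activations, 49152 dictionary features."""

    def __init__(self, d_in: int = 768, d_sae: int = 49152):
        super().__init__()
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Subtracting the decoder bias before encoding is a common SAE
        # convention; whether this checkpoint does so is an assumption.
        return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def decode(self, feats: torch.Tensor) -> torch.Tensor:
        return feats @ self.W_dec + self.b_dec

    def forward(self, x: torch.Tensor):
        feats = self.encode(x)
        return self.decode(feats), feats


sae = SparseAutoencoder()

# Hypothetical loading step: the file name comes from the checkpoint path
# above, but the state-dict key names inside it are not documented here.
# state = torch.load("n_images_648338.pt", map_location="cpu")
# sae.load_state_dict(state)

# Dummy batch standing in for layer-10 hook_resid_post activations.
x = torch.randn(8, 768)
recon, feats = sae(x)
print(recon.shape, feats.shape)  # torch.Size([8, 768]) torch.Size([8, 49152])
```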