CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder (SAE) trained on the internal representations of a CLIP vision transformer.

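The activations this SAE reconstructs come from layer 10 of the vision transformer listed under Architecture below. A minimal sketch of capturing such activations with a standard PyTorch forward hook (illustrative only: the "hook_resid_post" naming follows TransformerLens/SAE-Lens conventions, and the exact hook point and tensor layout depend on the open_clip version):

```python
import torch
import open_clip

# Load the CLIP model named in the Architecture section below.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"
)
model.eval()

activations = {}

def save_hook(module, inputs, output):
    # Output of the residual block, i.e. the post-block residual stream
    # ("resid_post"-style activation).
    activations["layer10_resid_post"] = output.detach()

# Layer 10 of the vision transformer (see Architecture below).
handle = model.visual.transformer.resblocks[10].register_forward_hook(save_hook)

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)  # use `preprocess(image)` for real inputs
    model.encode_image(dummy)
handle.remove()

# Depending on the open_clip version the layout is (tokens, batch, 768) or
# (batch, tokens, 768); the SAE operates on the 768-dim token vectors.
print(activations["layer10_resid_post"].shape)
```
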
Model Details

Architecture

  • Layer: 10
  • Layer Type: hook_resid_post
  • Model: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
  • Dictionary Size: 49152
  • Input Dimension: 768
  • Expansion Factor: 64
  • CLS Token Only: False

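The dictionary size follows from the input dimension and expansion factor (768 × 64 = 49,152). Below is a minimal PyTorch sketch of a sparse autoencoder with these shapes; the class name, parameter names, initialisation, and the ReLU activation are assumptions (the card does not state the activation function), so consult the checkpoint's state dict for the real layout:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in: int = 768, expansion_factor: int = 64):
        super().__init__()
        d_sae = d_in * expansion_factor  # 768 * 64 = 49152 dictionary features
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)

    def forward(self, x: torch.Tensor):
        # Encode: project the 768-dim activation into the 49152-dim dictionary.
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the original CLIP residual-stream activation.
        recon = feats @ self.W_dec + self.b_dec
        return recon, feats
```
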
Training

  • Training Images: 1299936
  • Learning Rate: 0.0004
  • L1 Coefficient: 0.0002
  • Batch Size: 4096
  • Context Size: 49

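These hyperparameters describe the standard SAE objective: a mean-squared reconstruction loss plus an L1 penalty on the feature activations, weighted by the L1 coefficient above. A hedged sketch (the original training code may differ, for example in whether the L1 term is scaled by decoder column norms):

```python
import torch

L1_COEFFICIENT = 2e-4  # the "L1 Coefficient" listed above

def sae_loss(x: torch.Tensor, recon: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
    mse = torch.nn.functional.mse_loss(recon, x)  # reconstruction term
    l1 = feats.abs().sum(dim=-1).mean()           # sparsity term
    return mse + L1_COEFFICIENT * l1
```
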
Performance Metrics

Sparsity

  • L0 (Active Features): 64.0000
  • Dead Features: 0
  • Mean Log10 Feature Sparsity: -3.1933
  • Features Below 1e-5: 28
  • Features Below 1e-6: 2
  • Mean Passes Since Fired: 0.3781

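For reference, these statistics are conventionally computed per feature over a held-out batch of activations. A sketch of the usual definitions (illustrative; the evaluation code that produced the exact numbers above is not included here):

```python
import torch

def sparsity_metrics(feats: torch.Tensor, eps: float = 1e-10):
    """feats: feature activations of shape (n_tokens, n_features)."""
    # L0: average number of features active (non-zero) per token.
    l0 = (feats > 0).float().sum(dim=-1).mean()
    # Feature sparsity: fraction of tokens on which each feature fires.
    feature_sparsity = (feats > 0).float().mean(dim=0)
    mean_log10_sparsity = torch.log10(feature_sparsity + eps).mean()
    below_1e5 = (feature_sparsity < 1e-5).sum()
    below_1e6 = (feature_sparsity < 1e-6).sum()
    return l0, mean_log10_sparsity, below_1e5, below_1e6
```
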
Reconstruction

  • Explained Variance: 0.8094
  • Explained Variance Std: 0.0674
  • MSE Loss: 0.0047
  • L1 Loss: 0
  • Overall Loss: 0.0047

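A sketch of how these reconstruction metrics are commonly defined (illustrative; the exact evaluation code is not part of this card):

```python
import torch

def reconstruction_metrics(x: torch.Tensor, recon: torch.Tensor):
    """x, recon: activations of shape (n_tokens, 768)."""
    mse = ((recon - x) ** 2).mean()
    # Explained variance per token: 1 - residual variance / input variance.
    resid_var = ((x - recon) ** 2).sum(dim=-1)
    total_var = ((x - x.mean(dim=0)) ** 2).sum(dim=-1)
    explained_variance = 1.0 - resid_var / total_var
    return mse, explained_variance.mean(), explained_variance.std()
```
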
Training Details

  • Training Duration: 3956 seconds (≈66 minutes)
  • Final Learning Rate: 0.0000
  • Warm Up Steps: 500
  • Gradient Clipping: 1

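A sketch of the learning-rate warm-up and gradient clipping listed above, using a standard PyTorch LambdaLR schedule. The post-warm-up decay shape and the total step count are assumptions (the card only reports a final learning rate of ~0):

```python
import torch
import torch.nn as nn

sae = nn.Linear(768, 768)  # stand-in for the SAE sketched earlier
optimizer = torch.optim.Adam(sae.parameters(), lr=4e-4)

WARMUP_STEPS = 500    # from the card
TOTAL_STEPS = 15_000  # assumed; not stated on the card

def lr_lambda(step: int) -> float:
    if step < WARMUP_STEPS:
        return (step + 1) / WARMUP_STEPS                                  # linear warm-up
    return max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))  # decay towards 0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# One illustrative optimizer step on random stand-in activations.
x = torch.randn(4096, 768)             # batch size 4096, as listed above
loss = ((sae(x) - x) ** 2).mean()
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(sae.parameters(), max_norm=1.0)  # gradient clipping = 1
optimizer.step()
scheduler.step()
```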