
Releases: Prisma-Multimodal/ViT-Prisma

Beta release: CLIP SAE code is now part of the Prisma library as v2.0.0!

07 Sep 21:33


We now have ViT/CLIP SAE code as part of the Prisma library!

You can see a demo here: https://github.com/soniajoseph/ViT-Prisma/blob/main/demos/Train_CLIP_SAE.ipynb

All the code is in src/vit_prisma/sae.
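To give a sense of what a sparse autoencoder (SAE) on ViT/CLIP activations involves, here is a minimal NumPy sketch. This is not the Prisma implementation (see the demo notebook and `src/vit_prisma/sae` for that); all names, dimensions, and the loss form below are illustrative assumptions: an overcomplete ReLU code with an MSE reconstruction term plus an L1 sparsity penalty.

```python
import numpy as np

class SparseAutoencoder:
    """Hypothetical minimal SAE sketch, not the Prisma API.

    Encodes d_model-dimensional activations into an overcomplete
    d_hidden-dimensional ReLU code; an L1 penalty on the code
    encourages sparse, interpretable features.
    """

    def __init__(self, d_model, d_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0.0, 0.1, (d_model, d_hidden))
        self.b_enc = np.zeros(d_hidden)
        self.W_dec = rng.normal(0.0, 0.1, (d_hidden, d_model))
        self.b_dec = np.zeros(d_model)

    def encode(self, x):
        # ReLU keeps the code non-negative and (with the L1 term) sparse.
        return np.maximum(x @ self.W_enc + self.b_enc, 0.0)

    def decode(self, z):
        return z @ self.W_dec + self.b_dec

    def loss(self, x, l1_coeff=1e-3):
        z = self.encode(x)
        x_hat = self.decode(z)
        mse = np.mean((x - x_hat) ** 2)   # reconstruction error
        l1 = l1_coeff * np.mean(np.abs(z))  # sparsity penalty
        return mse + l1

# Toy usage: a batch of 8 fake 64-dim activations, a 4x overcomplete code.
sae = SparseAutoencoder(d_model=64, d_hidden=256)
acts = np.random.default_rng(1).normal(size=(8, 64))
codes = sae.encode(acts)
print(codes.shape)  # (8, 256)
```

In practice the encoder/decoder would be trained with gradient descent on activations cached from a frozen CLIP model, which is what the demo notebook walks through.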


Full Changelog: v0.1.0...v2.0.0

v0.1.0

24 Jan 21:52
9f539e4


v0.1.0 (pre-release)

This release of Prisma is a snapshot of the library before we significantly restructure it to create a vision analog of HookedTransformer. The next version will have full hook management and activation caching.

The current ViT code is very simple, which is nice from an educational perspective; I may reference it in the future when creating didactic examples.
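The hook management and activation caching mentioned above can be illustrated with a small pure-Python sketch. This is a generic pattern in the spirit of HookedTransformer, not Prisma's actual API; `HookPoint`, `run_with_cache`, and the layer names are hypothetical.

```python
class HookPoint:
    """Hypothetical hook-point sketch (not the Prisma API).

    An identity function that runs registered callbacks on the value
    passing through it, which suffices both to cache and to patch
    intermediate activations.
    """

    def __init__(self, name):
        self.name = name
        self.hooks = []

    def __call__(self, value):
        for hook in self.hooks:
            result = hook(value, self.name)
            if result is not None:
                value = result  # a hook may replace the activation
        return value

def run_with_cache(layers, x):
    """Run a list of (name, fn) layers, caching every intermediate value."""
    cache = {}

    def save(value, name):
        cache[name] = value

    points = {name: HookPoint(name) for name, _ in layers}
    for point in points.values():
        point.hooks.append(save)
    for name, fn in layers:
        x = points[name](fn(x))
    return x, cache

# Toy "model": two stages, each wrapped in a hook point.
layers = [("embed", lambda v: v * 2), ("head", lambda v: v + 1)]
out, cache = run_with_cache(layers, 3)
print(out, cache)  # 7 {'embed': 6, 'head': 7}
```

The same pattern, applied to every attention head and MLP in a ViT, is what lets downstream tools (like an SAE trainer) read or overwrite any intermediate activation by name.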

What's Changed

  • Update CONTRIBUTING.md by @soniajoseph in #2
  • Update README.md by @soniajoseph in #3
  • Update README.md by @soniajoseph in #4
  • Update CONTRIBUTING.md by @soniajoseph in #5
  • Update CONTRIBUTING.md by @soniajoseph in #6
  • Update README.md by @soniajoseph in #7
  • Update README.md by @soniajoseph in #8
  • Update README.md by @soniajoseph in #9
  • Update README.md by @soniajoseph in #10
  • Update README.md by @soniajoseph in #11
  • Update README.md by @soniajoseph in #12
  • Update README.md by @soniajoseph in #13
  • tested code for number of samples by @soniajoseph in #14
  • Set up the dSprites the datasets for the shape classification by @YashVadi in #16
  • Updates to the trainer to work with MSE, and perform train/test split when test set isn't provided by @PraneetNeuro in #15
  • Critical Fix: Referencing training/test acc in incorrect scope | Feat : Starter code to support pretrained models from huggingface by @PraneetNeuro in #18
  • Updates to loading and usage of pretrained ViTs from HuggingFace by @PraneetNeuro in #19
  • Revert "Updates to loading and usage of pretrained ViTs from HuggingFace" by @soniajoseph in #20
  • Pull request of new interactive attention head visualization feature by @soniajoseph in #25
  • Revert "Pull request of new interactive attention head visualization feature" by @soniajoseph in #26
  • Revert "Revert "Pull request of new interactive attention head visualization feature"" by @soniajoseph in #27
  • Support for pretrained models from timm along with Huggingface by @PraneetNeuro in #28
  • Documentation: Usage Guide by @PraneetNeuro in #30
  • link to usage guide by @soniajoseph in #31
  • Update UsageGuide.md by @soniajoseph in #32
  • Documentation: Induction dataset by @PraneetNeuro in #34
  • Feat: Polygenic Induction dataset and Hook to fetch intermediate activations by @PraneetNeuro in #35
  • Improvements to the API and wandb team init by @PraneetNeuro in #40
  • Start circuit-based analysis by @soniajoseph in #41
  • Fix Nested Parameters, Update Training script, and Document dSprites Experiment by @YashVadi in #42
  • Update README.md by @soniajoseph in #43
  • Update README.md by @soniajoseph in #45
  • added code for vizing neurons in vits by @alik-git in #49
  • Cleaned up visualize js code by @themachinefan in #50
  • Update README.md by @soniajoseph in #51
  • Update ImagenetResults.md by @YashVadi in #53
  • Visualization tool with multiple images by @themachinefan in #54
  • Optimization of JS visualization code by @PraneetNeuro in #61


Full Changelog: https://github.com/soniajoseph/ViT-Prisma/commits/v0.1.0