Welcome to the Re-Align Workshop Hackathon! This challenge focuses on understanding and comparing representational alignment across a wide variety of vision models. Join either the 🟦 Blue Team or the 🟥 Red Team and compete to discover whether representations are universal or idiosyncratic.
Representational alignment has emerged as both an implicit and explicit goal in many machine learning subfields, including knowledge distillation (Hinton et al., 2015), disentanglement (Montero et al., 2022), and concept-based models (Koh et al., 2020). The concept has been explored under various terms including latent space alignment, conceptual alignment, and representational similarity analysis (Kriegeskorte et al., 2008; Peterson et al., 2018; Roads & Love, 2020; Muttenthaler et al., 2023). Recent work has leveraged human perceptual judgments to enrich representations within vision models (Sundaram et al., 2024), while other research explores using brain signals to fine-tune semantic representations in language models. However, there remains little consensus on which metrics best identify similarity between systems (Harvey et al., 2024; Schaeffer et al., 2024). Representational alignment can help machines learn useful representations from humans with less supervision (Fel et al., 2022; Muttenthaler et al., 2023; Sucholutsky & Griffiths, 2023), while also uncovering opportunities for humans to leverage domain-specific representations from machines when designing hybrid systems (Steyvers et al., 2022; Shin et al., 2023; Schut et al., 2023).
- Choose your team: 🟦 Blue (universality) or 🟥 Red (idiosyncrasy).
- Fork and/or clone the repository following our setup instructions.
- Explore the example notebooks in our starter code section.
- Submit your findings using our submission process.
Good luck! 🍀
We have over 1,000 vision models available to adjudicate. This hackathon seeks to answer fundamental questions that have driven recent research in representational alignment (Sucholutsky et al., 2023; Muttenthaler et al., 2024). Participants join either a 🟦 Blue Team or a 🟥 Red Team and provide JSON submissions that demonstrate the largest uniform set of models (🟦) or the greatest differentiation among those models (🟥).
"Find similarities." Building on work showing that different networks can learn similar representations at scale, Blue Teams search for cases where this convergence occurs.
- 🎯 Objective: Submit a collection of models demonstrating representational and/or functional equivalence.
- 💡 Challenge: Discover similarities among different architectures.
- 🏆 Victory condition: Largest uniform set of representationally and/or functionally identical models.
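If you want a starting point for scoring similarity outside the starter notebooks, one common choice is linear centered kernel alignment (CKA) over activations collected for a shared image set. The sketch below is a minimal NumPy version under that assumption; the function name, array shapes, and random stand-in data are illustrative and not part of the hackathon code.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices of shape (n_images, n_features)."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    self_x = np.linalg.norm(X.T @ X, ord="fro")
    self_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (self_x * self_y))

# Hypothetical usage: replace the random stand-ins with real activations
# extracted for the same image set from two candidate models.
acts_a = np.random.randn(500, 768)   # stand-in for model A features
acts_b = np.random.randn(500, 1024)  # stand-in for model B features
print(f"Linear CKA: {linear_cka(acts_a, acts_b):.3f}")
```

Consistently high similarity scores across many model pairs on the same stimuli are one way to argue for a "uniform set" in a Blue Team submission.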
"Drive distinctions." Following approaches that examine models presumed to be aligned to uncover representational differences, Red Teams develop stimuli that drive misalignment in model representations.
- 🎯 Objective: Curate stimuli that reveal representational and/or functional differences.
- 💡 Challenge: Identify the most informative test cases highlighting variation.
- 🏆 Victory condition: Greatest differentiation among model representations and/or behaviors.
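A Red Team needs the opposite: stimuli on which otherwise similar models disagree. One simple heuristic, sketched below, scores each candidate image by how poorly its row in one model's representational dissimilarity matrix (RDM) correlates with the corresponding row in another model's RDM. The metric, the model pairing, and the random stand-in arrays are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

def disagreement_scores(acts_a: np.ndarray, acts_b: np.ndarray) -> np.ndarray:
    """Per-image disagreement: 1 minus the Spearman correlation between the
    image's row in model A's RDM and its row in model B's RDM."""
    rdm_a = cdist(acts_a, acts_a, metric="correlation")
    rdm_b = cdist(acts_b, acts_b, metric="correlation")
    scores = np.empty(len(acts_a))
    for i in range(len(acts_a)):
        rho, _ = spearmanr(rdm_a[i], rdm_b[i])
        scores[i] = 1.0 - rho
    return scores

# Hypothetical usage: rank a candidate pool and keep the most differentiating images.
acts_a = np.random.randn(200, 768)   # stand-in for model A features
acts_b = np.random.randn(200, 1024)  # stand-in for model B features
top_idx = np.argsort(-disagreement_scores(acts_a, acts_b))[:10]
print("Most differentiating candidate indices:", top_idx)
```

Images with the highest scores are natural candidates for the differentiating_images list described in the submission format below.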
Optional: Do you want to make a submission? Then fork this repository:
Click the Fork button near the top of the page to fork your own copy of representational-alignment/hackathon. You must be logged into GitHub.
Clone and set up:
git clone [YOUR_FORK_URL]
cd hackathon/
git checkout main
git checkout -b <team_color>_team_submissions  # blue or red

Install uv and dependencies:
# Install uv if you haven't already.
# On macOS and Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
# Setup environment
uv sync
# Activate environment for Python commands
source .venv/bin/activate

We've provided starter notebooks to help you get started with the hackathon. These can be found in examples/.
Use this command to launch them!
# Example: Launch Jupyter Lab
uv run --with jupyter jupyter lab

| Notebook | Purpose | Teams |
|---|---|---|
| 📓 extract_activations.ipynb | Extract model activations | 🟦🟥 Both |
| 🟦 blue_team_starter.ipynb | Identify model similarities | 🟦 Blue |
| 🟥 red_team_starter.ipynb | Find differentiating stimuli | 🟥 Red |
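The extract_activations.ipynb notebook is the reference for pulling features from the hackathon models; if you prefer to prototype outside Jupyter, a forward hook on a torchvision model works the same way. Everything in the sketch below (the ResNet-50 backbone, the avgpool layer, and the image path) is an illustrative assumption rather than a requirement of the starter code.

```python
import torch
from PIL import Image
from torchvision import models

# Assumed example model: torchvision ResNet-50 with its default preprocessing.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

activations = []
def hook(_module, _inputs, output):
    # avgpool output is (batch, 2048, 1, 1); flatten it to (batch, 2048).
    activations.append(output.flatten(start_dim=1).detach())

handle = model.avgpool.register_forward_hook(hook)

# Hypothetical stimulus path; replace with your own images.
image_paths = ["stimuli/cifar100/test/girl/image_987.png"]
with torch.no_grad():
    for path in image_paths:
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        model(img)

handle.remove()
features = torch.cat(activations)  # shape: (n_images, 2048)
```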
See below!
Your final submission must be in JSON format, submitted as a pull request (PR), and should include a brief textual explanation of your findings. The first example below is a Blue Team submission (a set of models); the second is a Red Team submission (a set of differentiating stimuli).
{
"models": [
{
"model_name": "model1_name",
"source": "where the model is from",
"model_parameters": {
"param1": "value1",
"param2": "value2"
}
},
{
"model_name": "model2_name",
"source": "where the model is from",
"model_parameters": null
}
]
}

{
"differentiating_images": [
{
"dataset_name": "cifar100",
"image_identifier": "test/girl/image_987.png"
},
{
"dataset_name": "cifar100",
"image_identifier": "test/orange/image_19.png"
},
{
"dataset_name": "cifar100",
"image_identifier": "test/bottle/image_2428.png"
}
]
}

| Key | Purpose | Example |
|---|---|---|
| dataset_name | Name of the public dataset | "cifar100" |
| image_identifier | Path/ID within your stimuli/ directory | "test/girl/image_987.png" |
⚠️ Important: Include exactly these two keys for every stimulus. Ensure every (dataset_name, image_identifier) pair is unique.
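Before opening your PR, a quick script can verify that every stimulus has exactly these two keys and that no pair repeats. The sketch below assumes a Red Team file at red_team_submissions/team_name.json; the path is a placeholder for your own submission file.

```python
import json
from pathlib import Path

# Placeholder path; point this at your actual submission file.
submission = json.loads(Path("red_team_submissions/team_name.json").read_text())
stimuli = submission["differentiating_images"]

# Each entry must contain exactly dataset_name and image_identifier.
assert all(set(item) == {"dataset_name", "image_identifier"} for item in stimuli), \
    "unexpected or missing keys"

# Every (dataset_name, image_identifier) pair must be unique.
pairs = [(item["dataset_name"], item["image_identifier"]) for item in stimuli]
assert len(pairs) == len(set(pairs)), "duplicate (dataset_name, image_identifier) pairs"

print(f"OK: {len(pairs)} unique stimuli")
```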
| Team | Directory | Filename | Commit Title |
|---|---|---|---|
| 🟦 Blue | blue_team_submissions/ | team_name.json | Blue Team Submission: [team_name] |
| 🟥 Red | red_team_submissions/ | team_name.json | Red Team Submission: [team_name] |
# Add your files
git add [your_files]
# Commit with proper message
git commit -m "<Blue or Red> Team Submission: [team_name]"
# Push to your fork
git push --set-upstream origin <team_color>_team_submissions

- Go to your fork on GitHub
- Click "Compare & pull request"
- Set base branch: <team_color>_team_submissions
- Title: Blue Team: [team_name] or Red Team: [team_name]
- Submit! 🎉