FieteLab/multiregion-brain-model
A Multi-Region Brain Model to Elucidate the Role of Hippocampus in Spatially Embedded Decision-Making (ICML 2025)

The code assumes a GPU is available. A more robust CPU-compatible version will be released soon. Please reach out or open an Issue with any requests.

To Start

First, create and activate a conda environment:

conda create -n towertask python=3.10 -y
conda activate towertask

Then, to install the codebase in editable mode, run:

pip install -e .

from the root directory (where pyproject.toml is located). This allows importing towertask and VectorHaSH from anywhere.
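As a quick sanity check that the editable install worked, you can verify that both packages are importable (a minimal sketch; towertask and VectorHaSH are the package names installed above):

```python
import importlib.util

def is_importable(package_name: str) -> bool:
    # Returns True if the package can be found on the current Python path.
    return importlib.util.find_spec(package_name) is not None

for pkg in ("towertask", "VectorHaSH"):
    status = "OK" if is_importable(pkg) else "NOT FOUND -- re-run 'pip install -e .'"
    print(f"{pkg}: {status}")
```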

Lastly, install the GPU build of PyTorch (CUDA 12.8 wheels) in the towertask conda environment:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128

Modify directories for saving results and data

Please modify the following variables in src/towertask/config.py to point to appropriate directories for saving your results and data (e.g., models, activation vectors, other task-relevant stats). By default, they save under TowerTask/icml-results and TowerTask/icml-data:

FIGURE_DIR = str(REPO_ROOT / "icml-results")
DATA_DIR = str(REPO_ROOT / "icml-data")
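For example, to redirect outputs to a scratch partition you might edit config.py along these lines (a sketch; /scratch/username is a placeholder path, and REPO_ROOT's actual definition lives in config.py):

```python
from pathlib import Path

# In the repository, REPO_ROOT is defined in src/towertask/config.py;
# Path.cwd() here is only a stand-in for illustration.
REPO_ROOT = Path.cwd()

# Point these anywhere with enough disk space, e.g. a scratch partition
# (hypothetical paths -- substitute your own):
FIGURE_DIR = str(Path("/scratch/username") / "towertask-results")
DATA_DIR = str(Path("/scratch/username") / "towertask-data")
```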

Running Jobs on a Cluster

The submission scripts (e.g., in submit_jobs/) assume you're using a cluster environment with Slurm and Conda.

By default, they attempt to source Conda using the OpenMind-specific path:

source /cm/shared/openmind/anaconda/3-2022.05/etc/profile.d/conda.sh

⚠️ Update this line in the scripts if you're not on OpenMind. Replace it with the path to your own conda.sh, such as:

source ~/miniconda3/etc/profile.d/conda.sh

or, if using Anaconda:

source ~/anaconda3/etc/profile.d/conda.sh

📜 Reproducing All Model Variants from the ICML Paper

The script submit_jobs/submit_all_models.sh contains all job commands required to reproduce every model variant discussed in our ICML paper, including:

  • main models (M1–M5),
  • control variants (e.g., M0, M0+),
  • appendix ablations and those added or discussed during rebuttal (e.g., CA3 recurrence, matching parameter counts).

To run all models with learning-rate tuning, simply run:

bash submit_jobs/submit_all_models.sh

Note: this script will launch multiple Slurm jobs across learning rates and seeds for all main models and control variants (M0–M5), which is unnecessary for most use cases. Please modify the script as needed before launching.

Alternatively, you may launch train.py directly by referring to the commands listed in submit_jobs/submit_all_models.sh. For example, to launch M5, run

python3 train.py --reset_data --grid_assignment position position position --Np 800 --with_mlp --mlp_input_type sensory --new_model --seed $SEED --learning_rate $lr

replacing $SEED and $lr with your desired values.
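If you prefer to script the sweep yourself rather than edit the Slurm files, the per-run commands can be generated with a small loop (a sketch; the seed and learning-rate values below are illustrative, not the paper's tuned grid):

```python
from itertools import product

# Illustrative values only -- substitute the grid you actually want to sweep.
seeds = [0, 1, 2]
learning_rates = [5e-4, 1e-3]

base_cmd = (
    "python3 train.py --reset_data --grid_assignment position position position "
    "--Np 800 --with_mlp --mlp_input_type sensory --new_model"
)

commands = [
    f"{base_cmd} --seed {seed} --learning_rate {lr}"
    for seed, lr in product(seeds, learning_rates)
]

for cmd in commands:
    print(cmd)  # or hand each command to sbatch / subprocess
```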

Note, to avoid potential confusion: when all three grid modules are assigned position, train.py additionally updates each module with the MLP-predicted evidence velocity at every training step. Otherwise (e.g., --grid_assignment position position evidence), no extra evidence velocity is injected into the positional modules. The code currently assumes there are always exactly three grid modules.
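The dispatch described above can be summarized as follows (a hypothetical sketch of the logic, not the actual train.py code):

```python
def uses_evidence_injection(grid_assignment):
    # When every module is assigned "position", train.py additionally drives
    # each module with the MLP-predicted evidence velocity at each step.
    assert len(grid_assignment) == 3, "the code assumes exactly three grid modules"
    return all(a == "position" for a in grid_assignment)

print(uses_evidence_injection(["position", "position", "position"]))  # True  (M3/M5-style)
print(uses_evidence_injection(["position", "position", "evidence"]))  # False (M4-style)
```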

Fig. 3

We recently noticed that training M3, M4, and M5 can be relatively unstable, with high run-to-run variation.

Because earlier runs were not seeded, for exact reproducibility of Fig. 3 we provide the M3, M4, and M5 models trained and presented in the CCN and ICML papers in the Fig3_model folder.

'M3: joint g, nonmix p': [
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_original/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_position_7_8_11/0.0005/M3_trial_1/800',
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_original/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_position_7_8_11/0.0005/M3_trial_2/800',
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_original/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_position_7_8_11/0.0005/M3_trial_3/800',
],
'M4: disjoint g, mix p': [
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_star/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_evidence_7_8_11/0.0005/M4_trial_1/800',
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_star/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_evidence_7_8_11/0.0005/M4_trial_2/800',
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_star/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_evidence_7_8_11/0.0005/M4_trial_3/800',
],
'M5: joint g, mix p': [
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_star/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_position_7_8_11/0.0005/M5_trial_1/800',
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_star/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_position_7_8_11/0.0005/M5_trial_2/800',
    'Fig3_model/train_q=1/with_mlp_mlp_input_typesensory32/p/HaSH_star/no_sensory/seq20/maxTower5/fov5/RNN32/position_position_position_7_8_11/0.0005/M5_trial_3/800',
],

Fig. 6

Additionally, we provide the data used to produce Fig. 6 in Fig6_data. You may run analysis/non_linear_reduction.py on it to obtain our results.

The original .mat files for Fig. 6 are stored in a compressed archive to keep the repository size manageable.

Archive location

Fig6_data/mat_files.tar.gz

To extract the files

From the repository root, run:

tar -xzf Fig6_data/mat_files.tar.gz -C Fig6_data

This will extract:

Fig6_data/M4_1000trials_data_mlpModel.mat
Fig6_data/M5_1000trials_data_mlpModel.mat
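Equivalently, the archive can be unpacked from Python with the standard tarfile module (a minimal sketch):

```python
import tarfile
from pathlib import Path

def extract_archive(archive_path: str, dest_dir: str) -> list[str]:
    # Extract a .tar.gz archive into dest_dir and return the member names.
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(path=dest)
        return tar.getnames()

# Usage, from the repository root:
# extract_archive("Fig6_data/mat_files.tar.gz", "Fig6_data")
```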

🏞️ Reproducing All Figures from the ICML Paper

analysis/

After model training and testing (which save .mat files of activation vectors and other stats), you may reproduce the figures shown in the original paper with the corresponding files:

  • plot_fig2.py: code for plotting learning metrics (e.g., success rate and steps taken per episode/environment configuration over time), in the same format as Fig 2.
    • Fig 2: performance metrics.
  • plot_data_mat.py: code for plotting place cell representation and computing mutual info from Nieh et al., using .mat file generated by test.py.
    • Fig 3, 7: activation heatmap in ExY space.
    • Fig 4, 8: place and evidence field in hippocampus.
    • Fig 11, 12: both activation heatmap and place/evidence field for variants with CA3 recurrence.
    • Fig 6: mutual info scatterplot.
  • non_linear_reduction.py: code for plotting PCA of hippocampal vectors and RNN vectors.
    • Fig 5, 9, 10: first 2 to 3 PCs visualized against various variables, plus scree plots.

Further Documentation

root

  • train.py & test.py: RL training/testing on the tower task using VectorHaSH (or VectorHaSH+). test.py only works with M1–M5; it saves a .mat file of grid cell, place cell, and RNN hidden-state representations, as well as other relevant task readouts. The .mat file works with analysis/ to reproduce figures. Note that train.py also calls relevant functions from test.py at the end to save a .mat file for M1–M5.

src/towertask

  • data.py: data generation for simulating tower task.
  • model.py: model definition with a simple RNN, as well as functions select_action() and finish_episode().
  • utils.py: helper functions including visualizations.
  • env.py: reinforcement learning environment for executing the tower task using RNN defined in model.py.

src/VectorHaSH

  • This folder contains all the code related to VectorHaSH (and VectorHaSH+). Essentially, *_wrapper.py contains environments that wrap around the TowerTaskEnv created in src/towertask/env.py.

Citation

@inproceedings{xie2025multi,
  title={A Multi-Region Brain Model to Elucidate the Role of Hippocampus in Spatially Embedded Decision-Making},
  author={Yi Xie and Jaedong Hwang and Carlos D. Brody and David W. Tank and Ila R. Fiete},
  booktitle={ICML},
  year={2025}
}
