This repo contains tools for loading and benchmarking models on the TVSD (THINGS Ventral Stream Spiking Dataset) from Papale et al., 2025.
Begin by cloning the repository.

```bash
git clone [email protected]:serre-lab/tvsd-benchmark.git
cd tvsd-benchmark
```

Next, create a conda environment with our requirements.
```bash
conda create -n tvsd-benchmark
conda activate tvsd-benchmark
pip install -r requirements.txt
```

Alternatively, you can use a venv environment.
```bash
python -m venv env
source env/bin/activate
pip install -r requirements.txt
```

To obtain the TVSD dataset, run

```bash
chmod +x scripts/download_tvsd.sh
./scripts/download_tvsd.sh
```

This will download the normalized MUA and metadata .mat files into a new `data` directory. To obtain the THINGS dataset, analogously run the following snippet. You will be prompted by osfclient to provide a password in order to unzip the dataset; you can obtain this password here.
```bash
chmod +x scripts/download_things.sh
./scripts/download_things.sh
```

Ensure that you have your virtual environment activated, and run

```bash
sbatch scripts/generate_activations.sh [MODEL_CONFIG_PATH]
```

When this completes, run

```bash
sbatch scripts/benchmark.sh [MODEL_CONFIG_PATH]
```

(We separate the two jobs, as only the former requires a GPU.) The results will populate `outputs/results/[model]`.
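Once the benchmark job finishes, you can inspect the per-model outputs programmatically. The sketch below is a hedged example; the exact file layout inside `outputs/results/[model]` is an assumption, not something this repo guarantees.

```python
from pathlib import Path

def list_result_files(model_name, root="outputs/results"):
    """Return the file names under a model's results directory.

    Assumes results are written as flat files under outputs/results/[model];
    returns an empty list if the directory does not exist yet.
    """
    model_dir = Path(root) / model_name
    if not model_dir.exists():
        return []
    return sorted(p.name for p in model_dir.iterdir() if p.is_file())
```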
Fill `configs/models.csv` with the names of the models you want to benchmark. Then run

```bash
sbatch scripts/all_models.sh
```

This will generate and evaluate activations for each model.
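The batch script presumably iterates over the model names listed in `configs/models.csv`. A minimal sketch of reading that list follows; the file format (one model name in the first column per row) is an assumption based on the workflow above.

```python
import csv

def read_model_names(path="configs/models.csv"):
    """Read model names from the first column of the CSV.

    Skips blank rows; the one-name-per-row format is assumed,
    not taken from the repo's documentation.
    """
    with open(path, newline="") as f:
        return [row[0].strip() for row in csv.reader(f)
                if row and row[0].strip()]
```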
In the current configuration, each model is specified by a corresponding config file in `configs`. Making a new config for your model is straightforward: just follow the outline of the existing ones. You will also have to extend `utils/load_model.py` to accept your added model. In the future, direct integration with timm will be provided.
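One common way to make `utils/load_model.py` accept new models is a small registry that maps model names to builder functions. This is a hedged sketch of that pattern, not the repo's actual API; the function names, the `load_model` signature, and the `"toy_linear"` model are all illustrative.

```python
# Hypothetical registry pattern for utils/load_model.py; the repo's
# real loader may be structured differently.
MODEL_BUILDERS = {}

def register_model(name):
    """Decorator that records a builder function under a model name."""
    def wrapper(fn):
        MODEL_BUILDERS[name] = fn
        return fn
    return wrapper

@register_model("toy_linear")  # hypothetical model name
def build_toy_linear():
    # A real builder would construct and return a torch.nn.Module.
    return {"arch": "linear", "layers": 1}

def load_model(name):
    """Look up and build the named model, raising on unknown names."""
    if name not in MODEL_BUILDERS:
        raise ValueError(f"Unknown model: {name}")
    return MODEL_BUILDERS[name]()
```

With this pattern, supporting a new model means adding one decorated builder function rather than editing a chain of if/elif branches.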