
NeuralDiffuser: Neuroscience-inspired Diffusion Guidance for fMRI Visual Reconstruction


Reconstructing visual stimuli from functional Magnetic Resonance Imaging (fMRI) enables fine-grained retrieval of brain activity. However, accurately reconstructing diverse details, including structure, background, texture, and color, remains challenging: stable diffusion models inevitably produce variable reconstructions, even under identical conditions. To address this challenge, we first examine diffusion methods from a neuroscientific perspective. They primarily perform top-down creation using knowledge pre-trained on extensive image datasets, but lack detail-driven bottom-up perception, leading to a loss of faithful details. In this paper, we propose NeuralDiffuser, which incorporates primary visual feature guidance to provide detailed cues in the form of gradients. This extension of the bottom-up process enables diffusion models to achieve both semantic coherence and detail fidelity when reconstructing visual stimuli. We further develop a novel guidance strategy for reconstruction tasks that keeps repeated outputs consistent with the original images rather than merely with one another. Extensive experiments on the Natural Scenes Dataset (NSD) qualitatively and quantitatively demonstrate the advantages of NeuralDiffuser over baseline and state-of-the-art methods, supported by ablation studies.



Overview

This repository contains the supplementary code for the paper "NeuralDiffuser: Neuroscience-Inspired Diffusion Guidance for fMRI Visual Reconstruction". It uses Stable Diffusion v1.4 to reconstruct the natural images viewed by human subjects in the Natural Scenes Dataset (NSD).

It first projects fMRI voxels into the feature space of Stable Diffusion, inspired by MindEyeV2. It then trains a model to predict the guidance features (multiple feature layers of CLIP-ViT-B-32). Finally, both are fed into the proposed guided diffusion model to reconstruct the viewed natural images. The framework diagram is as follows:

(Framework diagram)
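
The guidance operates in the classifier-guidance style: at each sampling step, the predicted noise is shifted along the gradient of a loss that matches the guidance features. Below is a minimal, self-contained sketch of one such step, not the authors' implementation; denoiser, feat_extractor, alpha_bar_t, and scale are hypothetical stand-ins.

    # Minimal sketch (not the repository's code) of gradient-based feature
    # guidance during diffusion sampling. Hypothetical stand-ins: `denoiser`
    # predicts noise, `feat_extractor` returns a list of CLIP-layer features,
    # `g_target` is the list of target guidance features.
    import torch
    import torch.nn.functional as F

    def guided_eps(x_t, t, denoiser, feat_extractor, g_target, alpha_bar_t, scale=1.0):
        """One guided step: shift the predicted noise along the gradient of a
        feature-matching loss (classifier-guidance style)."""
        x_t = x_t.detach().requires_grad_(True)
        eps = denoiser(x_t, t)  # predicted noise epsilon_theta(x_t, t)
        # Tweedie-style estimate of the clean sample x0 from x_t and eps
        x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
        # Detail loss: distance between predicted and target guidance features
        loss = sum(F.mse_loss(f, g) for f, g in zip(feat_extractor(x0_hat), g_target))
        grad = torch.autograd.grad(loss, x_t)[0]
        # Moving eps along +grad steers the next sample toward lower loss
        return eps + scale * (1 - alpha_bar_t) ** 0.5 * grad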


Requirements

stable-diffusion-v1-4

MindEye2


Usage

  1. Agree to the Natural Scenes Dataset's Terms and Conditions and fill out the NSD Data Access form

  2. Git clone this repository:

    git clone https://github.com/HaoyyLi/NeuralDiffuser.git
    cd NeuralDiffuser
    
  3. Download the contents of https://huggingface.co/datasets/pscotti/mindeyev2 (a download sketch follows this list).

  4. To improve training efficiency, pre-save the target embeddings by running src/img2feat_sd_pipe.py and src/img2feat_guidance.py (z: the latent space of the VQ-VAE; c: the text embedding space of clip-vit-large-patch14; g: features of layers 2, 4, 6, 8, 10, and 12 of CLIP-ViT-B-32). A feature-extraction sketch follows this list.

  5. Train the model:

    cd scripts
    bash ./train.sh
  6. Run inference:

    cd scripts
    bash ./score.sh
  7. Reconstruct the images:

    cd scripts
    bash ./recon.sh
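
A hedged sketch of the download in step 3, assuming the huggingface_hub package; the local directory path is an arbitrary choice, not prescribed by this repository:

    # Sketch of step 3 (assumes: pip install huggingface_hub).
    from huggingface_hub import snapshot_download

    # Fetch the whole dataset repository; local_dir is a hypothetical path.
    snapshot_download(
        repo_id="pscotti/mindeyev2",
        repo_type="dataset",
        local_dir="data/mindeyev2",
    )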
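For step 4, a minimal sketch of extracting the multi-layer CLIP-ViT-B-32 guidance features g, assuming the transformers library. The repository's own script (src/img2feat_guidance.py) may differ in layer indexing, pooling, and file layout; the image and output paths here are hypothetical:

    # Sketch of extracting the guidance features g (layers 2, 4, 6, 8, 10, 12).
    import torch
    from PIL import Image
    from transformers import CLIPImageProcessor, CLIPVisionModel

    model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("stimulus.png")  # hypothetical NSD stimulus image
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the patch embedding; index k is transformer layer k.
    g = [out.hidden_states[k] for k in (2, 4, 6, 8, 10, 12)]
    torch.save(g, "stimulus_g.pt")  # hypothetical output file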

Results

(Result figures: qualitative reconstructions and quantitative comparisons)


Citing

If you use this repository in your research, please cite the following paper:

@ARTICLE{li2024neuraldiffuser,
  author={Li, Haoyu and Wu, Hao and Chen, Badong},
  journal={IEEE Transactions on Image Processing}, 
  title={NeuralDiffuser: Neuroscience-Inspired Diffusion Guidance for fMRI Visual Reconstruction}, 
  year={2025},
  volume={34},
  pages={552-565}}

Acknowledgment

Natural Scene Dataset (NSD)

stable-diffusion v1.4

Mind-Eye

MindEyeV2

MindDiffuser
