Code accompanying the paper "On hallucinations in tomographic image reconstruction" by Bhadra et al., published in IEEE Transactions on Medical Imaging (2021), Second Special Issue on Machine Learning for Image Reconstruction: https://ieeexplore.ieee.org/document/9424044
- Linux
- Anaconda >= 2018.2
- Python 3.6
- Numpy 1.18.2
- Pillow 6.2.1
- 1 NVIDIA GPU (GeForce GTX 1080 or higher, with at least 8 GB of GPU memory)
- NVIDIA driver >= 440.59, CUDA toolkit >= 10.0
Additional dependencies required for the reconstruction methods and for computing hallucination maps are listed under each section.
- `recon_data`: Contains 5 data samples each (indexed 0-4) from in-distribution (ind) and out-of-distribution (ood) data. Each data sample contains the true object, segmentation mask and simulated k-space data.
- `UNET`: Contains code for reconstructing images using a pre-trained U-Net model.
- `PLSTV`: Contains code for reconstructing images by use of the PLS-TV method.
- `compute_maps`: Contains code for computing hallucination maps and specific maps.
The U-Net model was trained using code from https://github.com/facebookresearch/fastMRI. The pre-trained model used in our numerical studies is provided as `UNET/experiments/h_map/epoch\=49.ckpt` and can be used to reconstruct images from the test dataset. The hyperparameters used during training can be found in `UNET/experiments/h_map/meta_tags.csv`.
The code for reconstructing images using the pre-trained U-Net model has been tested successfully with pytorch and pytorch-lightning in a virtual environment created with conda. First, create a virtual environment named `unet` with Python 3.6 as follows:
conda create -n unet python=3.6
The unet environment can be activated by typing
conda activate unet
After the unet virtual environment has been activated, install pytorch-1.3.1 and pytorch-lightning-0.7.3 along with other relevant dependencies using pip with the following command from the root directory:
pip install -r ./UNET/requirements.txt
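After the install completes, a quick import check (a minimal sketch; run it inside the activated `unet` environment) confirms the pinned packages are importable:

```python
# Report versions of the key pinned packages; packages that are
# missing are flagged rather than raising an ImportError.
versions = {}
for pkg in ("torch", "pytorch_lightning", "numpy"):
    try:
        versions[pkg] = __import__(pkg).__version__
    except ImportError:
        versions[pkg] = "not installed"
    print(pkg, versions[pkg])
```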
- Enter the `UNET` directory from the root directory:
cd UNET
- Set the GPU device number in the script `models/unet/test_unet.py`. For example, to run the code on `device:0`, set the environment variable `CUDA_VISIBLE_DEVICES=0` in line 31:
os.environ["CUDA_VISIBLE_DEVICES"]="0"
- Run the following script to perform reconstruction from all 5 `ind` and `ood` k-space data samples using the saved U-Net model:
./run_unet_test.sh
- Extract reconstructed images as numpy arrays from the saved `.h5` files:
python extract_recons.py
The reconstructed images will be saved in new subdirectories `recons_ind` and `recons_ood` within the `UNET` folder.
The Berkeley Advanced Reconstruction Toolbox (BART) is used for the PLS-TV method: https://mrirecon.github.io/bart/. Please install the BART software before running our code for PLS-TV. Our implementation was successfully tested with bart-0.5.00.
- Set the following environment variables for BART in the current shell. Replace `/path/to/bart/` with the location where BART has been installed:
export TOOLBOX_PATH=/path/to/bart/
export PATH=$TOOLBOX_PATH:$PATH
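To confirm the exports took effect, you can check that the `bart` binary is now discoverable; a minimal sketch (`TOOLBOX_PATH` here is whatever location you used above):

```python
import shutil

# shutil.which searches PATH the same way the shell does, so it will
# only succeed after the exports above have been applied.
bart_path = shutil.which("bart")
print("bart found at:", bart_path or "NOT FOUND -- re-check TOOLBOX_PATH and PATH")
```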
- Enter the `PLSTV` directory from the root directory:
cd PLSTV
- Run the script that performs PLS-TV using BART given the distribution type and the corresponding data index. Example:
python bart_plstv.py --dist-type ind --idx 2
The reconstructed images will be saved in new subdirectories `recons_ind` and `recons_ood` under the `PLSTV` folder.
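To reconstruct all 10 samples in one go, `bart_plstv.py` can be driven from a small loop; a sketch (with `dry_run = True` it only prints the commands it would run; set it to `False` to execute them):

```python
import itertools
import shlex
import subprocess

dry_run = True  # flip to False to actually run the reconstructions
commands = []
# Both distribution types, all five data indices (0-4).
for dist, idx in itertools.product(("ind", "ood"), range(5)):
    cmd = f"python bart_plstv.py --dist-type {dist} --idx {idx}"
    commands.append(cmd)
    if dry_run:
        print(cmd)
    else:
        subprocess.run(shlex.split(cmd), check=True)
```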
DIP-based image reconstruction was performed using a randomly initialized network having the same architecture as the U-Net described above. The method was implemented in TensorFlow 1.14. The relevant hyperparameters can be found in `DIP/run_dip_unet.sh` and `dip_main.py`.
The code for reconstructing images using DIP has been tested successfully with tensorflow-gpu 1.14 installed via conda, but should also work with version 1.15. For working with images, imageio needs to be installed (via pip or conda).
- Enter the `DIP` folder:
cd DIP
- Run the following script to reconstruct an image from k-space data with index `i` (`i` goes from 0 to 4) and type `Type` (`Type` is either `ind` or `ood`) as follows:
bash run_dip_unet.sh $Type $i
- The reconstructed images will be saved in new subdirectories `recons_ind` and `recons_ood` within the `DIP` folder.
An error map or a hallucination map can be computed after an image has been reconstructed. The type of map is indicated by entering any of the following arguments:
- `em`: Error map
- `meas_hm`: Measurement space hallucination map
- `null_hm`: Null space hallucination map
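As a toy illustration of the measurement/null space split behind these maps (a sketch for intuition only; the repo's actual operators live in `compute_raw_maps.py`): for undersampled Fourier data with a binary k-space sampling mask, the measurement space component of an image is what survives re-measurement and inversion, and the null space component is the remainder, invisible to the measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))   # stand-in for an image
mask = rng.random((8, 8)) < 0.5   # hypothetical binary k-space sampling mask

# Measurement space component: keep only the sampled k-space frequencies.
f_meas = np.real(np.fft.ifft2(mask * np.fft.fft2(f)))
# Null space component: the part of f the measurements cannot see.
f_null = f - f_meas

print(np.allclose(f_meas + f_null, f))  # the two components sum to f
```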
- scipy 1.3.0
- scikit-image 0.16.2
- Enter the `compute_maps` directory from the root directory:
cd compute_maps
- Example of computing the null space hallucination map for an ood image reconstructed using the U-Net:
python compute_raw_maps.py --recon-type UNET --dist-type ood --map-type null_hm --idx 0
The error map or hallucination map will be saved in the subdirectory `[recon_type]_[map_type]_[dist-type]`.
- Compute the specific map (`em` or `null_hm`) after the corresponding raw map has been computed in Step 2. Example of computing the specific null space hallucination map after performing the example in Step 2:
python compute_specific_maps.py --recon-type UNET --dist-type ood --map-type null_hm --idx 0
The specific map is saved as a `.png` file in the subdirectory `[recon_type]_specific_[map_type]_[dist-type]`.
NOTE:
- The true objects, reconstructed images and hallucination maps should be flipped vertically (upside-down) for display purposes.
- The type of distribution and index corresponding to the figures shown in the paper for brain images are as follows:
  - Fig. 1 (bottom row): `--dist-type ood --idx 0`
  - Fig. 2: `--dist-type ind --idx 0`
  - Fig. 3: `--dist-type ood --idx 1`
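The vertical flip mentioned in the NOTE can be applied with `numpy.flipud` before display; a minimal sketch (the file name below is a synthetic placeholder for any array saved under `recons_ind`/`recons_ood`):

```python
import numpy as np

def load_for_display(path):
    """Load a reconstructed .npy image and flip it vertically for display."""
    return np.flipud(np.load(path))

# Demo on a synthetic stand-in for a reconstructed image:
np.save("demo_recon.npy", np.arange(6).reshape(3, 2))
flipped = load_for_display("demo_recon.npy")
print(flipped.tolist())  # rows reversed: [[4, 5], [2, 3], [0, 1]]
```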
@article{bhadra2021hallucinations,
title={On hallucinations in tomographic image reconstruction},
author={Bhadra, Sayantan and Kelkar, Varun A and Brooks, Frank J and Anastasio, Mark A},
journal={IEEE Transactions on Medical Imaging},
year={2021},
publisher={IEEE}
}