OPM-MEG preprocessing pipeline based on MNE-Python, mne-bids-pipeline, and OSL-ephys. This repository contains utility scripts, custom pre-processing glue code, and run scripts to convert data to BIDS, run preprocessing, inspect and curate bad channels/epochs, run coregistration and FreeSurfer, and prepare sensor/source outputs.
This README documents how to set up the environment, prepare your data and configuration, and run the pipeline using the provided run scripts.
- `src/custom/` - custom preprocessing helpers and OSL wrappers
- `src/run/` - shell wrappers that run high-level pipeline stages (`run_all.sh`, `run_preproc.sh`, etc.)
- `local-mne-opm.sh` - convenience script to set defaults and invoke specific pipeline stages
- `mne-opm.sh` - main CLI wrapper to run a single pipeline stage with explicit arguments
- `install.sh` - environment creation and installation helper
- `pyproject.toml` - Python project metadata
- macOS / Linux (development tested on macOS)
- conda or micromamba for environment management
- Python 3.10+ (check `pyproject.toml` for exact requirements)
- MNE-Python, mne-bids-pipeline, osl-ephys, and other scientific dependencies (installed by `install.sh`)
Run the installer to create the conda environment and install Python dependencies.
Example (from project root):
```bash
# create environment and install dependencies
bash ./install.sh
```

After installation, activate the environment before running the pipeline:

```bash
conda activate mne-opm
```

Templates for configs are available under `config_TODO/`; copy them into your own config directory and edit.
Raw data and outputs must conform to a few naming/layout conventions used by the run scripts:
- Raw experiment folder: `${DATA_BASE}/${EXPERIMENT}/raw/` contains per-subject folders.
- For BIDS conversion, raw data folders should include `_task` or `_noise` in their names (e.g., `20250321_140828_noise/*_meg.fif`).
- DICOM-to-NIfTI conversion (`run_nifti.sh`) expects DICOMs in `${RAW_DIR}/<subject>/dicom` and writes NIfTI files to `${RAW_DIR}/<subject>/anat`.
- After conversion, files with `T1w` or `T2w` in their names are auto-renamed to include `_t1w` or `_t2w`.
- BIDS output root is `${DATA_BASE}/${EXPERIMENT}/bids`, and derivatives live under `bids/derivatives`.
- FreeSurfer `SUBJECTS_DIR` is `${DATA_BASE}/${EXPERIMENT}/bids/derivatives/freesurfer/subjects`.

If you provide trial/behavioral metadata, place it under `raw/<subject>/metadata` as CSV files; configs can point to these.
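Taken together, the standard locations can all be derived from a data base path and an experiment name. A minimal sketch of that derivation (the function name and dict keys are illustrative, not part of the pipeline):

```python
from pathlib import Path

def pipeline_paths(data_base: str, experiment: str) -> dict:
    """Derive the standard data locations from the base path and experiment name."""
    exp = Path(data_base) / experiment
    return {
        "raw": exp / "raw",                           # per-subject raw folders
        "bids": exp / "bids",                         # BIDS output root
        "derivatives": exp / "bids" / "derivatives",  # pipeline derivatives
        # FreeSurfer SUBJECTS_DIR lives under the BIDS derivatives tree
        "subjects_dir": exp / "bids" / "derivatives" / "freesurfer" / "subjects",
    }

paths = pipeline_paths("/data", "TSXpilot")
print(paths["subjects_dir"])  # /data/TSXpilot/bids/derivatives/freesurfer/subjects
```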
Example layout for a single experiment (TSXpilot) and subject 007. Adjust names and paths to your project.
/data/TSXpilot/
raw/
exp_007/ # subject folder must end with _007 (3-digit, zero-padded)
dicom/ # DICOMs (input to run_nifti.sh)
...
anat/ # NIfTI outputs created by run_nifti.sh
<files>_t1w.nii.gz # must include suffix _t1w
<files>_t2w.nii.gz # must include suffix _t2w (optional)
session1_task/ # task run folder; must end with _task
20250321_123456_meg.fif # raw MEG FIF; name can vary but must match *_meg.fif
session2_task/
20250321_125500_meg.fif
20250321_140828_noise/ # empty-room; folder must end with _noise
20250321_140828_meg.fif
metadata/ # optional per-subject metadata
sub-007_behavior.csv # metadata files should start with sub-<NNN>_
sub-007_events.csv
eyetracking/
recording.asc # Eyelink ASCII file (*.asc) detected anywhere under subject
bids/
sub-007/
ses-01/
meg/
sub-007_ses-01_task-<task>_run-01_meg.fif
sub-007_ses-01_task-noise_run-01_meg.fif # if empty-room exists
anat/
sub-007_ses-01_T1w.nii.gz
sub-007_ses-01_T2w.nii.gz
derivatives/
freesurfer/
subjects/
sub-007_ses-01/...
<ANALYSIS>/ # e.g., CSI; pipeline derivatives live here
configs/
TSXpilot/
config-CSI.py
bids/
sub-007_config-bids.py # used by run_bids.sh
Formatting requirements enforced/assumed by the scripts:
- Subject folder naming (raw): must end with `_<NNN>`, where `NNN` is a 3-digit, zero-padded subject id (e.g., `exp_007`).
- Task runs (raw): run folders must end with `_task` and contain files matching `*_meg.fif`.
- Empty-room (raw): folder must end with `_noise` and contain `*_meg.fif`.
- DICOMs (raw): stored under `dicom/`; `run_nifti.sh` writes NIfTI to `anat/` and renames files to include `_t1w`/`_t2w`.
- Anatomical NIfTI files (raw): must include the suffix `_t1w.nii*` and optionally `_t2w.nii*` (detected by glob).
- Eye-tracking: an Eyelink ASCII `.asc` file located anywhere inside the subject directory is detected and aligned.
- Metadata: per-subject files should be prefixed `sub-<NNN>_` (e.g., `sub-007_*`) for consistent discovery.
- BIDS: per-subject BIDS config at `CONFIG_DIR/bids/sub-<SUBJECT>_config-bids.py`.
- Sessions: default is `ses-01`; FreeSurfer/coreg scripts temporarily format the subject as `sub-<NNN>_ses-<SESSION>`.
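These naming rules are easy to sanity-check before launching a run. A small illustrative validator, not part of the pipeline, whose patterns mirror the folder rules above:

```python
import re

def check_raw_names(subject_folder: str, run_folders: list[str]) -> list[str]:
    """Return a list of convention violations for a raw subject folder."""
    problems = []
    # Subject folder must end with _<NNN>, a 3-digit zero-padded id (e.g., exp_007).
    if not re.search(r"_\d{3}$", subject_folder):
        problems.append(f"subject folder {subject_folder!r} does not end with _<NNN>")
    for run in run_folders:
        # Run folders must end with _task (task runs) or _noise (empty-room).
        if not (run.endswith("_task") or run.endswith("_noise")):
            problems.append(f"run folder {run!r} must end with _task or _noise")
    return problems

print(check_raw_names("exp_007", ["session1_task", "20250321_140828_noise"]))  # []
print(check_raw_names("exp_7", ["session1"]))  # two violations
```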
This pipeline uses environment variables to locate data and configuration. The minimal variables you should set before running are:
- `ROOT_DIR` - root of this repository (typically the project folder)
- `EXPERIMENT` - short experiment name (used to find the config directory)
- `ANALYSIS` - analysis name, appended to derivative folders (e.g., `CSI`)
- `SUBJECT` - subject id (e.g., `007`)
- `CONFIG_DIR` - path to the config folder (usually `<analysis repo>/analysis/config`)
- `RAW_DIR`, `BIDS_DIR`, `SUBJECTS_DIR` - data paths used by the pipeline
You can set these directly in your shell, or use `local-mne-opm.sh` to generate and export them for you. Example usage of the helper script is shown below.
Run a single pipeline stage with explicit arguments (no pre-set defaults). Valid `<pipeline>` options:

```
nifti | bids | freesurfer | coreg | preproc | sensor | source | all | func | anat
```
Usage (from repo root):
```bash
./mne-opm.sh <pipeline> \
    --exp TSXpilot \
    --sub 007 \
    --data /path/to/data/TSXpilot \
    --config /path/to/config/TSXpilot \
    --analysis CSI \
    --fs /Applications/freesurfer/8.0.0 \
    --t1w /path/to/T1w.nii.gz   # optional; auto-detected if under BIDS
```

Notes:

- The script exports environment variables used by the run scripts (`ROOT_DIR`, `CONFIG_DIR`, RAW/BIDS paths, etc.).
- `--analysis` selects `config-<ANALYSIS>.py` for most stages; BIDS uses a per-subject config at `CONFIG_DIR/bids/sub-<SUBJECT>_config-bids.py`.
- Set `--workers` to control FreeSurfer parallelism (`MAX_WORKERS`).
- Ensure FreeSurfer is installed and `--fs` points to your install; the scripts source `SetUpFreeSurfer.sh` when needed.
Use the `local-mne-opm.sh` wrapper to set defaults and run a pipeline stage. Example:

```bash
# from repository root
./local-mne-opm.sh preproc --exp TSXpilot --sub 007 --data /path/to/data --config /path/to/config
```

The wrapper exports useful environment variables and calls `src/run/run_preproc.sh` (or any other stage you specify).
`src/run/run_all.sh` sequentially runs the main pipeline stages (NIfTI, BIDS, FreeSurfer, coregistration, preprocessing, sensor, source) by calling the sub-scripts in `src/run/`. To run everything:
```bash
conda activate mne-opm
export CONFIG_DIR="/path/to/your/configs"
export ANALYSIS="CSI"
export SUBJECT="007"
export EXPERIMENT="TSXpilot"
export ROOT_DIR=$(pwd)
bash src/run/run_all.sh
```

If you only want preprocessing, run `run_preproc.sh` directly or use the wrapper:

```bash
./local-mne-opm.sh preproc --exp TSXpilot --sub 007 --data /path/to/data --config /path/to/config
```

- `run_bids.sh`: convert raw files into the BIDS structure (expects the raw file naming conventions)
- `run_freesurfer.sh`: run FreeSurfer `recon-all` for anatomical processing
- `run_coreg.sh`: coregister sensor and anatomical spaces
- `run_preproc.sh`: run the preprocessing flow, including OSL wrappers and manual inspection steps
- `run_sensor.sh` / `run_source.sh`: sensor- and source-level processing steps used for analysis
Inside `run_preproc.sh` you will see a combination of calls to:

- `python src/custom/custom_preproc.py --analysis=...` for custom OSL / helper steps
- `mne_bids_pipeline --steps=... --config=...` for standardized MNE-BIDS pipeline steps
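The interleaving of those two kinds of calls can be sketched as command lists. This is an illustrative sketch only; the step names follow the preprocessing stage description in this README, but the exact flags and ordering live in the run scripts themselves, and `config-CSI.py` is a hypothetical path:

```python
# Custom OSL/helper steps run first, then the standardized pipeline step.
custom_steps = ["bad_segments", "bad_channels", "manual_channel"]
config_path = "/path/to/config/config-CSI.py"  # hypothetical config location

commands = []
for step in custom_steps:
    commands.append(["python", "src/custom/custom_preproc.py", f"--analysis={step}"])
commands.append(["mne_bids_pipeline", "--steps=preprocessing", f"--config={config_path}"])

for cmd in commands:
    print(" ".join(cmd))
```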
Details by stage:
- `run_nifti.sh`
  - Uses `dcm2niix` to convert DICOMs from `${RAW_DIR}/<subject>/dicom` to NIfTI in `${RAW_DIR}/<subject>/anat`.
  - Skips if the NIfTI directory already exists.
  - Auto-renames files containing `T1w`/`T2w` to include the `_t1w`/`_t2w` suffixes.
- `run_bids.sh`
  - Loads a per-subject BIDS config: `${CONFIG_DIR}/bids/sub-${SUBJECT}_config-bids.py`.
  - Calls `src/custom/format_bids.py` to create/validate the BIDS structure.
- `run_freesurfer.sh`
  - Requires T1w (and optionally T2w) images. Paths can be auto-detected from BIDS or provided via `--t1w`/`--t2w`.
  - Runs `recon-all` with `-parallel -openmp ${MAX_WORKERS}`.
  - Builds watershed BEM surfaces afterward.
- `run_coreg.sh`
  - Temporarily sets `SUBJECT=sub-<id>_ses-<SESSION>` to match MNE/FreeSurfer conventions.
  - Calls `src/custom/auto_coreg.py` for automated head<->MRI coregistration.
- `run_preproc.sh`
  - Sets `CONFIG_PATH=${CONFIG_DIR}/config-${ANALYSIS}.py`.
  - Calls the custom steps `bad_segments`, `bad_channels`, and `manual_channel` via `custom_preproc.py`.
  - Runs `mne_bids_pipeline --steps=preprocessing`, followed by manual ICA selection and application (ICA/SSP/PTP reject).
  - Detects bad epochs and prepares source-space prerequisites (evoked, covariance, BEM solution, source space, forward).
- `run_sensor.sh`
  - Runs the sensor-level steps via `mne_bids_pipeline --steps=sensor` using your config.
- `run_source.sh`
  - Sources the FreeSurfer environment, then runs `mne_bids_pipeline --steps=source`.
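The auto-rename rule used by `run_nifti.sh` can be sketched as a pure filename transform. This is an illustrative sketch only; the script's actual implementation may differ:

```python
import re

def add_anat_suffix(filename: str) -> str:
    """Append _t1w/_t2w before the extension when the name contains T1w/T2w."""
    match = re.search(r"[Tt]([12])w", filename)
    if match is None or re.search(r"_t[12]w\.nii", filename):
        return filename  # nothing to rename, or already renamed
    stem, _, ext = filename.partition(".")
    return f"{stem}_t{match.group(1)}w.{ext}"

print(add_anat_suffix("sub007_T1w_mprage.nii.gz"))  # sub007_T1w_mprage_t1w.nii.gz
print(add_anat_suffix("scout.nii.gz"))              # scout.nii.gz (unchanged)
```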
Project-specific configuration files are stored in your `CONFIG_DIR` (e.g., `analysis/config/<EXPERIMENT>`).
Each config is a plain Python file that sets variables consumed by the pipeline, e.g.:

- `bids_root`, `deriv_root`, `subjects`, `sessions`, `task`, `process_empty_room`
- preprocessing options: `l_freq`, `h_freq`, Maxwell filter settings, bad-channel/epoch options
- custom flags: `_skip_on_deriv`, `_manual_bads`, `_manual_ica`
Example config snippet (from this repo's configs):

```python
_skip_on_deriv = True      # if True, skip steps when derivatives exist
process_empty_room = True
subjects = ['007']
task = 'TSX'               # experiment task name
```

Notes:

- Config files are plain Python. The pipeline imports them as modules to populate a `SimpleNamespace`.
- If you need dict-like behavior (for example, calling `.pop()`), convert the namespace with `vars(cfg)`.
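The namespace behavior can be demonstrated directly. This is a minimal reproduction of the pattern, not the pipeline's actual loader code:

```python
from types import SimpleNamespace

# A minimal stand-in for a config module's variables loaded into a namespace.
cfg = SimpleNamespace(_skip_on_deriv=True, subjects=['007'], task='TSX')

# Attribute-style access with a default works on the namespace itself:
print(getattr(cfg, 'l_freq', None))  # None (not set in this config)

# For dict-style operations such as .pop(), go through vars():
task = vars(cfg).pop('task', None)
print(task)                          # TSX
print(hasattr(cfg, 'task'))          # False (popped from the namespace)
```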
Typical config file layout under your config directory:
CONFIG_DIR/
<EXPERIMENT>/
config-<ANALYSIS>.py # used by most stages
bids/
sub-<SUBJECT>_config-bids.py # used by run_bids.sh
The repository includes custom helpers in `src/custom/` that integrate osl-ephys steps (bad segment detection, bad channel detection, manual ICA selection, etc.); these are invoked by `custom_preproc.py`. The manual inspection steps typically open interactive plots, so run them from an X-enabled environment or save outputs to disk when running headless.
- Run full automated preprocessing for one subject:

  ```bash
  ./local-mne-opm.sh preproc --exp TSXpilot --sub 007 --config /path/to/config
  ```

- Re-run only bad-epoch detection after changing parameters in the config:

  ```bash
  conda activate mne-opm
  python src/custom/custom_preproc.py --analysis=bad_epochs --config /path/to/config/config-CSI.py
  ```

- FreeSurfer: set `--fs` to your install root or export `FREESURFER_HOME` beforehand.
- Some stages rely on `SESSION` (default `01`) for subject formatting in coreg; ensure it is set in your environment if needed.
- Interactive steps require a display; set `MPLBACKEND=agg` to render to files when running headless.
- To re-run a stage that checks for existing derivatives, delete the target derivative folder or set `_skip_on_deriv = False` in your config.
- If you see errors like `AttributeError: 'SimpleNamespace' object has no attribute 'pop'`, remember that config modules are imported into a `SimpleNamespace` (not a dict). Use `getattr(cfg, 'key', default)` or `vars(cfg).pop('key', default)` to access or mutate config keys.
- If the pipeline reports that derivatives exist and you want to re-run, either remove the derivatives folder or set `_skip_on_deriv = False` in your config and re-run.
- For plotting/interactive steps, ensure your environment supports GUI output or configure plots to save to disk.
Contributions are welcome. Please follow the project's style, run tests if present, and create a PR against the
main branch. For large changes, open an issue first describing the proposal.
Check the repository root for a LICENSE file. When using this pipeline in publications, please cite MNE-Python, mne-bids-pipeline, and any other libraries used.
Open an issue in this repo with reproducible steps to replicate problems and include the full stack trace and the config file you used.
End of README.