Description
Script/Job File
#!/bin/bash
# SGE directives: 16 slots in the smp parallel environment, UI queue,
# email on (b)egin, (e)nd, and (a)bort
#$ -pe smp 16
#$ -q UI
#$ -m bea
#$ -M [email protected]
#$ -o /Shared/vosslabhpc/Projects/CREST/code/fmriprep/out/
#$ -e /Shared/vosslabhpc/Projects/CREST/code/fmriprep/err/

# export so the setting reaches the Singularity process
# (a bare assignment would stay local to this shell)
export OMP_NUM_THREADS=10

singularity run -H ${HOME}/singularity_home -B /Shared/vosslabhpc:/mnt \
  /Shared/vosslabhpc/UniversalSoftware/SingularityContainers/fmriprep-1.2.1.simg \
  /mnt/Projects/CREST/ /mnt/Projects/CREST/derivatives \
  participant --participant_label BETTER120053 \
  -w /nfsscratch/Users/ariveradompenciel/work/CRESTfmriprep \
  --write-graph --mem_mb 35000 --omp-nthreads 10 --nthreads 16 --use-aroma \
  --output-space template \
  --template MNI152NLin2009cAsym \
  --fs-license-file /mnt/UniversalSoftware/freesurfer_license.txt
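Here, -pe smp 16 requests 16 slots on one node, matching --nthreads 16, while --omp-nthreads 10 caps the threads any single process may use and --mem_mb 35000 caps fMRIPrep's memory estimate. For reference, a job file like this would be submitted and monitored roughly as follows (fmriprep_job.sh is a hypothetical name for the script above):

qsub fmriprep_job.sh
qstat -u $USER   # watch the job in the queue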
Error Message
Fatal Python error: Segmentation fault
Current thread 0x00002ab53129c440 (most recent call first):
File "/usr/local/miniconda/lib/python3.6/site-packages/nibabel/openers.py", line 210 in read
File "/usr/local/miniconda/lib/python3.6/site-packages/nibabel/fileslice.py", line 680 in read_segments
File "/usr/local/miniconda/lib/python3.6/site-packages/nibabel/fileslice.py", line 791 in fileslice
File "/usr/local/miniconda/lib/python3.6/site-packages/nibabel/arrayproxy.py", line 367 in __getitem__
File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/interfaces/registration.py", line 378 in _run_interface
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 522 in run
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 635 in _run_command
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 555 in _run_interface
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 471 in run
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 69 in run_node
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/process.py", line 175 in _process_worker
File "/usr/local/miniconda/lib/python3.6/multiprocessing/process.py", line 93 in run
File "/usr/local/miniconda/lib/python3.6/multiprocessing/process.py", line 258 in _bootstrap
File "/usr/local/miniconda/lib/python3.6/multiprocessing/spawn.py", line 118 in _main
File "/usr/local/miniconda/lib/python3.6/multiprocessing/forkserver.py", line 231 in _serve_one
File "/usr/local/miniconda/lib/python3.6/multiprocessing/forkserver.py", line 196 in main
File "<string>", line 1 in <module>
exception calling callback for <Future at 0x2b441962ff28 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 324, in _invoke_callbacks
callback(self)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 143, in _async_callback
result = args.result()
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
exception calling callback for <Future at 0x2b441962fc50 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 324, in _invoke_callbacks
callback(self)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 143, in _async_callback
result = args.result()
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
exception calling callback for <Future at 0x2b4419658cf8 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 324, in _invoke_callbacks
callback(self)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 143, in _async_callback
result = args.result()
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
Traceback (most recent call last):
File "/usr/local/miniconda/bin/fmriprep", line 11, in <module>
sys.exit(main())
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/cli/run.py", line 342, in main
fmriprep_wf.run(**plugin_settings)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py", line 595, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/base.py", line 184, in run
self._send_procs_to_workers(updatehash=updatehash, graph=graph)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 336, in _send_procs_to_workers
deepcopy(self.procs[jobid]), updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _submit_job
result_future = self.pool.submit(run_node, node, updatehash, self._taskid)
File "/usr/local/miniconda/lib/python3.6/concurrent/futures/process.py", line 452, in submit
raise BrokenProcessPool('A child process terminated '
concurrent.futures.process.BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore
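Reading the traceback bottom-up: a worker process hit a segmentation fault inside nibabel while reading an image slice for the niworkflows registration interface, which killed that nipype MultiProc worker; the repeated BrokenProcessPool errors are the pool discovering the dead child, not independent failures. A segfault in a worker often points at memory pressure or a corrupted intermediate file. On SGE, one way to check whether the job hit its memory limit is qacct (a sketch; replace <jobid> with the actual job ID from the scheduler):

qacct -j <jobid> | grep -E 'maxvmem|exit_status|failed'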
Potential Solution(s)
- Re-run the job without changing anything; segmentation faults like this are sometimes transient.
- Delete the participant's working directory (the one under /nfsscratch, set by the -w flag) and re-run; see the sketch below.
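A minimal sketch of the second option, assuming the -w path from the job file above. Removing the whole work tree forces fMRIPrep to recompute from scratch, which also discards any partially written intermediate file that may have triggered the segfault (fmriprep_job.sh is again a hypothetical name for the job file):

rm -rf /nfsscratch/Users/ariveradompenciel/work/CRESTfmriprep
qsub fmriprep_job.sh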