
Conversation

@natacha-beck (Collaborator)

Hi @mprati

I fixed some parts of the CBRAIN descriptor:

  • Removed the [OPPNI_LOCATION] token and put the path directly in the command line; it was not the right path in the option anyway.
  • Removed [PART] from the command line; it was not in the input definitions.

And some other small fixes.
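
For reference, the affected fragment of the descriptor ends up with roughly this shape. This is only a sketch in Boutiques-style JSON, not the exact descriptor contents, and the hard-coded oppni.py path is the one discussed below:

    {
      "command-line": "python3 /oppni/cPRONTO/oppni.py [BIDS_DIR] [OUTPUT_DIR] [ANALYSIS_LEVEL] [PARTICIPANT_LABEL]"
    }

Note that [PART] no longer appears as a token, and no [OPPNI_LOCATION] substitution is needed.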

@mprati (Owner) left a comment

Hi Natacha,

For the command line, oppni.py is located at /oppni/cPRONTO/oppni.py.
Removing OPPNI_LOCATION and hard-coding the path is fine.
The change to the OUTPUT_DIR id is fine.

oppni allows --analysis_level to be optional: if --analysis_level is omitted, all participants will be processed.
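Given that, the descriptor entry could simply be marked optional, along these lines (a sketch using the standard Boutiques input fields; the id, name, and value-key are illustrative guesses, not the descriptor's actual values):

    {
      "id": "analysis_level",
      "name": "Analysis level",
      "type": "String",
      "command-line-flag": "--analysis_level",
      "value-key": "[ANALYSIS_LEVEL]",
      "value-choices": ["participant", "group"],
      "optional": true
    }

(The value-choices here deliberately omit "session", since session-level processing is not supported yet; see the next point.)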

oppni currently will not process participants by session label.
I will look into this and let you know when this is complete.

Regards
Mark

@natacha-beck (Collaborator, Author)

Hi Mark,

  • I am not able to find /oppni/cPRONTO/oppni.py when I am inside the container. I found oppni at the following location: /oppni/oppni.py. Does that seem right?
  • I can remove "session" from the list [ "participant", "group", "session" ].
  • Can you please look at this: @mprati, why is the image mprati/oppni_cbrain and not the one we used previously for running BIDSAppOPPNI in CBRAIN, mprati/bids-apps-oppni?

The mprati/oppni_cbrain image is buggy; I get the following error:

File "/oppni/oppni.py", line 41
print line,

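(For context: print line, is Python 2 syntax and is a SyntaxError under Python 3, which matches the traceback above. Assuming line 41 is a plain pass-through print, the Python 3 equivalent would be print(line, end=''); otherwise the image needs to run the script with Python 2.)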

@mprati (Owner) commented Jun 25, 2020

Hi Natacha,

I am waiting for the image to be tested on CBRAIN before moving it.

I'm not sure how you are finding /oppni/oppni.py in the latest image. It is definitely at /oppni/cPRONTO/oppni.py.

i.e., here is my shell run of the image:

[mprati@kakapo ~]$ singularity shell docker://mprati/oppni_cbrain
Docker image path: index.docker.io/mprati/oppni_cbrain:latest
Singularity: Invoking an interactive shell within container...

Singularity oppni_cbrain:~> cd /
Singularity oppni_cbrain:/> ls
bin boot cbrain dev environment etc home lib lib32 lib64 libx32 media mnt octave_source oppni opt proc root run sbin singularity srv sys tmp usr var
Singularity oppni_cbrain:/> cd oppni
Singularity oppni_cbrain:/oppni> ls
Dockerfile Octave README.md Run_GUI.py Run_Pipelines.py _documentation bids cPRONTO circle.yml compiled config extra oppni.prj run_oppni_local.m scripts_gui scripts_matlab tmp
Singularity oppni_cbrain:/oppni> oppni
bash: oppni: command not found
Singularity oppni_cbrain:/oppni> cd cPRONTO
Singularity oppni_cbrain:/oppni/cPRONTO> ls oppni.py
oppni.py
Singularity oppni_cbrain:/oppni/cPRONTO> ls -al oppni.py
-rwxr-xr-x. 1 mprati strother_lab 106329 Jun 19 20:12 oppni.py
Singularity oppni_cbrain:/oppni/cPRONTO>
Singularity oppni_cbrain:/oppni/cPRONTO> python3 oppni.py
Using new version of Python ... using shutil import
3.7.5
Too few arguments!
usage: oppni [-h] [-s STATUS_UPDATE_IN] [--validate VAL_INPUT_FILE_PATH]
             [-p {0,1,2,3,4}] [-i input specification file]
             [-c pipeline combination file] [-o OUTPUT_PREFIX]
             [-a {None,LDA,GNB,GLM,erCVA,erGNB,erGLM,SCONN}] [-m {dPR,P,R}]
             [--os {CON,FIX,IND,ALL}] [-r REFERENCE]
             [--contrast CONTRAST_LIST_STR] [--vasc_mask {0,1}] [-k {0,1}]
             [-v VOXELSIZE] [--convolve VALUE] [--decision_model MODEL]
             [--drf FRACTION] [--Nblock NUMBER] [--WIND SIZE]
             [--num_PCs NUMBER] [--subspace SIZE] [--spm FORMAT]
             [--TR_MSEC TR_MSEC] [--DEOBLIQUE] [--TPATTERN TPATTERN]
             [--BlurToFWHM {0,1}] [--control_motion_artifact {yes,no}]
             [--control_wm_bias {yes,no}] [--output_all_pipelines]
             [--output_nii_also {1,0}] [-e {matlab,octave,compiled}]
             [--cluster {FRONTENAC,BRAINCODE,CAC,CC,SCINET,SHARCNET,CBRAIN,CEDAR,GRAHAM,SLURM}]
             [--account HPC_ACCOUNT] [--memory MEMORY] [--walltime WALLTIME]
             [-q QUEUE] [-pe PARALLEL_ENV] [--run_locally] [--force_rerun]
             [--dry_run] [--print_options_in PRINT_OPTIONS_PATH]
             [--min_subjects MIN_SUBJECTS] [-b BIDS_DIR]
             [--analysis_level ANALYSIS_LEVEL]
             [--participant_label PARTICIPANT_LABEL] [--taskname TASK_NAME]
             [--taskdesign TASK_DESIGN] [--ndrop NDROP]
             bids_dir bidsoutput_dir

positional arguments:
  bids_dir              Dataset directory path: Positional argument #1
                        required. The folder for BIDS data set or base folder
                        (prefix) for OPPNI input.txt
  bidsoutput_dir        Output directory path: Positional argument #2
                        required.The folder where the output files will be
                        stored. If you are running group level analysis this
                        folder should have been pre-populated with the results
                        of the participant level analysis. Becomes OUT= in the
                        OPPNI input.txt files.

optional arguments:
  -h, --help            show this help message and exit
  -s STATUS_UPDATE_IN, --status STATUS_UPDATE_IN
                        Performs a status update on previously submitted
                        processing. Supply the output directory where the
                        previous processing has been stored in.
  --validate VAL_INPUT_FILE_PATH
                        Performs a basic validation of an input file
                        (existence of files and consistency!).
  -p {0,1,2,3,4}, --part {0,1,2,3,4}
                        select pipeline optimization step, 0: All steps
                        [default] 1: Preprocessing ans statistics estimation
                        step, 2: Optimization step, 3: Spatial normalization,
                        4: quality control.
  -i input specification file, --input_data input specification file
                        Filename of OPPNI input.txt file containing the input
                        and output data paths.If specified this non-BIDS-
                        dataset Input file will be used for the OPPNI pipeline
  -c pipeline combination file, --pipeline pipeline combination file
                        Alternate pipeline file name specifying the
                        preprocessing steps
  -o OUTPUT_PREFIX, --output_prefix OUTPUT_PREFIX
                        Output folder prefix for storage of all the processing
                        and results. This is convenient way to specify an base
                        output folder, instead of having to repeat it on every
                        line in the input file. If you specify this, OUT=/path
                        in input files maps to OUT=output_prefix/path.The
                        prefix folder must be an absolute path from root
  -a {None,LDA,GNB,GLM,erCVA,erGNB,erGLM,SCONN}, --analysis {None,LDA,GNB,GLM,erCVA,erGNB,erGLM,SCONN}
                        Choose an analysis model
                        :None,LDA,GNB,GLM,erCVA,erGNB,erGLM,SCONN
  -m {dPR,P,R}, --metric {dPR,P,R}
                        Optimization metric
  --os {CON,FIX,IND,ALL}, --opt_scheme {CON,FIX,IND,ALL}
                        Optimization scheme to decide on which pipelines to be
                        produced.
  -r REFERENCE, --reference REFERENCE
                        anatomical reference to be used in the spatial
                        normalization step, i.e. -p,--part=3
  --contrast CONTRAST_LIST_STR
                        desired task contrast in form of task-baseline, using
                        names as defined in the task file. Multi-contrast
                        analysis is disabled for now.
  --vasc_mask {0,1}     Toggles estimation of subject-specific vascular mask
                        that would be excluded prior to analysis (0: disble,
                        1: enable). Recommended.
  -k {0,1}, --keepmean {0,1}
                        (optional) determine whether the ouput nifti files
                        contain the mean scan (Default keepmean=0, i.e. remove
                        the mean)
  -v VOXELSIZE, --voxelsize VOXELSIZE
                        (optional) determine the output voxel size of nifti
                        file
  --convolve VALUE      VALUE=Binary value, for whether design matrix should
                        be convolved with a standard SPMG1 HRF. 0 = do not
                        convolve and 1 = perform convolution
  --decision_model MODEL
                        MODEL=string specifying type of decision boundary.
                        Either: linear for a pooled covariance model or
                        nonlinear for class-specific covariances
  --drf FRACTION        FRACTION=Scalar value of range (0,1), indicating the
                        fraction of full-date PCA subspace to keep during PCA-
                        LDA analysis. A drf of 0.3 is recommended as it has
                        been found to be optimal in previous studies.
  --Nblock NUMBER       NUMBER= number of equal sized splits to break the data
                        into, in order to perform time-locked averaging. Must
                        be at least 2, with even numbers >=4, recommended to
                        obtain robust covariance estimates
  --WIND SIZE           SIZE = window size to average on, in TR (usually in
                        range 6-10 TR)
  --num_PCs NUMBER      NUMBER = total number of principal components to
                        retain
  --subspace SIZE       COMP = string specifying either: 'onecomp' = only
                        optimize on CV#1 or 'multicomp' = optimize on full
                        multidimensional subspace
  --spm FORMAT          FORMAT =string specifying format of output SPM.
                        Options include corr (map of voxelwise seed
                        correlations) or zscore (Z-scored map of reproducible
                        correlation values)
  --TR_MSEC TR_MSEC     Specify TR in msec for all entries in the input file,
                        overrides the TR_MSEC in the TASK files
  --DEOBLIQUE           Correct for oblique scans (DEOBLIQUE) to improve
                        spatial normalization
  --TPATTERN TPATTERN   Use if data contain no slice-timing information stored
                        in the NIFTI headers (TPATTERN)
  --BlurToFWHM {0,1}    This option will enable adaptive spatial smoothing to
                        to equalize smoothing across multiple sites.
  --control_motion_artifact {yes,no}
                        Control for motion artifact.
  --control_wm_bias {yes,no}
                        Control for white matter bias using spatial priors.
  --output_all_pipelines
                        Whether to output spms for all the optimal pipelines.
  --output_nii_also {1,0}
                        Whether to output all pipeline spms in Nifti format.
                        WARNING: Be advised the space requirements will be
                        orders of magnitude higher. Default off
  -e {matlab,octave,compiled}, --environment {matlab,octave,compiled}
                        (optional) determine which software to use to run the
                        code: matlab or compiled(default)
  --cluster {FRONTENAC,BRAINCODE,CAC,CC,SCINET,SHARCNET,CBRAIN,CEDAR,GRAHAM,SLURM}
                        Please specify the type of cluster you're running the
                        code on.
  --account HPC_ACCOUNT
                        If you have multiple HPC allocations, specify the
                        account to be used on the cluster.
  --memory MEMORY       (optional) determine the minimum amount RAM (GB)
                        needed for the job, e.g. --memory 8
  --walltime WALLTIME   (optional) specify total run time needed for each job,
                        e.g. --walltime 30:00:00 (in hours:minutes:seconds
                        format)!
  -q QUEUE, --queue QUEUE
                        (optional) SGE queue name. Default is None but it is
                        recommended to specify this explicitly.
  -pe PARALLEL_ENV, --parallel_env PARALLEL_ENV
                        (optional) Name of the parallel environment under
                        which the multi-core jobs gets executed. This must be
                        specified explicitly when numcores > 1.
  --run_locally         Run the pipeline on this computer without using a HPC
                        cluster. This has not been fully tested yet, and is
                        not recommended.Specify the number of cores using -n
                        (or --numcores) to allow the program to run in
                        pararallel
  --force_rerun         DANGER: Cleans up existing processing and forces a
                        rerun. Be absolutely sure this is what you want to do.
  --dry_run             Generates job files only, but not run/sbumit them.
                        Helps in debugging the queue/HPC options.
  --print_options_in PRINT_OPTIONS_PATH, --po PRINT_OPTIONS_PATH
                        Prints the options used in the previous processing of
                        this folder.
  --min_subjects MIN_SUBJECTS
                        (optional) Override default(4), specify minimum of
                        unique subjects required.
  -b BIDS_DIR, --bids_dir BIDS_DIR
                        The directory folder with the input data set formatted
                        according to the BIDS standard. NOTE: output_dir is
                        required for bids
  --analysis_level ANALYSIS_LEVEL
                        Level of the analysis that will be performed. Must
                        begin with either participant or group.Multiple
                        participant level analysis can be run independently
                        (in parallel) using the same output_dir.
  --participant_label PARTICIPANT_LABEL
                        The label(s) of the participant(s) that should be
                        analyzed. The label(s) corresponds to
                        sub-<participant_label> from the BIDS spec (so it does
                        not include 'sub-'). If this parameter is not provided
                        all subjects will be analyzed. Multiple participants
                        can be specified with a space or comma separated list.
  --taskname TASK_NAME  String specifying the name of the fMRI task you want
                        to analyze. Note all tasks in the data set will
                        processed if not specified.
  --taskdesign TASK_DESIGN
                        Type of task. Must be either 'block' or 'event.
                        Default is event
  --ndrop NDROP         Positive integer, specifying number of scans to drop
                        from the start of the run,in order to avoid non-
                        equilibrium effects. Default: don't drop any scan
                        volumes
Singularity oppni_cbrain:/oppni/cPRONTO>
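
As an aside, the two required positionals in this help output are what the descriptor's value-keys must fill in. A plausible sketch of the corresponding input entries, with illustrative ids and names (only the mapping of the output folder to the OUTPUT_DIR id is implied by the discussion above):

    [
      {
        "id": "bids_dir",
        "name": "BIDS dataset directory",
        "type": "File",
        "value-key": "[BIDS_DIR]",
        "optional": false
      },
      {
        "id": "output_dir",
        "name": "Output directory",
        "type": "String",
        "value-key": "[OUTPUT_DIR]",
        "optional": false
      }
    ]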

@natacha-beck (Collaborator, Author)

@mprati please do not merge it yet.
