Temporal analysis of MS lesions #40
Draft · plbenveniste wants to merge 26 commits into main from plb/nnunet
Commits (26):
8739639  Added folder for nnunet experiment
c495a26  Formatting
cfe61fd  added code to crop img along sc
24bcb0c  script for seg of full dataset
919477f  modified seg and crop algo
926078d  edited qc for seg and crop
8b8e1a9  lesion clustering accross slides
65138b0  removed unused files for seg and crop of lesion
fc33182  created file to compare two time point
db106a4  modified lesion time point comparison
511c168  trying to register M0 to M12
6e70cb1  problem with registration with vert levels
86a77d9  registration and lesion matching
51374e1  need to fix the identification of lesions accross files
7188349  first working version of lesion comparison
c63ba5f  formatting of script
b195245  data analysis of canproco
c9ab813  added healthy control analysis
11a1f5b  modified to select wanted contrast
ee3e865  added image of poor quality
af03d7e  removed useless line
4b7d247  extract labeled slices and convert to nnunet format
6406dac  renamed file to explain conversion to nnunet format
accea6e  extract slice and sc_seg for region-based training
f18e846  finished file for sc seg, slice extraction and conversion to nnunet f…
cf8ca41  code for sc seg on 3d and conversion to nnunet format
Changes from 1 commit:

@@ -0,0 +1,231 @@

```python
"""
This python file performs data analysis on the canproco dataset.

Args:
    -d, --dataset-path: path to the dataset
    -o, --output-path: path to the output directory

Returns:
    - a json file (subjects_info.json) and a txt file (results.txt) containing the results of the analysis

Example:
    python data_analysis.py -d /path/to/dataset -o /path/to/output

To do:
    *

Pierre-Louis Benveniste
"""

import argparse
import os
import json


def get_parser():
    """
    This function parses the arguments given to the script.

    Args:
        None

    Returns:
        parser: parser containing the arguments
    """
    parser = argparse.ArgumentParser(description='Perform data analysis on the canproco dataset')
    parser.add_argument('-d', '--dataset-path', type=str, required=True, help='path to the dataset')
    parser.add_argument('-o', '--output-path', type=str, required=True, help='path to the output directory')

    return parser


def main():
    """
    This function performs the data analysis.

    Args:
        None

    Returns:
        None
    """
    # Get the parser
    parser = get_parser()
    args = parser.parse_args()

    # Get the arguments
    dataset_path = args.dataset_path
    output_path = args.output_path

    # Time points (for now we only work on M0)
    time_points = ['ses-M0', 'ses-M12']

    # Get the list of subjects
    subjects = os.listdir(dataset_path)
    subjects = [subject for subject in subjects if 'sub-' in subject]
    print("Total number of subjects: {}".format(len(subjects)))

    # Initialize lists
    subjects_all_time_points = []
    subjects_no_M0 = []
    subjects_no_M12 = []
    subjects_PSIR = []
    subjects_STIR = []
    subjects_PSIR_STIR = []
    subjects_no_PSIR_no_STIR = []
    subjects_no_PSIR_no_STIR_once = []

    subjects_info = {}

    # Iterate over the subjects
    for subject in subjects:
        print("Subject: {}".format(subject))
        # Collect the time points available for this subject
        sub_time_points = []
        for time_point in time_points:
            # If the time point exists for the subject
            if os.path.exists(os.path.join(dataset_path, subject, time_point)):
                sub_time_points.append(time_point)
        print("Time points available: {}".format(sub_time_points))
        # Initialize the contrast_subject dictionary
        contrast_subject = {}
        for time_point in sub_time_points:
            contrast_subject[time_point] = []
        # Iterate over the available time points
        for time_point in sub_time_points:
            print("Time point: {}".format(time_point))
            # Get the MRI files for the subject
            subject_path = os.path.join(dataset_path, subject, time_point, 'anat')
            subject_files = os.listdir(subject_path)
            subject_files = [file for file in subject_files if '.nii.gz' in file]
            # Get the contrast for each file
            # (assumes filenames of the form sub-xxx_ses-M0_CONTRAST.nii.gz, i.e. the contrast is the third underscore-separated entity)
            for file in subject_files:
                contrast_subject[time_point].append(file.split('_')[2].split('.')[0])
            # Print the contrasts available for the subject at this time point
            print("Contrasts available: {}".format(sorted(contrast_subject[time_point])))
        print(contrast_subject)
        print("-----------------------------------")
        subject_info = {'subject': subject, 'time_points': sub_time_points, 'contrasts': contrast_subject}
        subjects_info[subject] = subject_info

        # Subjects with all the time points
        if len(sub_time_points) == len(time_points):
            subjects_all_time_points.append(subject)
        # Subjects with no M0
        if 'ses-M0' not in sub_time_points:
            subjects_no_M0.append(subject)
        # Subjects with no M12
        if 'ses-M12' not in sub_time_points:
            subjects_no_M12.append(subject)

        # Subjects with PSIR at every time point that they have
        psir_present = True
        for time_point in sub_time_points:
            if 'PSIR' not in contrast_subject[time_point]:
                psir_present = False
        if psir_present:
            subjects_PSIR.append(subject)
        # Subjects with STIR at every time point that they have
        stir_present = True
        for time_point in sub_time_points:
            if 'STIR' not in contrast_subject[time_point]:
                stir_present = False
        if stir_present:
            subjects_STIR.append(subject)
        # Subjects with both PSIR and STIR at every time point that they have
        psir_stir_present = True
        for time_point in sub_time_points:
            if 'PSIR' not in contrast_subject[time_point] or 'STIR' not in contrast_subject[time_point]:
                psir_stir_present = False
        if psir_stir_present:
            subjects_PSIR_STIR.append(subject)
        # Subjects with neither PSIR nor STIR at every time point that they have
        psir_stir_not_present = True
        for time_point in sub_time_points:
            if 'PSIR' in contrast_subject[time_point] or 'STIR' in contrast_subject[time_point]:
                psir_stir_not_present = False
        if psir_stir_not_present:
            subjects_no_PSIR_no_STIR.append(subject)
        # Subjects with neither PSIR nor STIR at least once
        psir_stir_not_present_once = False
        for time_point in sub_time_points:
            if 'PSIR' not in contrast_subject[time_point] and 'STIR' not in contrast_subject[time_point]:
                psir_stir_not_present_once = True
        if psir_stir_not_present_once:
            subjects_no_PSIR_no_STIR_once.append(subject)

    # Print the results
    print("Total number of subjects: {}".format(len(subjects)))
    print("Number of subjects with all time points: {}".format(len(subjects_all_time_points)))
    print("Number of subjects with no M0: {}".format(len(subjects_no_M0)))
    print("Number of subjects with no M12: {}".format(len(subjects_no_M12)))
    print("Number of subjects with PSIR at every time point they have: {}".format(len(subjects_PSIR)))
    print("Number of subjects with STIR at every time point they have: {}".format(len(subjects_STIR)))
    print("Number of subjects with PSIR and STIR at every time point they have: {}".format(len(subjects_PSIR_STIR)))
    print("Number of subjects with no PSIR and no STIR at every time point they have: {}".format(len(subjects_no_PSIR_no_STIR)))
    print("Number of subjects with no PSIR and no STIR at least once: {}".format(len(subjects_no_PSIR_no_STIR_once)))
    print("-----------------------------------")

    # Save the subjects_info dictionary in a json file
    with open(os.path.join(output_path, 'subjects_info.json'), 'w') as fp:
        json.dump(subjects_info, fp, indent=4)

    # Write a txt file with the results
    with open(os.path.join(output_path, 'results.txt'), 'w') as f:
        f.write("Total number of subjects: {}\n".format(len(subjects)))
        f.write("Number of subjects with all time points: {}\n".format(len(subjects_all_time_points)))
        f.write("Number of subjects with no M0: {}\n".format(len(subjects_no_M0)))
        f.write("Number of subjects with no M12: {}\n".format(len(subjects_no_M12)))
        f.write("Number of subjects with PSIR at every time point they have: {}\n".format(len(subjects_PSIR)))
        f.write("Number of subjects with STIR at every time point they have: {}\n".format(len(subjects_STIR)))
        f.write("Number of subjects with PSIR and STIR at every time point they have: {}\n".format(len(subjects_PSIR_STIR)))
        f.write("Number of subjects with no PSIR and no STIR at every time point they have: {}\n".format(len(subjects_no_PSIR_no_STIR)))
        f.write("Number of subjects with no PSIR and no STIR at least once: {}\n".format(len(subjects_no_PSIR_no_STIR_once)))
        f.write("-----------------------------------\n")
        f.write("Subjects with all time points:\n")
        for subject in subjects_all_time_points:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")
        f.write("Subjects with no M0:\n")
        for subject in subjects_no_M0:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")
        f.write("Subjects with no M12:\n")
        for subject in subjects_no_M12:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")
        f.write("Subjects with PSIR at every time point they have:\n")
        for subject in subjects_PSIR:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")
        f.write("Subjects with STIR at every time point they have:\n")
        for subject in subjects_STIR:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")
        f.write("Subjects with PSIR and STIR at every time point they have:\n")
        for subject in subjects_PSIR_STIR:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")
        f.write("Subjects with no PSIR and no STIR at every time point they have:\n")
        for subject in subjects_no_PSIR_no_STIR:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")
        f.write("Subjects with no PSIR and no STIR at least once:\n")
        for subject in subjects_no_PSIR_no_STIR_once:
            f.write("{}\n".format(subject))
        f.write("-----------------------------------\n")


if __name__ == '__main__':
    main()
```
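As a usage note, the script is invoked as in its docstring (python data_analysis.py -d /path/to/dataset -o /path/to/output). Below is a minimal sketch, not part of this PR, of how the resulting subjects_info.json could be queried afterwards; the output_path value is a placeholder, and the key names ('time_points', 'contrasts') follow the dictionary built in main() above.

```python
import json
import os

output_path = '/path/to/output'  # placeholder: same directory passed with -o/--output-path

# Load the per-subject summary written by data_analysis.py
with open(os.path.join(output_path, 'subjects_info.json')) as fp:
    subjects_info = json.load(fp)

# Example query: subjects that have both ses-M0 and ses-M12, with a PSIR scan at M0
both_sessions_with_psir = [
    sub for sub, info in subjects_info.items()
    if {'ses-M0', 'ses-M12'} <= set(info['time_points'])
    and 'PSIR' in info['contrasts'].get('ses-M0', [])
]
print(len(both_sessions_with_psir))
```

Queries of this kind reproduce the counts written to results.txt and can help cross-check contrast availability for a given session.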