diff --git a/README.md b/README.md index 8d052ce..129478b 100644 --- a/README.md +++ b/README.md @@ -20,7 +20,9 @@ This package automates the conversion of EEG recordings (xdf files) to BIDS (Bra git clone https://github.com/s-ccs/LSLAutoBIDS.git ``` ### **Step 2: Install the package** +Go to the cloned directory and install the package using pip. ``` +cd LSLAutoBIDS pip3 install lslautobids ``` It is advised to install the package in a separate environment (e.g. using `conda` or `virtualenv`). @@ -39,13 +41,13 @@ The package requires the recorded XDF data to be organized in a specific directo - The `projects` root location is the root directory where all the eeg raw recordings (say `.xdf` files) are stored e.g. `projects/sub-A/ses-001/eeg/sub-A_ses-001_task-foo.xdf`. -- The (optional) `project_stimulus` root location is the directory where the experiments (e.g `.py`, `.oxexp`) and behavioral files (e.g. eye-tracking recordings, labnotebook, participant forms, etc ) are stored. +- The (optional) `project_other` root location is the directory where the experiments (e.g `.py`, `.oxexp`) and behavioral files (e.g. eye-tracking recordings, labnotebook, participant forms, etc ) are stored. - The `bids` root location is the directory where the converted BIDS data is stored, along with source data and code files which we want to version control using `Datalad`. > [!IMPORTANT] > Please follow the BIDS data organization guidelines for storing the neuroimaging data for running this package. The BIDS conversion guidelines are based on the recommended directory/files structure. You only can change the location of the root directories according to your preference. You must also strictly follow the naming convention for the project and subject subdirectories. -Here you will find the recommended directory structure for storing the project data (recorded, stimulus and converted data) in the [data_organization](docs/data_organization.md) file. +Here you will find the recommended directory structure for storing the project data (recorded, other and converted data) in the [data_organization](docs/data_organization.md) file. ### **Step 4: Generate the configuration files** diff --git a/docs/about.md b/docs/about.md deleted file mode 100644 index 26d8084..0000000 --- a/docs/about.md +++ /dev/null @@ -1,3 +0,0 @@ - Write what the project is all about here. - - diff --git a/docs/data_organization.md b/docs/data_organization.md index 4fe75eb..29072aa 100644 --- a/docs/data_organization.md +++ b/docs/data_organization.md @@ -1,6 +1,6 @@ # How the data is organized -In this project, we are using a sample xdf file along with the corresponding stimulus files to demonstrate how the data inside the `projectname` folder is organized. This data should be organized in a specific way: +In this project, we are using a sample xdf file along with the corresponding other files to demonstrate how the data inside the `projectname` folder is organized. This data should be organized in a specific way: ### Recommended Project Organization Structure @@ -8,7 +8,7 @@ For convenience, we have provided a recommended project organization structure > [!IMPORTANT] -> The recommended directory structure is not self generated. The user needs to create the directories and store the recorded and stimulus data in them before running the conversion. +> The recommended directory structure is not self generated. The user needs to create the directories and store the recorded and others data in them before running the conversion. 
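Since the structure is not generated automatically, the skeleton described below can be created with a few lines of Python before the first recording. This is a minimal sketch for illustration only; the project name, subject, and session are placeholders and the `data` root is assumed to sit in the current working directory:

```python
from pathlib import Path

# Hypothetical values -- adjust to your own project layout
data_root = Path("data")
project = "projectname1"
subject, session = "sub-001", "ses-001"

# The three root directories (the converted tree under bids/ is filled in by the tool)
for root in ("projects", "project_other", "bids"):
    (data_root / root / project).mkdir(parents=True, exist_ok=True)

# Per-subject folders for the raw recordings and the other/behavioral files
(data_root / "projects" / project / subject / session / "eeg").mkdir(parents=True, exist_ok=True)
(data_root / "project_other" / project / "data" / subject / session / "beh").mkdir(parents=True, exist_ok=True)
(data_root / "project_other" / project / "experiment").mkdir(parents=True, exist_ok=True)
```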
The dataset (both recorded and converted) is stored in the parent `data` directory. The `data` directory has three subdirectories under which the entire project is stored. The recommended directory structure is as follows: ``` @@ -16,7 +16,7 @@ data ├── bids # Converted BIDS data ├── projectname1 ├── projectname2 -├── project_stimulus # Experimental/Behavioral files +├── project_other # Experimental/Behavioral files ├── projectname1 ├── projectname2 ├── projects @@ -26,7 +26,7 @@ data ``` -Here `./data/projects/`, `./data/project_stimulus/`, `./data/bids/` are the root project directories. Each of this root directories will have a project name directory inside it and each project directory will have a subdirectory for each subject. +Here `./data/projects/`, `./data/project_other/`, `./data/bids/` are the root project directories. Each of this root directories will have a project name directory inside it and each project directory will have a subdirectory for each subject. ## Projects Folder @@ -52,7 +52,7 @@ Filename Convention for the raw data files : - **tasklabel** - `duration, mscoco, ...` - **runlabel** - `001, 002, 003, ...` (need to be an integer) -## Project Stimulus Folder +## Project Other Folder This folder contains the experimental and behavioral files which we also store in the dataverse. The folder structure is should as follows: @@ -66,7 +66,7 @@ This folder contains the experimental and behavioral files which we also store i └── behavioral_files((lab notebook, CSV, EDF file, etc)) - **projectname** - any descriptive name for the project -- **experiment** - contains the experimental files for the project. Eg: showStimulus.m, showStimulus.py +- **experiment** - contains the experimental files for the project. Eg: showOther.m, showOther.py - **data** - contains the behavioral files for the corresponding subject. Eg: experimentalParameters.csv, eyetrackingdata.edf, results.tsv. @@ -74,7 +74,7 @@ You can get the filename convention for the data files [here](https://bids-stand ## BIDS Folder -This folder contains the converted BIDS data files and other files we want to version control using `Datalad`. Since we are storing the entire dataset in the dataverse, we also store the raw xdf files and the associated stimulus/behavioral files in the dataverse. The folder structure is as follows: +This folder contains the converted BIDS data files and other files we want to version control using `Datalad`. Since we are storing the entire dataset in the dataverse, we also store the raw xdf files and the associated other/behavioral files in the dataverse. The folder structure is as follows: ``` └── bids └──projectname/ @@ -90,7 +90,7 @@ This folder contains the converted BIDS data files and other files we want to ve ├── sub-001_ses-001_task-Duration_run-001_eeg.eeg ......... 
└── beh - └──behavioral files + └──behavioral files (other files) └── misc └── experimental files (This needs to stored in zip format) └── sourcedata diff --git a/docs/developers_documentation.md b/docs/developers_documentation.md index 089782a..882856a 100644 --- a/docs/developers_documentation.md +++ b/docs/developers_documentation.md @@ -5,7 +5,7 @@ LSLAutoBIDS is a Python tool series designed to automate the following tasks sequentially: - Convert recorded XDF files to BIDS format -- Integrate the EEG data with non-EEG data (e.g., behavioral, stimulus) for the complete dataset +- Integrate the EEG data with non-EEG data (e.g., behavioral, other) for the complete dataset - Datalad integration for version control for the integrated dataset - Upload the dataset to Dataverse - Provide a command-line interface for cloning, configuring, and running the conversion process @@ -17,7 +17,7 @@ LSLAutoBIDS is a Python tool series designed to automate the following tasks seq - DataLad integration for version control - Dataverse integration for data sharing - Configurable project management -- Support for stimulus and behavioral data in addition to EEG data +- Support for behavioral data (non eeg files) in addition to EEG data - Comprehensive logging and validation for BIDS compliance @@ -55,6 +55,9 @@ LSLAutoBIDS is a Python tool series designed to automate the following tasks seq - [2. Logging Configuration (`config_logger.py`)](#2-logging-configuration-config_loggerpy) - [3. Utility Functions (`utils.py`)](#3-utility-functions-utilspy) +- [Testing](#testing) + - [Running Tests](#running-tests) + ## Architecture - TODO @@ -84,7 +87,7 @@ The configuration system manages dataversse and project-specific settings using #### 1. Dataverse and Project Root Configuration (`gen_dv_config.py`) This module generates a global configuration file for Dataverse and project root directories. This is a one-time setup per system. This file is stored in `~/.config/lslautobids/autobids_config.yaml` and contains: -- Paths for BIDS, projects, and stimulus directories : This allows users to specify where their eeg data, stimulus data, and converted BIDS data are stored on their system. This paths should be relative to the home/users directory of your system and string format. +- Paths for BIDS, projects, and project_other directories : This allows users to specify where their eeg data, behavioral data, and converted BIDS data are stored on their system. This paths should be relative to the home/users directory of your system and string format. - Dataverse connection details: Base URL, API key, and parent dataverse name for uploading datasets. Base URL is the URL of the dataverse server (e.g. https://darus.uni-stuttgart.de), API key is your personal API token for authentication (found in your dataverse account settings), and parent dataverse name is the name of the dataverse under which datasets will be created (this can be found in the URL when you are in the dataverses page just after 'dataverse/'). For example, if the URL is `https://darus.uni-stuttgart.de/dataverse/simtech_pn7_computational_cognitive_science`, then the parent dataverse name is `simtech_pn7_computational_cognitive_science`. @@ -189,7 +192,7 @@ The pipeline is designed to ensure: 2. EEG recordings are converted to BIDS format using MNE and validated against the BIDS standard. -3. Behavioral and experimental metadata (also called stimulus files in general) are included and checked against project expectations. +3. 
Behavioral and experimental metadata (also called "other files" in the context of this project) are included and checked against project expectations.

4. Project metadata is populated (dataset_description.json). This is required as part of the BIDS standard.

@@ -197,7 +200,7 @@ The pipeline is designed to ensure:

#### 1. Entry Point (`bids_process_and_upload()`)

-- Reads project configuration (_config.toml) to check if a stimulus computer was used. (stimulusComputerUsed: true)
+- Reads the project configuration (_config.toml) to check whether a separate (non-EEG) computer was used (otherFilesUsed: true).

- Iterates over each processed file and extracts identifiers. For example, for a file named `sub-001_ses-001_task-Default_run-001_eeg.xdf`, it extracts:

@@ -246,7 +249,7 @@ This function handles the core conversion of a XDF files to BIDS format and cons

- Load `.xdf` with `create_raw_xdf()`. (See section).

- - Apply anonymization (daysback_min + anonymization_number from project TOML config).
+ - Apply anonymization (daysback_min + anonymizationNumber from project TOML config).

- Write EEG data into BIDS folder via `write_raw_bids()`.

@@ -261,7 +264,7 @@ This function handles the core conversion of a XDF files to BIDS format and cons

- 0: BIDS Conversion done but validation failure

#### 3. Copy Source Files (`copy_source_files_to_bids()`)
-This function ensures that the original source files (EEG and stimulus/behavioral files) are also a part our dataset. These files can't be directly converted to BIDS format but we give the user the option to include them in the BIDS directory structure in a pseudo-BIDS format for completeness.
+This function ensures that the original source files (EEG and other/behavioral files) are also part of our dataset. These files can't be directly converted to BIDS format, but we give the user the option to include them in the BIDS directory structure in a pseudo-BIDS format for completeness.

- Copies the .xdf into the following structure: `/sourcedata/sub-XXX/ses-YYY/sub-XXX_ses-YYY_task-Name_run-ZZZ_eeg.xdf`

- If a file already exists, logs a message and skips copying.

-If stimulusComputerUsed=True in project config file:
+If otherFilesUsed=true in the project config file:

1. Behavioral files are copied via `_copy_behavioral_files()`.

- - Validates required files against TOML config (`ExpectedStimulusFiles`). In this config we add the the extensions of the expected stimulus files. For example, in our testproject we use EyeList 1000 Plus eye tracker which generates .edf and .csv files. So we add these extensions as required stimulus files. We also have mandatory labnotebook and participant info files in .tsv format.
+ - Validates required files against the TOML config (`OtherFilesInfo`). In this config we list the extensions of the expected other files. For example, in our test project we use an EyeLink 1000 Plus eye tracker, which generates .edf and .csv files, so we add these extensions as required other files. We also have mandatory labnotebook and participant info files in .tsv format.

- Renames files to include sub-XXX_ses-YYY_ prefix if missing.

- - Deletes the other files in the stimulus directory that are not listed in `ExpectedStimulusFiles` in the project config file. It doesn"t delete from the source directory, only from out BIDS dataset.
+ - Deletes the other files in the project_other directory that are not listed in `OtherFilesInfo` in the project config file. It doesn't delete them from the source directory, only from our BIDS dataset.

2. Experimental files are copied via `_copy_experiment_files()`.

- Compresses into experiment.tar.gz.

- Removes the uncompressed folder.

-There is a flag in the `lslautobids run` command called `--redo_stim_pc` which when specified, forces overwriting of existing stimulus and experiment files in the BIDS dataset. This is useful if there are updates or corrections to the stimulus/behavioral data that need to be reflected in the BIDS dataset.
+There is a flag in the `lslautobids run` command called `--redo_other_pc` which, when specified, forces overwriting of existing other and experiment files in the BIDS dataset. This is useful if there are updates or corrections to the other/behavioral data that need to be reflected in the BIDS dataset.

#### 4. Create Raw XDF (`create_raw_xdf()`)
This function reads the XDF file and creates an MNE Raw object. It performs the following steps:
@@ -364,7 +367,7 @@ This module handles the creation of a new dataset in Dataverse using the `pyData
#### 2. Linking DataLad to Dataverse (`link_datalad_dataverse.py`)
This module links the local DataLad dataset to the remote Dataverse dataset as a sibling. The function performs the following steps:
1. It first checks if the Dataverse is already created in the previous runs or it is just created in the current run (flag==0). If flag==0, it proceeds to link the DataLad dataset to Dataverse.
-2. It runs the command `datalad add-sibling-dataverse dataverse_base_url doi_id`. This command adds the Dataverse as a sibling to the local DataLad dataset, allowing for synchronization and data management between the two. For lslautobids, we currently only allow to deposit data to Dataverse. In future version, we shall also add user controlled options for adding other siblings like github, gitlab, etc.
+2. It runs the command `datalad add-sibling-dataverse dataverse_base_url doi_id`. This command adds the Dataverse as a sibling to the local DataLad dataset, allowing for synchronization and data management between the two. For lslautobids, we currently only support depositing data to Dataverse. In a future version, we will also add user-controlled options for adding other siblings such as GitHub, GitLab, OpenNeuro, AWS, etc.

We chose Dataverse as it serves as both a repository and a data sharing platform, making it suitable for our needs. It also integrates well with DataLad and allows sharing datasets with collaborators or the public.

@@ -402,3 +405,27 @@ This module contains various utility functions used across the application.
3. `write_toml_file` : Writes a dictionary to a TOML file.

+## Testing
+
+The testing framework uses `pytest` to validate the functionality of the core components.
+
+- The tests are located in the `tests/` directory and cover various modules including configuration generation, file processing, BIDS conversion, DataLad integration, and Dataverse interaction. (Work in progress)
+
+- The test directory contains:
+  - `test_utils` : Directory containing utility functions needed across multiple test files.
+  - `testcases` : Directory containing all the tests, each in its own `test_` directory (an illustrative example follows below).
+  - Each `test_` directory contains a `data` folder with sample data for that test and a `test_.py` file with the actual test cases.
+  - `run_all_tests.py` : A script to run all the tests in the `testcases` directory sequentially.
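For illustration, a minimal test case following this layout could look like the sketch below. The `test_example` name and its assertions are hypothetical, not tests that exist in the repository; they only mirror the per-test `data` folder and the three roots resolved by `tests/test_utils/path_config.py`:

```python
# tests/testcases/test_example/test_example.py -- hypothetical layout example
import os

# Each test_ directory ships its own sample data in a local "data" folder
DATA_DIR = os.path.join(os.path.dirname(__file__), "data")


def test_sample_data_folder_exists():
    assert os.path.isdir(DATA_DIR)


def test_sample_data_has_project_roots():
    # Mirrors tests/test_utils/path_config.py, which resolves the three roots
    # (projects, bids, project_other) relative to the test's data folder.
    for root in ("projects", "bids", "project_other"):
        assert os.path.isdir(os.path.join(DATA_DIR, root))
```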
+
+Tests will be added continuously as new features are added and existing features are updated.
+
+### Running Tests
+
+To run the tests, navigate to the `tests/` directory and execute:
+`python tests/run_all_tests.py`
+
+These tests ensure that each component functions as expected and that the overall pipeline works seamlessly. These tests will also be triggered automatically on each push or PR to the main repository using GitHub Actions.
+
+## Miscellaneous Points
+- To date, only EEG data is supported for BIDS conversion; other modalities such as eye-tracking are not yet supported in the BIDS format. Hence, LSLAutoBIDS relies on semi-BIDS data structures for those data and uses user-definable regular expressions to match expected data files. A planned feature is to give users more flexibility, especially in naming / sorting non-standard files. Currently, the user can only specify the expected file extensions for other/behavioral data; matching files are automatically renamed to include the sub-XXX_ses-YYY_ prefix if it is missing and are copied into a pseudo-BIDS folder structure such as `/sourcedata/sub-XXX/ses-YYY/`, `/misc/experiment.tar.gz`, etc.
+
diff --git a/docs/faq.md b/docs/faq.md
index 2adbd8d..68a0608 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -1,31 +1,46 @@
-# FAQ questions
+### FAQ questions
+These are some frequently asked questions regarding the LSLAutoBIDS tool and workflow.

__1. What would be the process a user goes through if they collect additional data and want to add them to the existing dataset? How automatic will this be?__

- It is possible to re-run LSLAutoBIDS which would capture additional data from new subjects. Generally the idea is to run LSLAutoBIDS after each subject, then if there is an accidental overwrite we can still recover it due to versioning.

__2. What Datalad commands are used currently in the workflow?__

-- We use datalad save to add and version the current state of the dataset and datalad push to push the current state to the remote repository. We do not use the datalad run and datalad rerun capabilities as of now in our tool.
+- We use `datalad save` to add and version the current state of the dataset and `datalad push` to push the current state to the remote repository. We do not use the `datalad run` and `datalad rerun` capabilities as of now in our tool.
+Additionally, users can later use `datalad clone` to clone the repository and `datalad get` to get the actual data files (as they are stored in git-annex).

__3. How automated is addition / deletion of a sample (e.g. new subject)?__

-- Right now, adding a new sample requires calling the lslautobids run command, which could be run silently as well (e.g. via a regular cronjob). Deleting a sample/subject is not currently supported by the tool, but could be performed via Datalad. This is by design.
+- Right now, adding a new sample requires calling the `lslautobids run` command, which could be run silently as well (e.g. via a regular cronjob). Deleting a sample/subject is not currently supported by the tool, but could be performed via Datalad. This is by design.

__4. Do you generate a separate DOI for every dataset version?__

- No, we currently have the same DOI for the entire dataset, for all versions. Before publishing, we version the dataset via datalad using the same DOI, as Dataverse only supports versioning upon making the dataset public.

__5. Who controls the data upload process?__

-- There is a user prompt asking the experimenter if they want to upload the subject recording immediately when we run the lslautobids run command. We can also use the --yes flag of the lslautobids run command to force yes user input for all the user prompts throughout the run.
+- There is a user prompt asking the experimenter if they want to upload the subject recording immediately when we run the `lslautobids run` command. We can also use the `--yes` flag of the `lslautobids run` command to force yes user input for all the user prompts throughout the run.

__6. Can you upload a subset of files?__

-- Yes, we have configurations in the project_config.toml file where the experimenter can specify to exclude certain subjects, certain tasks, and only exclude private stimulus files.
+- Yes, we have configurations in the project_config.toml file where the experimenter can specify to exclude certain subjects, certain tasks, and only exclude private project_other files.

__7. Can you upload to any other portals apart from Dataverse?__

- It is not yet implemented as a choice but rather hard coded, but as long as a dataverse sibling is supported, many portals could be used (dataverse, openneuro, aws, ...). Currently, only Dataverse as a sibling is supported by our tool.

__8. How do you handle data licensing?__

- Data license depends on the repository and can typically be chosen by the user upon making the dataset publicly available (or a data user agreement form can be employed). That being said, at OpenNeuro data is typically licensed CC0.

+
__9.
Troubleshooting: Datalad push to Dataverse command failed.__ - You might encounter errors such as: 'GitRepo' object has no attribute 'call_annex' and 'Datalad push command failed', this is because `git-annex` is required but not a Python package, and it needs to be installed sepearatly, run: `datalad-installer git-annex` after installing requirements. \ No newline at end of file diff --git a/docs/tutorial.md b/docs/tutorial.md index 2cf90a5..ed239f1 100644 --- a/docs/tutorial.md +++ b/docs/tutorial.md @@ -22,10 +22,10 @@ datalad-installer git-annex 3. Download the dummy dataset for testing in the LSLAutoBIDS root directory - ([tutorial_sample_dataset](https://files.de-1.osf.io/v1/resources/wz7g9/providers/osfstorage/68c3c636e33eca3b0feffa2c/?zip=)) -The dataset has a sample project called "test-project" which contains an EEG recording file in the projects directory, a sample eyetracking recording in the `project_stimulus/data` directory, and a dummy experimental code file in the `project_stimulus/experiment` directory. +The dataset has a sample project called "test-project" which contains an EEG recording file in the projects directory, a sample eyetracking recording in the `project_other/data` directory, and a dummy experimental code file in the `project_other/experiment` directory. ``` sample_data -└── project_stimulus +└── project_other └── test-project ├── data └── sub-999 @@ -61,7 +61,7 @@ Configuration file template: ```yaml "BIDS_ROOT": "# relative to home/users directory: LSLAutoBIDS/sample_data/bids/", "PROJECT_ROOT" : "# relative to home/users: LSLAutoBIDS/sample_data/projects/", - "PROJECT_STIM_ROOT" : "# path relative to home/users: LSLAutoBIDS/sample_data/project_stimulus/", + "PROJECT_OTHER_ROOT" : "# path relative to home/users: LSLAutoBIDS/sample_data/project_other/", "BASE_URL": "https://darus.uni-stuttgart.de", # The base URL for the service. "API_KEY": "# Paste your dataverse API token here", # Your API token for authentication. "PARENT_DATAVERSE_NAME": "simtech_pn7_computational_cognitive_science" # The name of the dataverse to which datasets will be uploaded. When you in the dataverses page , you can see this name in the URL after 'dataverse/'. @@ -77,7 +77,7 @@ lslautobids gen-proj-config --project test-project This will create a test-project_config.toml file in the project root directory. -> [!NOTE]: _For the rest of the tutorial, we are assuming that we place the downloaded sample_data in the root of the cloned LSLAutoBIDS repository and `LSLAutoBIDS` is cloned in the `home/users/` folder. In this case, the projects root will be `LSLAutoBIDS/sample_data/projects/` and so on for project_stimulus and bids._ +> [!NOTE]: _For the rest of the tutorial, we are assuming that we place the downloaded sample_data in the root of the cloned LSLAutoBIDS repository and `LSLAutoBIDS` is cloned in the `home/users/` folder. In this case, the projects root will be `LSLAutoBIDS/sample_data/projects/` and so on for project_other and bids._ Fill in the details in the configuration file `LSLAutoBIDS/sample_data/projects/test-project/test-project_config.toml` file. @@ -95,13 +95,15 @@ In this example, we will see how to use the LSLAutoBIDS package to: 4. Upload the dataset to a Dataverse repository for public access. ### How to run the example? -1. Check if the toml configuration file `LSLAutoBIDS/sample_data/projects/test-project/test-project_config.toml` is filled in with the correct details, specially the stimulusComputerUsed and expectedFiles fields. 
For this example, we are using eye tracking data as a behavioral file, thus the stimulusComputerUsed field should be set to true and the expectedFiles field should contain the expected stimulus file extensions.
+
+1. Check if the toml configuration file `LSLAutoBIDS/sample_data/projects/test-project/test-project_config.toml` is filled in with the correct details, especially the `otherFilesUsed` and `expectedOtherFiles` fields. For this example, we are using eye tracking data as a behavioral file, thus the `otherFilesUsed` field should be set to true and the `expectedOtherFiles` field should contain the expected other (non-EEG) file extensions.
```toml
-    [Computers]
-    stimulusComputerUsed = true
+    [OtherFilesInfo]
+    otherFilesUsed = true

-    [ExpectedStimulusFiles]
-    expectedFiles = [".edf", ".csv", "_labnotebook.tsv", "_participantform.tsv"]
+    expectedOtherFiles = [".edf", ".csv", "_labnotebook.tsv", "_participantform.tsv"]
```
2. Run the conversion and upload command to convert the `xdf` files to BIDS format and upload the data to the dataverse.
```
@@ -110,18 +112,18 @@ lslautobids run -p test-project
1. This will convert the xdf file in the `LSLAutoBIDS/sample_data/projects/test-project/sub-999/ses-001/eeg/` directory to BIDS format and store it in the `LSLAutoBIDS/sample_data/bids/test-project/sub-999/ses-001/` directory.
2. You can check the logs in the log file `LSLAutoBIDS/sample_data/bids/test-project/code/test-project.log` file.
- 3. The source data i.e., the raw `xdf` file, behavioral data (e.g. eye-tracking recording) and the experimental code files in `PROJECT_STIM_ROOT/test-project/experiment` (all files e.g., `.py`, `.oxexp` will be compressed to a `tar.gz` archive) will be copied to the `LSLAutoBIDS/sample_data/bids/test-project/source_data/`, `LSLAutoBIDS/sample_data/bids/test-project/beh/` and `LSLAutoBIDS/sample_data/bids/test-project/misc/` directories respectively.
+ 3. The source data i.e., the raw `xdf` file, behavioral data (e.g. eye-tracking recording) and the experimental code files in `PROJECT_OTHER_ROOT/test-project/experiment` (all files e.g., `.py`, `.oxexp` will be compressed to a `tar.gz` archive) will be copied to the `LSLAutoBIDS/sample_data/bids/test-project/source_data/`, `LSLAutoBIDS/sample_data/bids/test-project/beh/` and `LSLAutoBIDS/sample_data/bids/test-project/misc/` directories respectively.
## Example Case 2
-In this case, the experimenter wants to publish **only the raw EEG recordings and the converted EEG files**, but wants to **exclude the stimulus files and experiment code**.
+In this case, the experimenter wants to publish **only the raw EEG recordings and the converted EEG files**, but wants to **exclude the other files and experiment code**.
### How to run the example?
-1. The workflow is almost identical to Example Case 1, except **stimulus and experiment files are excluded**.
+1. The workflow is almost identical to Example Case 1, except **other and experiment files are excluded**.
2.
Check if the toml configuration file `LSLAutoBIDS/sample_data/projects/test-project/test-project_config.toml` is filled in with the correct details. ```toml - [Computers] - stimulusComputerUsed = False + [otherFilesInfo] + expectedOtherFiles = False ``` 3. Run the conversion and upload command to convert the `xdf` files to BIDS format and upload the data to the dataverse. ``` diff --git a/lslautobids/config_globals.py b/lslautobids/config_globals.py index 719050f..0c0c016 100644 --- a/lslautobids/config_globals.py +++ b/lslautobids/config_globals.py @@ -12,7 +12,7 @@ def __init__(self): "yes": False, "redo_bids_conversion": False, "reupload": False, - "redo_stim_pc": False, + "redo_other_pc": False, } def init(self, args): @@ -63,10 +63,10 @@ def parse_yaml_file(yaml_file): if config: project_root = os.path.join(os.path.expanduser("~"), config["PROJECT_ROOT"]) bids_root = os.path.join(os.path.expanduser("~"), config["BIDS_ROOT"]) - project_stim_root = os.path.join(os.path.expanduser("~"), config["PROJECT_STIM_ROOT"]) + project_other_root = os.path.join(os.path.expanduser("~"), config["PROJECT_OTHER_ROOT"]) api_key = config.get("API_KEY", "") dataverse_base_url = config.get("BASE_URL", "") parent_dataverse_name = config.get("PARENT_DATAVERSE_NAME", "") else: - project_root = bids_root = project_stim_root = api_key = dataverse_base_url = parent_dataverse_name = None + project_root = bids_root = project_other_root = api_key = dataverse_base_url = parent_dataverse_name = None diff --git a/lslautobids/convert_to_bids_and_upload.py b/lslautobids/convert_to_bids_and_upload.py index d7c040b..46bf120 100644 --- a/lslautobids/convert_to_bids_and_upload.py +++ b/lslautobids/convert_to_bids_and_upload.py @@ -14,7 +14,7 @@ from lslautobids.datalad_create import create_and_add_files_to_datalad_dataset from lslautobids.link_datalad_dataverse import add_sibling_dataverse_in_folder from lslautobids.upload_to_dataverse import push_files_to_dataverse -from lslautobids.config_globals import cli_args, project_root, bids_root, project_stim_root +from lslautobids.config_globals import cli_args, project_root, bids_root, project_other_root from lslautobids.utils import get_user_input, read_toml_file import json @@ -42,16 +42,16 @@ def get_the_streams(self, xdf_path): return stream_names,streams - def copy_source_files_to_bids(self,xdf_file,subject_id,session_id,stim, logger): + def copy_source_files_to_bids(self,xdf_file,subject_id,session_id,other, logger): """ - Copy raw .xdf and optionally stimulus data to BIDS folder. + Copy raw .xdf and optionally other (non-eeg) data to BIDS folder. Args: xdf_file (str): Full path to the .xdf file. subject_id (str): Subject identifier. session_id (str): Session identifier. - stim (bool): Whether to copy stimulus/behavioral files as well. + other (bool): Whether to copy other/behavioral files as well. 
""" ### COPY THE SOURCE FILES TO BIDS (recorded xdf file) ### project_name = cli_args.project_name @@ -79,7 +79,7 @@ def copy_source_files_to_bids(self,xdf_file,subject_id,session_id,stim, logger): logger.info(f"Copied {xdf_file} to {dest_file}") - if stim: + if other: ### COPY THE BEHAVIOURAL FILES TO BIDS ### self._copy_behavioral_files(file_name_without_ext,subject_id, session_id, logger) @@ -102,7 +102,7 @@ def _copy_behavioral_files(self, file_base, subject_id, session_id, logger): project_name = cli_args.project_name logger.info("Copying the behavioral files to BIDS...") # get the source path - behavioural_path = os.path.join(project_stim_root,project_name,'data', subject_id,session_id,'beh') + behavioural_path = os.path.join(project_other_root,project_name,'data', subject_id,session_id,'beh') # get the destination path dest_dir = os.path.join(bids_root , project_name, subject_id , session_id , 'beh') #check if the directory exists @@ -135,7 +135,7 @@ def extract_prefix(filename): processed_files.append(renamed_file) dest_file = os.path.join(dest_dir, renamed_file) - if cli_args.redo_stim_pc: + if cli_args.redo_other_pc: logger.info(f"Copying (overwriting if needed) {file} to {dest_file}") shutil.copy(original_path, dest_file) else: @@ -174,7 +174,7 @@ def _check_required_behavioral_files(self, files, prefix, logger): toml_path = os.path.join(project_root, cli_args.project_name, cli_args.project_name + '_config.toml') data = read_toml_file(toml_path) - required_files = data["ExpectedStimulusFiles"]["expectedFiles"] + required_files = data["OtherFilesInfo"]["expectedOtherFiles"] for required_file in required_files: @@ -204,15 +204,15 @@ def _copy_experiment_files(self, subject_id, session_id, logger): if os.path.exists(zip_file_path): logger.info("Experiment tar.gz already exists. Skipping.") - if not cli_args.redo_stim_pc: + if not cli_args.redo_other_pc: logger.info("Skipping experiment file copy ") return else: - logger.info("Overwriting existing experiment archive due to --redo_stim_pc flag.") + logger.info("Overwriting existing experiment archive due to --redo_other_pc flag.") # get the source path - experiments_path = os.path.join(project_stim_root,project_name,'experiment') + experiments_path = os.path.join(project_other_root,project_name,'experiment') # get the destination path dest_dir = os.path.join(bids_root , project_name, subject_id,session_id, "misc",'experiment') @@ -286,7 +286,7 @@ def create_raw_xdf(self, xdf_path,streams, logger): return raw - def convert_to_bids(self, xdf_path,subject_id,session_id, run_id, task_id,stim, logger): + def convert_to_bids(self, xdf_path,subject_id,session_id, run_id, task_id,other, logger): """ Convert an XDF file to BIDS format. @@ -295,7 +295,7 @@ def convert_to_bids(self, xdf_path,subject_id,session_id, run_id, task_id,stim, xdf_path (str): Path to the .xdf file. subject_id (str): Subject identifier. session_id (str): Session identifier. - stim (bool): Whether to copy stimulus/behavioral files as well. + other (bool): Whether to copy other/behavioral files as well. Returns: int: 1 if conversion is successful, 2 if the file already exists, 0 if validation fails. 
@@ -304,7 +304,7 @@ def convert_to_bids(self, xdf_path,subject_id,session_id, run_id, task_id,stim, logger.info("Converting to BIDS...") # Copy the experiment, behavioural and raw recorded files to BIDS - self.copy_source_files_to_bids(xdf_path,subject_id,session_id,stim, logger) + self.copy_source_files_to_bids(xdf_path,subject_id,session_id,other, logger) # Get the bidspath for the raw file bids_path = BIDSPath(subject=subject_id[-3:], @@ -335,7 +335,7 @@ def convert_to_bids(self, xdf_path,subject_id,session_id, run_id, task_id,stim, # get the anonymization number from the toml file toml_path = os.path.join(project_root,project_name,project_name+'_config.toml') data = read_toml_file(toml_path) - anonymization_number = data["Subject"]["anonymization_number"] + anonymization_number = data["BidsConfig"]["anonymizationNumber"] # Write the raw data to BIDS in EDF format # BrainVision format weird memory issues @@ -409,9 +409,9 @@ def populate_dataset_description_json(self, project_name, logger): make_dataset_description( path = dataset_description_path, - name = data["Dataset"]["title"], - data_license = data["Dataset"]["License"], - authors = data["Authors"]["authors"], + name = data["DataverseDataset"]["title"], + data_license = data["DataverseDataset"]["license"], + authors = data["AuthorsInfo"]["authors"], overwrite= True, #necessary to overwrite the existing file created by mne_bids.write_raw_bids() ) @@ -430,10 +430,12 @@ def bids_process_and_upload(processed_files,logger): toml_path = os.path.join(project_root,project_name,project_name +'_config.toml') data = read_toml_file(toml_path) - stim = data["Computers"]["stimulusComputerUsed"] + other = data["OtherFilesInfo"]["expectedOtherFiles"] + + logger.info(f"OtherPC used : {other}") project_path = os.path.join(project_root,project_name) - logger.info("Initializing BIDS conversion and upload process...") + logger.info("Initializing BIDS conversion and upload process") # Initialize BIDS object bids = BIDS() for file in processed_files: @@ -445,7 +447,7 @@ def bids_process_and_upload(processed_files,logger): logger.info(f"Currently processing {subject_id}, {session_id}, {run_id} of task : {task_id}") xdf_path = os.path.join(project_path, subject_id, session_id, 'eeg',filename) - val = bids.convert_to_bids(xdf_path,subject_id,session_id, run_id, task_id, stim, logger) + val = bids.convert_to_bids(xdf_path,subject_id,session_id, run_id, task_id, other, logger) if val == 1: logger.info("BIDS Conversion Successful") diff --git a/lslautobids/dataverse_dataset_create.py b/lslautobids/dataverse_dataset_create.py index 56a6f22..c381305 100644 --- a/lslautobids/dataverse_dataset_create.py +++ b/lslautobids/dataverse_dataset_create.py @@ -63,7 +63,7 @@ def create_dataverse(project_name): toml_path = os.path.join(project_root,project_name,project_name+'_config.toml') data = read_toml_file(toml_path) - pid = data['Dataverse']['pid'] + pid = data['DataverseDataset']['pid'] if pid.lower() in pids_resp1: flag=1 @@ -72,11 +72,12 @@ def create_dataverse(project_name): else: logger.info('Creating the dataset........') resp = api.create_dataset(parent_dataverse_name, ds.json()) + logger.info(f"Full response: {resp.json()}") logger.info(f"Dataset created with PID: {resp.json()['data']['persistentId']}") - + # Modify field - data['Dataset']['title']=ds_title - data['Dataverse']['pid']= resp.json()['data']['persistentId'] + data['DataverseDataset']['title']=ds_title + data['DataverseDataset']['pid']= resp.json()['data']['persistentId'] 
#data['Dataverse']['dataset_id']= resp.json()['data']['id'] # To use the dump function, you need to open the file in 'write' mode diff --git a/lslautobids/gen_dv_config.py b/lslautobids/gen_dv_config.py index 5264950..7bd97c6 100644 --- a/lslautobids/gen_dv_config.py +++ b/lslautobids/gen_dv_config.py @@ -3,14 +3,14 @@ # The configuration file is a YAML file that contains the following fields: # BIDS_ROOT: Set up the BIDS output path - it is referenced from the home directory of your PC. # For example, if your home directory is /home/username and you have a /home/username/data/bids directory where you have the -# BIDS data in the home directory then the BIDS_ROOT path will be 'data/bids/' -# PROJECT_ROOT: This is the actual path to the directory containing xdf files -# PROJECT_STIM_ROOT: This is the actual path to the directory containing the stimulus files +# BIDS data, then the BIDS_ROOT path will be 'data/bids/' +# PROJECT_ROOT: This is the path to the directory containing xdf files +# PROJECT_OTHER_ROOT: This is the path to the directory containing the non-eeg files # BASE_URL: The base URL for the dataverse service. # API_KEY: Your API token for authentication - you can get it from the dataverse service. -# PARENT_DATAVERSE_NAME: The name of the program or service. +# PARENT_DATAVERSE_NAME: The name of the parent dataverse under which the datasets will be created. It is usually in the url of the dataverse. # -# Important: all paths + API_KEY need to be placed in quotes! +# Important: all paths + API_KEY need to be placed in quotes! Diclaimer: Without quotes also works! """ @@ -24,7 +24,7 @@ "BIDS_ROOT": "# relative to home: workspace/projects/LSLAutoBIDS/data/bids/", "PROJECT_ROOT" : "# relative to home: workspace/projects/LSLAutoBIDS/data/projects/", - "PROJECT_STIM_ROOT" : "# path relative to home: workspace/projects/LSLAutoBIDS/data/project_stimulus/", + "PROJECT_OTHER_ROOT" : "# path relative to home: workspace/projects/LSLAutoBIDS/data/project_other/", "BASE_URL": "https://darus.uni-stuttgart.de", # The base URL for the service. "API_KEY": "# Paste your dataverse API token here", # Your API token for authentication. "PARENT_DATAVERSE_NAME": "simtech_pn7_computational_cognitive_science" # The name of the program or service. diff --git a/lslautobids/gen_project_config.py b/lslautobids/gen_project_config.py index fc7f23c..bf4136a 100644 --- a/lslautobids/gen_project_config.py +++ b/lslautobids/gen_project_config.py @@ -7,37 +7,30 @@ toml_content = """ # This is the project configuration file - This configuration can be customized for each project - [Authors] - authors = "John Doe, Lina Doe" - affiliation = "University of Stuttgart, Germany" + [AuthorsInfo] + authors = "John Doe, Lina Doe" # List of authors separated by commas + affiliation = "University of Stuttgart, University of Stuttgart" # Affiliation of the authors in the same order as authors + email = "john@gmail.com" # Contact email of the authors in the same order as authors - [AuthorsContact] - email = "john@gmail.com" + [DataverseDataset] + title = "Convert XDF to BIDS" # Title of the Dataverse dataset. This gets updated automatically by the project name. + datasetDescription = "This is a test project to set up the pipeline to convert XDF to BIDS." # Description of the dataset. This description will appear in the dataset.json file which then eventually gets displayed in the dataverse metadata + license = "MIT License" # License for the dataset, e.g. 
"CC0", "CC-BY-4.0", "ODC-By-1.0", "PDDL-1.0", "ODC-PDDL-1.0", "MIT License" + subject = ["Medicine, Health and Life Sciences","Engineering"] # List of subjects related to the dataset required for dataverse metadata + pid = '' # Persistent identifier for the dataset, e.g. DOI or Handle. This will be updated automatically after creating the dataset in dataverse. + + [OtherFilesInfo] + otherFilesUsed = true # Set to true if you want to include other (non-eeg-files) files (experiment files, other modalities like eye tracking) in the dataset, else false + expectedOtherFiles = [".edf", ".csv", "_labnotebook.tsv", "_participantform.tsv"] # List of expected other file extensions. Only the expected files will be copied to the beh folder in BIDS dataset. Give an empty list [] if you don't want any other files to be in the dataset. In this case only experiment files will be zipeed and copied to the misc folder in BIDS dataset. - [Dataset] - title = "Convert XDF to BIDS" - dataset_description = "This is a test project to set up the pipeline to convert XDF to BIDS." - License = "MIT License" - - [Computers] - stimulusComputerUsed = true - - [ExpectedStimulusFiles] - expectedFiles = [".edf", ".csv", "_labnotebook.tsv", "_participantform.tsv"] - - [IgnoreSubjects] - ignore_subjects = [] # List of subjects to ignore during the conversion - Leave empty to include all subjects. Changing this value will not delete already existing subjects. - - [Subject] - subject = ["Medicine, Health and Life Sciences","Engineering"] - anonymization_number = 123 + [FileSelection] + ignoreSubjects = ['sub-777'] # List of subjects to ignore during the conversion - Leave empty to include all subjects. Changing this value will not delete already existing subjects. + excludeTasks = ['sampletask'] # List of tasks to exclude from the conversion for all subjects - Leave empty to include all tasks. Changing this value will not delete already existing tasks. + + [BidsConfig] + anonymizationNumber = 123 # This is an anomization number that will be added to the recording date of all subjects. 
- [Tasks] - exclude_tasks = [] # List of tasks to exclude from the conversion - - [Dataverse] - pid = '12345' - """ + """ diff --git a/lslautobids/generate_dataset_json.py b/lslautobids/generate_dataset_json.py index 773f902..850e630 100644 --- a/lslautobids/generate_dataset_json.py +++ b/lslautobids/generate_dataset_json.py @@ -7,20 +7,20 @@ def update_json_data(json_data, toml_data): # Update title field - json_data['datasetVersion']['metadataBlocks']['citation']['fields'][0]['value'] = toml_data['Dataset']['title'] + json_data['datasetVersion']['metadataBlocks']['citation']['fields'][0]['value'] = toml_data['DataverseDataset']['title'] # Update author field - json_data['datasetVersion']['metadataBlocks']['citation']['fields'][1]['value'][0]['authorName']['value'] = toml_data['Authors']['authors'] + json_data['datasetVersion']['metadataBlocks']['citation']['fields'][1]['value'][0]['authorName']['value'] = toml_data['AuthorsInfo']['authors'] # Update dataset name and email field - json_data['datasetVersion']['metadataBlocks']['citation']['fields'][2]['value'][0]['datasetContactEmail']['value'] = toml_data['AuthorsContact']['email'] - json_data['datasetVersion']['metadataBlocks']['citation']['fields'][2]['value'][0]['datasetContactName']['value'] = toml_data['Authors']['authors'] + json_data['datasetVersion']['metadataBlocks']['citation']['fields'][2]['value'][0]['datasetContactEmail']['value'] = toml_data['AuthorsInfo']['email'] + json_data['datasetVersion']['metadataBlocks']['citation']['fields'][2]['value'][0]['datasetContactName']['value'] = toml_data['AuthorsInfo']['authors'] # Update dsDescription field - json_data['datasetVersion']['metadataBlocks']['citation']['fields'][3]['value'][0]['dsDescriptionValue']['value'] = toml_data['Dataset']['dataset_description'] + json_data['datasetVersion']['metadataBlocks']['citation']['fields'][3]['value'][0]['dsDescriptionValue']['value'] = toml_data['DataverseDataset']['datasetDescription'] # Update subject field - json_data['datasetVersion']['metadataBlocks']['citation']['fields'][4]['value'] = toml_data['Subject']['subject'] + json_data['datasetVersion']['metadataBlocks']['citation']['fields'][4]['value'] = toml_data['DataverseDataset']['subject'] return json_data def generate_json_file(project_name, logger): diff --git a/lslautobids/main.py b/lslautobids/main.py index 38c49bf..f2ae4a9 100644 --- a/lslautobids/main.py +++ b/lslautobids/main.py @@ -84,7 +84,7 @@ def update_project_config(project_path: str, project_name: str, logger): raise FileNotFoundError(f"Config file '{toml_path}' not found.") config = read_toml_file(toml_path) - config['Dataset']['title'] = project_name + config['DataverseDataset']['title'] = project_name logger.info("Updating project config with new project name...") write_toml_file(toml_path, config) @@ -99,7 +99,7 @@ def main(): argparser.add_argument('-p','--project_name', type=str, help='Enter the project name') argparser.add_argument('-y','--yes', action='store_true', help='Automatically answer yes to all user prompts') argparser.add_argument('--redo_bids_conversion', action='store_true', help='Redo the entire BIDS conversion process from scratch for the processed files') - argparser.add_argument('--redo_stim_pc', action='store_true', help='Redo the stim and physio processing for the processed files') + argparser.add_argument('--redo_other_pc', action='store_true', help='Redo the other and physio processing for the processed files') args = argparser.parse_args() # Store args globally @@ -136,14 +136,14 @@ def 
log_raw_line(log_path: str, message: str): # Initialize the logger AFTER cli_args is ready logger = get_logger(project_name) - # Check if the stim flag is set in the toml file - if args.redo_stim_pc: - # get the stimulus flag from the toml file + # Check if the other flag is set in the toml file + if args.redo_other_pc: + # get the other flag from the toml file toml_path = os.path.join(project_root, project_name, f"{project_name}_config.toml") data = read_toml_file(toml_path) - stim_flag = data['Computers']['stimulusComputerUsed'] - if not stim_flag: - logger.warning("The stimulus computer flag is not set in the config file. Please set it to True to proceed with stim redo process.") + other_flag = data['OtherFilesInfo']['expectedOtherFiles'] + if not other_flag: + logger.warning("The OtherFilesUsed flag is not set in the config file. Please set it to True to proceed with other redo process.") sys.exit(1) try: diff --git a/lslautobids/processing_new_files.py b/lslautobids/processing_new_files.py index 437d06d..6b69901 100644 --- a/lslautobids/processing_new_files.py +++ b/lslautobids/processing_new_files.py @@ -49,14 +49,15 @@ def process_new_files(file_status: List[str],logger) -> None: toml_path = os.path.join(project_path, project_name + '_config.toml') data = read_toml_file(toml_path) - existing_tasks = set(data.get('Tasks', {}).get('tasks', [])) + # existing_tasks = set(data.get('SubjectInfo', {}).get('allTasks', [])) - # Add only new tasks - updated_tasks = list(existing_tasks.union(tasks)) + # # Add only new tasks + # updated_tasks = list(existing_tasks.union(tasks)) - # Save updated task list back to the config - data['Tasks']['tasks'] = updated_tasks - write_toml_file(toml_path, data) + # # Save updated task list back to the config + # data['FileSelection']['allTasks'] = updated_tasks + + # write_toml_file(toml_path, data) # User prompt asking if we want to proceed to convert and upload if cli_args.yes: @@ -93,7 +94,7 @@ def check_for_new_files(path: str, ignore_subjects, logger) -> Union[List[str], Returns: Union[List[str], str]: List of new file paths or a 'no files' message. 
""" - + logger.info(f"Scanning for new files in {path}...") log_file_path = os.path.join(path, "last_run_log.txt") if cli_args.redo_bids_conversion: @@ -103,6 +104,7 @@ def check_for_new_files(path: str, ignore_subjects, logger) -> Union[List[str], with open(log_file_path, 'r') as f: last_run = f.read().strip() last_run_time = float(last_run) if last_run else 0.0 + logger.info(f"Last run time read from log: {last_run_time}") except FileNotFoundError: last_run_time = 0.0 @@ -142,11 +144,11 @@ def check_for_new_data(logger) -> None: toml_path = os.path.join(project_path, cli_args.project_name + '_config.toml') data = read_toml_file(toml_path) - ignore_subjects = data["IgnoreSubjects"]["ignore_subjects"] + ignore_subjects = data["FileSelection"]["ignoreSubjects"] logger.info("Ignored subjects: %s", ignore_subjects) file_status = check_for_new_files(project_path, ignore_subjects, logger) - ignore_tasks = data["Tasks"]["exclude_tasks"] + ignore_tasks = data["FileSelection"]["excludeTasks"] filtered_files = [ f for f in file_status @@ -158,7 +160,7 @@ def check_for_new_data(logger) -> None: input("Press Enter to exit...") raise RuntimeError("No new files found.") else: - logger.info(f"New files detected: {filtered_files}") + logger.info(f"New files found: {filtered_files}") process_new_files(filtered_files, logger) diff --git a/tests/run_all_tests.py b/tests/run_all_tests.py index 1c12230..c2cbc31 100644 --- a/tests/run_all_tests.py +++ b/tests/run_all_tests.py @@ -26,4 +26,4 @@ print(f"Running tests in: {folder} which has folder path {folder_path}") subprocess.run(["pytest", folder_path]) else: - print(f"Skipping: {folder} (no tests or data)\n") + print(f"Skipping: {folder} (no tests file or data). Recheck if the test files are in place or data folder is missing.") diff --git a/tests/test_utils/path_config.py b/tests/test_utils/path_config.py index 053950f..1499565 100644 --- a/tests/test_utils/path_config.py +++ b/tests/test_utils/path_config.py @@ -14,7 +14,7 @@ def get_root_paths(test_file: str): return { "project_root": os.path.join(base_dir, "projects"), "bids_root": os.path.join(base_dir, "bids"), - "project_stim_root": os.path.join(base_dir, "project_stimulus"), + "project_other_root": os.path.join(base_dir, "project_other"), } diff --git a/tests/testcases/test_old_suffix/test_old_suffix.py b/tests/testcases/test_old_suffix/test_old_suffix.py index 0178ab3..3f49cfe 100644 --- a/tests/testcases/test_old_suffix/test_old_suffix.py +++ b/tests/testcases/test_old_suffix/test_old_suffix.py @@ -21,7 +21,7 @@ def __init__(self): self.project_name = "test-project" self.yes = True self.redo_bids_conversion = False - self.redo_stim_pc = False + self.redo_other_pc = False def init(self, args): # you can store the args or ignore pass @@ -48,7 +48,7 @@ def setup_project(monkeypatch): config_data = { "PROJECT_ROOT": paths["project_root"], "BIDS_ROOT": paths["bids_root"], - "PROJECT_STIM_ROOT": paths["project_stim_root"], + "PROJECT_OTHER_ROOT": paths["project_other_root"], } with open(config_file_test, "w") as f: @@ -59,7 +59,7 @@ def setup_project(monkeypatch): # Patch global paths and CLI args monkeypatch.setattr("lslautobids.config_globals.project_root", paths["project_root"]) monkeypatch.setattr("lslautobids.config_globals.bids_root", paths["bids_root"]) - monkeypatch.setattr("lslautobids.config_globals.project_stim_root", paths["project_stim_root"]) + monkeypatch.setattr("lslautobids.config_globals.project_other_root", paths["project_other_root"]) 
monkeypatch.setattr("lslautobids.config_globals.cli_args", dummy_cli_args) monkeypatch.setattr("lslautobids.config_globals.config_file", config_file_test)
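Across this diff, the project configuration converges on the `[OtherFilesInfo]`, `[FileSelection]`, `[BidsConfig]`, and `[DataverseDataset]` tables of the project TOML. As a quick orientation, a minimal sketch of reading the renamed keys; this is illustrative only, uses the standard library `tomllib` rather than the package's own `read_toml_file()` helper, and assumes the tutorial's sample project path:

```python
import tomllib  # Python 3.11+; the package itself uses its read_toml_file() utility
from pathlib import Path

toml_path = Path.home() / "LSLAutoBIDS/sample_data/projects/test-project/test-project_config.toml"
with open(toml_path, "rb") as f:
    cfg = tomllib.load(f)

other_files_used = cfg["OtherFilesInfo"]["otherFilesUsed"]        # bool: copy non-EEG files?
expected_other = cfg["OtherFilesInfo"]["expectedOtherFiles"]      # e.g. [".edf", ".csv", ...]
ignore_subjects = cfg["FileSelection"]["ignoreSubjects"]          # subjects skipped during conversion
exclude_tasks = cfg["FileSelection"]["excludeTasks"]              # tasks skipped during conversion
anonymization_number = cfg["BidsConfig"]["anonymizationNumber"]   # offset applied to recording dates
```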