Commit b614fbd

added first draft of the tutorial

1 parent 1e76c7f commit b614fbd

File tree

1 file changed: +104 -0 lines changed

docs/tutorial.md

Lines changed: 104 additions & 0 deletions

# Tutorial

## Getting Started with LSLAutoBIDS

This tutorial guides you through setting up and using the LSLAutoBIDS package to convert EEG recordings to BIDS format, version control the data with DataLad, and upload it to a Dataverse repository, using a practical example.

### Installation and Download

1. Clone the GitHub repository

```
git clone https://github.com/s-ccs/LSLAutoBIDS.git
```

2. Install the package with pip

```
cd LSLAutoBIDS
pip install lslautobids
```
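
To verify the installation, you can ask pip for the package metadata and try the command-line entry point (a minimal sanity check; the `--help` flag is an assumption about the `lslautobids` CLI rather than something documented here):

```
# check that pip can see the installed package
pip show lslautobids

# assuming the CLI exposes a standard --help flag, this lists the available subcommands
lslautobids --help
```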

3. Download the dummy dataset for testing into the LSLAutoBIDS root directory - [download link]()

The dataset contains a sample project called "TestProject2025", which includes an EEG recording file (<insert filename here>) in the projects directory and a sample eye-tracking recording (<insert file names>) in the project stimulus directory.
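
Based on the paths used later in this tutorial, the unpacked dummy dataset should roughly follow the layout below (a sketch assuming the default `data/projects` and `data/project_stimulus` roots; the actual file names come with the download):

```
LSLAutoBIDS/
└── data/
    ├── projects/
    │   └── TestProject2025/
    │       └── sub-001/
    │           └── ses-001/
    │               └── eeg/          # raw .xdf recording(s)
    └── project_stimulus/
        └── TestProject2025/          # eye-tracking recording, lab notebook, etc.
```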

### Configuration

1. Generate the global configuration file

```
lslautobids gen-dv-config
```

This creates a configuration file template in the `~/.config/lslautobids/` folder, containing the Dataverse details and the root directories for the projects.
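
If you want to confirm where the template ended up, you can list the config folder and print the generated file (standard shell commands; the file name `autobids_config.yaml` is taken from step 3 below):

```
# list the global configuration folder
ls ~/.config/lslautobids/

# print the generated template
cat ~/.config/lslautobids/autobids_config.yaml
```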

2. Create a Dataverse account and get the API token

- Create a Dataverse account on your institution's Dataverse server (e.g. https://darus.uni-stuttgart.de/dataverse/darus)
- Create a new dataverse for your project
- Create a new API token from your Dataverse account settings page.

3. Open the configuration file `~/.config/lslautobids/autobids_config.yaml` and fill in the details

- Edit the file, e.g. via `nano ~/.config/lslautobids/autobids_config.yaml`, to add the Dataverse and project root details.

Configuration file template:

```yaml
"BIDS_ROOT": "# relative to home/users directory: LSLAutoBIDS/data/bids/"
"PROJECT_ROOT": "# relative to home/users: LSLAutoBIDS/data/projects/"
"PROJECT_STIM_ROOT": "# path relative to home/users: LSLAutoBIDS/data/project_stimulus/"
"BASE_URL": "https://darus.uni-stuttgart.de" # The base URL for the service.
"API_KEY": "# Paste your dataverse API token here" # Your API token for authentication.
"PARENT_DATAVERSE_NAME": "simtech_pn7_computational_cognitive_science" # The name of the dataverse to which datasets will be uploaded. When you are on the dataverse's page, you can see this name in the URL after 'dataverse/'.
```

***This file is mostly the same for all projects, so running this command is only recommended once per system.***

4. Create a project-specific configuration file

This creates a project-specific configuration file template in the specified project directory.

```
lslautobids gen-proj-config --project TestProject2025
```

Fill in the details in the configuration file `LSLAutoBIDS/data/projects/TestProject2025/TestProject2025_config.toml`.

You can find details about each parameter in the comments of the generated template configuration file. For this tutorial you might only want to change the author and email fields; the rest of the fields are already filled in for the test project.
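
For orientation, the change might look something like the snippet below (purely illustrative: the exact section and key names are defined by the generated template, not by this tutorial):

```toml
# hypothetical example - check the generated TestProject2025_config.toml for the real section and key names
[Dataset]
authors = ["Jane Doe"]
email = "jane.doe@example.com"
```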

## Example Case 1

A lab wants to conduct an EEG-eye-tracking experiment and make the dataset publicly available to other neuroscience researchers. To ensure data provenance and reproducibility within and across labs, they want a standardized structure for storing the data and code files.

In this example we will see how to use the LSLAutoBIDS package to:

1. Convert the recorded EEG data in XDF format to BIDS format.
2. Integrate other data files (e.g. the eye-tracking recording and experiment code files) into the dataset (note: LSLAutoBIDS does not convert these files to BIDS format; it simply copies them to the appropriate directories of the BIDS dataset in a pseudo-BIDS-like structure).
3. Version control the data and code files using DataLad.
4. Upload the dataset to a Dataverse repository for public access.

### How to run the example?

1. Check that the TOML configuration file `LSLAutoBIDS/data/projects/TestProject2025/TestProject2025_config.toml` is filled in with the correct details, especially the stimulusComputerUsed and expectedFiles fields. For this example we use the eye-tracking data as a behavioral file, so stimulusComputerUsed should be set to true and expectedFiles should contain the expected stimulus file extensions.

```toml
[Computers]
stimulusComputerUsed = true

[ExpectedStimulusFiles]
expectedFiles = [".edf", ".csv", "_labnotebook.tsv", "_participantform.tsv"]
```

2. Run the conversion and upload command to convert the XDF files to BIDS format and upload the data to the Dataverse.

```
lslautobids run -p TestProject2025
```

   1. This converts the XDF file in the `LSLAutoBIDS/data/projects/TestProject2025/sub-001/ses-001/eeg/` directory to BIDS format and stores it in the `LSLAutoBIDS/data/bids/TestProject2025/sub-001/ses-001/` directory.
   2. You can check the logs in the log file `LSLAutoBIDS/data/bids/TestProject2025/code/TestProject2025.log`.
   3. The source data (i.e. the raw XDF file), the behavioral data (e.g. the eye-tracking recording) and the experiment code files (e.g. .py, .oxexp) are copied to the `LSLAutoBIDS/data/bids/TestProject2025/source_data/`, `LSLAutoBIDS/data/bids/TestProject2025/beh/` and `LSLAutoBIDS/data/bids/TestProject2025/misc/` directories respectively.
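
Putting these paths together, the resulting BIDS directory should look roughly like this (a sketch assembled from the paths above; the exact file names inside each folder depend on your recordings):

```
LSLAutoBIDS/data/bids/TestProject2025/
├── sub-001/
│   └── ses-001/              # BIDS-converted EEG data
├── source_data/              # raw .xdf file(s)
├── beh/                      # behavioral data, e.g. the eye-tracking recording
├── misc/                     # experiment code files
└── code/
    └── TestProject2025.log   # conversion log
```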

## Example Case 2

## After publishing the dataset (out of scope of this package)

Once the dataset is published on Dataverse, other researchers can access it and cite it using the DOI provided for that Dataverse dataset.

You can clone the dataset using DataLad and access the data files.

```
datalad clone <dataverse-dataset-url>
```

__Since the dataset is version controlled with DataLad, the large files are not downloaded by default; they are stored in a git-annex. You can fetch them using the datalad get command.__

```
datalad get <file-path>
```
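
As a concrete end-to-end example, the two commands can be combined like this (the dataset URL and file path are placeholders; substitute the DOI landing page URL and a path from your own dataset):

```
# clone the published dataset into a local folder (placeholder URL)
datalad clone <dataverse-dataset-url> TestProject2025
cd TestProject2025

# fetch only the data for one subject (placeholder path based on the layout used in this tutorial)
datalad get sub-001/ses-001/
```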
