
Commit 4415fd6

Merge remote-tracking branch 'upstream/master' into eyetacking_eeg
2 parents 8064c63 + 8bb0cdd commit 4415fd6

8 files changed: +125 additions, −42 deletions

.github/workflows/validate_datasets.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -148,7 +148,7 @@ jobs:
       - name: Skip main validation for datasets with unreleased spec features
         # Replace ${EMPTY} with dataset patterns, when this is needed
         # Reset to "for DS in ${EMPTY}; ..." after a spec release
-        run: for DS in eyetracking_* atlas-* emg_*; do touch $DS/.SKIP_VALIDATION; done
+        run: for DS in ${EMPTY}; do touch $DS/.SKIP_VALIDATION; done
         if: matrix.bids-validator != 'dev'

       - name: Set BIDS_SCHEMA variable for dev version
```
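The `${EMPTY}` idiom in the changed `run:` line works through shell word-splitting: an empty variable expands to zero words, so the loop body never runs, while a glob pattern expands to every matching dataset directory. A minimal sketch with made-up directory names:

```shell
EMPTY=""
mkdir -p eyetracking_demo atlas-Demo

# An empty ${EMPTY} expands to zero words: the loop body never runs,
# so no dataset is marked for skipping.
for DS in ${EMPTY}; do touch "$DS/.SKIP_VALIDATION"; done

# Glob patterns expand to each matching directory, marking every one.
for DS in eyetracking_* atlas-*; do touch "$DS/.SKIP_VALIDATION"; done
ls -A eyetracking_demo   # .SKIP_VALIDATION
```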

.github/workflows/validation.yml

Lines changed: 16 additions & 1 deletion

```diff
@@ -17,4 +17,19 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v6
-      - uses: codespell-project/actions-codespell@master
+      - uses: codespell-project/actions-codespell@master
+
+  build:
+    name: validata dataset listing
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v6
+
+      - name: Set up Python
+        uses: astral-sh/setup-uv@v7
+        with:
+          python-version: 3.12
+
+      - name: check listing datasets
+        run: uv run tools/print_dataset_listing.py
```

CONTRIBUTING.md

Lines changed: 7 additions & 1 deletion

```diff
@@ -69,7 +69,13 @@ We release `bids-examples` in sync with `bids-specification`.
    configured as a git remote called "upstream")
 1. Tag the `master` branch: `git tag -a -m "X.X.X" X.X.X upstream/master`
    (replace `X.X.X` with the version to be released)
-1. Push the tag upstream: `git push upstream X.X.X`
+1. Make a schema tag: `git tag -a -m "BIDS Schema Y.Y.Y" schema-Y.Y.Y upstream/master`
+   (replace `Y.Y.Y` with the version to be released)
+1. Push the tags upstream: `git push upstream --tags`
+1. Create a maintenance branch to track changes induced by updates to the
+   validator or schema: `git push upstream upstream/master:refs/heads/maint/X.X.X`
+   This branch allows for future schema releases on the corresponding specification
+   branch, without accumulating datasets with new features.
 1. Create a GitHub release using the new tag. Fill the title of the release
    with the name of the tag. Fill the description of the release with a sentence like
    > "Microscopy" BEP was merged into BIDS-specification (2022-02-15).
```

README.md

Lines changed: 51 additions & 23 deletions
Large diffs are not rendered by default.

dataset_listing.tsv

Lines changed: 14 additions & 3 deletions

```diff
@@ -83,16 +83,27 @@ mrs_biggaba MEGA-PRESS and PRESS MRS data from 12 subjects from one site from th
 mrs_fmrs Functional MRS data involving a pain stimulus task from 15 subjects [link](https://www.nitrc.org/projects/fmrs_2020) [@markmikkelsen](https://github.com/markmikkelsen) anat, mrs T1w, events, mrsref, svs
 2d_mb_pcasl Siemens 2D MultiBand Multi-delay PCASL (m0 and noRF included within timeseries) [link](https://osf.io/xrkc4/) [@aptinis](https://github.com/aptinis) anat, fmap, perf T1w, asl, aslcontext, epi
 xeeg_hed_score EEG and iEEG data with annotations of artifacts, seizures and modulators using HED-SCORE [@dorahermes](https://github.com/dorahermes) anat, eeg, ieeg T1w, channels, coordsystem, eeg, electrodes, events, ieeg
-dwi_deriv exemplifies the storage of diffusion MRI derivates that may be generated on the Siemens XA platform. dwi dwi
+dwi_deriv exemplifies the storage of diffusion MRI derivates that may be generated on the Siemens XA platform. dwi ADC, FA, S0map, colFA, dwi, expADC, trace
 pheno004 Minimal dataset with subjects with imaging and/or phenotype data [@ericearl](https://github.com/ericearl) phenotype, anat T1w
 emg_ConcurrentIndependentUnits Concurrent EMG recording with multiple independent recording units at different sampling rates n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, electrodes, coordsystem, emg, events
 emg_CustomBipolar Custom-made bipolar EMG recording setup with electrodes on flexors of the lower arm n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, emg
 emg_CustomBipolarFace EMG recording from facial muscles with many-to-many mapping between sensors and muscles n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, electrodes, coordsystem, emg
 emg_IndependentMod Commercial bipolar EMG modules recording multiple muscles with wireless sensors n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, electrodes, coordsystem, emg
 emg_Multimodal Integration of EEG, EMG, and motion capture data n/a [@neuromechanist](https://github.com/neuromechanist) eeg, emg, motion channels, electrodes, coordsystem, eeg, emg, motion, scans, events
-emg_MultiBodyParts EMG recording from multiple body parts with different electrode types n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, electrodes, coordsystem, emg
+emg_MultiBodyParts EMG recording from multiple body parts with different electrode types n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, electrodes, coordsystem, emg, physio
 emg_TwoHDsEMG High-density EMG grid recordings from two body parts demonstrating grid placement documentation n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, electrodes, coordsystem, emg
 emg_TwoWristbands EMG recordings using two wristbands with dry electrodes to capture forearm muscle activity n/a [@neuromechanist](https://github.com/neuromechanist) emg channels, electrodes, coordsystem, emg
 mri_chunk Example MRI dataset to illustrate BIDS chunk entity. A single subject, two chunks. [@valosekj](https://github.com/valosekj) anat T1w
+atlas-Destrieux n/a n/a
+atlas-Schaefer n/a n/a
+eyetracking_binocular beh events, physio
 eyetracking_eeg Example dataset of simultaneously collected EEG and eyetreacking. [@scott-huberty](https://github.com/scott-huberty) eeg, eyetracking eeg, eyetracking, channels, electrodes
-
+atlas-HarvardOxford n/a n/a
+atlas-Talairach n/a n/a
+atlas-4S n/a n/a
+atlas-HOSPA n/a n/a
+atlas-AAL n/a n/a
+atlas-DiFuMo n/a n/a
+eyetracking_fmri anat, fmap, func T1w, T2w, bold, epi, events, fieldmap, magnitude, physio
+atlas-suit n/a T1w
+atlas-Juelich n/a n/a
```

emg_MultiBodyParts/sub-01/emg/sub-01_task-mechPerturbations_physio.json
Lines changed: 13 additions & 0 deletions

```diff
@@ -0,0 +1,13 @@
+{
+    "Columns": ["resp", "cardio"],
+    "StartTime": 0,
+    "SamplingFrequency": 50,
+    "resp": {
+        "Description": "Respiratory signal",
+        "Unit": "mV"
+    },
+    "cardio": {
+        "Description": "Cardiac signal",
+        "Unit": "mV"
+    }
+}
```

emg_MultiBodyParts/sub-01/emg/sub-01_task-mechPerturbations_physio.tsv.gz

Whitespace-only changes.
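The new JSON sidecar pairs with the `physio.tsv.gz` above: in BIDS, physiological recordings are headerless TSVs whose column names live in the JSON. A rough sketch of reading such a pair, using made-up in-memory data instead of the real files:

```python
import gzip
import io
import json

# Stand-ins for the sidecar and the headerless physio.tsv.gz it
# describes (values here are invented for illustration):
sidecar = json.loads("""{
    "Columns": ["resp", "cardio"],
    "StartTime": 0,
    "SamplingFrequency": 50
}""")
tsv_gz = gzip.compress(b"0.1\t0.9\n0.2\t0.8\n")

# Pair each data row with the column names from the sidecar:
rows = []
with gzip.open(io.BytesIO(tsv_gz), "rt") as fh:
    for line in fh:
        values = line.rstrip("\n").split("\t")
        rows.append(dict(zip(sidecar["Columns"], values)))

print(rows[0])  # {'resp': '0.1', 'cardio': '0.9'}
```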

tools/print_dataset_listing.py

Lines changed: 23 additions & 13 deletions

```diff
@@ -15,7 +15,7 @@
 You can pass an argument to insert the content in another file.
 Otherwise the content will be added to the README of this repository.
 """
-
+import warnings
 import sys
 from pathlib import Path
 import pandas as pd
```
```diff
@@ -43,6 +43,7 @@

 tables_order = {
     "ASL": "perf",
+    "Atlas": "",
     "Behavioral": "beh",
     "EEG": "^eeg$",
     "EMG": "emg",
```
```diff
@@ -56,9 +57,10 @@
     "MRS": "mrs",
     "NIRS": "nirs",
     "PET": "pet",
-    "qMRI": "",
     "Phenotype": "phenotype",
+    "Physio": "",
     "Provenance": "",
+    "qMRI": "",
 }

 DELIMITER = "<!-- ADD EXAMPLE LISTING HERE -->"
```
```diff
@@ -76,13 +78,13 @@ def main(output_file=None):

     names = df["name"].copy()

-    check_missing_folders(df, root)
-
     if update_content:
         df = update_datatypes_and_suffixes(df, root)
         df.to_csv(input_file, sep="\t", index=False)
         df = pd.read_csv(input_file, sep="\t")

+    check_missing_folders(df, root)
+
     df = add_links(df)

     clean_previous_run(output_file)
```
```diff
@@ -135,11 +137,12 @@ def add_links(df):
         if not isinstance(row[1][col], str):
             continue
         if col == "name":
-            row[1][col] = f"[{row[1][col]}]({UPSTREAM_REPO}{row[1][col]})"
+            tmp = row[1][col]
+            df.loc[row[0], col]= f"[{tmp}]({UPSTREAM_REPO}{tmp})"
         if col == "link to full data" and row[1][col].startswith("http"):
-            row[1][col] = f"[link]({row[1][col]})"
+            df.loc[row[0], col] = f"[link]({row[1][col]})"
         if col == "maintained by" and row[1][col].startswith("@"):
-            row[1][col] = f"[{row[1][col]}](https://github.com/{row[1][col][1:]})"
+            df.loc[row[0], col] = f"[{row[1][col]}](https://github.com/{row[1][col][1:]})"
     return df

```
```diff
@@ -170,16 +173,17 @@ def add_tables(df: pd.DataFrame, output_file: Path, names) -> None:
     print("Writing markdown tables...")
     df.fillna("n/a", inplace=True)
     for table_name, table_datatypes in tables_order.items():
-        with output_file.open("a") as f:
-            f.write(f"\n### {table_name}\n\n")
-            add_warning(f)

         if table_name == "qMRI":
             mask = names.str.contains("qmri_")
         elif table_name == "HED":
             mask = names.str.contains("_hed_")
+        elif table_name == "Physio":
+            mask = df["suffixes"].str.contains("physio", regex=True)
         elif table_name == "Provenance":
             mask = names.str.contains("provenance_")
+        elif table_name == "Atlas":
+            mask = names.str.contains("atlas-")
         else:
             mask = df["datatypes"].str.contains(table_datatypes, regex=True)

```
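The new `Physio` and `Atlas` branches select rows by substring match on the `suffixes` and `name` columns respectively. A minimal sketch with invented entries:

```python
import pandas as pd

# Invented names and suffixes mimicking the dataset listing columns:
names = pd.Series(["atlas-Destrieux", "eeg_rest", "emg_MultiBodyParts"])
suffixes = pd.Series(["n/a", "eeg, events", "channels, emg, physio"])

# Boolean masks built the same way as in the patched add_tables():
atlas_mask = names.str.contains("atlas-")
physio_mask = suffixes.str.contains("physio", regex=True)

print(names[atlas_mask].tolist())   # ['atlas-Destrieux']
print(names[physio_mask].tolist())  # ['emg_MultiBodyParts']
```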
```diff
@@ -188,9 +192,15 @@ def add_tables(df: pd.DataFrame, output_file: Path, names) -> None:

         print(sub_df)

-        sub_df.to_markdown(output_file, index=False, mode="a")
-        with output_file.open("a") as f:
-            f.write("\n")
+        if len(sub_df) > 0:
+            with output_file.open("a") as f:
+                f.write(f"\n### {table_name}\n\n")
+                add_warning(f)
+            with output_file.open("a") as f:
+                sub_df.to_markdown(output_file, index=False, mode="a")
+                f.write("\n")
+        else:
+            warnings.warn(f"No dataset for '{table_name}'", stacklevel=2)


 def stringify_list(l):
```
