
Commit bc046ea

Authored by Sylvain Chevallier, Div12345, dependabot[bot], robintibor, and jsosulski
Merge 0.4.5 into master (#270)
* Set download dir test and example (#249)

  * Update to dataset_search call in FilterBank Motor Imagery
  * Removing completed #fixme
  * Removing total_classes argument from dataset_search call in FilterBank MI
    (this was earlier deprecated in 55f77ae)
  * set_download_dir test and example
  * adding pre-commit modifications
  * Update whats_new.rst
  * Update examples/changing_download_directory.py
  * Update examples/changing_download_directory.py

  Co-authored-by: Sylvain Chevallier <[email protected]>

* Bump pillow from 8.4.0 to 9.0.0 (#253)

  Bumps [pillow](https://github.com/python-pillow/Pillow) from 8.4.0 to 9.0.0.
  - [Release notes](https://github.com/python-pillow/Pillow/releases)
  - [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
  - [Commits](python-pillow/Pillow@8.4.0...9.0.0)

  ---
  updated-dependencies:
  - dependency-name: pillow
    dependency-type: indirect
  ...

  Signed-off-by: dependabot[bot] <[email protected]>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix Schirrmeister2017 error (#255)

  * correct event loading error, renaming session and runs
  * add whats new

* Removing dependency of Physionet MI download on mne method (#257)

  * Update physionet_mi.py
  * consistency of runs numbering
  * Update whats_new.rst
  * f-string edits
  * f-string edits

  Co-authored-by: Sylvain Chevallier <[email protected]>

* Correct MAMEM issues (#256)

  * switch mamem session to runs, use predictable names
  * update docstring in evaluation, for building documentation
  * update Lee2017 docstring for correct documentation.
  * update whats new
  * switch SSVEP example to within session
  * correct typo and rebase
  * correct typos on examples

* Progress bars (#258)

  * Progress bars for downloads using pooch functionality
  * Rectification of f-string in PhysionetMI
  * Evaluations subject level progress bar (CV test subject level in the case of CrossSubjectEvaluation)
  * Update poetry.lock
  * Update pyproject.toml
  * dependencies
  * Apply suggestions from code review (mne.utils to tqdm direct)
  * Update poetry.lock
  * tqdm arg
  * Update whats_new.rst
  * Update mistune dep

  Co-authored-by: Sylvain Chevallier <[email protected]>

* fix doc url in readme (#262)

  * fix doc url in readme
  * correct links in the docs

* Schirrmeister2017 High-Gamma Dataset from EDF (#265)

  * loading Schirrmeister2017 High-Gamma Dataset from EDF
  * remove commented import of requests module
  * rename to session_0

* added 13 + 12 subjects speller datasets by huebner (#260)

  * added 13 + 12 subjects speller datasets by huebner
  * clean up legacy run splitting code
  * added use_blocks_as_sessions parameter for data

  Co-authored-by: Sylvain Chevallier <[email protected]>

* added Spot Auditory oddball dataset (#266)

  * added Spot Auditory oddball dataset
  * replaced usage of deprecated dl.data_path

  Co-authored-by: Sylvain Chevallier <[email protected]>

* Visualize all ERP datasets (#261)

  * Visualize all ERP datasets
  * use paradigm.datasets instead of manual list
  * more verbose sanity check script
  * fix epo data leak + remove title bf
  * moved data visualization, added disclaimer regarding data size

  Co-authored-by: Sylvain Chevallier <[email protected]>

* update to v0.4.5 (#269)

  * update to v0.4.5
  * update poetry and requirements

* correct pre-commit error and add code coverage (#271)

Co-authored-by: Divyesh Narayanan <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: robintibor <[email protected]>
Co-authored-by: Jan Sosulski <[email protected]>
1 parent af9fc57 · commit bc046ea

38 files changed: +1938 additions, -920 deletions

.github/workflows/test-devel.yml

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ jobs:
           python-version: ${{ matrix.python-version }}

       - name: Install Poetry
-        uses: snok/install-poetry@v1.1.6
+        uses: snok/install-poetry@v1
         with:
           virtualenvs-create: true
           virtualenvs-in-project: true

.github/workflows/test.yml

Lines changed: 9 additions & 3 deletions
@@ -26,10 +26,8 @@ jobs:
         with:
           python-version: ${{ matrix.python-version }}

-      - uses: pre-commit/[email protected]
-
       - name: Install Poetry
-        uses: snok/install-poetry@v1.1.6
+        uses: snok/install-poetry@v1
         with:
           virtualenvs-create: true
           virtualenvs-in-project: true
@@ -62,3 +60,11 @@ jobs:
         run: |
           source $VENV
           poetry run python -m moabb.run --pipelines=./moabb/tests/test_pipelines/ --verbose
+
+      - name: Upload Coverage to Codecov
+        uses: codecov/codecov-action@v2
+        if: success()
+        with:
+          verbose: true
+          directory: /home/runner/work/moabb/moabb
+          files: ./.coverage

README.md

Lines changed: 2 additions & 2 deletions
@@ -129,7 +129,7 @@ can upgrade your pip version using: `pip install -U pip` before installing `moab
 ## Supported datasets

 The list of supported datasets can be found here :
-http://moabb.neurotechx.com/docs/datasets.html
+https://neurotechx.github.io/moabb/datasets.html

 ### Submit a new dataset

@@ -256,6 +256,6 @@ BCI algorithms applied on an extensive list of freely available EEG datasets.
 [link_sylvain]: https://sylvchev.github.io/
 [link_neurotechx_signup]: https://neurotechx.com/
 [link_gitter]: https://gitter.im/moabb_dev/community
-[link_moabb_docs]: http://moabb.neurotechx.com/docs/index.html
+[link_moabb_docs]: https://neurotechx.github.io/moabb/
 [link_arxiv]: https://arxiv.org/abs/1805.06427
 [link_jne]: http://iopscience.iop.org/article/10.1088/1741-2552/aadea0/meta

docs/source/README.md

Lines changed: 2 additions & 3 deletions
@@ -128,8 +128,7 @@ can upgrade your pip version using: `pip install -U pip` before installing `moab

 ## Supported datasets

-The list of supported datasets can be found here :
-http://moabb.neurotechx.com/docs/datasets.html
+The list of supported datasets can be found here : https://neurotechx.github.io/moabb/

 ### Submit a new dataset

@@ -258,6 +257,6 @@ BCI algorithms applied on an extensive list of freely available EEG datasets.
 [link_sylvain]: https://sylvchev.github.io/
 [link_neurotechx_signup]: https://neurotechx.com/
 [link_gitter]: https://gitter.im/moabb_dev/community
-[link_moabb_docs]: http://moabb.neurotechx.com/docs/index.html
+[link_moabb_docs]: https://neurotechx.github.io/moabb/
 [link_arxiv]: https://arxiv.org/abs/1805.06427
 [link_jne]: http://iopscience.iop.org/article/10.1088/1741-2552/aadea0/meta

docs/source/whats_new.rst

Lines changed: 23 additions & 1 deletion
@@ -31,7 +31,29 @@ API changes
 - None


-Version - 0.4.4 (Stable - PyPi)
+Version - 0.4.5 (Stable - PyPi)
+---------------------------------
+
+Enhancements
+~~~~~~~~~~~~
+
+- Progress bars, pooch, tqdm (:gh:`258` by `Divyesh Narayanan`_ and `Sylvain Chevallier`_)
+- Adding test and example for set_download_dir (:gh:`249` by `Divyesh Narayanan`_)
+- Update to newer version of Schirrmeister2017 dataset (:gh:`265` by `Robin Schirrmeister`_)
+- Adding Huebner2017 and Huebner2018 P300 datasets (:gh:`260` by `Jan Sosulski`_)
+- Adding Sosulski2019 auditory P300 datasets (:gh:`266` by `Jan Sosulski`_)
+- New script to visualize ERP on all datasets, as a sanity check (:gh:`261` by `Jan Sosulski`_)
+
+Bugs
+~~~~
+
+- Removing dependency on mne method for PhysionetMI data downloading, renaming runs (:gh:`257` by `Divyesh Narayanan`_)
+- Correcting events management in Schirrmeister2017, renaming session and run (:gh:`255` by `Pierre Guetschel`_ and `Sylvain Chevallier`_)
+- Switch session and runs in MAMEM1, 2 and 3 to avoid error in WithinSessionEvaluation (:gh:`256` by `Sylvain Chevallier`_)
+- Correct doctstrings for the documentation, incuding Lee2017 (:gh:`256` by `Sylvain Chevallier`_)
+
+
+Version - 0.4.4
 ---------------

 Enhancements
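As a usage note on the 0.4.5 entries above: the download-directory helper from :gh:`249` and the new P300 datasets can be exercised with a few lines. This is a minimal sketch, not part of the diff; the module paths (moabb.utils.set_download_dir, moabb.datasets.Sosulski2019, moabb.paradigms.P300) are assumed from the MOABB public API.

    import os.path as osp

    from moabb.datasets import Sosulski2019
    from moabb.paradigms import P300
    from moabb.utils import set_download_dir

    # Redirect all MOABB dataset downloads to a custom folder instead of the
    # default ~/mne_data directory (the path below is only an example).
    set_download_dir(osp.join(osp.expanduser("~"), "moabb_data"))

    # The Spot auditory oddball dataset added in this release is fetched like
    # any other MOABB dataset; this call triggers the actual download.
    X, labels, meta = P300().get_data(dataset=Sosulski2019(), subjects=[1])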

examples/advanced_examples/plot_filterbank_csp_vs_csp.py

Lines changed: 6 additions & 6 deletions
@@ -3,7 +3,7 @@
 FilterBank CSP versus CSP
 =========================

-This Example show a comparison of CSP versus FilterBank CSP on the
+This example show a comparison of CSP versus FilterBank CSP on the
 very popular dataset 2a from the BCI competition IV.
 """
 # Authors: Alexandre Barachant <[email protected]>
@@ -27,7 +27,7 @@
 moabb.set_log_level("info")

 ##############################################################################
-# Create pipelines
+# Create Pipelines
 # ----------------
 #
 # The CSP implementation from MNE is used. We selected 8 CSP components, as
@@ -51,7 +51,7 @@
 # ----------
 #
 # Since two different preprocessing will be applied, we have two different
-# paradigm objects. We have to make sure their filter matchs so the comparison
+# paradigm objects. We have to make sure their filter matches so the comparison
 # will be fair.
 #
 # The first one is a standard `LeftRightImagery` with a 8 to 35 Hz broadband
@@ -75,7 +75,7 @@
 )
 results = evaluation.process(pipelines)

-# bank of 6 filter, by 4 Hz increment
+# Bank of 6 filters, by 4 Hz increment
 filters = [[8, 12], [12, 16], [16, 20], [20, 24], [24, 28], [28, 35]]
 paradigm = FilterBankLeftRightImagery(filters=filters)
 evaluation = CrossSessionEvaluation(
@@ -93,10 +93,10 @@
 # Plot Results
 # ----------------
 #
-# Here we plot the results via normal methods. We the first plot is a pointplot
+# Here we plot the results via seaborn. We first display a pointplot
 # with the average performance of each pipeline across session and subjects.
 # The second plot is a paired scatter plot. Each point representing the score
-# of a single session. An algorithm will outperforms another is most of the
+# of a single session. An algorithm will outperform another is most of the
 # points are in its quadrant.

 fig, axes = plt.subplots(1, 2, figsize=[8, 4], sharey=True)
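One remark on the paradigm setup this example relies on: the broadband and filter-bank paradigms must cover the same frequency range for the comparison to be fair. A minimal sketch, with the fmin/fmax keywords assumed from the MOABB paradigm API:

    from moabb.paradigms import FilterBankLeftRightImagery, LeftRightImagery

    # Broadband paradigm: a single 8-35 Hz band-pass filter.
    broadband = LeftRightImagery(fmin=8, fmax=35)

    # Filter-bank paradigm: six 4 Hz sub-bands spanning the same 8-35 Hz range,
    # so both evaluations see the same portion of the spectrum.
    filters = [[8, 12], [12, 16], [16, 20], [20, 24], [24, 28], [28, 35]]
    filter_bank = FilterBankLeftRightImagery(filters=filters)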

examples/advanced_examples/plot_mne_and_scikit_estimators.py

Lines changed: 19 additions & 18 deletions
@@ -1,16 +1,16 @@
 """
-=========================
-MNE Epochs-based piplines
-=========================
+==========================
+MNE Epochs-based pipelines
+==========================

 This example shows how to use machine learning pipeline based on MNE Epochs
-instead of numpy arrays. This is useful to make the most of the MNE code base
+instead of Numpy arrays. This is useful to make the most of the MNE code base
 and to embed EEG specific code inside sklearn pipelines.

-We will compare compare different pipelines for P300:
-- Logistic Regression, based on MNE Epochs
+We will compare different pipelines for P300:
+- Logistic regression, based on MNE Epochs
 - XDAWN and Logistic Regression (LR), based on MNE Epochs
-- XDAWN extended covariance and LR on tangent space, based on numpy
+- XDAWN extended covariance and LR on tangent space, based on Numpy

 """
 # Authors: Sylvain Chevallier
@@ -47,7 +47,7 @@
 moabb.set_log_level("info")

 ###############################################################################
-# Loading dataset
+# Loading Dataset
 # ---------------
 #
 # Load 2 subjects of BNCI 2014-009 dataset, with 3 session each
@@ -58,15 +58,15 @@
 paradigm = P300()

 ##############################################################################
-# Get data (optional)
+# Get Data (optional)
 # -------------------
 #
 # To get access to the EEG signals downloaded from the dataset, you could
 # use ``dataset.get_data([subject_id)`` to obtain the EEG as MNE Epochs, stored
 # in a dictionary of sessions and runs.
 # The ``paradigm.get_data(dataset=dataset, subjects=[subject_id])`` allows to
 # obtain the preprocessed EEG data, the labels and the meta information. By
-# default, the EEG is return as a numpy array. With ``return_epochs=True``, MNE
+# default, the EEG is return as a Numpy array. With ``return_epochs=True``, MNE
 # Epochs are returned.

 subject_list = [1]
@@ -77,14 +77,14 @@
 )

 ##############################################################################
-# A simple MNE pipeline
+# A Simple MNE Pipeline
 # ---------------------
 #
 # Using ``return_epochs=True`` in the evaluation, it is possible to design a
 # pipeline based on MNE Epochs input. Let's create a simple one, that
 # reshape the input data from epochs, rescale the data and uses a logistic
 # regression to classify the data. We will need to write a basic Transformer
-# estimator, that comply with
+# estimator, that complies with
 # `sklearn convention <https://scikit-learn.org/stable/developers/develop.html>`_.
 # This transformer will extract the data from an input Epoch, and reshapes into
 # 2D array.
@@ -124,13 +124,13 @@ def transform(self, X, y=None):
 mne_res = mne_eval.process(mne_ppl)

 ##############################################################################
-# Advanced MNE pipeline
+# Advanced MNE Pipeline
 # ---------------------
 #
 # In some case, the MNE pipeline should have access to the original labels from
 # the dataset. This is the case for the XDAWN code of MNE. One could pass
 # `mne_labels` to evaluation in order to keep this label.
-# As an example, we will define a pipeline that compute an XDAWN filter, rescale,
+# As an example, we will define a pipeline that computes an XDAWN filter, rescale,
 # then apply a logistic regression.

 mne_adv = {}
@@ -151,10 +151,10 @@ def transform(self, X, y=None):
 adv_res = mne_eval.process(mne_adv)

 ###############################################################################
-# Numpy-based pipeline
+# Numpy-based Pipeline
 # --------------------
 #
-# For the comparison, we will define a numpy-based pipeline that relies on
+# For the comparison, we will define a Numpy-based pipeline that relies on
 # pyriemann to estimate XDAWN-extended covariance matrices that are projected
 # on the tangent space and classified with a logistic regression.

@@ -173,11 +173,12 @@ def transform(self, X, y=None):
 sk_res = sk_eval.process(sk_ppl)

 ###############################################################################
-# Combining results
+# Combining Results
 # -----------------
 #
 # Even if the results have been obtained by different evaluation processes, it
-# possible to combine the resulting dataframes to analyze and plot the results.
+# is possible to combine the resulting DataFrames to analyze and plot the
+# results.

 all_res = pd.concat([mne_res, adv_res, sk_res])
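The "basic Transformer estimator" described in this example comes down to a few lines. A minimal sketch assuming scikit-learn's estimator conventions and MNE's Epochs.get_data(); the class name EpochsVectorizer is illustrative and not necessarily the one used in the example file:

    from sklearn.base import BaseEstimator, TransformerMixin


    class EpochsVectorizer(BaseEstimator, TransformerMixin):
        """Reshape MNE Epochs into a 2D (n_epochs, n_channels * n_times) array."""

        def fit(self, X, y=None):
            # Stateless transformer: nothing to learn from the data.
            return self

        def transform(self, X, y=None):
            data = X.get_data()  # shape (n_epochs, n_channels, n_times)
            return data.reshape(len(data), -1)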

examples/advanced_examples/plot_select_electrodes_resample.py

Lines changed: 4 additions & 4 deletions
@@ -1,6 +1,6 @@
 """
 ================================
-Select electrodes and resampling
+Select Electrodes and Resampling
 ================================

 Within paradigm, it is possible to restrict analysis only to a subset of
@@ -30,7 +30,7 @@
 # Datasets
 # --------
 #
-# Load 2 subjects of BNCI 2014-004 and Zhou2016 datasets, with 2 session each
+# Load 2 subjects of BNCI 2014-004 and Zhou2016 datasets, with 2 sessions each

 subj = [1, 2]
 datasets = [Zhou2016(), BNCI2014001()]
@@ -63,7 +63,7 @@
 print(results.head())

 ##############################################################################
-# Electrode selection
+# Electrode Selection
 # -------------------
 #
 # It is possible to select the electrodes that are shared by all datasets
@@ -79,7 +79,7 @@
 print(results.head())

 ##############################################################################
-# Plot results
+# Plot Results
 # ------------
 #
 # Compare the obtained results with the two pipelines, CSP+LDA and logistic
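Both restrictions discussed in this example are plain paradigm arguments. A minimal sketch, with the channels and resample keywords assumed from the MOABB paradigm API:

    from moabb.paradigms import LeftRightImagery

    # Keep only three motor-cortex electrodes shared by the datasets and
    # resample the signals to 128 Hz before epoching.
    paradigm = LeftRightImagery(channels=["C3", "Cz", "C4"], resample=128)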

examples/advanced_examples/plot_statistical_analysis.py

Lines changed: 13 additions & 12 deletions
@@ -1,4 +1,5 @@
-"""=======================
+"""
+=======================
 Statistical Analysis
 =======================

@@ -40,20 +41,20 @@
 # ---------------------
 #
 # First we need to set up a paradigm, dataset list, and some pipelines to
-# test. This is explored more in the examples -- we choose a left vs right
+# test. This is explored more in the examples -- we choose left vs right
 # imagery paradigm with a single bandpass. There is only one dataset here but
 # any number can be added without changing this workflow.
 #
-# Create pipelines
+# Create Pipelines
 # ----------------
 #
 # Pipelines must be a dict of sklearn pipeline transformer.
 #
-# The csp implementation from MNE is used. We selected 8 CSP components, as
-# usually done in the litterature.
+# The CSP implementation from MNE is used. We selected 8 CSP components, as
+# usually done in the literature.
 #
-# The riemannian geometry pipeline consists in covariance estimation, tangent
-# space mapping and finaly a logistic regression for the classification.
+# The Riemannian geometry pipeline consists in covariance estimation, tangent
+# space mapping and finally a logistic regression for the classification.

 pipelines = {}

@@ -70,7 +71,7 @@
 # ----------
 #
 # We define the paradigm (LeftRightImagery) and the dataset (BNCI2014001).
-# The evaluation will return a dataframe containing a single AUC score for
+# The evaluation will return a DataFrame containing a single AUC score for
 # each subject / session of the dataset, and for each pipeline.
 #
 # Results are saved into the database, so that if you add a new pipeline, it
@@ -89,7 +90,7 @@
 results = evaluation.process(pipelines)

 ##############################################################################
-# MOABB plotting
+# MOABB Plotting
 # ----------------
 #
 # Here we plot the results using some of the convenience methods within the
@@ -109,7 +110,7 @@
 plt.show()

 ###############################################################################
-# Statistical testing and further plots
+# Statistical Testing and Further Plots
 # ----------------------------------------
 #
 # If the statistical significance of results is of interest, the method
@@ -124,13 +125,13 @@
 ###############################################################################
 # The meta-analysis style plot shows the standardized mean difference within
 # each tested dataset for the two algorithms in question, in addition to a
-# meta-effect and significances both per-dataset and overall.
+# meta-effect and significance both per-dataset and overall.
 fig = moabb_plt.meta_analysis_plot(stats, "CSP+LDA", "RG+LDA")
 plt.show()

 ###############################################################################
 # The summary plot shows the effect and significance related to the hypothesis
-# that the algorithm on the y-axis significantly out-performed the algorithm on
+# that the algorithm on the y-axis significantly outperformed the algorithm on
 # the x-axis over all datasets
 moabb_plt.summary_plot(P, T)
 plt.show()
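For readers who want to reproduce the stats, P and T objects used in the plots above, a minimal sketch of the analysis step; the function locations are assumed from moabb.analysis, and results is the DataFrame returned by the evaluation:

    import moabb.analysis.plotting as moabb_plt
    from moabb.analysis.meta_analysis import (
        compute_dataset_statistics,
        find_significant_differences,
    )

    # Aggregate per-dataset statistics from the evaluation results, then derive
    # the pairwise significance (P) and effect-size (T) tables used by the plots.
    stats = compute_dataset_statistics(results)
    P, T = find_significant_differences(stats)

    fig = moabb_plt.meta_analysis_plot(stats, "CSP+LDA", "RG+LDA")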
