
[DOC] Fix double back tick inconsistencies in classification module docstrings #2695


Open
wants to merge 6 commits into base: main
1 change: 1 addition & 0 deletions .github/workflows/pr_precommit.yml
@@ -30,6 +30,7 @@ jobs:
  repository: ${{ github.event.pull_request.head.repo.full_name }}
  ref: ${{ github.head_ref }}
  token: ${{ steps.app-token.outputs.token }}
+ fetch-depth: 0
Member: Don't make this edit here. See #2723 if you want to help solve this issue.


- name: Setup Python 3.10
uses: actions/setup-python@v5
2 changes: 1 addition & 1 deletion aeon/classification/dictionary_based/_redcomets.py
@@ -49,7 +49,7 @@ class REDCOMETS(BaseClassifier):
``-1`` means using all processors.
parallel_backend : str, ParallelBackendBase instance or None, default=None
Specify the parallelisation backend implementation in joblib,
- if ``None`` a 'prefer' value of "threads" is used by default.
+ if ``None`` a ``prefer`` value of "threads" is used by default.
Valid options are "loky", "multiprocessing", "threading" or a custom backend.
See the joblib Parallel documentation for more details.
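For context on why this PR swaps quotes for double backticks: in reStructuredText, which Sphinx renders for these numpydoc docstrings, ``double backticks`` produce an inline literal while 'single quotes' render as plain text. The sketch below is a naive, hypothetical helper (not part of aeon or its CI) that flags single-quoted identifiers in a docstring:

```python
import re

# Hypothetical helper: flag 'single-quoted' identifiers in a docstring that
# should probably use reST double backticks (``like_this``) instead.
# Naive: an apostrophe pair around a bare identifier also matches.
SINGLE_QUOTED = re.compile(r"'([A-Za-z_][A-Za-z0-9_]*)'")

def find_single_quoted_literals(docstring):
    """Return identifiers wrapped in single quotes, e.g. 'prefer'."""
    return SINGLE_QUOTED.findall(docstring)

doc = "if ``None`` a 'prefer' value of \"threads\" is used by default."
print(find_single_quoted_literals(doc))  # ['prefer']
```

A check along these lines could be run over the classification module to find the remaining inconsistencies this PR targets.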

6 changes: 3 additions & 3 deletions aeon/classification/dictionary_based/_tde.py
@@ -32,9 +32,9 @@ class TemporalDictionaryEnsemble(BaseClassifier):
Implementation of the dictionary based Temporal Dictionary Ensemble as described
in [1]_.

- Overview: Input 'n' series length 'm' with 'd' dimensions
- TDE searches 'k' parameter values selected using a Gaussian processes
- regressor, evaluating each with a LOOCV. It then retains 's'
+ Overview: Input ``n`` series length ``m`` with ``d`` dimensions
+ TDE searches ``k`` parameter values selected using a Gaussian processes
+ regressor, evaluating each with a LOOCV. It then retains ``s``
Comment on lines +35 to +37
Member: These are fine as is. This is the notation used in the paper, not the code.
Member: Just noticed "lcoefficients" below. Could you fix that also?

ensemble members.
There are six primary parameters for individual classifiers:
- alpha: alphabet size
8 changes: 4 additions & 4 deletions aeon/classification/dictionary_based/_weasel.py
@@ -23,7 +23,7 @@ class WEASEL(BaseClassifier):
"""
Word Extraction for Time Series Classification (WEASEL).

- As described in [1]_. Overview: Input 'n' series length 'm'
+ As described in [1]_. Overview: Input ``n`` series length ``m``
Member: same here

WEASEL is a dictionary classifier that builds a bag-of-patterns using SFA
for different window lengths and learns a logistic regression classifier
on this bag.
@@ -74,10 +74,10 @@ class WEASEL(BaseClassifier):
Sets the feature selections strategy to be used. One of {"chi2", "none",
"random"}. Large amounts of memory may be needed depending on the setting of
bigrams (true is more) or alpha (larger is more).
- 'chi2' reduces the number of words, keeping those above the 'p_threshold'.
- 'random' reduces the number to at most 'max_feature_count',
+ ``chi2`` reduces the number of words, keeping those above the ``p_threshold``.
+ ``random`` reduces the number to at most ``max_feature_count``,
  by randomly selecting features.
- 'none' does not apply any feature selection and yields large bag of words.
+ ``none`` does not apply any feature selection and yields large bag of words.
Comment on lines +77 to +80
Member: String parameters should still contain quotation marks even in code style.
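To make the "random" strategy described in the docstring above concrete, here is a stdlib-only sketch: keep at most max_feature_count words by uniform sampling. This is illustrative only, not aeon's implementation, and the function name is made up:

```python
import random

def random_feature_selection(words, max_feature_count, seed=0):
    """Keep at most ``max_feature_count`` words, chosen uniformly at random.

    Sketch of the "random" strategy described above; not aeon's
    actual implementation.
    """
    if len(words) <= max_feature_count:
        return set(words)
    rng = random.Random(seed)  # fixed seed so the selection is reproducible
    return set(rng.sample(sorted(words), max_feature_count))

bag = {f"word_{i}" for i in range(100)}  # a toy bag-of-words vocabulary
kept = random_feature_selection(bag, max_feature_count=10)
print(len(kept))  # 10
```

The "chi2" strategy differs only in how candidates are ranked: words are scored against the class labels and those below p_threshold are dropped instead of being sampled away.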

support_probabilities : bool, default: False
If set to False, a RidgeClassifierCV will be trained, which has higher accuracy
and is faster, yet does not support predict_proba.
14 changes: 7 additions & 7 deletions aeon/classification/dictionary_based/_weasel_v2.py
@@ -34,7 +34,7 @@ class WEASEL_V2(BaseClassifier):
"""
Word Extraction for Time Series Classification (WEASEL) v2.0.

- Overview: Input 'n' series length 'm'
+ Overview: Input ``n`` series length ``m``
Member: Again

WEASEL is a dictionary classifier that builds a bag-of-patterns using SFA
for different window lengths and learns a logistic regression classifier
on this bag.
@@ -72,11 +72,11 @@ class WEASEL_V2(BaseClassifier):
Sets the feature selections strategy to be used. Options from {"chi2_top_k",
"none", "random"}. Large amounts of memory may be needed depending on the
setting of bigrams (true is more) or alpha (larger is more).
- 'chi2_top_k' reduces the number of words to at most 'max_feature_count',
+ ``chi2_top_k`` reduces the number of words to at most 'max_feature_count',
  dropping values based on p-value.
- 'random' reduces the number to at most 'max_feature_count', by randomly
+ ``random`` reduces the number to at most ``max_feature_count``, by randomly
  selecting features.
- 'none' does not apply any feature selection and yields large bag of words
+ ``none`` does not apply any feature selection and yields large bag of words
Comment on lines +75 to +79
Member: Same as other weasel

max_feature_count : int, default=30_000
size of the dictionary - number of words to use - if feature_selection set to
"chi2" or "random". Else ignored.
@@ -290,11 +290,11 @@ class WEASELTransformerV2:
Sets the feature selections strategy to be used. Large amounts of memory may be
needed depending on the setting of bigrams (true is more) or
alpha (larger is more).
- 'chi2_top_k' reduces the number of words to at most 'max_feature_count',
+ ``chi2_top_k`` reduces the number of words to at most ``max_feature_count``,
  dropping values based on p-value.
- 'random' reduces the number to at most 'max_feature_count',
+ ``random`` reduces the number to at most ``max_feature_count``,
  by randomly selecting features.
- 'none' does not apply any feature selection and yields large bag of words
+ ``none`` does not apply any feature selection and yields large bag of words
max_feature_count : int, default=30_000
size of the dictionary - number of words to use - if feature_selection set to
"chi2" or "random". Else ignored.
@@ -83,7 +83,7 @@ def test_proportion_train_in_param_finding():


def test_all_distance_measures():
- """Test the 'all' option of the distance_measures parameter."""
+ """Test the ``all`` option of the distance_measures parameter."""
Member: no need to edit tests

X = np.random.random(size=(10, 1, 10))
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
ee = ElasticEnsemble(distance_measures="all", proportion_train_in_param_finding=0.2)
2 changes: 1 addition & 1 deletion aeon/classification/feature_based/_catch22.py
@@ -67,7 +67,7 @@ class Catch22Classifier(BaseClassifier):
``-1`` means using all processors.
parallel_backend : str, ParallelBackendBase instance or None, default=None
Specify the parallelisation backend implementation in joblib for Catch22,
- if None a 'prefer' value of "threads" is used by default.
+ if None a ``prefer`` value of "threads" is used by default.
Valid options are "loky", "multiprocessing", "threading" or a custom backend.
See the joblib Parallel documentation for more details.
class_weight : {"balanced", "balanced_subsample"}, dict or list of dicts, default=None
2 changes: 1 addition & 1 deletion aeon/classification/hybrid/_hivecote_v1.py
@@ -60,7 +60,7 @@ class HIVECOTEV1(BaseClassifier):
``-1`` means using all processors.
parallel_backend : str, ParallelBackendBase instance or None, default=None
Specify the parallelisation backend implementation in joblib for Catch22,
- if None a 'prefer' value of "threads" is used by default.
+ if None a ``prefer`` value of "threads" is used by default.
Valid options are "loky", "multiprocessing", "threading" or a custom backend.
See the joblib Parallel documentation for more details.

2 changes: 1 addition & 1 deletion aeon/classification/hybrid/_hivecote_v2.py
@@ -59,7 +59,7 @@ class HIVECOTEV2(BaseClassifier):
``-1`` means using all processors.
parallel_backend : str, ParallelBackendBase instance or None, default=None
Specify the parallelisation backend implementation in joblib for Catch22,
- if None a 'prefer' value of "threads" is used by default.
+ if None a ``prefer`` value of "threads" is used by default.
Valid options are "loky", "multiprocessing", "threading" or a custom backend.
See the joblib Parallel documentation for more details.
