[DOC] Fix double back tick inconsistencies in classification module docstrings #2695
base: main
Changes from 4 commits: 0961796, c30b3d2, 9f82db3, b551f5c, f59542f, 9a0eed6
@@ -32,9 +32,9 @@ class TemporalDictionaryEnsemble(BaseClassifier):
    Implementation of the dictionary based Temporal Dictionary Ensemble as described
    in [1]_.

-   Overview: Input 'n' series length 'm' with 'd' dimensions
-   TDE searches 'k' parameter values selected using a Gaussian processes
-   regressor, evaluating each with a LOOCV. It then retains 's'
+   Overview: Input ``n`` series length ``m`` with ``d`` dimensions
+   TDE searches ``k`` parameter values selected using a Gaussian processes
+   regressor, evaluating each with a LOOCV. It then retains ``s``
    ensemble members.
    There are six primary parameters for individual classifiers:
    - alpha: alphabet size

Review comment on lines +35 to +37: These are fine as is. This is the notation used in the paper, not the code.

Review comment: Just noticed "lcoefficients" below. Could you fix that also?
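The kind of sweep this PR performs could be mimicked with a small regex helper. The helper below is purely illustrative and not part of the project; as the review comments note, a blanket rewrite would also hit paper notation and string parameter values such as 'chi2' (which should keep their quotation marks), so its output would still need manual review.

```python
import re

# Hypothetical helper (not part of the repository): rewrite single-quoted
# identifiers in a docstring as RST inline literals, i.e. 'name' -> ``name``.
SINGLE_QUOTED = re.compile(r"'([A-Za-z_][A-Za-z0-9_]*)'")

def suggest_backticks(docstring: str) -> str:
    """Replace 'name' with ``name`` throughout a docstring."""
    return SINGLE_QUOTED.sub(r"``\1``", docstring)

before = "Overview: Input 'n' series length 'm' with 'd' dimensions"
print(suggest_backticks(before))
# -> Overview: Input ``n`` series length ``m`` with ``d`` dimensions
```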
@@ -23,7 +23,7 @@ class WEASEL(BaseClassifier):
    """
    Word Extraction for Time Series Classification (WEASEL).

-   As described in [1]_. Overview: Input 'n' series length 'm'
+   As described in [1]_. Overview: Input ``n`` series length ``m``
    WEASEL is a dictionary classifier that builds a bag-of-patterns using SFA
    for different window lengths and learns a logistic regression classifier
    on this bag.

Review comment: Same here.
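The docstring above describes WEASEL as building a bag-of-patterns and learning a linear classifier on it. As a rough intuition only (real WEASEL uses SFA words over Fourier coefficients, multiple window lengths, bigrams, and feature selection, none of which appear here), a toy bag-of-patterns for a single series might look like:

```python
import numpy as np

# Illustrative sketch only: count discretized sliding windows ("words").
def bag_of_patterns(series, window=4, alphabet="ab"):
    """Build a toy bag-of-patterns from a 1-D series."""
    bag = {}
    for start in range(len(series) - window + 1):
        win = series[start:start + window]
        # binarize each value against the window mean -> a short symbolic word
        word = "".join(alphabet[int(v > win.mean())] for v in win)
        bag[word] = bag.get(word, 0) + 1
    return bag

x = np.array([0.0, 0.1, 0.9, 1.0, 0.2, 0.1, 0.8, 0.9])
print(bag_of_patterns(x))
# -> {'aabb': 2, 'abba': 1, 'bbaa': 1, 'baab': 1}
```

A linear model (WEASEL uses logistic regression or RidgeClassifierCV, per the docstrings) would then be trained on these word counts.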
@@ -74,10 +74,10 @@ class WEASEL(BaseClassifier):
        Sets the feature selections strategy to be used. One of {"chi2", "none",
        "random"}. Large amounts of memory may beneeded depending on the setting of
        bigrams (true is more) or alpha (larger is more).
-       'chi2' reduces the number of words, keeping those above the 'p_threshold'.
-       'random' reduces the number to at most 'max_feature_count',
+       ``chi2`` reduces the number of words, keeping those above the ``p_threshold``.
+       ``random`` reduces the number to at most ``max_feature_count``,
        by randomly selecting features.
-       'none' does not apply any feature selection and yields large bag of words.
+       ``none`` does not apply any feature selection and yields large bag of words.
    support_probabilities : bool, default: False
        If set to False, a RidgeClassifierCV will be trained, which has higher accuracy
        and is faster, yet does not support predict_proba.

Review comment on lines +77 to +80: String parameters should still contain quotation marks even in code style.
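The review asks that string-valued options keep their quotation marks inside the literal markup, i.e. ``"chi2"`` rather than ``chi2``. A hypothetical corrected fragment (the exact wording in the repository may differ) could read:

```python
# Hypothetical corrected fragment for the feature_selection docstring:
# string values keep their quotation marks inside the double backticks.
corrected = (
    'feature_selection : str, default="chi2"\n'
    '    ``"chi2"`` reduces the number of words, keeping those above the\n'
    '    ``p_threshold``. ``"random"`` reduces the number to at most\n'
    '    ``max_feature_count``. ``"none"`` does not apply any feature selection.\n'
)
print(corrected)
```

Rendered by Sphinx, ``"chi2"`` displays as the literal "chi2" with quotes, matching how the value is actually passed to the estimator.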
@@ -34,7 +34,7 @@ class WEASEL_V2(BaseClassifier):
    """
    Word Extraction for Time Series Classification (WEASEL) v2.0.

-   Overview: Input 'n' series length 'm'
+   Overview: Input ``n`` series length ``m``
    WEASEL is a dictionary classifier that builds a bag-of-patterns using SFA
    for different window lengths and learns a logistic regression classifier
    on this bag.

Review comment: Again.
@@ -72,11 +72,11 @@ class WEASEL_V2(BaseClassifier):
        Sets the feature selections strategy to be used. Options from {"chi2_top_k",
        "none", "random"}. Large amounts of memory may be needed depending on the
        setting of bigrams (true is more) or alpha (larger is more).
-       'chi2_top_k' reduces the number of words to at most 'max_feature_count',
+       ``chi2_top_k`` reduces the number of words to at most 'max_feature_count',
        dropping values based on p-value.
-       'random' reduces the number to at most 'max_feature_count', by randomly
+       ``random`` reduces the number to at most ``max_feature_count``, by randomly
        selecting features.
-       'none' does not apply any feature selection and yields large bag of words
+       ``none`` does not apply any feature selection and yields large bag of words
    max_feature_count : int, default=30_000
        size of the dictionary - number of words to use - if feature_selection set to
        "chi2" or "random". Else ignored.

Review comment on lines +75 to +79: Same as other weasel.
@@ -290,11 +290,11 @@ class WEASELTransformerV2:
        Sets the feature selections strategy to be used. Large amounts of memory may be
        needed depending on the setting of bigrams (true is more) or
        alpha (larger is more).
-       'chi2_top_k' reduces the number of words to at most 'max_feature_count',
+       ``chi2_top_k`` reduces the number of words to at most ``max_feature_count``,
        dropping values based on p-value.
-       'random' reduces the number to at most 'max_feature_count',
+       ``random`` reduces the number to at most ``max_feature_count``,
        by randomly selecting features.
-       'none' does not apply any feature selection and yields large bag of words
+       ``none`` does not apply any feature selection and yields large bag of words
    max_feature_count : int, default=30_000
        size of the dictionary - number of words to use - if feature_selection set to
        "chi2" or "random". Else ignored.
@@ -83,7 +83,7 @@ def test_proportion_train_in_param_finding():


 def test_all_distance_measures():
-    """Test the 'all' option of the distance_measures parameter."""
+    """Test the ``all`` option of the distance_measures parameter."""
     X = np.random.random(size=(10, 1, 10))
     y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
     ee = ElasticEnsemble(distance_measures="all", proportion_train_in_param_finding=0.2)

Review comment: No need to edit tests.

Review comment: Don't make this edit here. See #2723 if you want to help solve this issue.