
Commit 0ab30bf

Release v0.10.0 (#1782)
* classification checks in progress
* changelog start
* changelog and fixes
* highlights
* organising class weights
1 parent 0b52cdf commit 0ab30bf

File tree

16 files changed, +384 -66 lines changed

.github/workflows/pr_precommit.yml (+18 -11)

@@ -17,19 +17,8 @@ jobs:
     runs-on: ubuntu-20.04

     steps:
-      - name: Create app token
-        uses: actions/create-github-app-token@v1
-        id: app-token
-        with:
-          app-id: ${{ vars.PR_APP_ID }}
-          private-key: ${{ secrets.PR_APP_KEY }}
-
       - name: Checkout
         uses: actions/checkout@v4
-        with:
-          repository: ${{ github.event.pull_request.head.repo.full_name }}
-          ref: ${{ github.head_ref }}
-          token: ${{ steps.app-token.outputs.token }}

       - name: Setup Python 3.10
         uses: actions/setup-python@v5
@@ -43,6 +32,7 @@ jobs:
       - name: List changed files
         run: echo '${{ steps.changed-files.outputs.all_changed_files }}'

+      # only check the full repository if PR and correctly labelled
       - if: ${{ github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'full pre-commit') }}
         name: Full pre-commit
         uses: pre-commit/[email protected]
@@ -54,6 +44,23 @@ jobs:
         with:
           extra_args: --files ${{ steps.changed-files.outputs.all_changed_files }}

+      # push fixes if pre-commit fails and PR is eligible
+      - if: ${{ failure() && github.event_name == 'pull_request_target' && !github.event.pull_request.draft && !contains(github.event.pull_request.labels.*.name, 'stop pre-commit fixes') }}
+        name: Create app token
+        uses: actions/create-github-app-token@v1
+        id: app-token
+        with:
+          app-id: ${{ vars.PR_APP_ID }}
+          private-key: ${{ secrets.PR_APP_KEY }}
+
+      - if: ${{ failure() && github.event_name == 'pull_request_target' && !github.event.pull_request.draft && !contains(github.event.pull_request.labels.*.name, 'stop pre-commit fixes') }}
+        name: Checkout
+        uses: actions/checkout@v4
+        with:
+          repository: ${{ github.event.pull_request.head.repo.full_name }}
+          ref: ${{ github.head_ref }}
+          token: ${{ steps.app-token.outputs.token }}
+
       - if: ${{ failure() && github.event_name == 'pull_request_target' && !github.event.pull_request.draft && !contains(github.event.pull_request.labels.*.name, 'stop pre-commit fixes') }}
         name: Push pre-commit fixes
         uses: stefanzweifel/git-auto-commit-action@v5

README.md (+1 -1)

@@ -13,7 +13,7 @@ We strive to provide a broad library of time series algorithms including the
 latest advances, offer efficient implementations using numba, and interfaces with other
 time series packages to provide a single framework for algorithm comparison.

-The latest `aeon` release is `v0.9.0`. You can view the full changelog
+The latest `aeon` release is `v0.10.0`. You can view the full changelog
 [here](https://www.aeon-toolkit.org/en/stable/changelog.html).

 Our webpage and documentation is available at https://aeon-toolkit.org.

aeon/__init__.py (+1 -1)

@@ -1,6 +1,6 @@
 """aeon toolkit."""

-__version__ = "0.9.0"
+__version__ = "0.10.0"

 __all__ = ["show_versions"]


aeon/classification/convolution_based/_arsenal.py (+13 -13)

@@ -51,6 +51,17 @@ class Arsenal(BaseClassifier):
         Default of 0 means n_estimators is used.
     contract_max_n_estimators : int, default=100
         Max number of estimators when time_limit_in_minutes is set.
+    class_weight : {“balanced”, “balanced_subsample”}, dict or list of dicts, default=None
+        From sklearn documentation:
+        If not given, all classes are supposed to have weight one.
+        The “balanced” mode uses the values of y to automatically adjust weights
+        inversely proportional to class frequencies in the input data as
+        n_samples / (n_classes * np.bincount(y))
+        The “balanced_subsample” mode is the same as “balanced” except that weights
+        are computed based on the bootstrap sample for every tree grown.
+        For multi-output, the weights of each column of y will be multiplied.
+        Note that these weights will be multiplied with sample_weight (passed through
+        the fit method) if sample_weight is specified.
     n_jobs : int, default=1
         The number of jobs to run in parallel for both `fit` and `predict`.
         ``-1`` means using all processors.
@@ -76,17 +87,6 @@ class Arsenal(BaseClassifier):
         The collections of estimators trained in fit.
     weights_ : list of shape (n_estimators) of float
         Weight of each estimator in the ensemble.
-    class_weight : {“balanced”, “balanced_subsample”}, dict or list of dicts, default=None
-        From sklearn documentation:
-        If not given, all classes are supposed to have weight one.
-        The “balanced” mode uses the values of y to automatically adjust weights
-        inversely proportional to class frequencies in the input data as
-        n_samples / (n_classes * np.bincount(y))
-        The “balanced_subsample” mode is the same as “balanced” except that weights
-        are computed based on the bootstrap sample for every tree grown.
-        For multi-output, the weights of each column of y will be multiplied.
-        Note that these weights will be multiplied with sample_weight (passed through
-        the fit method) if sample_weight is specified.
     n_estimators_ : int
         The number of estimators in the ensemble.

@@ -147,10 +147,10 @@ def __init__(
         self.n_features_per_kernel = n_features_per_kernel
         self.time_limit_in_minutes = time_limit_in_minutes
         self.contract_max_n_estimators = contract_max_n_estimators
-        self.class_weight = class_weight

-        self.random_state = random_state
+        self.class_weight = class_weight
         self.n_jobs = n_jobs
+        self.random_state = random_state

         self.n_cases_ = 0
         self.n_channels_ = 0
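The “balanced” formula quoted in the docstring above, n_samples / (n_classes * np.bincount(y)), is exactly what sklearn computes. A minimal sketch on a made-up label vector (the array `y` below is illustrative, not from the diff):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# toy, imbalanced label vector: six samples of class 0, two of class 1
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])

# the formula from the docstring: n_samples / (n_classes * np.bincount(y))
manual = len(y) / (len(np.unique(y)) * np.bincount(y))

# sklearn's implementation of "balanced" weighting should agree
auto = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y)

print(manual)  # the minority class receives the larger weight
print(auto)
```

The minority class (2 of 8 samples) gets weight 8 / (2 * 2) = 2.0, the majority class 8 / (2 * 6) ≈ 0.67, so errors on rare classes cost proportionally more during fitting.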

aeon/classification/convolution_based/_hydra.py (+1 -1)

@@ -88,7 +88,7 @@ class HydraClassifier(BaseClassifier):
     }

     def __init__(
-        self, n_kernels=8, n_groups=64, n_jobs=1, class_weight=None, random_state=None
+        self, n_kernels=8, n_groups=64, class_weight=None, n_jobs=1, random_state=None
     ):
         self.n_kernels = n_kernels
         self.n_groups = n_groups

aeon/classification/convolution_based/_mr_hydra.py (+1 -1)

@@ -82,7 +82,7 @@ class MultiRocketHydraClassifier(BaseClassifier):
     }

     def __init__(
-        self, n_kernels=8, n_groups=64, n_jobs=1, class_weight=None, random_state=None
+        self, n_kernels=8, n_groups=64, class_weight=None, n_jobs=1, random_state=None
     ):
         self.n_kernels = n_kernels
         self.n_groups = n_groups

aeon/classification/convolution_based/_rocket_classifier.py (+8 -7)

@@ -57,14 +57,14 @@ class RocketClassifier(BaseClassifier):
         For multi-output, the weights of each column of y will be multiplied.
         Note that these weights will be multiplied with sample_weight (passed through
         the fit method) if sample_weight is specified.
+    n_jobs : int, default=1
+        The number of jobs to run in parallel for both `fit` and `predict`.
+        ``-1`` means using all processors.
     random_state : int, RandomState instance or None, default=None
         If `int`, random_state is the seed used by the random number generator;
         If `RandomState` instance, random_state is the random number generator;
         If `None`, the random number generator is the `RandomState` instance used
         by `np.random`.
-    n_jobs : int, default=1
-        The number of jobs to run in parallel for both `fit` and `predict`.
-        ``-1`` means using all processors.

     Attributes
     ----------
@@ -116,19 +116,20 @@ def __init__(
         rocket_transform="rocket",
         max_dilations_per_kernel=32,
         n_features_per_kernel=4,
-        class_weight=None,
         estimator=None,
-        random_state=None,
+        class_weight=None,
         n_jobs=1,
+        random_state=None,
     ):
         self.num_kernels = num_kernels
         self.rocket_transform = rocket_transform
         self.max_dilations_per_kernel = max_dilations_per_kernel
         self.n_features_per_kernel = n_features_per_kernel
-        self.random_state = random_state
-        self.class_weight = class_weight
         self.estimator = estimator
+
+        self.class_weight = class_weight
         self.n_jobs = n_jobs
+        self.random_state = random_state

         super().__init__()

aeon/classification/dictionary_based/_muse.py (+4 -3)

@@ -162,19 +162,20 @@ def __init__(
         self.word_lengths = [4, 6]
         self.bigrams = bigrams
         self.binning_strategies = ["equi-width", "equi-depth"]
-        self.random_state = random_state
         self.min_window = 6
         self.max_window = 100
         self.window_inc = window_inc
         self.window_sizes = []
         self.SFA_transformers = []
         self.clf = None
-        self.n_jobs = n_jobs
         self.support_probabilities = support_probabilities
         self.total_features_count = 0
-        self.class_weight = class_weight
         self.feature_selection = feature_selection

+        self.class_weight = class_weight
+        self.n_jobs = n_jobs
+        self.random_state = random_state
+
         super().__init__()

     def _fit(self, X, y):

aeon/classification/dictionary_based/_weasel.py (+6 -3)

@@ -144,10 +144,10 @@ def __init__(
         window_inc=2,
         p_threshold=0.05,
         alphabet_size=4,
-        n_jobs=1,
         feature_selection="chi2",
         support_probabilities=False,
         class_weight=None,
+        n_jobs=1,
         random_state=None,
     ):
         self.alphabet_size = alphabet_size
@@ -158,7 +158,6 @@ def __init__(
         self.word_lengths = [4, 6]
         self.bigrams = bigrams
         self.binning_strategy = binning_strategy
-        self.random_state = random_state
         self.min_window = 6
         self.max_window = 100
         self.feature_selection = feature_selection
@@ -169,10 +168,14 @@ def __init__(
         self.n_cases = 0
         self.SFA_transformers = []
         self.clf = None
-        self.n_jobs = n_jobs
         self.support_probabilities = support_probabilities
+
+        self.random_state = random_state
+        self.n_jobs = n_jobs
         self.class_weight = class_weight
+
         set_num_threads(n_jobs)
+
         super().__init__()

     def _fit(self, X, y):

aeon/classification/dictionary_based/_weasel_v2.py (+5 -8)

@@ -138,24 +138,21 @@ def __init__(
         use_first_differences=(True, False),
         feature_selection="chi2_top_k",
         max_feature_count=30_000,
-        random_state=None,
         class_weight=None,
-        n_jobs=4,
+        n_jobs=1,
+        random_state=None,
     ):
         self.norm_options = norm_options
         self.word_lengths = word_lengths
-
-        self.random_state = random_state
-
         self.min_window = min_window
-
         self.max_feature_count = max_feature_count
         self.use_first_differences = use_first_differences
         self.feature_selection = feature_selection
-        self.class_weight = class_weight
-
         self.clf = None
+
+        self.class_weight = class_weight
         self.n_jobs = n_jobs
+        self.random_state = random_state

         super().__init__()

aeon/classification/interval_based/_quant.py (+2 -2)

@@ -91,14 +91,14 @@ def __init__(
         interval_depth=6,
         quantile_divisor=4,
         estimator=None,
-        random_state=None,
         class_weight=None,
+        random_state=None,
     ):
         self.interval_depth = interval_depth
         self.quantile_divisor = quantile_divisor
         self.estimator = estimator
-        self.random_state = random_state
         self.class_weight = class_weight
+        self.random_state = random_state
         super().__init__()

     def _fit(self, X, y):

aeon/classification/shapelet_based/_rdst.py (+5 -4)

@@ -66,6 +66,8 @@ class RDSTClassifier(BaseClassifier):
     estimator : BaseEstimator or None, default=None
         Base estimator for the ensemble, can be supplied a sklearn `BaseEstimator`. If
         `None` a default `RidgeClassifierCV` classifier is used with standard scalling.
+    save_transformed_data : bool, default=False
+        If True, the transformed training dataset for all classifiers will be saved.
     class_weight : {“balanced”, “balanced_subsample”}, dict or list of dicts, default=None
         Only applies if estimator is None, and the default is used.
         From sklearn documentation:
@@ -78,8 +80,6 @@ class RDSTClassifier(BaseClassifier):
         For multi-output, the weights of each column of y will be multiplied.
         Note that these weights will be multiplied with sample_weight (passed through
         the fit method) if sample_weight is specified.
-    save_transformed_data : bool, default=False
-        If True, the transformed training dataset for all classifiers will be saved.
     n_jobs : int, default=1
         The number of jobs to run in parallel for both ``fit`` and ``predict``.
         `-1` means using all processors.
@@ -147,10 +147,10 @@ def __init__(
         threshold_percentiles=None,
         alpha_similarity: float = 0.5,
         use_prime_dilations: bool = False,
+        distance: str = "manhattan",
         estimator=None,
         save_transformed_data: bool = False,
         class_weight=None,
-        distance: str = "manhattan",
         n_jobs: int = 1,
         random_state: Union[int, Type[np.random.RandomState], None] = None,
     ) -> None:
@@ -160,12 +160,13 @@ def __init__(
         self.threshold_percentiles = threshold_percentiles
         self.alpha_similarity = alpha_similarity
         self.use_prime_dilations = use_prime_dilations
-        self.class_weight = class_weight
         self.distance = distance
         self.estimator = estimator
         self.save_transformed_data = save_transformed_data
+        self.class_weight = class_weight
         self.random_state = random_state
         self.n_jobs = n_jobs
+
         self.transformed_data_ = []

         self._transformer = None
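As the RDST docstring notes, class_weight only applies when the default RidgeClassifierCV estimator is used, so the value is ultimately handled by sklearn. A hedged sketch of the effect using sklearn directly (the toy data below is made up for illustration; the aeon classifiers forward class_weight analogously):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(0)

# imbalanced toy problem: 90 samples of class 0, 10 of class 1
X = rng.normal(size=(100, 4))
y = np.array([0] * 90 + [1] * 10)
X[y == 1] += 1.5  # shift the minority class so it is partly separable

# "balanced" reweights classes by n_samples / (n_classes * np.bincount(y))
clf = RidgeClassifierCV(class_weight="balanced").fit(X, y)
unweighted = RidgeClassifierCV().fit(X, y)

# compare minority-class recall with and without weighting
print((clf.predict(X)[y == 1] == 1).mean())
print((unweighted.predict(X)[y == 1] == 1).mean())
```

With heavy imbalance, the unweighted fit tends toward the majority class; the balanced fit typically recovers more of the minority class at the cost of some majority-class accuracy.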

build_tools/pr_labeler.py (+8 -8)

@@ -31,17 +31,17 @@
 title = pr.title

 title_regex_to_labels = [
-    (r"\bENH\b", "enhancement"),
-    (r"\bMNT\b", "maintenance"),
-    (r"\bBUG\b", "bug"),
-    (r"\bDOC\b", "documentation"),
-    (r"\bREF\b", "refactor"),
-    (r"\bDEP\b", "deprecation"),
-    (r"\bGOV\b", "governance"),
+    (r"\benh\b", "enhancement"),
+    (r"\bmnt\b", "maintenance"),
+    (r"\bbug\b", "bug"),
+    (r"\bdoc\b", "documentation"),
+    (r"\bref\b", "refactor"),
+    (r"\bdep\b", "deprecation"),
+    (r"\bgov\b", "governance"),
 ]

 title_labels = [
-    label for regex, label in title_regex_to_labels if re.search(regex, title)
+    label for regex, label in title_regex_to_labels if re.search(regex, title.lower())
 ]
 title_labels_to_add = list(set(title_labels) - set(labels))
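The labeler change above lowercases both the patterns and the title, making tag matching case-insensitive while \b still requires whole-word matches. A standalone sketch of the same logic (`labels_for` is a hypothetical helper, not part of the script; `re.IGNORECASE` would be an equivalent alternative):

```python
import re

# patterns mirrored from the updated labeler (subset shown)
title_regex_to_labels = [
    (r"\benh\b", "enhancement"),
    (r"\bbug\b", "bug"),
    (r"\bdoc\b", "documentation"),
]

def labels_for(title: str) -> list:
    """Return labels whose tag appears as a whole word in the PR title."""
    return [
        label
        for regex, label in title_regex_to_labels
        if re.search(regex, title.lower())
    ]

print(labels_for("[ENH] Add new classifier"))  # ['enhancement']
print(labels_for("bug in DOC build"))          # ['bug', 'documentation']
print(labels_for("Debug logging"))             # [] ; \b stops matches inside "debug"
```

Lowercasing the title once is equivalent to matching each lowercase pattern case-insensitively, and the word boundaries prevent false positives such as "debug" triggering the bug label.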

docs/changelog.md (+3 -2)

@@ -1,14 +1,15 @@
 # Changelog

 All notable changes to this project will be documented in this file. The format is
-based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and we adhere
+based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and we adhere
 to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). The source code for
 all [releases](https://github.com/aeon-toolkit/aeon/releases) is available on GitHub.

-To stay up-to-date with aeon releases, subscribe to aeon
+To stay up to date with aeon releases, subscribe to aeon
 [here](https://libraries.io/pypi/aeon) or follow us on
 [Twitter](https://twitter.com/aeon_toolbox).

+- [Version 0.10.0](changelogs/v0.10.md)
 - [Version 0.9.0](changelogs/v0.9.md)
 - [Version 0.8.0](changelogs/v0.8.md)
 - [Version 0.7.0](changelogs/v0.7.md)
