
Commit a75496a

fix(docs): Improve the tracking example (#825)
Fix #821
1 parent b6e7e49 commit a75496a

File tree

1 file changed (+26, -11 lines)


examples/plot_04_tracking_items.py

Lines changed: 26 additions & 11 deletions
@@ -156,25 +156,34 @@
 # Here, let us see how we can use the tracking of items with this function.
 
 # %%
-# We run several cross-validations using several values of a hyperparameter:
+# Let us load some data:
 
 # %%
 from sklearn import datasets
-from sklearn.linear_model import Lasso
+
+X, y = datasets.load_diabetes(return_X_y=True)
+X, y = X[:150], y[:150]
+
+# %%
+# Suppose that some users are coding in their draft notebook and are iterating on some
+# cross-validations in separate cells and forgot to store the intermediate results:
+
+# %%
 import skore
+from sklearn.linear_model import Lasso
 
-diabetes = datasets.load_diabetes()
-X = diabetes.data[:150]
-y = diabetes.target[:150]
-lasso = Lasso()
+skore.cross_validate(Lasso(alpha=0.5), X, y, cv=5, project=my_project)
 
-for alpha in [0.5, 1, 2]:
-    cv_results = skore.cross_validate(
-        Lasso(alpha=alpha), X, y, cv=5, project=my_project
-    )
+# %%
+skore.cross_validate(Lasso(alpha=1), X, y, cv=5, project=my_project)
+
+# %%
+skore.cross_validate(Lasso(alpha=2), X, y, cv=5, project=my_project)
 
 # %%
-# We can compare the metrics of each run of the cross-validation (on all splits):
+# Thanks to the storage of the results by :func:`skore.cross_validate` (done when
+# specifying the ``project`` argument), we can compare the metrics of each
+# cross-validation run (on all splits):
 
 # %%
 fig_plotly = my_project.get_item("cross_validation_aggregated").plot
@@ -183,6 +192,12 @@
 # %%
 # Hence, we can observe that the first run, with ``alpha=0.5``, works better.
 
+# %%
+# .. note::
+#    The good practice, instead of running several cross-validations with different
+#    values of the hyperparameter, would have been to use a
+#    :func:`sklearn.model_selection.GridSearchCV`.
+
 # %%
 # Cleanup the project
 # -------------------
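
For reference, the note added at the end of the hunk points to GridSearchCV as the better way to sweep the hyperparameter. Below is a minimal sketch of that alternative, reusing the same data subset and alpha values as the example; the variable names and the final print are illustrative and not part of the commit:

from sklearn import datasets
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Same subset of the diabetes data as in the example.
X, y = datasets.load_diabetes(return_X_y=True)
X, y = X[:150], y[:150]

# One grid search over the three alpha values instead of three separate
# cross-validation runs.
search = GridSearchCV(Lasso(), param_grid={"alpha": [0.5, 1, 2]}, cv=5)
search.fit(X, y)
print(search.best_params_)  # alpha with the best mean cross-validated score

GridSearchCV keeps the per-candidate scores in search.cv_results_, which covers the comparison that the example otherwise does manually through the stored skore items.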
