1 change: 1 addition & 0 deletions keras/src/callbacks/early_stopping.py
@@ -88,6 +88,7 @@ def on_train_begin(self, logs=None):
# Allow instances to be re-used
self.wait = 0
self.stopped_epoch = 0
self.best = None
medium

While this change is correct, for consistency with other callbacks like ReduceLROnPlateau, consider extracting the state-resetting logic from on_train_begin (lines 89-93) into a private _reset() method. This would align with the Keras API design guideline of having similar APIs for objects that perform similar functions, improving code clarity and maintainability.

For example:

def on_train_begin(self, logs=None):
    self._reset()

def _reset(self):
    # Allow instances to be re-used
    self.wait = 0
    self.stopped_epoch = 0
    self.best = None
    self.best_weights = None
    self.best_epoch = 0
References
  1. Objects that do interchangeable things should have identical or very close APIs. Both EarlyStopping and ReduceLROnPlateau are callbacks that monitor metrics and reset state. Using a _reset method in both would improve consistency. (link)
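To illustrate why resetting `best` in `on_train_begin` matters, here is a minimal sketch in plain Python with no Keras dependency. The class and method names are simplified stand-ins for illustration only, not the actual Keras implementation:

```python
class MiniEarlyStopping:
    """Simplified stand-in for an EarlyStopping-style callback."""

    def __init__(self, patience=2):
        self.patience = patience
        self._reset()

    def _reset(self):
        # Allow instances to be re-used across fit() calls.
        self.wait = 0
        self.stopped_epoch = 0
        # Without this line, a stale best from a previous run leaks
        # into the next run and triggers premature stopping.
        self.best = None

    def on_train_begin(self):
        self._reset()

    def update(self, epoch, loss):
        """Record one epoch's loss; return True if training should stop."""
        if self.best is None or loss < self.best:
            self.best = loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                return True
        return False


# First run reaches a very low loss.
cb = MiniEarlyStopping(patience=2)
cb.on_train_begin()
for epoch, loss in enumerate([0.5, 0.1, 0.01]):
    cb.update(epoch, loss)

# Second run with a fresh model starts from a higher loss; because
# on_train_begin() resets best, the new model is not compared
# against the stale 0.01 from the previous run.
cb.on_train_begin()
stopped = any(cb.update(e, l) for e, l in enumerate([0.9, 0.8, 0.7, 0.6]))
# stopped is False and cb.best tracks the second run's own minimum.
```

If `self.best = None` were dropped from `_reset`, every loss in the second run would look worse than the stale `0.01`, `wait` would hit `patience` after two epochs, and training would stop almost immediately, which is exactly the bug the PR fixes.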

self.best_weights = None
self.best_epoch = 0

25 changes: 25 additions & 0 deletions keras/src/callbacks/early_stopping_test.py
@@ -126,6 +126,31 @@ def test_early_stopping_reuse(self):
)
self.assertGreaterEqual(len(history2.epoch), patience)

@pytest.mark.requires_trainable_backend
def test_early_stopping_reuse_across_models(self):
# Regression test for https://github.com/keras-team/keras/issues/20256
# When reusing EarlyStopping across multiple model.fit() calls with
# different models, self.best must be reset so that the new model
# isn't compared against a stale best value from a previous run.
patience = 5
data = np.random.random((100, 1))
labels = np.where(data > 0.5, 1, 0)

stopper = callbacks.EarlyStopping(monitor="loss", patience=patience)

for _ in range(3):
model = models.Sequential(
[
layers.Dense(10, activation="relu"),
layers.Dense(1, activation="sigmoid"),
]
)
model.compile(optimizer="sgd", loss="binary_crossentropy")
history = model.fit(
data, labels, callbacks=[stopper], verbose=0, epochs=50
)
self.assertGreater(len(history.epoch), patience)

@pytest.mark.requires_trainable_backend
def test_early_stopping_with_baseline(self):
baseline = 0.6
1 change: 1 addition & 0 deletions keras/src/callbacks/reduce_lr_on_plateau.py
@@ -74,6 +74,7 @@ def __init__(

def _reset(self):
"""Resets wait counter and cooldown counter."""

medium

With the addition of resetting self.best, the docstring for the _reset method is now incomplete. Please update it to reflect that best is also reset.

Suggested change
-    """Resets wait counter and cooldown counter."""
+    """Resets wait counter, cooldown counter, and best value."""
References
  1. Docstrings should be kept up-to-date and accurately describe what the code does. The docstring for _reset should be updated to include the fact that self.best is also being reset. (link)

self.best = None
self.cooldown_counter = 0
self.wait = 0
