
Conversation

@kathyfan

... a model.

This is useful for collecting information on how long model checkpointing takes, together with existing verbose logging on initiating the save. This is similar to the "calling checkpoint listeners before/after saving checkpoint xxx" logging that existed in the TF1 session run hook.

@google-cla

google-cla bot commented Oct 31, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.

@gemini-code-assist
Contributor

Summary of Changes

Hello @kathyfan, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the ModelCheckpoint callback by introducing a new verbose logging statement. This log message will be displayed upon the successful completion of a model save operation, providing users with clearer feedback on the checkpointing process. This addition is particularly useful for monitoring the duration of model saving, complementing existing logs that indicate the initiation of a save.

Highlights

  • Verbose Logging: A new verbose log message has been added to the ModelCheckpoint callback, which will now explicitly indicate when a model has finished saving, including the epoch number and the save path.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request adds a verbose log message that is displayed after a model checkpoint is saved. This is a useful addition for monitoring and timing the checkpointing process. My review identifies one issue with the implementation where the new "finished saving" log message is not always paired with a "starting to save" message, leading to inconsistent output in certain edge cases. I've left a comment with details on the issue and a suggestion for how to resolve it to ensure the logging behavior is consistent and intuitive for users.

Comment on lines 286 to 289
    if self.verbose > 0:
        io_utils.print_msg(
            f"\nEpoch {epoch + 1}: finished saving model to {filepath}"
        )


Severity: medium

This new log message is useful for timing the save operation. However, in its current form, it can lead to inconsistent logging output.

The "finished saving" message will be printed whenever a model is saved and verbose > 0. However, the corresponding "starting to save" message is not always printed under the same conditions.

Specifically, in _should_save_model, when save_best_only=True and the monitor metric is unavailable or not a scalar, the method returns True (triggering a save) but only issues a warning, not a verbose log message. This results in an "end" log without a "start" log, which could be confusing for users.

To ensure consistent logging, a "starting to save" message should also be logged in these fallback cases within _should_save_model. This would ensure that every "finished saving" log is paired with a "saving" log when verbose > 0.

@codecov-commenter

codecov-commenter commented Oct 31, 2025

Codecov Report

❌ Patch coverage is 0% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.66%. Comparing base (adbfd13) to head (6d52d6e).
⚠️ Report is 12 commits behind head on master.

Files with missing lines Patch % Lines
keras/src/callbacks/model_checkpoint.py 0.00% 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21805      +/-   ##
==========================================
+ Coverage   82.63%   82.66%   +0.02%     
==========================================
  Files         577      577              
  Lines       59318    59421     +103     
  Branches     9300     9314      +14     
==========================================
+ Hits        49019    49121     +102     
+ Misses       7911     7899      -12     
- Partials     2388     2401      +13     
Flag Coverage Δ
keras 82.48% <0.00%> (+0.02%) ⬆️
keras-jax 63.32% <0.00%> (+<0.01%) ⬆️
keras-numpy 57.56% <0.00%> (+0.01%) ⬆️
keras-openvino 34.34% <0.00%> (+0.04%) ⬆️
keras-tensorflow 64.13% <0.00%> (+0.01%) ⬆️
keras-torch 63.62% <0.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


Updated formatting for line length.


3 participants