Conversation

@0xManan 0xManan commented Jan 20, 2026

TFSMLayer currently loads external TensorFlow SavedModels during deserialization
without respecting Keras `safe_mode`. Although TensorFlow does not execute
SavedModel functions at load time, the attacker-controlled graph is registered
during deserialization and executed during normal model invocation, violating
the security guarantees of `safe_mode=True`.

This change disallows instantiation of `TFSMLayer` when `safe_mode` is enabled,
both during direct construction and during deserialization via `from_config()`.
This matches the existing security model used by other potentially unsafe Keras
components (e.g. Lambda layers) and prevents loading of untrusted executable
graph artifacts without explicit user opt-in.

Specifically:

  • Add a `safe_mode` check in `TFSMLayer.__init__` to prevent loading external
    SavedModels when unsafe deserialization is not explicitly enabled.
  • Override `from_config()` to block deserialization of `TFSMLayer` when
    `safe_mode=True`, preventing config-based gadget abuse.
  • Provide clear error messages guiding users to explicitly opt out via
    `safe_mode=False` or `keras.config.enable_unsafe_deserialization()` when the
    source is trusted.
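For concreteness, the gadget being blocked is a serialized layer entry inside an untrusted `.keras` archive that points at an attacker-supplied SavedModel. A rough sketch of its shape (field names follow the TFSMLayer API; the path value is illustrative, not taken from the report):

```python
# Illustrative shape of the attacker-controlled layer entry inside a
# .keras archive's config. Under safe_mode=True, deserializing this back
# into a TFSMLayer is now refused instead of registering the graph.
malicious_entry = {
    "class_name": "TFSMLayer",
    "config": {
        # Attacker-chosen SavedModel directory bundled with the archive.
        "filepath": "assets/payload_saved_model",
        "call_endpoint": "serving_default",
    },
}

print(malicious_entry["class_name"])
```

Nothing in this dictionary runs at parse time; the risk only materializes when the config is turned back into a layer, which is exactly the step the fix gates.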

This change preserves backward compatibility for trusted workflows while
closing a `safe_mode` bypass that could otherwise lead to attacker-controlled
graph execution during inference.

Security report:
https://huntr.com/bounties/7e78d6f1-6977-4300-b595-e81bdbda331c

0xManan and others added 5 commits January 20, 2026 15:08
Added safe mode checks for loading TFSMLayer from external SavedModels.
…avedModel execution

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @0xManan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical security vulnerability in TFSMLayer where external TensorFlow SavedModels could be deserialized and executed even when Keras safe_mode was enabled. The changes enforce safe_mode during both direct instantiation and config-based deserialization of TFSMLayer, preventing the loading of untrusted executable graph artifacts by default. This enhances the security guarantees of Keras without breaking backward compatibility for trusted workflows.

Highlights

  • TFSMLayer safe_mode enforcement: TFSMLayer deserialization now respects Keras safe_mode to prevent loading external TensorFlow SavedModels that could contain attacker-controlled executable graph code, addressing a security vulnerability.
  • Direct instantiation protection: A safe_mode check has been added to TFSMLayer.__init__ to disallow direct construction of the layer when safe_mode is enabled, ensuring immediate protection.
  • Config-based deserialization: The from_config() method for TFSMLayer has been overridden to block deserialization when safe_mode=True, mirroring the security model of other Keras components like Lambda layers.
  • User guidance: Clear error messages are provided to guide users on how to explicitly opt out of safe_mode (via safe_mode=False or keras.config.enable_unsafe_deserialization()) when the source of the SavedModel is trusted.


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a security enhancement by disallowing the deserialization of TFSMLayer when safe_mode is enabled. This prevents the execution of potentially malicious code from external TensorFlow SavedModels. The changes are implemented in both the __init__ method for direct instantiation and a new from_config method for deserialization flows. The implementation is clean, the logic for handling safe_mode is robust, and the error messages provided are clear and helpful for users. Overall, this is a solid improvement that strengthens the security posture of Keras.

@codecov-commenter

codecov-commenter commented Jan 20, 2026

Codecov Report

❌ Patch coverage is 42.85714% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.70%. Comparing base (0dd27da) to head (fc58998).
⚠️ Report is 1 commit behind head on master.

Files with missing lines          Patch %   Lines
keras/src/export/tfsm_layer.py    42.85%    4 Missing ⚠️

❗ There is a different number of reports uploaded between BASE (0dd27da) and HEAD (fc58998). Click for more details.

HEAD has 2 fewer uploads than BASE
Flag               BASE (0dd27da)   HEAD (fc58998)
keras              5                4
keras-tensorflow   1                0
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #22035      +/-   ##
==========================================
- Coverage   82.74%   76.70%   -6.04%     
==========================================
  Files         592      592              
  Lines       62142    62127      -15     
  Branches     9735     9732       -3     
==========================================
- Hits        51417    47656    -3761     
- Misses       8200    11914    +3714     
- Partials     2525     2557      +32     
Flag               Coverage Δ
keras              76.57% <42.85%> (-5.99%) ⬇️
keras-jax          62.46% <42.85%> (-0.01%) ⬇️
keras-numpy        56.55% <42.85%> (-0.01%) ⬇️
keras-openvino     37.42% <42.85%> (+0.01%) ⬆️
keras-tensorflow   ?
keras-torch        62.46% <42.85%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


@0xManan
Author

0xManan commented Jan 20, 2026

@hertschuh I have made the changes to the current implementation for the fix. Please review. I have tested on my end; loading now fails with this error:
[screenshot of the raised error]

Enable unsafe deserialization for TFSM Layer tests.
The safe_mode check should only be in from_config(), not __init__().

Direct instantiation (TFSMLayer(filepath=...)) is a legitimate use case
where the user explicitly creates the layer. The security concern is
only during deserialization of untrusted .keras files, which goes
through from_config().

An attacker can still craft a malicious .keras file, but victims are now
blocked from loading it whenever safe_mode=True.
Add comprehensive tests for TFSMLayer safe_mode behavior:
- test_safe_mode_direct_instantiation_allowed: Verifies direct
  TFSMLayer instantiation works as expected
- test_safe_mode_from_config_blocked: Verifies from_config() raises
  ValueError when safe_mode=True
- test_safe_mode_from_config_allowed_when_disabled: Verifies
  from_config() works with safe_mode=False
- test_safe_mode_model_loading_blocked: Tests the full attack scenario
  where loading a .keras file with safe_mode=True is blocked
Updated test docstrings for clarity on instantiation and loading behavior.
Added model invocation with random input to tests for TFSMLayer.
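The split described in the commit messages above, with direct construction allowed but config-driven deserialization gated, can be sketched with a self-contained stand-in class (illustrative only; the real check lives in `TFSMLayer.from_config` and also consults Keras's global serialization state):

```python
class TFSMLayerStandIn:
    """Toy stand-in for TFSMLayer; loads nothing, just records the path."""

    def __init__(self, filepath):
        # Direct instantiation is an explicit user action, so it stays open.
        self.filepath = filepath

    @classmethod
    def from_config(cls, config, safe_mode=True):
        # Only the deserialization path used when loading .keras files is
        # gated; this is where an attacker-controlled config would arrive.
        if safe_mode:
            raise ValueError(
                "Requested the deserialization of a `TFSMLayer`, which "
                "carries a potential risk of arbitrary code execution. "
                "Pass `safe_mode=False` if you trust the source."
            )
        return cls(**config)


layer = TFSMLayerStandIn("exported_model")  # allowed: explicit user action
print(layer.filepath)
```

The same config that raises under the default then round-trips cleanly with `safe_mode=False`, which is the behavior the tests listed above exercise.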
@hertschuh hertschuh (Collaborator) left a comment

Thank you for the fix!

You'll also need to change the unit tests:

Comment on lines 175 to 180
"Loading a TFSMLayer from config is disallowed when "
"`safe_mode=True` because it loads an external SavedModel "
"that may contain attacker-controlled executable graph code. "
"If you trust the source, pass `safe_mode=False` to the "
"loading function, or call "
"`keras.config.enable_unsafe_deserialization()`."

Let's make the error message a bit more consistent with the Lambda layer one:

                "Requested the deserialization of a `TFSMLayer`, which "
                "loads an external SavedModel. This carries a potential risk "
                "of arbitrary code execution and thus it is disallowed by "
                "default. If you trust the source of the artifact, you can "
                "override this error by passing `safe_mode=False` to the "
                "loading function, or calling "
                "`keras.config.enable_unsafe_deserialization()`."

@keras_export("keras.layers.TFSMLayer")
class TFSMLayer(layers.Layer):
    """Reload a Keras model/layer that was saved via SavedModel / ExportArchive.

Undo all the spurious changes here: these empty lines that were removed should stay.


Please undo all the empty line removal in this file. This is what is making the "code format" test fail.

saved_model.export_saved_model(model, temp_filepath)
reloaded_layer = tfsm_layer.TFSMLayer(temp_filepath)
self.assertAllClose(reloaded_layer(ref_input), ref_output, atol=1e-7)


Almost all of the changes in this file are spurious changes (line re-wrapping, comments being removed).
Can you undo all of those?

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request effectively addresses a critical security vulnerability by preventing the deserialization of TFSMLayer in safe_mode. The implementation of the from_config method with robust safe_mode checks and clear, actionable error messages significantly enhances the security posture of the Keras API. The added tests thoroughly validate the new behavior, ensuring that the safe_mode mechanism works as intended, blocking untrusted SavedModel execution by default while allowing explicit opt-out for trusted sources. The changes align well with the repository's guidelines for API design and error handling.

Comment on lines +134 to +165
@classmethod
def from_config(cls, config, custom_objects=None, safe_mode=True):
    """Creates a TFSMLayer from its config.

    Args:
        config: A Python dictionary, typically the output of `get_config`.
        custom_objects: Optional dictionary mapping names to custom objects.
        safe_mode: Boolean, whether to disallow loading TFSMLayer.
            When `safe_mode=True`, loading is disallowed because TFSMLayer
            loads external SavedModels that may contain attacker-controlled
            executable graph code. Defaults to `True`.

    Returns:
        A TFSMLayer instance.
    """
    # Follow the same pattern as the Lambda layer for safe_mode handling.
    effective_safe_mode = (
        safe_mode
        if safe_mode is not None
        else serialization_lib.in_safe_mode()
    )

    if effective_safe_mode is not False:
        raise ValueError(
            "Requested the deserialization of a `TFSMLayer`, which "
            "loads an external SavedModel. This carries a potential risk "
            "of arbitrary code execution and thus it is disallowed by "
            "default. If you trust the source of the artifact, you can "
            "override this error by passing `safe_mode=False` to the "
            "loading function, or calling "
            "`keras.config.enable_unsafe_deserialization()`."
        )

    return cls(**config)

critical

The from_config class method is a critical addition for enforcing safe_mode during deserialization of TFSMLayer. The logic correctly checks the effective_safe_mode and raises a ValueError if unsafe deserialization is attempted without explicit opt-out. This directly addresses the security vulnerability.

The error message provided is excellent, clearly stating what happened, what was expected, and how the user can fix the issue, which aligns with the Repository Style Guide (lines 137 and 154).
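One subtlety in the quoted implementation is the tri-state handling of `safe_mode`: an explicit `True`/`False` from the caller wins, while `None` defers to the process-wide setting, which is what `keras.config.enable_unsafe_deserialization()` flips. Isolated as a sketch, with `in_safe_mode` standing in for `serialization_lib.in_safe_mode()`:

```python
def resolve_safe_mode(safe_mode, in_safe_mode=lambda: True):
    # Explicit caller choice takes precedence; None means "defer to the
    # global serialization state", mirroring the Lambda-layer pattern.
    return safe_mode if safe_mode is not None else in_safe_mode()


print(resolve_safe_mode(None))                 # global default applies
print(resolve_safe_mode(False))                # explicit opt-out wins
print(resolve_safe_mode(None, lambda: False))  # global opt-out applies
```

Note the implementation then raises unless the resolved value is exactly `False`, so any truthy or unresolved state stays on the safe path.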

Comment on lines +133 to +163
def test_safe_mode_blocks_model_loading(self):
    temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")

    # Create and export a model
    model = get_model()
    model(tf.random.normal((1, 10)))
    saved_model.export_saved_model(model, temp_filepath)

    # Wrap SavedModel in TFSMLayer and save as .keras
    reloaded_layer = tfsm_layer.TFSMLayer(temp_filepath)
    wrapper_model = models.Sequential([reloaded_layer])

    model_path = os.path.join(self.get_temp_dir(), "tfsm_model.keras")
    wrapper_model.save(model_path)

    # Default safe_mode=True should block loading
    with self.assertRaisesRegex(
        ValueError,
        "arbitrary code execution",
    ):
        saving_lib.load_model(model_path)

    # Explicit opt-out should allow loading
    loaded_model = saving_lib.load_model(
        model_path,
        custom_objects={"TFSMLayer": tfsm_layer.TFSMLayer},
        safe_mode=False,
    )

    x = tf.random.normal((2, 10))
    self.assertAllClose(loaded_model(x), wrapper_model(x))

critical

The new test_safe_mode_blocks_model_loading test case is essential. It thoroughly verifies the core security fix by asserting that TFSMLayer deserialization is blocked by default when safe_mode=True and successfully proceeds when safe_mode=False is explicitly provided. This provides strong confidence in the effectiveness of the implemented security measure.

from keras.src import layers
from keras.src.api_export import keras_export
from keras.src.export.saved_model import _list_variables_used_by_fns
from keras.src.saving import serialization_lib

medium

The addition of from keras.src.saving import serialization_lib is necessary for implementing the safe_mode checks during deserialization, ensuring proper integration with Keras's serialization framework.

Comment on lines +117 to +119
rereloaded_layer = tfsm_layer.TFSMLayer.from_config(
    config, safe_mode=False
)

medium

Updating the from_config call to explicitly pass safe_mode=False is crucial for maintaining the functionality of existing serialization tests while adhering to the new security measures. This ensures that trusted deserialization paths are still testable.

Comment on lines +129 to 130
safe_mode=False,
)

medium

Similar to the from_config change, explicitly setting safe_mode=False for saving_lib.load_model in this test is necessary to confirm that model loading works as expected when the source is trusted and the security check is bypassed.

@hertschuh hertschuh (Collaborator) left a comment

Can you check why the tests are failing?

You can test locally with:

KERAS_BACKEND=tensorflow pytest keras/src/export/tfsm_layer_test.py

@keras_export("keras.layers.TFSMLayer")
class TFSMLayer(layers.Layer):
    """Reload a Keras model/layer that was saved via SavedModel / ExportArchive.

Please undo all the empty line removal in this file. This is what is making the "code format" test fail.

# Explicit opt-out should allow loading
loaded_model = saving_lib.load_model(
    model_path,
    custom_objects={"TFSMLayer": tfsm_layer.TFSMLayer},

You don't need the custom_objects argument.

@0xManan
Author

0xManan commented Jan 21, 2026

Can you check why the tests are failing?

You can test locally with:

KERAS_BACKEND=tensorflow pytest keras/src/export/tfsm_layer_test.py

Will check and get back to you shortly.
