
Fix #19127 — Improve error message for named inputs mismatch in Functional model#22356

Draft
pctablet505 wants to merge 2 commits into keras-team:master from pctablet505:fix/19127-named-inputs-error-message

Conversation

@pctablet505
Collaborator

Fixes: #19127
This pull request improves the input validation logic for models with dictionary-based inputs in the Keras Functional API. The main change is to provide clearer error messages when users supply inputs in the wrong format, making it easier to diagnose and fix input-related issues. The corresponding test has also been updated to match the new error message.

Input validation improvements:

  • Overrode the _assert_input_compatibility method from Model in Functional (in functional.py) to check for cases where the model expects a dictionary of inputs but receives a non-dictionary (such as a list, tuple, or array). If a list or tuple of matching length is provided, it falls back to positional matching; otherwise, it raises a clearer ValueError with guidance on the expected input format.

Testing updates:

  • Modified the test test_bad_input_spec in functional_test.py to expect the new, more descriptive error message when a non-dictionary input is passed to a model expecting a dictionary.
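The updated test presumably follows the standard assertRaisesRegex pattern. A self-contained sketch against a stand-in validator (the `validate` helper and hard-coded message here are illustrative, not the actual Keras test code; note that the pattern must escape regex metacharacters like backticks and brackets in the new message):

```python
import re
import unittest


def validate(inputs):
    """Stand-in for calling a dict-input model with bad data."""
    if not isinstance(inputs, dict):
        raise ValueError(
            'Model "m" expects inputs as a `dict` with the '
            "following keys: ['a', 'b']. Instead received "
            f"{type(inputs).__name__}."
        )


class BadInputSpecTest(unittest.TestCase):
    def test_bad_input_spec(self):
        # assertRaisesRegex matches the message via re.search, so
        # re.escape keeps `[` and backticks from being treated as
        # regex syntax.
        with self.assertRaisesRegex(
            ValueError, re.escape("expects inputs as a `dict`")
        ):
            validate([1, 2])
```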

Problem

When a Functional model is built with dict inputs (e.g. keras.Input({"a": ..., "b": ...})), passing a plain array or list to model.fit() / model(...) raised a confusing error: "expects 2 input(s)". This message gave no hint that the model requires named dict inputs, making it very hard for users to diagnose the problem (see thread in #19127).

Root Cause

Functional._assert_input_compatibility unconditionally delegated to the parent Model implementation, which only checks the input count. It had no awareness that its _inputs_struct is a dict, so the error path never mentioned dict/named inputs at all.

Fix

Override _assert_input_compatibility in Functional to detect the mismatch early. If the model's input struct is a dict but the user passes a non-dict:

  • A list/tuple of the correct length is still accepted (positional matching, same as before).
  • Anything else raises a clear ValueError that names the expected keys and shows the correct calling syntax:
    Model "my_model" expects inputs as a `dict` with the following keys: ['a', 'b'].
    Instead received ndarray. Pass your data as `model.fit({'a': ..., 'b': ...}, ...)`.
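The dict-aware check can be sketched in isolation. The `check_dict_inputs` helper below is hypothetical and mirrors the behavior described above; the real logic lives in the `Functional._assert_input_compatibility` override and inspects `self._inputs_struct`:

```python
def check_dict_inputs(model_name, inputs_struct, inputs):
    """Sketch of the dict-aware input check (hypothetical helper).

    `inputs_struct` stands in for the model's `_inputs_struct`.
    """
    if not isinstance(inputs_struct, dict) or isinstance(inputs, dict):
        return inputs  # nothing to do; normal validation applies
    keys = list(inputs_struct.keys())
    # A list/tuple of matching length falls back to positional
    # matching, same as before.
    if isinstance(inputs, (list, tuple)) and len(inputs) == len(keys):
        return inputs
    example = ", ".join(f"'{k}': ..." for k in keys)
    raise ValueError(
        f'Model "{model_name}" expects inputs as a `dict` with '
        f"the following keys: {keys}. Instead received "
        f"{type(inputs).__name__}. Pass your data as "
        f"`model.fit({{{example}}}, ...)`."
    )
```

With a two-key struct, a dict or a two-element list passes through unchanged, while anything else raises the ValueError naming the expected keys.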
    

Files Changed

  • keras/src/models/functional.py — override _assert_input_compatibility with dict-aware check
  • keras/src/models/functional_test.py — update test to match new error text

When a Functional model expects dict inputs (e.g., {'a': input_a, 'b':
input_b}) but receives a single tensor or a list/tuple with the wrong
count, the previous error was a generic 'expects N input(s), but
received M'. This gives no hint that the model expects named dict inputs.

Override _assert_input_compatibility in Functional to check for this
case early and provide a clear, actionable message: 'Model expects
inputs as a dict with the following keys: [keys]. Pass your data as
model.fit({...}).'

List/tuple inputs with matching length are still allowed through for
backward compatibility (with existing warning about struct mismatch).

Fixes keras-team#19127
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the user experience when working with Keras Functional API models that expect named dictionary inputs. By introducing more specific error messages for input type mismatches, it helps users quickly understand and correct issues related to providing data in the wrong format, reducing debugging time and frustration.

Highlights

  • Enhanced Input Validation Error Messages: The _assert_input_compatibility method in Functional models has been overridden to provide clearer, more actionable ValueError messages when a model expecting dictionary inputs receives a non-dictionary type, guiding users on the correct input format.
  • Updated Test Case: The test_bad_input_spec in functional_test.py was modified to reflect the new, more descriptive error message, ensuring the improved validation logic is correctly tested.


Changelog
  • keras/src/models/functional.py
    • Overrode _assert_input_compatibility to include logic for detecting when a Functional model expects dictionary inputs but receives a non-dictionary type.
    • Implemented a fallback to positional matching for lists/tuples of correct length.
    • Added a detailed ValueError message specifying expected keys and correct usage for dictionary inputs.
  • keras/src/models/functional_test.py
    • Updated the assertRaisesRegex pattern in test_bad_input_spec to match the new, more informative error message for dictionary input validation failures.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request significantly improves the user experience by providing a more informative and actionable error message when a Functional model with named inputs receives an incorrect input type. This change aligns well with the Keras API design guidelines. The implementation in functional.py correctly identifies the input mismatch and raises a helpful ValueError, and the updated test in functional_test.py properly validates this new behavior. I have one minor suggestion to enhance the readability of the error message construction.

Comment on lines +226 to +234
keys = list(self._inputs_struct.keys())
raise ValueError(
f'Model "{self.name}" expects inputs as a `dict` with '
f"the following keys: {keys}. Instead received "
f"{type(inputs).__name__}. Pass your data as "
"`model.fit({"
+ ", ".join(f"'{k}': ..." for k in keys)
+ "}, ...)`."
)

The reason will be displayed to describe this comment to others. Learn more.

Severity: medium

For better readability, you can construct the example string for the error message in a separate variable. This avoids using + for string concatenation within the f-string, making the code cleaner.

            keys = list(self._inputs_struct.keys())
            example_fit_kwargs = ", ".join(f"'{k}': ..." for k in keys)
            raise ValueError(
                f'Model "{self.name}" expects inputs as a `dict` with '
                f"the following keys: {keys}. Instead received "
                f"{type(inputs).__name__}. Pass your data as "
                f"`model.fit({{{example_fit_kwargs}}}, ...)`."
            )
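A quick aside on why the suggested f-string works: doubled braces in an f-string escape to literal braces, so `{{{example_fit_kwargs}}}` renders as a literal `{` plus the interpolated value plus a literal `}`:

```python
# Minimal demonstration of brace escaping in f-strings,
# using the same construction as the suggested refactor.
example_fit_kwargs = ", ".join(f"'{k}': ..." for k in ["a", "b"])
msg = f"`model.fit({{{example_fit_kwargs}}}, ...)`."
print(msg)  # `model.fit({'a': ..., 'b': ...}, ...)`.
```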

@codecov-commenter

codecov-commenter commented Mar 5, 2026

Codecov Report

❌ Patch coverage is 87.50000% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 82.95%. Comparing base (95e74a9) to head (8d0e90e).

Files with missing lines Patch % Lines
keras/src/models/functional.py 87.50% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master   #22356   +/-   ##
=======================================
  Coverage   82.95%   82.95%           
=======================================
  Files         595      595           
  Lines       66040    66046    +6     
  Branches    10305    10308    +3     
=======================================
+ Hits        54785    54790    +5     
  Misses       8639     8639           
- Partials     2616     2617    +1     
Flag Coverage Δ
keras 82.78% <87.50%> (+<0.01%) ⬆️
keras-jax 60.84% <87.50%> (+<0.01%) ⬆️
keras-numpy 55.02% <50.00%> (-0.01%) ⬇️
keras-openvino 49.10% <50.00%> (-0.01%) ⬇️
keras-tensorflow 62.06% <87.50%> (+<0.01%) ⬆️
keras-torch 60.87% <87.50%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.
