fix: normalize None timesteps and use .get() for safe optim_conf reads#842

Open
hossamnagy wants to merge 1 commit into davidusb-geek:master from hossamnagy:fix/none-timestep-crash-in-perform-optimization
Conversation

@hossamnagy

@hossamnagy hossamnagy commented May 11, 2026

Summary

Three targeted fixes for a TypeError crash in naive_mpc_optim, verified live on EMHASS 0.17.2 running as a standalone Docker container against Home Assistant.


1. utils.py — normalize None elements in check_def_loads (root cause fix)

When /set-config receives a partial config payload it can persist a list like [null, 0] for start_timesteps_of_each_deferrable_load. check_def_loads only pads a short list — it never replaces None values that are already inside the list. After this fix, any None element is replaced with the supplied default before the value is returned and re-saved.
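The normalization idea can be sketched as follows; the signature and helper shape are illustrative, not the exact EMHASS `check_def_loads` API:

```python
# Minimal sketch of the root-cause fix (parameter names are illustrative,
# not the exact EMHASS check_def_loads signature).
def check_def_loads(parameter, parameter_name, num_def_loads, default):
    values = list(parameter.get(parameter_name) or [])
    # Existing behavior: pad a short list up to num_def_loads.
    values += [default] * (num_def_loads - len(values))
    # New: also replace None elements already present in the list.
    values = [default if v is None else v for v in values]
    # Persist the normalized list so the sanitized value is re-saved,
    # not the corrupted original reference.
    parameter[parameter_name] = values
    return values
```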

2. optimization.py — defensive None→0 after pad_list (second layer)

Adds a guard immediately after the pad_list() calls in perform_optimization() that replaces any residual None with 0 (= "no time restriction"). Protects installations whose params.pkl was already corrupted before fix 1 was applied.
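A sketch of this second layer, where `pad_list` is an illustrative stand-in for the helper used in `perform_optimization()`:

```python
# Illustrative stand-in for the existing pad_list helper.
def pad_list(lst, n, fill=0):
    return list(lst) + [fill] * (n - len(lst))

def_start_timestep = pad_list([None, 0], 2)  # corrupted params.pkl value
def_end_timestep = pad_list([0, 0], 2)
# Defensive guard: a residual None means "no time restriction", i.e. 0.
def_start_timestep = [0 if s is None else s for s in def_start_timestep]
def_end_timestep = [0 if e is None else e for e in def_end_timestep]
```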

Without fixes 1 & 2 the crash is:

TypeError: '<=' not supported between instances of 'NoneType' and 'int'
  File "emhass/optimization.py", in validate_def_timewindow
    if start <= end or start <= 0 or end <= 0:

Reproduces when operating_hours_of_each_deferrable_load = [0, 0] (both loads disabled) and start_timesteps contains a None element.
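The failing comparison can be reproduced in isolation; this is a simplified excerpt of the check, not the full `validate_def_timewindow` body:

```python
# A None start value hitting the validator's integer comparison.
start, end = None, 0  # value left behind by a partial /set-config call
crashed = False
message = ""
try:
    if start <= end or start <= 0 or end <= 0:
        pass
except TypeError as exc:
    crashed = True
    message = str(exc)
```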

3. command_line.py — use .get() for timestep reads (KeyError prevention)

def_start_timestep and def_end_timestep were read from optim_conf using bare [] which raises KeyError if the key is absent. Changed to .get() — consistent with how operating_hours_of_each_deferrable_load and operating_timesteps_of_each_deferrable_load are already read in the same block.
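A sketch of the change; the key names follow the PR description and are assumed here, not verified against the current EMHASS config schema:

```python
# Partial config: the timestep keys are absent entirely.
optim_conf = {}

# Before: optim_conf["start_timesteps_of_each_deferrable_load"] -> KeyError
def_start_timestep = optim_conf.get("start_timesteps_of_each_deferrable_load")
def_end_timestep = optim_conf.get("end_timesteps_of_each_deferrable_load")
# .get() yields None for a missing key; fixes 1 and 2 then normalize it to 0.
```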


Reproduction (fixes 1–3)

  1. Two deferrable loads (num_def_loads = 2).
  2. POST to /set-config with start_timesteps_of_each_deferrable_load: [null, 0].
  3. check_def_loads sees len == num_def_loads, skips padding, returns [None, 0] unchanged.
  4. pad_list in perform_optimization also skips (length already correct).
  5. POST to /action/naive-mpc-optim with operating_hours_of_each_deferrable_load: [0, 0] → validate_def_timewindow(None, 0, 0, n) → crash.
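The five steps above can be traced with a hypothetical minimal script (helper logic is paraphrased, not copied from EMHASS):

```python
# Hypothetical end-to-end trace of the reproduction steps.
num_def_loads = 2
stored = [None, 0]  # persisted by /set-config (step 2)
# Steps 3-4: both padding paths skip because the length already matches.
padded = stored + [0] * (num_def_loads - len(stored))
# Step 5: the None start value reaches the integer comparison.
crashed = False
try:
    if padded[0] <= 0:
        pass
except TypeError:
    crashed = True
```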

Testing

Verified on a live Home Assistant installation (EMHASS 0.17.2, Docker, InfluxDB retrieval):

  • Reproduced the crash; confirmed params.pkl contained [None, 0] for start_timesteps_of_each_deferrable_load.
  • Applied all three patches; subsequent MPC cycles with operating_hours = [0, 0] complete with Status: Optimal.
  • No regression observed on cycles where operating_hours > 0.

@sourcery-ai
Contributor

sourcery-ai Bot commented May 11, 2026

Reviewer's Guide

Normalizes deferrable timestep parameter lists by replacing None values with safe defaults both at validation time and during optimization, preventing TypeError crashes when corrupted or partially specified configurations are processed.

Sequence diagram for deferrable timestep normalization and validation

sequenceDiagram
    actor ExternalCaller
    participant set_config_endpoint
    participant check_def_loads
    participant params_store
    participant perform_optimization
    participant validate_def_timewindow

    ExternalCaller->>set_config_endpoint: send partial_config ([null, 0])
    set_config_endpoint->>check_def_loads: check_def_loads(parameter, parameter_name, num_def_loads, default)
    check_def_loads-->>set_config_endpoint: normalized_list ([0, 0])
    set_config_endpoint->>params_store: write params.pkl (normalized_list)

    ExternalCaller->>perform_optimization: trigger optimization
    perform_optimization->>params_store: read params.pkl
    perform_optimization->>perform_optimization: pad_list(def_start_timestep, num_def_loads)
    perform_optimization->>perform_optimization: pad_list(def_end_timestep, num_def_loads)
    perform_optimization->>perform_optimization: normalize None to 0 in def_start_timestep
    perform_optimization->>perform_optimization: normalize None to 0 in def_end_timestep
    perform_optimization->>validate_def_timewindow: validate_def_timewindow(start, end, operating_hours, idx)
    validate_def_timewindow-->>perform_optimization: validation_result (no TypeError)

File-Level Changes

Harden deferrable load parameter validation to sanitize None entries after padding.
  • After padding the parameter list in the deferrable load checker, assign it to a local result variable for further processing.
  • Add a guard to ensure post-processing only runs when the parameter is a list.
  • Map over the list to replace any None values with the provided default value, ensuring a fully normalized list.
  • Write the normalized list back into the parameter dict so sanitized values are persisted for future use.
  • Return the normalized list instead of the possibly corrupted original reference.
src/emhass/utils.py
Normalize deferrable start/end timestep lists in the optimizer to avoid crashes with pre-existing corrupted params.pkl files.
  • After padding deferrable total hours, start, and end timestep lists to num_deferrable_loads, add a normalization step.
  • Replace any None elements in the def_start_timestep list with 0 to represent no time restriction.
  • Replace any None elements in the def_end_timestep list with 0 for consistency with the start list normalization.
  • Document via comments that this normalization protects against partial set-config calls that produced [None, 0] values and caused TypeError in validate_def_timewindow.
src/emhass/optimization.py

Contributor

@sourcery-ai sourcery-ai Bot left a comment

Hey - I've left some high level feedback:

  • In perform_optimization, consider moving the None→0 normalization into pad_list (or a small shared helper) so that all padded configuration lists benefit from the same defensive handling rather than only def_start_timestep and def_end_timestep.
  • In check_def_loads, the isinstance(result, list) guard suggests parameter[parameter_name] might not always be a list; if that's truly unexpected for this helper, you may want to assert or raise early to catch misuses instead of silently returning a non-normalized value.
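One way to follow the first suggestion is to fold the None handling into a shared helper; names here are illustrative, not the current EMHASS API:

```python
# Shared pad-and-normalize helper, per the suggestion above.
def pad_list(lst, n, default=0):
    """Pad to length n and replace None entries with the default."""
    lst = list(lst or [])
    lst += [default] * (n - len(lst))
    return [default if v is None else v for v in lst]
```

With this shape, every padded configuration list gets the same defensive handling for free, instead of normalizing only the two timestep lists.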
@hossamnagy hossamnagy force-pushed the fix/none-timestep-crash-in-perform-optimization branch from 7610100 to fe90fd9 Compare May 11, 2026 22:07
@hossamnagy hossamnagy changed the title fix: normalize None elements in deferrable timestep lists to prevent TypeError crash fix: normalize None timesteps, safe .get() reads, and robust df resampling in naive-mpc-optim May 11, 2026
@hossamnagy hossamnagy force-pushed the fix/none-timestep-crash-in-perform-optimization branch from fe90fd9 to 16cf11f Compare May 11, 2026 22:13
@hossamnagy hossamnagy changed the title fix: normalize None timesteps, safe .get() reads, and robust df resampling in naive-mpc-optim fix: normalize None timesteps, safe .get() reads, remove stage_timer overhead May 11, 2026
Three targeted fixes for a TypeError crash in naive_mpc_optim, verified
live on EMHASS 0.17.2.

**utils.py — normalize None in check_def_loads (root cause)**
When /set-config receives a partial payload it can persist [null, 0] for
start_timesteps_of_each_deferrable_load. check_def_loads only pads a
short list — it never replaces None values already inside the list. After
this fix, any None element is replaced with the supplied default before
the value is returned and re-saved to params.pkl.

**optimization.py — defensive None→0 after pad_list (second layer)**
Adds a guard after pad_list() in perform_optimization() that replaces any
residual None with 0 (= "no time restriction"). Protects installations
whose params.pkl was already corrupted before the utils fix.

Without these two fixes the crash is:
  TypeError: '<=' not supported between instances of 'NoneType' and 'int'
  File "emhass/optimization.py", in validate_def_timewindow
    if start <= end or start <= 0 or end <= 0:
Reproduces when operating_hours_of_each_deferrable_load = [0, 0] (both
loads disabled) and start_timesteps contains a None element.

**command_line.py — use .get() for timestep reads (KeyError prevention)**
def_start_timestep and def_end_timestep were read with bare [] which
raises KeyError if the key is absent from optim_conf. Changed to .get(),
consistent with how operating_hours and operating_timesteps are already
read in the same block.

Tested on a live HA installation: reproduced the crash, confirmed
params.pkl contained [None, 0], applied patch, subsequent cycles with
operating_hours=[0,0] complete with Status: Optimal.
@hossamnagy hossamnagy force-pushed the fix/none-timestep-crash-in-perform-optimization branch from 16cf11f to 3398bd2 Compare May 11, 2026 22:41
@hossamnagy hossamnagy changed the title fix: normalize None timesteps, safe .get() reads, remove stage_timer overhead fix: normalize None timesteps and use .get() for safe optim_conf reads May 11, 2026