fix: deduplicate DataFrame index before asfreq() in _apply_df_freq_horizon#843

Open
hossamnagy wants to merge 1 commit into davidusb-geek:master from hossamnagy:fix/duplicate-index-crash-naive-mpc-optim
Open

fix: deduplicate DataFrame index before asfreq() in _apply_df_freq_horizon#843
hossamnagy wants to merge 1 commit into
davidusb-geek:masterfrom
hossamnagy:fix/duplicate-index-crash-naive-mpc-optim

Conversation


@hossamnagy hossamnagy commented May 11, 2026

Problem

naive-mpc-optim raises ValueError: cannot reindex on an axis with duplicate labels inside _apply_df_freq_horizon when df.asfreq(step) is called on the concatenated PV + load forecast DataFrame.

Root cause: get_days_list(n) spans two UTC calendar dates (due to timezone offsets), so InfluxDB returns roughly 2x the expected data points for a nominal "1 day" window. After the PV and load forecast Series are concatenated on axis=1, the resulting DatetimeIndex can contain duplicate timestamps, which causes asfreq() to fail.

Traceback:

File "src/emhass/command_line.py", line 737, in _apply_df_freq_horizon
    df = df.asfreq(step)
ValueError: cannot reindex on an axis with duplicate labels
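The failure is easy to reproduce outside EMHASS: asfreq() reindexes onto a regular time grid, and pandas refuses to reindex when the source DatetimeIndex contains duplicate labels. A minimal sketch (toy data, not the actual EMHASS query path):

```python
import pandas as pd

# Two forecast series on the same 30-min grid, mimicking PV and load.
idx = pd.date_range("2026-05-11 00:00", periods=4, freq="30min", tz="UTC")
pv = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx, name="pv")
load = pd.Series([5.0, 6.0, 7.0, 8.0], index=idx, name="load")

df = pd.concat([pv, load], axis=1)
# Inject a repeated timestamp, as happens when the query window
# spans two UTC calendar dates and rows come back twice.
df = pd.concat([df, df.iloc[[-1]]])

try:
    df.asfreq("30min")
except ValueError as err:
    print(err)  # e.g. "cannot reindex on an axis with duplicate labels"
```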

Fix

Deduplicate the index immediately before calling asfreq, using the same ~df.index.duplicated() pattern already present in set_df_index_freq() and prepare_data() elsewhere in the codebase:

# before
df = df.asfreq(step)

# after
df = df[~df.index.duplicated(keep="last")]
df = df.asfreq(step)
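A quick sanity check of the pattern on a toy frame with one repeated timestamp (illustrative data only):

```python
import pandas as pd

# A frame whose index repeats 00:30, as a duplicated query window would produce.
idx = pd.to_datetime(
    ["2026-05-11 00:00", "2026-05-11 00:30", "2026-05-11 00:30", "2026-05-11 01:00"]
).tz_localize("UTC")
df = pd.DataFrame(
    {"pv": [1.0, 2.0, 2.5, 3.0], "load": [5.0, 6.0, 6.5, 7.0]}, index=idx
)

# keep="last" retains the most recently written row for each timestamp.
df = df[~df.index.duplicated(keep="last")]
df = df.asfreq("30min")  # now succeeds: the index is unique

print(df)  # three rows on a clean 30-minute grid; 00:30 holds (2.5, 6.5)
```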

Tested on

EMHASS 0.17.2, InfluxDB backend, 30-min optimization time step, naive load forecast method, Australia/Sydney timezone (UTC+10, no DST at time of testing).

Summary by Sourcery

Bug Fixes:

  • Prevent ValueError in _apply_df_freq_horizon by removing duplicate index entries before calling asfreq on forecast data.

Prevents ValueError: cannot reindex on an axis with duplicate labels
when naive-mpc-optim is called with an InfluxDB backend whose time
window (due to UTC offsets) spans two calendar days, producing ~2x
the expected data points. The resulting concat of PV + load forecast
Series can carry duplicate timestamps that cause asfreq() to fail.

Uses the same ~df.index.duplicated() pattern already present in
set_df_index_freq() and prepare_data() elsewhere in the codebase.
@sourcery-ai
Contributor

sourcery-ai Bot commented May 11, 2026

Reviewer's Guide

This PR hardens the _apply_df_freq_horizon pipeline by deduplicating the DataFrame index before resampling with asfreq(), preventing failures when concatenated forecast data contains duplicate timestamps due to timezone-spanning query ranges.

Flow diagram for DataFrame index deduplication before asfreq in _apply_df_freq_horizon

flowchart LR
    A[_apply_df_freq_horizon called with df] --> B[Check retrieve_hass_conf optimization_time_step]
    B --> C[Convert step to Timedelta if needed]
    C --> D[Filter df with ~df.index.duplicated keep=last]
    D --> E[Call df.asfreq step]
    E --> F[Return resampled df]

File-Level Changes

Change: Deduplicate DataFrame index before applying asfreq() in _apply_df_freq_horizon to avoid duplicate-label reindex errors.

Details:
  • Normalize optimization_time_step to a pandas Timedelta when needed, as before.
  • Filter the DataFrame to drop duplicate index entries, keeping the last occurrence for each timestamp, immediately before resampling.
  • Call df.asfreq(step) on the now-unique DatetimeIndex so reindexing cannot fail due to duplicates.
  • Fall back to existing utils.set_df_index_freq(df) behavior when no optimization_time_step is configured.

Files: src/emhass/command_line.py

Possibly linked issues

  • #ValueError: cannot reindex on an axis with duplicate labels in _apply_df_freq_horizon (naive MPC + InfluxDB + mixed-frequency forecasts): They address the same duplicate-index bug in _apply_df_freq_horizon by deduplicating the index before asfreq().

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've left some high-level feedback:

  • Since this duplicate-index filtering pattern now appears in several places (here, set_df_index_freq(), and prepare_data()), consider extracting a small helper (e.g., utils.drop_duplicate_index(df, keep='last')) to keep behavior consistent and easier to maintain.
  • If the upstream concatenation does not guarantee the index is sorted, consider calling df = df.sort_index() before dropping duplicates so that keep='last' is deterministic and not dependent on concatenation order.
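Combining both suggestions, such a helper could look like the sketch below. Note that drop_duplicate_index is a hypothetical name proposed in this review, not an existing EMHASS utility, and a stable sort is used so that keep="last" is deterministic for equal timestamps:

```python
import pandas as pd


def drop_duplicate_index(df: pd.DataFrame, keep: str = "last") -> pd.DataFrame:
    """Return df with a sorted, unique index.

    Hypothetical helper consolidating the ~df.index.duplicated() pattern
    used before asfreq()/reindex calls. A stable sort (mergesort) preserves
    the original relative order of rows sharing a timestamp, so keep="last"
    does not depend on concatenation order.
    """
    df = df.sort_index(kind="mergesort")
    return df[~df.index.duplicated(keep=keep)]
```

The call site in _apply_df_freq_horizon would then become a single line, `df = drop_duplicate_index(df)`, before `df.asfreq(step)`.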
