Genetic Algorithm fails to converge in numpy-optimization.ipynb due to unscaled target variable in workshop-6 #7

@xingyi1145

Describe the bug
In workshop-6/numpy-optimization.ipynb, the Genetic Algorithm optimizer fails to learn a good model for the Energy Consumption dataset. Its loss stays extremely high and flatlines, while the Gradient Descent optimizer converges, as the comparison plots show.

Reason
The issue is caused by a massive scale mismatch between the algorithm's initialization and the target variable:

Target Scale: The target variable Energy Consumption has values in the thousands (mean ≈ 4187).

Initialization: The Genetic Algorithm initializes weights with small random numbers (approx 0.1) and uses small mutation steps.

Result: With weights and mutation steps that small, the GA cannot grow the weights enough to reach predictions in the thousands within the 100 generations allotted. The weights stay close to 0, so predictions stay close to 0 and the loss plateaus. A quick numeric check of this mismatch is sketched below.
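
For illustration, here is a minimal sketch of the mismatch. The array shapes, the initialization scale, and the target statistics are assumptions chosen to mimic the notebook, not taken from it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the notebook's data:
# 500 samples, 5 standardized features, targets with mean ~4187.
X = rng.standard_normal((500, 5))          # features after standard scaling
y = 4187 + 500 * rng.standard_normal(500)  # unscaled Energy Consumption target

w = 0.1 * rng.standard_normal(5)           # GA-style small initial weights
preds = X @ w                              # predictions stay near 0

print(f"mean |prediction| ~ {np.abs(preds).mean():.2f}")       # about 0.2
print(f"mean target       ~ {y.mean():.2f}")                   # about 4187
print(f"MSE               ~ {np.mean((y - preds) ** 2):.0f}")  # dominated by the gap
```

No matter how the GA recombines individuals drawn at this scale, closing a gap of roughly four orders of magnitude would require mutations far larger than the ones configured.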

Suggested Fix
Normalize the target variable y (standard scaling) before training, the same way the features X are normalized. This brings the target values into a small range (roughly -2 to +2), which matches the scale of the GA's initial weights and mutation steps. A sketch of the change follows.
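
A minimal sketch of that scaling step, in plain NumPy to match the notebook's style. Variable names like y_scaled are placeholders; the notebook could equally use sklearn's StandardScaler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder target at the notebook's scale (mean ~4187); the real code
# would use the Energy Consumption column instead.
y = 4187 + 500 * rng.standard_normal(500)

# Standard-scale the target, mirroring the feature scaling.
y_mean, y_std = y.mean(), y.std()
y_scaled = (y - y_mean) / y_std    # mostly within [-2, 2]

# ... train the GA against y_scaled instead of y ...

# Undo the scaling when reporting predictions in the original units:
# preds_original = preds_scaled * y_std + y_mean

print(f"scaled target range: [{y_scaled.min():.2f}, {y_scaled.max():.2f}]")
```

Remember to invert the transform (multiply by y_std, then add y_mean) wherever predictions are reported in the original units; otherwise the comparison plots against Gradient Descent would be on different scales.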
