
Inaccurate Float-to-Decimal Conversion in parse_value of Decimal Scalar #1593

Open
@mak626

Description

Current Behavior

The parse_value function currently converts the input using _Decimal(value). When a float is passed as input, this conversion may lead to precision loss due to the inherent imprecision of floating-point representations.

Steps to Reproduce

  1. Call Decimal.parse_value(0.01).
  2. Observe that the resulting Decimal does not accurately represent the value 0.01; it instead gives 0.01000000000000000020816681711721685132943093776702880859375, which is inaccurate.
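The precision loss can be reproduced directly with Python's standard-library `decimal.Decimal` (the class graphene aliases as `_Decimal`):

```python
from decimal import Decimal

# Constructing a Decimal directly from a float captures the float's
# exact binary expansion, not the value the user wrote.
direct = Decimal(0.01)
print(direct)  # 0.01000000000000000020816681711721685132943093776702880859375

# Routing through str() uses the float's shortest round-trip repr,
# so the Decimal matches the intended value exactly.
via_str = Decimal(str(0.01))
print(via_str)  # 0.01
```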

Expected Behavior

The function should convert the input to a string before creating the Decimal object (i.e., using _Decimal(str(value))). This approach ensures that decimal values are converted accurately, even when they are not provided as strings, preserving their intended precision.

See the Strawberry implementation for reference.

Suggested Fix

Modify the parse_value function as follows:

@staticmethod
def parse_value(value):
    try:
        # Convert via str() first so the float's shortest round-trip
        # repr is used, avoiding the raw binary float expansion.
        return _Decimal(str(value))
    except Exception:
        return Undefined
