
Benchmarks that do not return output / updated arrays are not validated #26

@hardik01shah


Certain benchmarks, such as lu, cavity_flow, scattering_self_energies, and many more, do not return the output arrays they compute, or the input arrays that are updated in place during the computation. These benchmarks are never actually validated!

In the validation function in utilities.py, the zip call pairs the output arguments of the reference (numpy) implementation with those of the framework implementation, and therefore only compares as many arrays as the shorter of the two sequences. If the numpy implementation returns None, the reference sequence is empty, so no comparison takes place at all. Even a framework implementation that does return its output arrays is therefore not validated, despite the message <Framework> - <impl> - validation: SUCCESS printed in the terminal.
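A minimal sketch of the failure mode (function and helper names here are illustrative, not the actual signatures in utilities.py):

```python
import numpy as np

def _as_tuple(out):
    # Normalize a benchmark's return value to a tuple of arrays.
    if out is None:
        return ()
    return out if isinstance(out, tuple) else (out,)

def validate(ref_out, frmwrk_out):
    # zip() stops at the shorter sequence: with an empty reference
    # tuple the loop body never executes, so nothing is compared.
    for ref, val in zip(_as_tuple(ref_out), _as_tuple(frmwrk_out)):
        if not np.allclose(ref, val):
            return False
    return True  # vacuously True when either side is empty

# The reference (numpy) benchmark returns None while the framework
# returns an array: "validation" succeeds without checking anything.
print(validate(None, np.ones(4)))  # True
```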

To fix this, I suggest (see the sketch after this list):

  • Raise an error, or at least a warning, if the number of output arrays from the reference and framework implementations does not match.
  • Raise an error if the reference returns no output arrays at all, i.e. the number of output arrays is zero because None is returned.
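A hedged sketch of what those guards could look like; the function name and normalization helper are assumptions, not the actual utilities.py code:

```python
import numpy as np

def _as_tuple(out):
    # Normalize a benchmark's return value to a tuple of arrays.
    if out is None:
        return ()
    return out if isinstance(out, tuple) else (out,)

def validate(ref_out, frmwrk_out):
    ref_out = _as_tuple(ref_out)
    frmwrk_out = _as_tuple(frmwrk_out)
    # Guard 1: a benchmark that returns nothing cannot be validated.
    if len(ref_out) == 0:
        raise ValueError(
            "Reference implementation returned no output arrays; "
            "the benchmark cannot be validated.")
    # Guard 2: mismatched counts mean zip() would silently skip
    # some arrays, so fail loudly instead.
    if len(ref_out) != len(frmwrk_out):
        raise ValueError(
            f"Output count mismatch: reference returned {len(ref_out)} "
            f"array(s), framework returned {len(frmwrk_out)}.")
    return all(np.allclose(ref, val)
               for ref, val in zip(ref_out, frmwrk_out))
```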

Happy to put in a PR addressing this.
