
Benchmark proceeds even when JSON parsing fails, and the dashboard does not record the failures as errors #546

@Maze999

Description

Attached is a screenshot of one instance showing the JSON parsing errors for the ISL 7K / OSL 1K use case. The model is Llama 3.1-70B with 2 replicas deployed (fp8-tp2-pp1-latency NIM profile).

  1. Is there a way to make the benchmark terminate, rather than continue, when JSON parsing errors are encountered?
  2. Why are these errors not accounted for and shown under errors in the dashboard?
  3. Please let me know if there are any inputs that help avoid these JSON parsing errors (a pre-validation sketch follows this list).
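As a stopgap while this is open, here is a minimal sketch of pre-validating a JSONL dataset before launching the run, under the assumption that the parse failures originate from malformed input records; the `validate_jsonl` helper and the `inputs.jsonl` file name are illustrative, not part of aiperf:

```python
import json
import sys

def validate_jsonl(path: str) -> int:
    """Fail fast on the first malformed line instead of discovering
    parse errors mid-benchmark; returns the number of valid records."""
    count = 0
    with open(path, "r", encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # tolerate blank lines
            try:
                json.loads(line)
            except json.JSONDecodeError as exc:
                sys.exit(f"{path}:{lineno}: invalid JSON ({exc})")
            count += 1
    return count

if __name__ == "__main__":
    # Hypothetical file name; substitute the dataset actually passed to aiperf.
    print(f"{validate_jsonl('inputs.jsonl')} valid records")
```

If the failures instead come from the server's streamed responses rather than the input data, client-side input validation like this will not help, and the fail-fast behavior asked about in question 1 would be needed.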

aiperf --version
0.3.0

[Screenshot: JSON parsing errors during the ISL 7K / OSL 1K run]
