Clarification on key statistics #50

@jasonwebb

Description

Hello! I've been reading through the results and the awesome summary article, and I'm looking to verify my understanding of the key statistics presented. Apologies for not posting this elsewhere - it looks like comments are disabled on the blog, and the Google Group is private. Just let me know if there is another preferred avenue for questions like this :)

The biggest data point I'm interested in understanding better is the total percentage of barriers that were not caught by the tools tested. The article makes two complementary statements: that the tools tested found 71% of the barriers on the test page, and that they missed 29% of them.

Under the heading "Lots of the barriers weren't found ...":

[...] a large proportion of the barriers we created weren’t picked up by any of the 10 tools we tested – 29% in fact.

Under the final heading, "How best to use automated tools":

[...] the tools picked up the majority of the accessibility barriers we created – 71% – [...]

When I look at the detailed audit results page, under the "How did each tool do?" section, I see that the best-performing tools found 40% of the issues in the test setup.

I have a feeling I'm missing a nuanced difference between the two figures. Could someone provide some guidance on what the two numbers (71% vs 40%) are referring to? They seem contradictory, which makes me think I may be misunderstanding something.
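To make my guess concrete: I'm assuming the 71% refers to barriers caught by at least one of the 10 tools combined, while the 40% is what the single best-performing tool caught on its own. Here is a quick sketch with entirely made-up numbers and hypothetical tool names (not the real study data) showing how both figures could be true at the same time:

```python
# Toy illustration with made-up numbers: 100 barriers in total, and each
# hypothetical tool flags its own subset of them.
tool_findings = {
    "tool_a": set(range(0, 40)),   # 40 barriers - the best single tool
    "tool_b": set(range(20, 55)),  # 35 barriers, overlapping tool_a
    "tool_c": set(range(45, 71)),  # 26 barriers
}

best_single_tool = max(len(found) for found in tool_findings.values())
found_by_any_tool = set().union(*tool_findings.values())

print(f"Best single tool:     {best_single_tool}%")              # 40%
print(f"Any tool (combined):  {len(found_by_any_tool)}%")         # 71%
print(f"Missed by every tool: {100 - len(found_by_any_tool)}%")   # 29%
```

If that's the right reading, then the 71% and 40% aren't actually in conflict; I'd just appreciate confirmation that this is what the article means.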
