Description
In large test suites, several challenges emerge when you want to look at them over time:
- which tests were added, changed and removed?
- how did each test fare?
- what is the overall state of our test suite?
seeing this information clearly in the report can help:
- spot tests that tend to break more often than others, which indicates a fragile implementation
- spot tests that are being changed frequently, which indicates a code hotspot that can benefit from refactoring
- spot tests that have not changed for a long time, which may imply the feature is rarely used or was extended without adding tests
- see whether our overall trend is positive, stagnant, or negative
- possibly deduce other insights
bonus: graphs are pretty to look at 😃
to do this we need to:
- save every test run's results in a format we can parse afterwards (see the sketch after this list)
- aggregate results per test, so we can also see a trend for any group of tests and for the entire test suite
- display a trend graph for the scope we are focusing on, showing either results (pass, fail, etc.) over time or volume (added, changed, removed) over time
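as a rough illustration of the first two bullets (and of the data a trend graph would be drawn from), here is a minimal Python sketch; the file name `test-history.jsonl`, the JSON-lines layout, and the function names are all assumptions for this example, not an existing format:

```python
# minimal sketch: each run appends one JSON line with a timestamp and
# per-test outcomes, and the aggregation step folds those lines into a
# per-test history plus per-run pass-rate points for a trend graph.
import json
import time
from collections import defaultdict
from pathlib import Path

HISTORY_FILE = Path("test-history.jsonl")  # hypothetical location


def save_run(results: dict[str, str]) -> None:
    """Append one test run to the history file.

    `results` maps a test id to its outcome, e.g. {"test_login": "passed"}.
    """
    record = {"timestamp": time.time(), "results": results}
    with HISTORY_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")


def aggregate_per_test() -> dict[str, list[tuple[float, str]]]:
    """Fold the stored runs into a per-test list of (timestamp, outcome)."""
    history: dict[str, list[tuple[float, str]]] = defaultdict(list)
    with HISTORY_FILE.open(encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            for test_id, outcome in record["results"].items():
                history[test_id].append((record["timestamp"], outcome))
    return history


def suite_pass_rate_over_time() -> list[tuple[float, float]]:
    """One (timestamp, pass-rate) point per run -- the data behind a trend graph."""
    points = []
    with HISTORY_FILE.open(encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            outcomes = list(record["results"].values())
            passed = sum(1 for o in outcomes if o == "passed")
            points.append((record["timestamp"], passed / len(outcomes)))
    return points


if __name__ == "__main__":
    save_run({"test_login": "passed", "test_checkout": "failed"})
    save_run({"test_login": "passed", "test_checkout": "passed"})
    print(aggregate_per_test())
    print(suite_pass_rate_over_time())
```

the same per-test history can be grouped by module or tag to get a trend for any subset of the suite, and the per-run points can be fed to whatever charting the report already uses.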
although, it is still unclear how to account for trend-breaking changes; for example, when a test changes its name or its flow, how should we address this?
perhaps we can take inspiration from how Allure Report implemented its history feature, which divides tests into several categories according to their results (passed, failed, broken, ignored, other)
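a small sketch of that category-style aggregation, using the status names mentioned above; the mapping and bucket names here are assumptions for illustration, not Allure's actual implementation:

```python
# count how many tests in a single run fall into each category;
# any outcome that is not a known status is bucketed as "other".
from collections import Counter

KNOWN_STATUSES = {"passed", "failed", "broken", "ignored"}


def categorize(run_results: dict[str, str]) -> Counter:
    counts: Counter = Counter()
    for outcome in run_results.values():
        counts[outcome if outcome in KNOWN_STATUSES else "other"] += 1
    return counts


print(categorize({"test_login": "passed", "test_checkout": "broken", "test_misc": "flaky"}))
# Counter({'passed': 1, 'broken': 1, 'other': 1})
```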