suggestion: Increase invalid test coverage using the framework
BAL validation can be broken down into three dimensions:
- Correctness: each entry has the right value
- Exactness: exactly the right entries exist (no more, no less)
- Sequence: entries are in canonical order
BAL equivalence = Correctness + Exactness + Sequence.
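As a toy illustration of the three dimensions (a sketch only, not how any client actually checks equivalence): here a BAL is modeled as a plain list of `(address, change)` tuples, with lexicographic order assumed as the canonical order.

```python
# Sketch: BAL equivalence decomposed into the three dimensions.
# A BAL is modeled as a list of (address, change) tuples; lexicographic
# order stands in for the canonical order. All names are illustrative.

def correctness(computed, provided):
    # Every entry shared by both lists carries the same value.
    c, p = dict(computed), dict(provided)
    return all(p[k] == v for k, v in c.items() if k in p)

def exactness(computed, provided):
    # Exactly the right entries exist: no more, no less.
    return sorted(computed) == sorted(provided)

def sequence(provided):
    # Entries appear in canonical (here: lexicographic) order.
    return provided == sorted(provided)

def equivalent(computed, provided):
    # BAL equivalence = Correctness + Exactness + Sequence.
    return (correctness(computed, provided)
            and exactness(computed, provided)
            and sequence(provided))
```

For ordered lists this reduces to plain equality, but keeping the three checks separate mirrors the dimensions each mutation below targets.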
Our tests should not assume how clients verify equivalence of computed vs provided BAL (hash, item-by-item, or otherwise).
A client that does zero BAL validation (one that simply passes through the provided BAL) will pass every happy path test. For every happy path test, we should have a negative test and ensure the client rejects the invalid block:
- happy path: Alice sends Bob 1 ETH. BAL balance change: 1 ETH. Client accepts the block. Test passes.
- invalid path: Alice sends Bob 1 ETH. BAL balance change: 2 ETH. Client rejects the block. Test passes.
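The pair above might look like this in pytest style. This is a hedged sketch: `client_validates` is a stand-in for a real client under test, not a framework API, and the BAL is simplified to `(account, change)` tuples.

```python
# Hypothetical happy-path / negative-test pair. `client_validates` is a
# toy stand-in: it recomputes the expected BAL and compares it to the
# one provided in the block.

def client_validates(block):
    computed = [("alice", -1), ("bob", 1)]  # BAL recomputed by the client
    return block["bal"] == computed

def test_transfer_happy_path():
    # Alice sends Bob 1 ETH; BAL balance change is 1 ETH.
    block = {"bal": [("alice", -1), ("bob", 1)]}
    assert client_validates(block)       # client accepts the block

def test_transfer_wrong_balance_rejected():
    # Same transfer, but the BAL claims a 2 ETH change.
    block = {"bal": [("alice", -1), ("bob", 2)]}
    assert not client_validates(block)   # client rejects the block
```

A pass-through client (one that does no BAL validation) would pass the first test and fail the second, which is exactly the gap the negative test closes.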
However, instead of writing invalid tests by hand, we should automate them using the framework. We can derive these invalid tests from a single valid one:
- Correctness
- corrupting an existing entry's value,
- Exactness
- adding a bogus entry,
- removing an entry,
- duplicating an entry,
- Sequence
- (if there are multiple items) swapping entries.
The required modifiers for these mutations already exist. So we need some kind of pytest hook over
existing tests that introspects a valid BAL expectation, enumerates its entries, and produces N invalid variants,
one per applicable mutation.
This way our invalid coverage grows organically with valid tests.
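One possible shape for such a hook, sketched as a `conftest.py` fragment. Everything here is hypothetical: `VALID_BAL`, `invalid_variants`, and the `invalid_bal` fixture name are illustrative stand-ins, not existing framework APIs.

```python
# Hypothetical conftest.py sketch: any test requesting an `invalid_bal`
# fixture is expanded into N variants, one per applicable mutation
# derived from a single valid BAL expectation.

VALID_BAL = [("alice", -1), ("bob", 1)]  # illustrative valid expectation


def invalid_variants(bal):
    """Enumerate one invalid BAL per applicable mutation."""
    variants = {
        # Correctness: corrupt an existing entry's value.
        "corrupt_value": [(bal[0][0], bal[0][1] + 1)] + bal[1:],
        # Exactness: add a bogus entry / remove / duplicate an entry.
        "add_bogus_entry": bal + [("0xdead", 1)],
        "remove_entry": bal[1:],
        "duplicate_entry": bal + [bal[0]],
    }
    if len(bal) > 1:
        # Sequence: swapping only applies when there are multiple items.
        variants["swap_entries"] = [bal[1], bal[0]] + bal[2:]
    return variants


def pytest_generate_tests(metafunc):
    # Standard pytest collection hook: parametrize over all variants so
    # each mutation becomes its own invalid-block test case.
    if "invalid_bal" in metafunc.fixturenames:
        variants = invalid_variants(VALID_BAL)
        metafunc.parametrize(
            "invalid_bal", list(variants.values()), ids=list(variants.keys())
        )
```

A test would then just assert that the client rejects a block carrying `invalid_bal`, and every new valid BAL expectation automatically yields its full set of negative cases.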
Originally posted by @raxhvl in #2653 (comment)