Minutes 20 Nov 2025
Host: Paul Albertella
Participants: Daniel Krippner, Igor Stoppa (NVIDIA), Pete Brink, Mikel (Canonical), Nagamahesh Gamidi (Timesys)
Agenda
- The goal(s) of software testing
- Verification against requirements
- "Attempts to falsify our engineering hypotheses"
- Detecting defects and regressions
- Using tests as requirements
- Testing as measurement
Notes
Paul: A framing to think about: “A test can’t prove that something is true, but it can show that something is false.”
Pete: This prompts the question of what testing is for.
Igor: And are we testing the tests?
Pete: Many people say that quality comes from passing the tests, but then we have to ask “But how do we know the quality of the tests?”
Igor: Isn’t this over-complicating things, or being too detached from reality? We define requirements, we specify our use case, and then we develop a set of positive and negative test cases. And we test at different layers, e.g. starting with unit testing.
Pete: If we take this out of the realm of software, it can be clearer how this applies: think about building a bridge and then devising a set of empirical tests, e.g. testing its load-bearing capacity.
The root of the question is what we are testing for, and what the system should or should not do.
Paul: The point of this framing is to try to identify a credible set of negative tests for your use cases.
Igor: To be practical, you have to choose your ground and the right tools to generalise the problem in a way that allows you to define classes of problems that are quantifiable and addressable.
Pete: An example to consider is out-of-bounds testing: what happens when your software is passed values that are outside its expected parameters?
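To make Pete's example concrete, here is a minimal sketch of positive and negative (out-of-bounds) test cases. The `clamp_speed` function, its name, and its 0–200 range are purely illustrative assumptions, not something discussed in the meeting:

```python
# Hypothetical function with a documented valid input range of 0..200
# (the function and its limits are illustrative, not from the minutes).

def clamp_speed(speed_kmh: float) -> float:
    """Return the speed unchanged, rejecting out-of-range values."""
    if not 0 <= speed_kmh <= 200:
        raise ValueError(f"speed out of range: {speed_kmh}")
    return speed_kmh

# Positive tests: values inside the expected parameters.
assert clamp_speed(0) == 0
assert clamp_speed(120.5) == 120.5

# Negative (out-of-bounds) tests: the function must refuse these
# rather than silently producing a wrong answer.
for bad in (-1, 200.1, float("inf"), float("nan")):
    try:
        clamp_speed(bad)
    except ValueError:
        pass  # expected: the out-of-bounds value was rejected
    else:
        raise AssertionError(f"{bad} was accepted but should be rejected")
```

Note that the negative tests are the ones doing the falsification work Paul described: each one is an attempt to show the "inputs are handled safely" hypothesis is false.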
Igor: You can break this down by defining classes of interference and then showing how your chosen solutions deal with them.
This feels so philosophical because it starts top-down. If you have not done the engineering work to understand the building blocks, the space in which your hypotheses can run is too large.
Pete: One of the foundations of testing is coming up with a limited set of tests to cover a potentially unbounded set of possibilities. This means that there is an inherent assumption that you will do a level of analysis of the system you are testing to inform the set of tests that you will need to cover it sufficiently.
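One common form of the analysis Pete describes is equivalence partitioning with boundary values: the input domain is divided into classes, and one representative per class (plus the boundaries between classes) gives a finite test set for an unbounded input space. The `grade` function below is a hypothetical illustration, not an example from the discussion:

```python
# Illustrative sketch: the input domain of this hypothetical function is
# unbounded (any integer), but analysis reduces it to a few equivalence
# classes, each covered by one representative test plus its boundaries.

def grade(score: int) -> str:
    """Map a 0-100 score to a result; anything else is invalid."""
    if not 0 <= score <= 100:
        return "invalid"
    if score >= 50:
        return "pass"
    return "fail"

# One representative per analysed class, plus boundary values between
# classes - a limited test set covering a potentially unbounded space.
cases = {
    -1: "invalid",   # below the valid range (boundary)
    0: "fail",       # lower boundary of the valid range
    49: "fail",      # just below the pass threshold
    50: "pass",      # pass-threshold boundary
    100: "pass",     # upper boundary of the valid range
    101: "invalid",  # above the valid range (boundary)
}
for score, expected in cases.items():
    assert grade(score) == expected
```

The point is Pete's "inherent assumption": the six cases above are only sufficient if the analysis that produced the classes is sound.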
Igor: The question is: can I reduce the set of components that I need to test by partitioning and analysing - including analysing the partitioning approach itself - and introduce mechanisms into the system to enforce / achieve the desired behaviour, or to prevent / detect the identified types of interference?
Igor: I don’t believe that it is possible to show you have sufficient testing coverage unless you can reduce the problem into a tractable size. It is not possible to achieve this without an understanding of the system and the types of interference that you wish to guard against. And when it comes to spatial interference, the only provable solution is a hybrid solution involving a hardware feature.
Topics for next time:
- Examples from Paul about testing and engineering hypotheses
- How can we more effectively share the outcomes of our discussions to the wider community
There will be no meeting next week (Paul on holiday)