Right now the GitHub Actions workflow runs the examples and fails when any example fails to complete. This is good: it will catch changes in Python, PyHELICS, or the HELICS API that are fundamentally breaking. What we probably need on top of that, though, is some way of verifying that the data coming out of the co-simulations is actually correct, not just that the runs finish.
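One possible shape for this is a golden-file check: each example writes its results to disk, a known-good reference file is committed to the repo, and an extra CI step compares the two within a floating-point tolerance. The sketch below assumes each example produces a `results.csv` and that a reference `expected_results.csv` is checked in; those file names and the all-numeric column layout are assumptions for illustration, not the repo's actual structure.

```python
# validate_results.py -- a minimal sketch of co-simulation output validation.
# Assumptions (not the repo's real layout): the example writes results.csv,
# a known-good expected_results.csv is checked into the repo, and every
# column is numeric.
import csv
import math
import sys


def load_csv(path):
    """Read a CSV into a list of {column: float} dicts keyed by the header."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return [
            {key: float(value) for key, value in row.items()}
            for row in reader
        ]


def validate(actual_path, expected_path, rel_tol=1e-6, abs_tol=1e-9):
    """Exit nonzero (failing the CI step) if any value drifts from the reference."""
    actual = load_csv(actual_path)
    expected = load_csv(expected_path)
    if len(actual) != len(expected):
        sys.exit(f"Row count mismatch: {len(actual)} vs {len(expected)}")
    for i, (a_row, e_row) in enumerate(zip(actual, expected)):
        for key, e_val in e_row.items():
            if not math.isclose(a_row[key], e_val,
                                rel_tol=rel_tol, abs_tol=abs_tol):
                sys.exit(f"Row {i}, column {key!r}: "
                         f"{a_row[key]} != {e_val}")
    print("Results match the reference within tolerance.")


if __name__ == "__main__":
    validate(sys.argv[1], sys.argv[2])
```

In the workflow this would run as an additional step after each example completes, e.g. `python validate_results.py results.csv expected_results.csv`, so a run that finishes but produces wrong numbers still fails CI. Comparing within a tolerance rather than exactly matters here: co-simulation outputs can differ slightly across platforms and HELICS versions without actually being wrong.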