
Commit e246300

ax3l and EZoni authored
Checksum Reset via Env Variable (#5105)

* Checksum Reset via Env Variable

  In preparation for CTest adoption, we want to make it easier to quickly run and reset checksums locally. This adds the feature to reset instead of evaluate in regular test runs.

* Improve wording

Co-authored-by: Edoardo Zoni <[email protected]>

1 parent 4b04701 commit e246300

File tree

2 files changed: +29 −3 lines changed

Docs/source/developers/checksum.rst

Lines changed: 15 additions & 0 deletions

@@ -65,6 +65,21 @@ Since this will automatically change the JSON file stored on the repo, make a se
     git add <test name>.json
     git commit -m "reset benchmark for <test name> because ..." --author="Tools <[email protected]>"
 
+Automated reset of a list of test benchmarks
+--------------------------------------------
+
+If you set the environment variable ``export CHECKSUM_RESET=ON`` before running tests that are compared against existing benchmarks, the test analysis will reset the benchmarks to the new values, skipping the comparison.
+
+With `CTest <https://cmake.org/cmake/help/latest/manual/ctest.1.html>`__ (coming soon), select the test(s) to reset by `name <https://cmake.org/cmake/help/latest/manual/ctest.1.html#run-tests>`__ or `label <https://cmake.org/cmake/help/latest/manual/ctest.1.html#label-matching>`__.
+
+.. code-block:: bash
+
+   # regex filter: matched names
+   CHECKSUM_RESET=ON ctest --test-dir build -R "Langmuir_multi|LaserAcceleration"
+
+   # ... check and commit changes ...
+
 
 Reset a benchmark from the Azure pipeline output on Github
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
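The ``CHECKSUM_RESET=ON`` recipe documented above works because the analysis code treats a small set of strings as truthy, case-insensitively. The following is a minimal, self-contained sketch of that check; the helper name `checksum_reset_requested` and its `env` parameter are illustrative additions, not part of the patch.

```python
import os

# Truthy spellings accepted for CHECKSUM_RESET, mirroring the list used in
# checksumAPI.py: 'true', '1', 't', 'y', 'yes', 'on'.
_TRUTHY = ['true', '1', 't', 'y', 'yes', 'on']

def checksum_reset_requested(env=None):
    """Return True if CHECKSUM_RESET is set to a truthy value.

    `env` defaults to os.environ; it is a parameter here only so the
    sketch can be exercised without mutating the process environment.
    """
    env = os.environ if env is None else env
    return env.get('CHECKSUM_RESET', 'False').lower() in _TRUTHY

# 'ON' (as recommended in the docs) matches case-insensitively;
# an unset variable falls back to the default 'False' and evaluates normally.
print(checksum_reset_requested({'CHECKSUM_RESET': 'ON'}))  # True
print(checksum_reset_requested({}))                        # False
```

Note that any string outside the list (e.g. `'0'` or `'off'`) leaves the comparison path enabled, since the check is a membership test rather than a boolean cast.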

Regression/Checksum/checksumAPI.py

Lines changed: 14 additions & 3 deletions

@@ -42,6 +42,9 @@ def evaluate_checksum(test_name, output_file, output_format='plotfile', rtol=1.e
     Read checksum from output file, read benchmark
     corresponding to test_name, and assert their equality.
 
+    If the environment variable CHECKSUM_RESET is set while this function is run,
+    the evaluation will be replaced with a call to reset_benchmark (see below).
+
     Parameters
     ----------
     test_name: string
@@ -65,9 +68,17 @@ def evaluate_checksum(test_name, output_file, output_format='plotfile', rtol=1.e
     do_particles: bool, default=True
         Whether to compare particles in the checksum.
     """
-    test_checksum = Checksum(test_name, output_file, output_format,
-                             do_fields=do_fields, do_particles=do_particles)
-    test_checksum.evaluate(rtol=rtol, atol=atol)
+    # Reset benchmark?
+    reset = ( os.getenv('CHECKSUM_RESET', 'False').lower() in
+              ['true', '1', 't', 'y', 'yes', 'on'] )
+
+    if reset:
+        print(f"Environment variable CHECKSUM_RESET is set, resetting benchmark for {test_name}")
+        reset_benchmark(test_name, output_file, output_format, do_fields, do_particles)
+    else:
+        test_checksum = Checksum(test_name, output_file, output_format,
+                                 do_fields=do_fields, do_particles=do_particles)
+        test_checksum.evaluate(rtol=rtol, atol=atol)
 
 
 def reset_benchmark(test_name, output_file, output_format='plotfile', do_fields=True, do_particles=True):
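The reset-versus-evaluate dispatch in the patched `evaluate_checksum` can be exercised end to end with stubbed collaborators. This is a self-contained sketch under stated assumptions, not the real checksumAPI.py: `Checksum` and `reset_benchmark` here are stand-ins that only record which branch ran, and the test name and output path are illustrative.

```python
import os

# Record of which branch was taken, for demonstration purposes.
calls = []

def reset_benchmark(test_name, *args):
    # Stand-in for checksumAPI.reset_benchmark: just log the call.
    calls.append(('reset', test_name))

class Checksum:
    # Stand-in for the real Checksum class: just log the evaluation.
    def __init__(self, test_name, output_file, output_format,
                 do_fields=True, do_particles=True):
        self.test_name = test_name
    def evaluate(self, rtol, atol):
        calls.append(('evaluate', self.test_name))

def evaluate_checksum(test_name, output_file, output_format='plotfile',
                      rtol=1.e-9, atol=1.e-40,
                      do_fields=True, do_particles=True):
    # Same dispatch logic as the patched function: reset the benchmark
    # instead of evaluating it when CHECKSUM_RESET is set to a truthy value.
    reset = (os.getenv('CHECKSUM_RESET', 'False').lower() in
             ['true', '1', 't', 'y', 'yes', 'on'])
    if reset:
        reset_benchmark(test_name, output_file, output_format,
                        do_fields, do_particles)
    else:
        Checksum(test_name, output_file, output_format,
                 do_fields=do_fields,
                 do_particles=do_particles).evaluate(rtol=rtol, atol=atol)

os.environ['CHECKSUM_RESET'] = 'ON'
evaluate_checksum('Langmuir_multi', 'diags/diag1000000')   # reset path
os.environ.pop('CHECKSUM_RESET', None)
evaluate_checksum('Langmuir_multi', 'diags/diag1000000')   # evaluate path
print(calls)  # [('reset', 'Langmuir_multi'), ('evaluate', 'Langmuir_multi')]
```

Because the switch is read from the environment at call time rather than passed as an argument, existing test scripts that call `evaluate_checksum` gain the reset behavior without any code changes, which is what makes the `CHECKSUM_RESET=ON ctest ...` workflow possible.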
