64 changes: 39 additions & 25 deletions utils/omega/ctest/README.md
@@ -26,7 +26,13 @@ CTests and can optionally submit the job script.
source load_dev_polaris_0.3.0-alpha.1_chrysalis_intel_openmpi.sh
```

3. Run the utility:
3. Initialize the submodule if you are using it (as opposed to a different
Omega development branch) and you have not already done so:
```
git submodule update --init e3sm_submodules/Omega
```

4. Run the utility:
```
./utils/omega/ctest/omega_ctest.py
```
@@ -60,7 +66,7 @@ CTests and can optionally submit the job script.

* `--cmake_flags="<flags>"`: Extra flags to pass to the `cmake` command

4. If you are on a login node and didn't use the `-s` flag, you will need
5. If you are on a login node and didn't use the `-s` flag, you will need
to submit the batch job to run CTests yourself (perhaps after editing the
job script), e.g.:
```
@@ -70,27 +76,35 @@ CTests and can optionally submit the job script.
If all goes well, you will see something like:
```
$ cat omega_ctest_chrysalis_intel.o464153
Test project /gpfs/fs1/home/ac.xylar/e3sm_work/polaris/add-omega-ctest-util/build_omega/build_chrysalis_intel
Start 1: DATA_TYPES_TEST
1/9 Test #1: DATA_TYPES_TEST .................. Passed 0.38 sec
Start 2: MACHINE_ENV_TEST
2/9 Test #2: MACHINE_ENV_TEST ................. Passed 0.98 sec
Start 3: BROADCAST_TEST
3/9 Test #3: BROADCAST_TEST ................... Passed 1.13 sec
Start 4: LOGGING_TEST
4/9 Test #4: LOGGING_TEST ..................... Passed 0.03 sec
Start 5: DECOMP_TEST
5/9 Test #5: DECOMP_TEST ...................... Passed 1.20 sec
Start 6: HALO_TEST
6/9 Test #6: HALO_TEST ........................ Passed 1.08 sec
Start 7: IO_TEST
7/9 Test #7: IO_TEST .......................... Passed 2.94 sec
Start 8: CONFIG_TEST
8/9 Test #8: CONFIG_TEST ...................... Passed 1.01 sec
Start 9: YAKL_TEST
9/9 Test #9: YAKL_TEST ........................ Passed 0.03 sec

100% tests passed, 0 tests failed out of 9

Total Test time (real) = 8.91 sec
Test project /gpfs/fs1/home/ac.xylar/e3sm_work/polaris/improve-omega-ctest-output/build_omega/build_chrysalis_intel
Start 1: DATA_TYPES_TEST
1/33 Test #1: DATA_TYPES_TEST .................... Passed 0.44 sec
Start 2: MACHINE_ENV_TEST
2/33 Test #2: MACHINE_ENV_TEST ................... Passed 0.88 sec
Start 3: BROADCAST_TEST
3/33 Test #3: BROADCAST_TEST ..................... Passed 0.89 sec
Start 4: LOGGING_TEST
...
32/33 Test #32: GSWC_CALL_TEST ..................... Passed 0.06 sec
Start 33: EOS_TEST
33/33 Test #33: EOS_TEST ........................... Passed 1.20 sec

100% tests passed, 0 tests failed out of 33


Label Time Summary:
OPENMP = 63.42 sec*proc (32 tests)
Omega-0 = 63.42 sec*proc (32 tests)

Total Test time (real) = 63.51 sec

(Copy the following to a comment in your Omega PR)

CTest unit tests:
- Machine: chrysalis
- Compiler: intel
- Build type: Release
- Result: All tests passed
- Log: /path/to/ctests.log

```
20 changes: 18 additions & 2 deletions utils/omega/ctest/omega_ctest.py
@@ -130,14 +130,29 @@ def download_meshes(config):
return download_targets


def write_omega_ctest_job_script(config, machine, compiler, nodes=1):
def write_omega_ctest_job_script(config, machine, compiler, debug, nodes=1):
"""
Write a job script for running Omega CTest using the generalized template.
"""
build_omega_dir = os.path.abspath('build_omega')
build_dir = os.path.join(build_omega_dir, f'build_{machine}_{compiler}')

run_command = f'cd {build_dir}\n./omega_ctest.sh'
build_type = 'Debug' if debug else 'Release'

this_dir = os.path.realpath(
os.path.join(os.getcwd(), os.path.dirname(__file__))
)
template_filename = os.path.join(this_dir, 'run_command.template')

with open(template_filename, 'r', encoding='utf-8') as f:
template = Template(f.read())

run_command = template.render(
build_dir=build_dir,
machine=machine,
compiler=compiler,
build_type=build_type,
)

job_script_filename = f'job_build_and_ctest_omega_{machine}_{compiler}.sh'
job_script_filename = os.path.join(build_omega_dir, job_script_filename)
@@ -255,6 +270,7 @@ def main():
config=config,
machine=machine,
compiler=compiler,
debug=debug,
)

if submit:
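The new Python code above renders `run_command.template` with Jinja2's `Template.render`. As a rough, stdlib-only sketch of that substitution step (using `string.Template`'s `$var` placeholders in place of Jinja2's `{{ var }}` syntax, with an illustrative `build_dir` value):

```python
from string import Template

# Minimal stand-in for the Jinja2 rendering step; the real template file
# uses "{{ build_dir }}" syntax and several more variables.
template = Template("cd $build_dir\n./omega_ctest.sh 2>&1 | tee ctests.log")
run_command = template.substitute(build_dir="build_omega/build_chrysalis_intel")
print(run_command.splitlines()[0])
```

The rendered string is then written into the job script, so each placeholder must be supplied at render time (`string.Template.substitute` raises `KeyError` on a missing variable, much as Jinja2 would silently drop it in its default configuration).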
58 changes: 58 additions & 0 deletions utils/omega/ctest/run_command.template
@@ -0,0 +1,58 @@
cd {{ build_dir }}
./omega_ctest.sh 2>&1 | tee ctests.log
log_path="$(pwd)/ctests.log"

echo ""
echo "(Copy the following to a comment in your Omega PR)"
echo ""
echo "### CTest unit tests:"
echo "- Machine: \`{{ machine }}\`"
echo "- Compiler: \`{{ compiler }}\`"
echo "- Build type: \`{{ build_type }}\`"

# Summarize CTest results
if grep -q "100% tests passed" ctests.log; then
echo "- Result: All tests passed"
else
# Parse counts if present: "XX% tests passed, N tests failed out of T"
counts_line=$(grep -E '[0-9]+% tests passed, [0-9]+ tests failed out of [0-9]+' ctests.log | tail -n 1)
if [ -n "$counts_line" ]; then
failed_count=$(echo "$counts_line" | sed -E 's/.* ([0-9]+) tests failed out of ([0-9]+).*/\1/')
total_count=$(echo "$counts_line" | sed -E 's/.* ([0-9]+) tests failed out of ([0-9]+).*/\2/')
failures_header="- Failures (${failed_count} of ${total_count}):"
else
failures_header="- Failures:"
fi

# Try to parse the canonical summary block first, only collecting lines like: " 6 - NAME (Failed)"
failed_from_block=$(awk '
/The following tests FAILED:/ {flag=1; next}
flag {
if ($0 ~ /^[[:space:]]*[0-9]+[[:space:]]+-[[:space:]]+.+\(Failed\)/) { print; next }
# Stop when we reach a line that does not match the expected failure entry pattern
exit
}
' ctests.log \
| sed -E 's/^[[:space:]]*[0-9]+[[:space:]]+-[[:space:]]+//' \
| sed -E 's/[[:space:]]*\(Failed\).*//')

if [ -n "${failed_from_block}" ]; then
echo "${failures_header}"
while IFS= read -r testname; do
[ -n "$testname" ] && echo "  - \`$testname\`"
done <<< "${failed_from_block}"
else
# Fallback: parse per-test lines that include ***Failed
failed_from_lines=$(grep -E '\*\*\*Failed' ctests.log \
| sed -E 's/.*Test #[0-9]+: +(.*[^. ])[. ]*\*\*\*Failed.*/\1/')
if [ -n "${failed_from_lines}" ]; then
echo "${failures_header}"
while IFS= read -r testname; do
[ -n "$testname" ] && echo "  - \`$testname\`"
done <<< "${failed_from_lines}"
else
echo "- Result: Some tests failed"
fi
fi
fi
echo "- Log: \`${log_path}\`"
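The log-summarizing logic in this template (the summary-line `grep`, the counts `sed`, and the `awk` failure block) can be sketched in Python for clarity. This is an illustrative re-implementation, not part of the PR, and it omits the `***Failed` fallback branch:

```python
import re

def summarize_ctest_log(log: str) -> list[str]:
    """Sketch of the template's report logic over a ctests.log string."""
    if "100% tests passed" in log:
        return ["- Result: All tests passed"]
    # Parse counts if present: "XX% tests passed, N tests failed out of T"
    # (keep the last match, like `tail -n 1` in the template).
    m = None
    for m in re.finditer(
        r"[0-9]+% tests passed, ([0-9]+) tests failed out of ([0-9]+)", log
    ):
        pass
    header = f"- Failures ({m.group(1)} of {m.group(2)}):" if m else "- Failures:"
    # Collect entries like "  6 - NAME (Failed)" after the canonical marker.
    names = []
    in_block = False
    for line in log.splitlines():
        if "The following tests FAILED:" in line:
            in_block = True
            continue
        if in_block:
            entry = re.match(r"\s*[0-9]+\s+-\s+(.+?)\s*\(Failed\)", line)
            if entry:
                names.append(entry.group(1))
            else:
                break  # stop at the first non-matching line, as the awk does
    if names:
        return [header] + [f"  - `{name}`" for name in names]
    return ["- Result: Some tests failed"]

sample = """\
90% tests passed, 2 tests failed out of 33

The following tests FAILED:
	  5 - DECOMP_TEST (Failed)
	  6 - HALO_TEST (Failed)
Errors while running CTest
"""
print("\n".join(summarize_ctest_log(sample)))
```

Keeping the "stop at the first non-matching line" rule matters: CTest prints unrelated lines (e.g. `Errors while running CTest`) right after the failure block, and without the early exit they would be swept into the PR comment.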