test_sys_bless_tests_results is failing for me #4874

@billsacks

I'm trying to find a machine where I can run scripts_regression_tests, but everywhere I try, I get failures in the first set of tests, test_sys_bless_tests_results (there are others too, but I'll start with this first set...).

The failures differ depending on the machine I run them on:

Failures on my Mac with cime6.1.33
$ pytest ./CIME/tests/test_sys_bless_tests_results.py
Testing commit c64260ed94aff167a59635ed4a9ae5af16824e91
Using cime_model = cesm
Testing machine = green
Test root: /Users/sacks/projects/scratch/scripts_regression_test.20251010_114748
Test driver: nuopc
Python version 3.11.12 (main, Apr  8 2025, 14:15:29) [Clang 16.0.0 (clang-1600.0.26.6)]

============================================================================================================================ test session starts =============================================================================================================================
platform darwin -- Python 3.11.12, pytest-7.3.1, pluggy-1.0.0
rootdir: /Users/sacks/cesm/cesm2/cime
configfile: setup.cfg
plugins: json-report-1.5.0, metadata-3.0.0, anyio-4.4.0, cov-4.0.0
collected 2 items

CIME/tests/test_sys_bless_tests_results.py FF                                                                                                                                                                                                                          [100%]

================================================================================================================================== FAILURES ==================================================================================================================================
________________________________________________________________________________________________________________ TestBlessTestResults.test_bless_test_results ________________________________________________________________________________________________________________

self = <CIME.tests.test_sys_bless_tests_results.TestBlessTestResults testMethod=test_bless_test_results>

    def test_bless_test_results(self):
        if self.NO_FORTRAN_RUN:
            self.skipTest("Skipping fortran test")
        # Test resubmit scenario if Machine has a batch system
        if self.MACHINE.has_batch_system():
            test_names = [
                "TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A",
                "TESTRUNDIFF_Mmpi-serial.f19_g16.A",
            ]
        else:
            test_names = ["TESTRUNDIFF_P1.f19_g16.A"]

        # Generate some baselines
        for test_name in test_names:
            if self._config.create_test_flag_mode == "e3sm":
                genargs = ["-g", "-o", "-b", self._baseline_name, test_name]
                compargs = ["-c", "-b", self._baseline_name, test_name]
            else:
                genargs = [
                    "-g",
                    self._baseline_name,
                    "-o",
                    test_name,
                    "--baseline-root ",
                    self._baseline_area,
                ]
                compargs = [
                    "-c",
                    self._baseline_name,
                    test_name,
                    "--baseline-root ",
                    self._baseline_area,
                ]

            self._create_test(genargs)
            # Hist compare should pass
            self._create_test(compargs)
            # Change behavior
            os.environ["TESTRUNDIFF_ALTERNATE"] = "True"

            # Hist compare should now fail
            test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            self._create_test(compargs, test_id=test_id, run_errors=True)

            # compare_test_results should detect the fail
            cpr_cmd = "{}/compare_test_results --test-root {} -t {} ".format(
                self.TOOLS_DIR, self._testroot, test_id
            )
            output = self.run_cmd_assert_result(
                cpr_cmd, expected_stat=utils.TESTS_FAILED_ERR_CODE
            )

            # use regex
            expected_pattern = re.compile(r"FAIL %s[^\s]* BASELINE" % test_name)
            the_match = expected_pattern.search(output)
            self.assertNotEqual(
                the_match,
                None,
                msg="Cmd '%s' failed to display failed test %s in output:\n%s"
                % (cpr_cmd, test_name, output),
            )
            # Bless
            utils.run_cmd_no_fail(
                "{}/bless_test_results --test-root {} --hist-only --force -t {}".format(
                    self.TOOLS_DIR, self._testroot, test_id
                )
            )
            # Hist compare should now pass again
            self._create_test(compargs)
>           self.verify_perms(self._baseline_area)

CIME/tests/test_sys_bless_tests_results.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CIME/tests/base.py:305: in verify_perms
    self.assertTrue(
E   AssertionError: 0 is not true : file /Users/sacks/projects/scratch/scripts_regression_test.20251010_114748/baselines/fake_testing_only_20251010_114748/TESTRUNDIFF_P1.f19_g16.A.green_gnu/cpl.hi.0.nc is not group writeable
_________________________________________________________________________________________________________________ TestBlessTestResults.test_rebless_namelist _________________________________________________________________________________________________________________

self = <CIME.tests.test_sys_bless_tests_results.TestBlessTestResults testMethod=test_rebless_namelist>

        def test_rebless_namelist(self):
            # Generate some namelist baselines
            if self.NO_FORTRAN_RUN:
                self.skipTest("Skipping fortran test")
            test_to_change = "TESTRUNPASS_P1.f19_g16.A"
            if self._config.create_test_flag_mode == "e3sm":
                genargs = ["-g", "-o", "-b", self._baseline_name, "cime_test_only_pass"]
                compargs = ["-c", "-b", self._baseline_name, "cime_test_only_pass"]
            else:
                genargs = ["-g", self._baseline_name, "-o", "cime_test_only_pass"]
                compargs = ["-c", self._baseline_name, "cime_test_only_pass"]

            self._create_test(genargs)

            # Basic namelist compare
            test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            cases = self._create_test(compargs, test_id=test_id)
            casedir = self.get_casedir(test_to_change, cases)

            # Check standalone case.cmpgen_namelists
            self.run_cmd_assert_result("./case.cmpgen_namelists", from_dir=casedir)

            # compare_test_results should pass
            cpr_cmd = "{}/compare_test_results --test-root {} -n -t {} ".format(
                self.TOOLS_DIR, self._testroot, test_id
            )
            output = self.run_cmd_assert_result(cpr_cmd)

            # use regex
            expected_pattern = re.compile(r"PASS %s[^\s]* NLCOMP" % test_to_change)
            the_match = expected_pattern.search(output)
            msg = f"Cmd {cpr_cmd} failed to display passed test in output:\n{output}"
            self.assertNotEqual(
                the_match,
                None,
                msg=msg,
            )

            # Modify namelist
            fake_nl = """
     &fake_nml
       fake_item = 'fake'
       fake = .true.
    /"""
            baseline_area = self._baseline_area
            baseline_glob = glob.glob(
                os.path.join(baseline_area, self._baseline_name, "TEST*")
            )
            self.assertEqual(
                len(baseline_glob),
                3,
                msg="Expected three matches, got:\n%s" % "\n".join(baseline_glob),
            )

            for baseline_dir in baseline_glob:
                nl_path = os.path.join(baseline_dir, "CaseDocs", "datm_in")
                self.assertTrue(os.path.isfile(nl_path), msg="Missing file %s" % nl_path)

                os.chmod(nl_path, stat.S_IRUSR | stat.S_IWUSR)
                with open(nl_path, "a") as nl_file:
                    nl_file.write(fake_nl)

            # Basic namelist compare should now fail
            test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            self._create_test(compargs, test_id=test_id, run_errors=True)
            casedir = self.get_casedir(test_to_change, cases)

            # Unless namelists are explicitly ignored
            test_id2 = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            self._create_test(compargs + ["--ignore-namelists"], test_id=test_id2)

            self.run_cmd_assert_result(
                "./case.cmpgen_namelists", from_dir=casedir, expected_stat=100
            )

            # preview namelists should work
            self.run_cmd_assert_result("./preview_namelists", from_dir=casedir)

            # This should still fail
            self.run_cmd_assert_result(
                "./case.cmpgen_namelists", from_dir=casedir, expected_stat=100
            )

            # compare_test_results should fail
            cpr_cmd = "{}/compare_test_results --test-root {} -n -t {} ".format(
                self.TOOLS_DIR, self._testroot, test_id
            )
            output = self.run_cmd_assert_result(
                cpr_cmd, expected_stat=utils.TESTS_FAILED_ERR_CODE
            )

            # use regex
            expected_pattern = re.compile(r"FAIL %s[^\s]* NLCOMP" % test_to_change)
            the_match = expected_pattern.search(output)
            self.assertNotEqual(
                the_match,
                None,
                msg="Cmd '%s' failed to display passed test in output:\n%s"
                % (cpr_cmd, output),
            )

            # Bless
            new_test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            utils.run_cmd_no_fail(
                "{}/bless_test_results --test-root {} -n --force -t {} --new-test-root={} --new-test-id={}".format(
                    self.TOOLS_DIR, self._testroot, test_id, self._testroot, new_test_id
                )
            )

            # Basic namelist compare should now pass again
            self._create_test(compargs)

>           self.verify_perms(self._baseline_area)

CIME/tests/test_sys_bless_tests_results.py:218:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CIME/tests/base.py:305: in verify_perms
    self.assertTrue(
E   AssertionError: 0 is not true : file /Users/sacks/projects/scratch/scripts_regression_test.20251010_114748/baselines/fake_testing_only_20251010_114802/TESTRUNPASS_P1.f19_g16.A.green_gnu/cpl.hi.0.nc is not group writeable
========================================================================================================================== short test summary info ===========================================================================================================================
FAILED CIME/tests/test_sys_bless_tests_results.py::TestBlessTestResults::test_bless_test_results - AssertionError: 0 is not true : file /Users/sacks/projects/scratch/scripts_regression_test.20251010_114748/baselines/fake_testing_only_20251010_114748/TESTRUNDIFF_P1.f19_g16.A.green_gnu/cpl.hi.0.nc is not group writeable
FAILED CIME/tests/test_sys_bless_tests_results.py::TestBlessTestResults::test_rebless_namelist - AssertionError: 0 is not true : file /Users/sacks/projects/scratch/scripts_regression_test.20251010_114748/baselines/fake_testing_only_20251010_114802/TESTRUNPASS_P1.f19_g16.A.green_gnu/cpl.hi.0.nc is not group writeable
============================================================================================================================= 2 failed in 43.32s =============================================================================================================================
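Both failures here come from verify_perms, which asserts that every file under the baseline area is group writeable. My understanding is that the check amounts to roughly the following (a sketch only, not the exact CIME code; the path is just whatever file the failure message names):

    import os
    import stat

    def is_group_writeable(path):
        # verify_perms appears to require the group-write bit on every baseline file
        mode = os.stat(path).st_mode
        return bool(mode & stat.S_IWGRP)

    # e.g. the cpl.hi.0.nc file named in the first failure returns False here

So on my Mac the tests otherwise run to completion, and only this final permissions check fails.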

Failures on derecho with cime6.1.27
$ pytest ./CIME/tests/test_sys_bless_tests_results.py
Testing commit c65c0c4cc33468a276cc4eba5ef663fdad92d5b7
Using cime_model = cesm
Testing machine = derecho
Test root: /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031
Test driver: nuopc
Python version 3.13.1 | packaged by conda-forge | (main, Jan 13 2025, 09:53:10) [GCC 13.3.0]

============================================================================================================================ test session starts =============================================================================================================================
platform linux -- Python 3.13.1, pytest-8.3.4, pluggy-1.6.0
rootdir: /glade/derecho/scratch/sacks/cesm_code/CESM3/cime
configfile: setup.cfg
plugins: json-report-1.5.0, metadata-3.1.1
collected 2 items

CIME/tests/test_sys_bless_tests_results.py FF                                                                                                                                                                                                                          [100%]

================================================================================================================================== FAILURES ==================================================================================================================================
________________________________________________________________________________________________________________ TestBlessTestResults.test_bless_test_results ________________________________________________________________________________________________________________

self = <CIME.tests.test_sys_bless_tests_results.TestBlessTestResults testMethod=test_bless_test_results>

    def test_bless_test_results(self):
        if self.NO_FORTRAN_RUN:
            self.skipTest("Skipping fortran test")
        # Test resubmit scenario if Machine has a batch system
        if self.MACHINE.has_batch_system():
            test_names = [
                "TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A",
                "TESTRUNDIFF_Mmpi-serial.f19_g16.A",
            ]
        else:
            test_names = ["TESTRUNDIFF_P1.f19_g16.A"]

        # Generate some baselines
        for test_name in test_names:
            if self._config.create_test_flag_mode == "e3sm":
                genargs = ["-g", "-o", "-b", self._baseline_name, test_name]
                compargs = ["-c", "-b", self._baseline_name, test_name]
            else:
                genargs = [
                    "-g",
                    self._baseline_name,
                    "-o",
                    test_name,
                    "--baseline-root ",
                    self._baseline_area,
                ]
                compargs = [
                    "-c",
                    self._baseline_name,
                    test_name,
                    "--baseline-root ",
                    self._baseline_area,
                ]

            self._create_test(genargs)
            # Hist compare should pass
            self._create_test(compargs)
            # Change behavior
            os.environ["TESTRUNDIFF_ALTERNATE"] = "True"

            # Hist compare should now fail
            test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
>           self._create_test(compargs, test_id=test_id, run_errors=True)

CIME/tests/test_sys_bless_tests_results.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CIME/tests/base.py:249: in _create_test
    output = self.run_cmd_assert_result(
CIME/tests/base.py:142: in run_cmd_assert_result
    self.assertEqual(stat, expected_stat, msg=msg)
E   AssertionError: 0 != 100 :
E       COMMAND:  /glade/derecho/scratch/sacks/cesm_code/CESM3/cime/scripts/create_test -c fake_testing_only_20251010_115032 TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A --baseline-root  /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031/baselines -t fake_testing_only_20251010_115032-20251010_115125 --baseline-root /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031/baselines --machine derecho --test-root=/glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031 --output-root=/glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031 --wait -t fake_testing_only_20251010_115032-20251010_115228 --baseline-root /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031/baselines --test-root=/glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031 --output-root=/glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031 --wait
E       FROM_DIR: /glade/derecho/scratch/sacks/cesm_code/CESM3/cime
E       EXPECTED STAT 100, INSTEAD GOT STAT 0
E       OUTPUT: Testnames: ['TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel']
E   Using project from env PROJECT: P93300606
E   create_test will do up to 1 tasks simultaneously
E   create_test will use up to 160 cores simultaneously
E   Creating test directory /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031/TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel.C.fake_testing_only_20251010_115032-20251010_115228
E   RUNNING TESTS:
E     TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel
E   Starting CREATE_NEWCASE for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel with 1 procs
E   Finished CREATE_NEWCASE for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel in 0.567991 seconds (PASS)
E   Starting XML for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel with 1 procs
E   Finished XML for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel in 0.182300 seconds (PASS)
E   Starting SETUP for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel with 1 procs
E   Finished SETUP for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel in 2.715725 seconds (PASS)
E   Starting SHAREDLIB_BUILD for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel with 1 procs
E   Finished SHAREDLIB_BUILD for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel in 0.249583 seconds (PASS)
E   Starting MODEL_BUILD for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel with 4 procs
E   Finished MODEL_BUILD for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel in 0.271929 seconds (PASS)
E   Starting RUN for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel with 1 proc on interactive node and 1 procs on compute nodes
E   Finished RUN for test TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel in 0.961034 seconds (PEND). [COMPLETED 1 of 1]
E   Waiting for tests to finish
E   PASS TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel RUN
E       Case dir: /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031/TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.derecho_intel.C.fake_testing_only_20251010_115032-20251010_115228
E   test-scheduler took 45.490031480789185 seconds
E       ERRPUT:
_________________________________________________________________________________________________________________ TestBlessTestResults.test_rebless_namelist _________________________________________________________________________________________________________________

self = <CIME.tests.test_sys_bless_tests_results.TestBlessTestResults testMethod=test_rebless_namelist>

        def test_rebless_namelist(self):
            # Generate some namelist baselines
            if self.NO_FORTRAN_RUN:
                self.skipTest("Skipping fortran test")
            test_to_change = "TESTRUNPASS_P1.f19_g16.A"
            if self._config.create_test_flag_mode == "e3sm":
                genargs = ["-g", "-o", "-b", self._baseline_name, "cime_test_only_pass"]
                compargs = ["-c", "-b", self._baseline_name, "cime_test_only_pass"]
            else:
                genargs = ["-g", self._baseline_name, "-o", "cime_test_only_pass"]
                compargs = ["-c", self._baseline_name, "cime_test_only_pass"]

            self._create_test(genargs)

            # Basic namelist compare
            test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            cases = self._create_test(compargs, test_id=test_id)
            casedir = self.get_casedir(test_to_change, cases)

            # Check standalone case.cmpgen_namelists
            self.run_cmd_assert_result("./case.cmpgen_namelists", from_dir=casedir)

            # compare_test_results should pass
            cpr_cmd = "{}/compare_test_results --test-root {} -n -t {} ".format(
                self.TOOLS_DIR, self._testroot, test_id
            )
            output = self.run_cmd_assert_result(cpr_cmd)

            # use regex
            expected_pattern = re.compile(r"PASS %s[^\s]* NLCOMP" % test_to_change)
            the_match = expected_pattern.search(output)
            msg = f"Cmd {cpr_cmd} failed to display passed test in output:\n{output}"
            self.assertNotEqual(
                the_match,
                None,
                msg=msg,
            )

            # Modify namelist
            fake_nl = """
     &fake_nml
       fake_item = 'fake'
       fake = .true.
    /"""
            baseline_area = self._baseline_area
            baseline_glob = glob.glob(
                os.path.join(baseline_area, self._baseline_name, "TEST*")
            )
            self.assertEqual(
                len(baseline_glob),
                3,
                msg="Expected three matches, got:\n%s" % "\n".join(baseline_glob),
            )

            for baseline_dir in baseline_glob:
                nl_path = os.path.join(baseline_dir, "CaseDocs", "datm_in")
                self.assertTrue(os.path.isfile(nl_path), msg="Missing file %s" % nl_path)

                os.chmod(nl_path, stat.S_IRUSR | stat.S_IWUSR)
                with open(nl_path, "a") as nl_file:
                    nl_file.write(fake_nl)

            # Basic namelist compare should now fail
            test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            self._create_test(compargs, test_id=test_id, run_errors=True)
            casedir = self.get_casedir(test_to_change, cases)

            # Unless namelists are explicitly ignored
            test_id2 = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            self._create_test(compargs + ["--ignore-namelists"], test_id=test_id2)

            self.run_cmd_assert_result(
                "./case.cmpgen_namelists", from_dir=casedir, expected_stat=100
            )

            # preview namelists should work
            self.run_cmd_assert_result("./preview_namelists", from_dir=casedir)

            # This should still fail
            self.run_cmd_assert_result(
                "./case.cmpgen_namelists", from_dir=casedir, expected_stat=100
            )

            # compare_test_results should fail
            cpr_cmd = "{}/compare_test_results --test-root {} -n -t {} ".format(
                self.TOOLS_DIR, self._testroot, test_id
            )
            output = self.run_cmd_assert_result(
                cpr_cmd, expected_stat=utils.TESTS_FAILED_ERR_CODE
            )

            # use regex
            expected_pattern = re.compile(r"FAIL %s[^\s]* NLCOMP" % test_to_change)
            the_match = expected_pattern.search(output)
            self.assertNotEqual(
                the_match,
                None,
                msg="Cmd '%s' failed to display passed test in output:\n%s"
                % (cpr_cmd, output),
            )

            # Bless
            new_test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            utils.run_cmd_no_fail(
                "{}/bless_test_results --test-root {} -n --force -t {} --new-test-root={} --new-test-id={}".format(
                    self.TOOLS_DIR, self._testroot, test_id, self._testroot, new_test_id
                )
            )

            # Basic namelist compare should now pass again
            self._create_test(compargs)

>           self.verify_perms(self._baseline_area)

CIME/tests/test_sys_bless_tests_results.py:218:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CIME/tests/base.py:305: in verify_perms
    self.assertTrue(
E   AssertionError: 0 is not true : file /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031/baselines/fake_testing_only_20251010_115319/TESTRUNPASS_P1.f45_g37.A.derecho_intel/TestStatus is not group writeable
========================================================================================================================== short test summary info ===========================================================================================================================
FAILED CIME/tests/test_sys_bless_tests_results.py::TestBlessTestResults::test_bless_test_results - AssertionError: 0 != 100 :
FAILED CIME/tests/test_sys_bless_tests_results.py::TestBlessTestResults::test_rebless_namelist - AssertionError: 0 is not true : file /glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031/baselines/fake_testing_only_20251010_115319/TESTRUNPASS_P1.f45_g37.A.derecho_intel/TestStatus is not group writeable
======================================================================================================================= 2 failed in 523.82s (0:08:43) ========================================================================================================================
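On derecho there is an additional failure mode: after setting TESTRUNDIFF_ALTERNATE, the history compare for the TESTRUNDIFFRESUBMIT test was expected to fail (stat 100) but create_test returned 0. To see what the individual cases actually recorded for the baseline compare, I can scan their TestStatus files; a rough sketch (assuming the usual "STATUS TESTNAME PHASE" lines, with the test root taken from the log above):

    import glob
    import os

    test_root = "/glade/derecho/scratch/sacks/scripts_regression_test.20251010_115031"
    for status_file in glob.glob(os.path.join(test_root, "TESTRUNDIFF*", "TestStatus")):
        with open(status_file) as handle:
            for line in handle:
                if "BASELINE" in line:
                    print(status_file, ":", line.strip())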

Failures on izumi with cime6.1.27
$ pytest ./CIME/tests/test_sys_bless_tests_results.py
Testing commit c65c0c4cc33468a276cc4eba5ef663fdad92d5b7
Using cime_model = cesm
Testing machine = izumi
Test root: /scratch/cluster/sacks/scripts_regression_test.20251010_115240
Test driver: nuopc
Python version 3.7.0 (default, Jun 28 2018, 13:15:42)
[GCC 7.2.0]

============================================================================================================================ test session starts =============================================================================================================================
platform linux -- Python 3.7.0, pytest-3.8.0, py-1.6.0, pluggy-0.7.1
rootdir: /home/sacks/cesm_code/CESM2/cime, inifile: setup.cfg
plugins: remotedata-0.3.0, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
collected 2 items

CIME/tests/test_sys_bless_tests_results.py FF                                                                                                                                                                                                                          [100%]

================================================================================================================================== FAILURES ==================================================================================================================================
________________________________________________________________________________________________________________ TestBlessTestResults.test_bless_test_results ________________________________________________________________________________________________________________

self = <CIME.tests.test_sys_bless_tests_results.TestBlessTestResults testMethod=test_bless_test_results>

    def test_bless_test_results(self):
        if self.NO_FORTRAN_RUN:
            self.skipTest("Skipping fortran test")
        # Test resubmit scenario if Machine has a batch system
        if self.MACHINE.has_batch_system():
            test_names = [
                "TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A",
                "TESTRUNDIFF_Mmpi-serial.f19_g16.A",
            ]
        else:
            test_names = ["TESTRUNDIFF_P1.f19_g16.A"]

        # Generate some baselines
        for test_name in test_names:
            if self._config.create_test_flag_mode == "e3sm":
                genargs = ["-g", "-o", "-b", self._baseline_name, test_name]
                compargs = ["-c", "-b", self._baseline_name, test_name]
            else:
                genargs = [
                    "-g",
                    self._baseline_name,
                    "-o",
                    test_name,
                    "--baseline-root ",
                    self._baseline_area,
                ]
                compargs = [
                    "-c",
                    self._baseline_name,
                    test_name,
                    "--baseline-root ",
                    self._baseline_area,
                ]

            self._create_test(genargs)
            # Hist compare should pass
            self._create_test(compargs)
            # Change behavior
            os.environ["TESTRUNDIFF_ALTERNATE"] = "True"

            # Hist compare should now fail
            test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp())
            self._create_test(compargs, test_id=test_id, run_errors=True)

            # compare_test_results should detect the fail
            cpr_cmd = "{}/compare_test_results --test-root {} -t {} ".format(
                self.TOOLS_DIR, self._testroot, test_id
            )
            output = self.run_cmd_assert_result(
                cpr_cmd, expected_stat=utils.TESTS_FAILED_ERR_CODE
            )

            # use regex
            expected_pattern = re.compile(r"FAIL %s[^\s]* BASELINE" % test_name)
            the_match = expected_pattern.search(output)
            self.assertNotEqual(
                the_match,
                None,
                msg="Cmd '%s' failed to display failed test %s in output:\n%s"
                % (cpr_cmd, test_name, output),
            )
            # Bless
            utils.run_cmd_no_fail(
                "{}/bless_test_results --test-root {} --hist-only --force -t {}".format(
                    self.TOOLS_DIR, self._testroot, test_id
                )
            )
            # Hist compare should now pass again
            self._create_test(compargs)
>           self.verify_perms(self._baseline_area)

CIME/tests/test_sys_bless_tests_results.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CIME/tests/base.py:307: in verify_perms
    msg="file {} is not group writeable".format(full_path),
E   AssertionError: 0 is not true : file /scratch/cluster/sacks/scripts_regression_test.20251010_115240/baselines/fake_testing_only_20251010_115240/TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16.A.izumi_intel/user_nl_docn is not group writeable
_________________________________________________________________________________________________________________ TestBlessTestResults.test_rebless_namelist _________________________________________________________________________________________________________________

self = <CIME.tests.test_sys_bless_tests_results.TestBlessTestResults testMethod=test_rebless_namelist>

    def test_rebless_namelist(self):
        # Generate some namelist baselines
        if self.NO_FORTRAN_RUN:
            self.skipTest("Skipping fortran test")
        test_to_change = "TESTRUNPASS_P1.f19_g16.A"
        if self._config.create_test_flag_mode == "e3sm":
            genargs = ["-g", "-o", "-b", self._baseline_name, "cime_test_only_pass"]
            compargs = ["-c", "-b", self._baseline_name, "cime_test_only_pass"]
        else:
            genargs = ["-g", self._baseline_name, "-o", "cime_test_only_pass"]
            compargs = ["-c", self._baseline_name, "cime_test_only_pass"]

>       self._create_test(genargs)

CIME/tests/test_sys_bless_tests_results.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CIME/tests/base.py:253: in _create_test
    expected_stat=expected_stat,
CIME/tests/base.py:142: in run_cmd_assert_result
    self.assertEqual(stat, expected_stat, msg=msg)
E   AssertionError: 100 != 0 :
E       COMMAND:  /home/sacks/cesm_code/CESM2/cime/scripts/create_test -g fake_testing_only_20251010_115723 -o cime_test_only_pass -t fake_testing_only_20251010_115723-20251010_115723 --baseline-root /scratch/cluster/sacks/scripts_regression_test.20251010_115240/baselines --machine izumi --test-root=/scratch/cluster/sacks/scripts_regression_test.20251010_115240 --output-root=/scratch/cluster/sacks/scripts_regression_test.20251010_115240 --wait
E       FROM_DIR: /home/sacks/cesm_code/CESM2/cime
E       SHOULD HAVE WORKED, INSTEAD GOT STAT 100
E       OUTPUT: Python 3.8 is recommended to run CIME. You have 3.7.
E   Testnames: ['TESTRUNPASS_P1.f19_g16.A.izumi_intel', 'TESTRUNPASS_P1.f45_g37.A.izumi_intel', 'TESTRUNPASS_P1.ne30_g16.A.izumi_intel']
E   No project info available
E   create_test will do up to 3 tasks simultaneously
E   create_test will use up to 60 cores simultaneously
E   Creating test directory /scratch/cluster/sacks/scripts_regression_test.20251010_115240/TESTRUNPASS_P1.f19_g16.A.izumi_intel.G.fake_testing_only_20251010_115723-20251010_115723
E   Creating test directory /scratch/cluster/sacks/scripts_regression_test.20251010_115240/TESTRUNPASS_P1.f45_g37.A.izumi_intel.G.fake_testing_only_20251010_115723-20251010_115723
E   Creating test directory /scratch/cluster/sacks/scripts_regression_test.20251010_115240/TESTRUNPASS_P1.ne30_g16.A.izumi_intel.G.fake_testing_only_20251010_115723-20251010_115723
E   RUNNING TESTS:
E     TESTRUNPASS_P1.f19_g16.A.izumi_intel
E     TESTRUNPASS_P1.f45_g37.A.izumi_intel
E     TESTRUNPASS_P1.ne30_g16.A.izumi_intel
E   Starting CREATE_NEWCASE for test TESTRUNPASS_P1.f19_g16.A.izumi_intel with 1 procs
E   Starting CREATE_NEWCASE for test TESTRUNPASS_P1.f45_g37.A.izumi_intel with 1 procs
E   Starting CREATE_NEWCASE for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel with 1 procs
E   Finished CREATE_NEWCASE for test TESTRUNPASS_P1.f19_g16.A.izumi_intel in 1.279424 seconds (PASS)
E   Finished CREATE_NEWCASE for test TESTRUNPASS_P1.f45_g37.A.izumi_intel in 1.286392 seconds (PASS)
E   Finished CREATE_NEWCASE for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel in 1.289378 seconds (PASS)
E   Starting XML for test TESTRUNPASS_P1.f19_g16.A.izumi_intel with 1 procs
E   Starting XML for test TESTRUNPASS_P1.f45_g37.A.izumi_intel with 1 procs
E   Starting XML for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel with 1 procs
E   Finished XML for test TESTRUNPASS_P1.f19_g16.A.izumi_intel in 0.746019 seconds (PASS)
E   Finished XML for test TESTRUNPASS_P1.f45_g37.A.izumi_intel in 0.747441 seconds (PASS)
E   Finished XML for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel in 0.777735 seconds (PASS)
E   Starting SETUP for test TESTRUNPASS_P1.f19_g16.A.izumi_intel with 1 procs
E   Starting SETUP for test TESTRUNPASS_P1.f45_g37.A.izumi_intel with 1 procs
E   Starting SETUP for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel with 1 procs
E   Finished SETUP for test TESTRUNPASS_P1.f45_g37.A.izumi_intel in 6.301063 seconds (PASS)
E   Finished SETUP for test TESTRUNPASS_P1.f19_g16.A.izumi_intel in 6.315292 seconds (PASS)
E   Finished SETUP for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel in 6.365919 seconds (PASS)
E   Starting SHAREDLIB_BUILD for test TESTRUNPASS_P1.f19_g16.A.izumi_intel with 1 procs
E   Finished SHAREDLIB_BUILD for test TESTRUNPASS_P1.f19_g16.A.izumi_intel in 0.693401 seconds (PASS)
E   Starting MODEL_BUILD for test TESTRUNPASS_P1.f19_g16.A.izumi_intel with 4 procs
E   Starting SHAREDLIB_BUILD for test TESTRUNPASS_P1.f45_g37.A.izumi_intel with 1 procs
E   Finished SHAREDLIB_BUILD for test TESTRUNPASS_P1.f45_g37.A.izumi_intel in 0.849429 seconds (PASS)
E   Finished MODEL_BUILD for test TESTRUNPASS_P1.f19_g16.A.izumi_intel in 0.913951 seconds (PASS)
E   Starting RUN for test TESTRUNPASS_P1.f19_g16.A.izumi_intel with 1 proc on interactive node and 1 procs on compute nodes
E   Starting MODEL_BUILD for test TESTRUNPASS_P1.f45_g37.A.izumi_intel with 4 procs
E   Starting SHAREDLIB_BUILD for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel with 1 procs
E   Finished SHAREDLIB_BUILD for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel in 0.847564 seconds (PASS)
E   Finished MODEL_BUILD for test TESTRUNPASS_P1.f45_g37.A.izumi_intel in 0.923528 seconds (PASS)
E   Starting RUN for test TESTRUNPASS_P1.f45_g37.A.izumi_intel with 1 proc on interactive node and 1 procs on compute nodes
E   Starting MODEL_BUILD for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel with 4 procs
E   Finished RUN for test TESTRUNPASS_P1.f19_g16.A.izumi_intel in 1.501414 seconds (PEND). [COMPLETED 1 of 3]
E   Finished MODEL_BUILD for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel in 0.880538 seconds (PASS)
E   Starting RUN for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel with 1 proc on interactive node and 1 procs on compute nodes
E   Finished RUN for test TESTRUNPASS_P1.f45_g37.A.izumi_intel in 1.543738 seconds (PEND). [COMPLETED 2 of 3]
E   Finished RUN for test TESTRUNPASS_P1.ne30_g16.A.izumi_intel in 1.637663 seconds (PEND). [COMPLETED 3 of 3]
E   Waiting for tests to finish
E   FAIL TESTRUNPASS_P1.f19_g16.A.izumi_intel (phase RUN)
E       Case dir: /scratch/cluster/sacks/scripts_regression_test.20251010_115240/TESTRUNPASS_P1.f19_g16.A.izumi_intel.G.fake_testing_only_20251010_115723-20251010_115723
E   PASS TESTRUNPASS_P1.f45_g37.A.izumi_intel RUN
E       Case dir: /scratch/cluster/sacks/scripts_regression_test.20251010_115240/TESTRUNPASS_P1.f45_g37.A.izumi_intel.G.fake_testing_only_20251010_115723-20251010_115723
E   PASS TESTRUNPASS_P1.ne30_g16.A.izumi_intel RUN
E       Case dir: /scratch/cluster/sacks/scripts_regression_test.20251010_115240/TESTRUNPASS_P1.ne30_g16.A.izumi_intel.G.fake_testing_only_20251010_115723-20251010_115723
E   test-scheduler took 43.693169593811035 seconds
E       ERRPUT:
========================================================================================================================= 2 failed in 348.44 seconds =========================================================================================================================

Any thoughts on what might be going on? Is this user error on my part? (I haven't run CIME testing in a long time.) Or are these real issues with the tests themselves?
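In case it is user error: the "not group writeable" failures make me wonder whether my umask strips group write from files created during the run. A quick, generic check (nothing CIME-specific; the scratch file name below is just a placeholder) would be:

    import os
    import stat

    # os.umask returns the previous value, so set it and immediately restore it
    current_umask = os.umask(0)
    os.umask(current_umask)
    print("umask = %04o" % current_umask)

    # Create a scratch file the ordinary way; a umask such as 0022 clears the
    # group-write bit, which could explain why verify_perms fails on newly
    # generated baseline files.
    scratch = "umask_check.tmp"
    with open(scratch, "w") as handle:
        handle.write("test\n")
    mode = os.stat(scratch).st_mode
    print("new file group writeable:", bool(mode & stat.S_IWGRP))
    os.remove(scratch)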
