[MAINTENANCE] Remove test_expectations_v3_api.py #11098

Open · wants to merge 4 commits into base: develop

Conversation

billdirks (Contributor)

  • Description of PR changes above includes a link to an existing GitHub issue
  • PR title is prefixed with one of: [BUGFIX], [FEATURE], [DOCS], [MAINTENANCE], [CONTRIB], [MINORBUMP]
  • Code is linted - run invoke lint (uses ruff format + ruff check)
  • Appropriate tests and docs have been updated

For more information about contributing, visit our community resources.

After you submit your PR, keep the page open and monitor the statuses of the various checks made by our continuous integration process at the bottom of the page. Please fix any issues that come up and reach out on Slack if you need help. Thanks for contributing!


netlify bot commented Apr 14, 2025

Deploy Preview for niobium-lead-7998 ready!

🔨 Latest commit: 7f572a0
🔍 Latest deploy log: https://app.netlify.com/sites/niobium-lead-7998/deploys/67fede1958fa6c00086b4f08
😎 Deploy Preview: https://deploy-preview-11098.docs.greatexpectations.io


codecov bot commented Apr 14, 2025

❌ 42 Tests Failed:

Tests completed: 17662 | Failed: 42 | Passed: 17620 | Skipped: 3200
View the top 3 failed test(s) by shortest run time
tests.render.test_render_BulletListContentBlock::test_all_expectations_using_test_definitions
Stack Traces | 0.002s run time
@pytest.mark.filesystem
    def test_all_expectations_using_test_definitions():
        dir_path = os.path.dirname(os.path.abspath(__file__))  # noqa: PTH120, PTH100 # FIXME CoP
        pattern = os.path.join(  # noqa: PTH118 # FIXME CoP
            dir_path, "..", "..", "tests", "test_definitions", "*", "expect*.json"
        )
        test_files = glob.glob(pattern)  # noqa: PTH207 # FIXME CoP
    
        # Historically, collecting all the JSON tests was an issue - this step ensures we actually have test data.  # noqa: E501 # FIXME CoP
>       assert len(test_files) == 61, (
            "Something went wrong when collecting JSON Expectation test fixtures"
        )
E       AssertionError: Something went wrong when collecting JSON Expectation test fixtures
E       assert 0 == 61
E        +  where 0 = len([])

tests/render/test_render_BulletListContentBlock.py:80: AssertionError
tests.render.test_render_BulletListContentBlock::test_all_expectations_using_test_definitions
Stack Traces | 0.003s run time
@pytest.mark.filesystem
    def test_all_expectations_using_test_definitions():
        dir_path = os.path.dirname(os.path.abspath(__file__))  # noqa: PTH120, PTH100 # FIXME CoP
        pattern = os.path.join(  # noqa: PTH118 # FIXME CoP
            dir_path, "..", "..", "tests", "test_definitions", "*", "expect*.json"
        )
        test_files = glob.glob(pattern)  # noqa: PTH207 # FIXME CoP
    
        # Historically, collecting all the JSON tests was an issue - this step ensures we actually have test data.  # noqa: E501 # FIXME CoP
>       assert len(test_files) == 61, (
            "Something went wrong when collecting JSON Expectation test fixtures"
        )
E       AssertionError: Something went wrong when collecting JSON Expectation test fixtures
E       assert 0 == 61
E        +  where 0 = len([])

tests/render/test_render_BulletListContentBlock.py:80: AssertionError
tests.integration.common_workflows.test_sql_asset_workflows::test_get_batch_identifiers_list__respects_order
Stack Traces | 0.034s run time
self = <sqlalchemy.engine.base.Connection object at 0x7fd5df065d60>
dialect = <sqlalchemy.dialects.postgresql.psycopg2.PGDialect_psycopg2 object at 0x7fd5dec86220>
context = <sqlalchemy.dialects.postgresql.psycopg2.PGExecutionContext_psycopg2 object at 0x7fd5df135a00>
statement = <sqlalchemy.dialects.postgresql.base.PGCompiler object at 0x7fd5decf78e0>
parameters = [{'param_1': 1}]

    def _exec_single_context(
        self,
        dialect: Dialect,
        context: ExecutionContext,
        statement: Union[str, Compiled],
        parameters: Optional[_AnyMultiExecuteParams],
    ) -> CursorResult[Any]:
        """continue the _execute_context() method for a single DBAPI
        cursor.execute() or cursor.executemany() call.
    
        """
        if dialect.bind_typing is BindTyping.SETINPUTSIZES:
            generic_setinputsizes = context._prepare_set_input_sizes()
    
            if generic_setinputsizes:
                try:
                    dialect.do_set_input_sizes(
                        context.cursor, generic_setinputsizes, context
                    )
                except BaseException as e:
                    self._handle_dbapi_exception(
                        e, str(statement), parameters, None, context
                    )
    
        cursor, str_statement, parameters = (
            context.cursor,
            context.statement,
            context.parameters,
        )
    
        effective_parameters: Optional[_AnyExecuteParams]
    
        if not context.executemany:
            effective_parameters = parameters[0]
        else:
            effective_parameters = parameters
    
        if self._has_events or self.engine._has_events:
            for fn in self.dispatch.before_cursor_execute:
                str_statement, effective_parameters = fn(
                    self,
                    cursor,
                    str_statement,
                    effective_parameters,
                    context,
                    context.executemany,
                )
    
        if self._echo:
            self._log_info(str_statement)
    
            stats = context._get_cache_stats()
    
            if not self.engine.hide_parameters:
                self._log_info(
                    "[%s] %r",
                    stats,
                    sql_util._repr_params(
                        effective_parameters,
                        batches=10,
                        ismulti=context.executemany,
                    ),
                )
            else:
                self._log_info(
                    "[%s] [SQL parameters hidden due to hide_parameters=True]",
                    stats,
                )
    
        evt_handled: bool = False
        try:
            if context.execute_style is ExecuteStyle.EXECUTEMANY:
                effective_parameters = cast(
                    "_CoreMultiExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_executemany:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_executemany(
                        cursor,
                        str_statement,
                        effective_parameters,
                        context,
                    )
            elif not effective_parameters and context.no_parameters:
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute_no_params:
                        if fn(cursor, str_statement, context):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_execute_no_params(
                        cursor, str_statement, context
                    )
            else:
                effective_parameters = cast(
                    "_CoreSingleExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
>                   self.dialect.do_execute(
                        cursor, str_statement, effective_parameters, context
                    )

.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/base.py:1964: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <sqlalchemy.dialects.postgresql.psycopg2.PGDialect_psycopg2 object at 0x7fd5dec86220>
cursor = <cursor object at 0x7fd5deea16d0; closed: -1>
statement = 'SELECT 1 \nFROM ct_column_values_to_be_between__evaluation_parameters_dataset_1 \n LIMIT %(param_1)s'
parameters = {'param_1': 1}
context = <sqlalchemy.dialects.postgresql.psycopg2.PGExecutionContext_psycopg2 object at 0x7fd5df135a00>

    def do_execute(self, cursor, statement, parameters, context=None):
>       cursor.execute(statement, parameters)
E       psycopg2.errors.UndefinedTable: relation "ct_column_values_to_be_between__evaluation_parameters_dataset_1" does not exist
E       LINE 2: FROM ct_column_values_to_be_between__evaluation_parameters_d...
E                    ^

.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/default.py:945: UndefinedTable

The above exception was the direct cause of the following exception:

self = TableAsset(name='ten trips', type='table', id=None, order_by=[], batch_metadata={}, batch_definitions=[], table_name='ct_column_values_to_be_between__evaluation_parameters_dataset_1', schema_name=None)

    @override
    def test_connection(self) -> None:
        """Test the connection for the TableAsset.
    
        Raises:
            TestConnectionError: If the connection test fails.
        """
        datasource: SQLDatasource = self.datasource
        engine: sqlalchemy.Engine = datasource.get_engine()
        inspector: sqlalchemy.Inspector = sa.inspect(engine)
    
        if self.schema_name and self.schema_name not in inspector.get_schema_names():
            raise TestConnectionError(  # noqa: TRY003 # FIXME CoP
                f'Attempt to connect to table: "{self.qualified_name}" failed because the schema '
                f'"{self.schema_name}" does not exist.'
            )
    
        try:
            with engine.connect() as connection:
                table = sa.table(self.table_name, schema=self.schema_name)
                # don't need to fetch any data, just want to make sure the table is accessible
>               connection.execute(sa.select(1, table).limit(1))

.../datasource/fluent/sql_datasource.py:1068: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/base.py:1416: in execute
    return meth(
.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/sql/elements.py:523: in _execute_on_connection
    return connection._execute_clauseelement(
.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/base.py:1638: in _execute_clauseelement
    ret = self._execute_context(
.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/base.py:1843: in _execute_context
    return self._exec_single_context(
.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/base.py:1983: in _exec_single_context
    self._handle_dbapi_exception(
.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/base.py:2352: in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/base.py:1964: in _exec_single_context
    self.dialect.do_execute(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <sqlalchemy.dialects.postgresql.psycopg2.PGDialect_psycopg2 object at 0x7fd5dec86220>
cursor = <cursor object at 0x7fd5deea16d0; closed: -1>
statement = 'SELECT 1 \nFROM ct_column_values_to_be_between__evaluation_parameters_dataset_1 \n LIMIT %(param_1)s'
parameters = {'param_1': 1}
context = <sqlalchemy.dialects.postgresql.psycopg2.PGExecutionContext_psycopg2 object at 0x7fd5df135a00>

    def do_execute(self, cursor, statement, parameters, context=None):
>       cursor.execute(statement, parameters)
E       sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "ct_column_values_to_be_between__evaluation_parameters_dataset_1" does not exist
E       LINE 2: FROM ct_column_values_to_be_between__evaluation_parameters_d...
E                    ^
E       
E       [SQL: SELECT 1 
E       FROM ct_column_values_to_be_between__evaluation_parameters_dataset_1 
E        LIMIT %(param_1)s]
E       [parameters: {'param_1': 1}]
E       (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.9.21............................../x64/lib/python3.9.../sqlalchemy/engine/default.py:945: ProgrammingError

The above exception was the direct cause of the following exception:

context = {
  "checkpoint_store_name": "checkpoint_store",
  "config_version": 4,
  "data_docs_sites": {
    "local_site": {
   ..."class_name": "InMemoryStoreBackend"
      }
    }
  },
  "validation_results_store_name": "validation_results_store"
}

    @pytest.fixture
    def postgres_asset(context: AbstractDataContext) -> _SQLAsset:
        DATASOURCE_NAME = "postgres"
        ASSET_NAME = "ten trips"
        datasource = context.data_sources.add_postgres(
            DATASOURCE_NAME, connection_string=CONNECTION_STRING
        )
>       data_asset = datasource.add_table_asset(name=ASSET_NAME, table_name=TABLE_NAME)

.../integration/common_workflows/test_sql_asset_workflows.py:45: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../datasource/fluent/sql_datasource.py:1313: in add_table_asset
    return self._add_asset(asset)
.../datasource/fluent/interfaces.py:883: in _add_asset
    asset.test_connection()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = TableAsset(name='ten trips', type='table', id=None, order_by=[], batch_metadata={}, batch_definitions=[], table_name='ct_column_values_to_be_between__evaluation_parameters_dataset_1', schema_name=None)

    @override
    def test_connection(self) -> None:
        """Test the connection for the TableAsset.
    
        Raises:
            TestConnectionError: If the connection test fails.
        """
        datasource: SQLDatasource = self.datasource
        engine: sqlalchemy.Engine = datasource.get_engine()
        inspector: sqlalchemy.Inspector = sa.inspect(engine)
    
        if self.schema_name and self.schema_name not in inspector.get_schema_names():
            raise TestConnectionError(  # noqa: TRY003 # FIXME CoP
                f'Attempt to connect to table: "{self.qualified_name}" failed because the schema '
                f'"{self.schema_name}" does not exist.'
            )
    
        try:
            with engine.connect() as connection:
                table = sa.table(self.table_name, schema=self.schema_name)
                # don't need to fetch any data, just want to make sure the table is accessible
                connection.execute(sa.select(1, table).limit(1))
        except Exception as query_error:
            LOGGER.info(f"{self.name} `.test_connection()` query failed: {query_error!r}")
>           raise TestConnectionError(  # noqa: TRY003 # FIXME CoP
                f"Attempt to connect to table: {self.qualified_name} failed because the test query "
                f"failed. Ensure the table exists and the user has access to select data from the table: {query_error}"  # noqa: E501 # FIXME CoP
            ) from query_error
E           great_expectations.datasource.fluent.interfaces.TestConnectionError: Attempt to connect to table: ct_column_values_to_be_between__evaluation_parameters_dataset_1 failed because the test query failed. Ensure the table exists and the user has access to select data from the table: (psycopg2.errors.UndefinedTable) relation "ct_column_values_to_be_between__evaluation_parameters_dataset_1" does not exist
E           LINE 2: FROM ct_column_values_to_be_between__evaluation_parameters_d...
E                        ^
E           
E           [SQL: SELECT 1 
E           FROM ct_column_values_to_be_between__evaluation_parameters_dataset_1 
E            LIMIT %(param_1)s]
E           [parameters: {'param_1': 1}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../datasource/fluent/sql_datasource.py:1071: TestConnectionError


@billdirks billdirks marked this pull request as ready for review April 15, 2025 22:30