
[MAINTENANCE] Remove Pandas Upper Pin#11677

Open
josectobar wants to merge 7 commits into develop from GX-2403-core-remove-pandas-upper-bound-2

Conversation

@josectobar
Member

  • Description of PR changes above includes a link to an existing GitHub issue
  • PR title is prefixed with one of: [BUGFIX], [FEATURE], [DOCS], [MAINTENANCE], [CONTRIB], [MINORBUMP]
  • Code is linted - run invoke lint (uses ruff format + ruff check)
  • Appropriate tests and docs have been updated

For more information about contributing, visit our community resources.

After you submit your PR, keep the page open and monitor the statuses of the various checks made by our continuous integration process at the bottom of the page. Please fix any issues that come up and reach out on Slack if you need help. Thanks for contributing!

@netlify

netlify bot commented Feb 20, 2026

Deploy Preview for niobium-lead-7998 ready!

Name Link
🔨 Latest commit daf4550
🔍 Latest deploy log https://app.netlify.com/projects/niobium-lead-7998/deploys/6998f302defffa0008f2888d
😎 Deploy Preview https://deploy-preview-11677.docs.greatexpectations.io
📱 Preview on mobile

To edit notification comments on pull requests, go to your Netlify project configuration.

@josectobar force-pushed the GX-2403-core-remove-pandas-upper-bound-2 branch from 9a68dd8 to ecb41c7 (February 20, 2026 23:00)
@codecov

codecov bot commented Feb 20, 2026

❌ 58 Tests Failed:

Tests completed | Failed | Passed | Skipped
           6557 |     58 |   6499 |     821
Top 3 failed tests, by shortest run time:
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern_list.TestNormalSql::test_success[mysql-one_pattern]
Stack Traces | 0.029s run time
self = <sqlalchemy.engine.base.Connection object at 0x7f8cc8b59c10>
dialect = <sqlalchemy.dialects.mysql.pymysql.MySQLDialect_pymysql object at 0x7f8cca57e210>
context = <sqlalchemy.dialects.mysql.mysqldb.MySQLExecutionContext_mysqldb object at 0x7f8cc82ce200>
statement = <sqlalchemy.dialects.mysql.mysqldb.MySQLCompiler_mysqldb object at 0x7f8cc8343890>
parameters = [{'col_name_m0': 'aa', 'col_name_m1': 'ab', 'col_name_m2': 'ac', 'col_name_m3': nan}]

    def _exec_single_context(
        self,
        dialect: Dialect,
        context: ExecutionContext,
        statement: Union[str, Compiled],
        parameters: Optional[_AnyMultiExecuteParams],
    ) -> CursorResult[Any]:
        """continue the _execute_context() method for a single DBAPI
        cursor.execute() or cursor.executemany() call.
    
        """
        if dialect.bind_typing is BindTyping.SETINPUTSIZES:
            generic_setinputsizes = context._prepare_set_input_sizes()
    
            if generic_setinputsizes:
                try:
                    dialect.do_set_input_sizes(
                        context.cursor, generic_setinputsizes, context
                    )
                except BaseException as e:
                    self._handle_dbapi_exception(
                        e, str(statement), parameters, None, context
                    )
    
        cursor, str_statement, parameters = (
            context.cursor,
            context.statement,
            context.parameters,
        )
    
        effective_parameters: Optional[_AnyExecuteParams]
    
        if not context.executemany:
            effective_parameters = parameters[0]
        else:
            effective_parameters = parameters
    
        if self._has_events or self.engine._has_events:
            for fn in self.dispatch.before_cursor_execute:
                str_statement, effective_parameters = fn(
                    self,
                    cursor,
                    str_statement,
                    effective_parameters,
                    context,
                    context.executemany,
                )
    
        if self._echo:
            self._log_info(str_statement)
    
            stats = context._get_cache_stats()
    
            if not self.engine.hide_parameters:
                self._log_info(
                    "[%s] %r",
                    stats,
                    sql_util._repr_params(
                        effective_parameters,
                        batches=10,
                        ismulti=context.executemany,
                    ),
                )
            else:
                self._log_info(
                    "[%s] [SQL parameters hidden due to hide_parameters=True]",
                    stats,
                )
    
        evt_handled: bool = False
        try:
            if context.execute_style is ExecuteStyle.EXECUTEMANY:
                effective_parameters = cast(
                    "_CoreMultiExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_executemany:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_executemany(
                        cursor,
                        str_statement,
                        effective_parameters,
                        context,
                    )
            elif not effective_parameters and context.no_parameters:
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute_no_params:
                        if fn(cursor, str_statement, context):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_execute_no_params(
                        cursor, str_statement, context
                    )
            else:
                effective_parameters = cast(
                    "_CoreSingleExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
>                   self.dialect.do_execute(
                        cursor, str_statement, effective_parameters, context
                    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           pymysql.err.ProgrammingError: nan can not be used with MySQL

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError

The above exception was the direct cause of the following exception:

>       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
                ^^^^^^^^^^^^^^^^^^^^^^^^
    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../site-packages/flaky/flaky_pytest_plugin.py:146: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/integration/conftest.py:196: in _batch_setup_for_datasource
    batch_setup.setup()
.../test_utils/data_source_config/sql.py:233: in setup
    self._safe_bulk_insert(conn, table_data.table, values, max_params)  # type: ignore[arg-type] # FIXME
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../test_utils/data_source_config/sql.py:198: in _safe_bulk_insert
    conn.execute(insert(table).values(values))
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1419: in execute
    return meth(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/sql/elements.py:527: in _execute_on_connection
    return connection._execute_clauseelement(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1641: in _execute_clauseelement
    ret = self._execute_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1846: in _execute_context
    return self._exec_single_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1986: in _exec_single_context
    self._handle_dbapi_exception(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:2363: in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: in _exec_single_context
    self.dialect.do_execute(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) nan can not be used with MySQL
E           [SQL: INSERT INTO expectation_test_table_mtikjegqvb (col_name) VALUES (%(col_name_m0)s), (%(col_name_m1)s), (%(col_name_m2)s), (%(col_name_m3)s)]
E           [parameters: {'col_name_m0': 'aa', 'col_name_m1': 'ab', 'col_name_m2': 'ac', 'col_name_m3': nan}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError
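The failures above share one root cause: pymysql's `escape_float` refuses to serialize `float('nan')`, so any insert whose bound parameters contain NaN (here `'col_name_m3': nan`, produced when pandas represents a missing value as NaN) raises `ProgrammingError: nan can not be used with MySQL`. A common workaround is to map NaN to `None` before binding, which the driver renders as SQL `NULL`. The `nan_to_none` helper below is a hypothetical sketch of that idea, not code from this PR:

```python
import math


def nan_to_none(rows):
    """Replace float NaN values with None so the DB driver emits SQL NULL."""
    return [
        {
            key: (None if isinstance(val, float) and math.isnan(val) else val)
            for key, val in row.items()
        }
        for row in rows
    ]


# Parameter dict shaped like the one in the failing traceback above.
params = [{"col_name_m0": "aa", "col_name_m1": "ab",
           "col_name_m2": "ac", "col_name_m3": float("nan")}]
cleaned = nan_to_none(params)
assert cleaned[0]["col_name_m3"] is None   # NaN became NULL-able None
assert cleaned[0]["col_name_m0"] == "aa"   # other values untouched
```

Passing the cleaned records to `conn.execute(insert(table).values(...))` would then insert `NULL` rather than tripping pymysql's float escaping.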
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql::test_success[mysql-no_matches]
Stack Traces | 0.032s run time
self = <sqlalchemy.engine.base.Connection object at 0x7f8cc8b5a510>
dialect = <sqlalchemy.dialects.mysql.pymysql.MySQLDialect_pymysql object at 0x7f8cca57e210>
context = <sqlalchemy.dialects.mysql.mysqldb.MySQLExecutionContext_mysqldb object at 0x7f8cc8eb7120>
statement = <sqlalchemy.dialects.mysql.mysqldb.MySQLCompiler_mysqldb object at 0x7f8cc8340410>
parameters = [{'col_a_m0': 'aa', 'col_a_m1': 'ab', 'col_a_m2': 'ac', 'col_a_m3': nan, ...}]

E           sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) nan can not be used with MySQL
E           [SQL: INSERT INTO expectation_test_table_twkrhwgzeq (col_a, col_b) VALUES (%(col_a_m0)s, %(col_b_m0)s), (%(col_a_m1)s, %(col_b_m1)s), (%(col_a_m2)s, %(col_b_m2)s), (%(col_a_m3)s, %(col_b_m3)s)]
E           [parameters: {'col_a_m0': 'aa', 'col_b_m0': 'aa', 'col_a_m1': 'ab', 'col_b_m1': 'bb', 'col_a_m2': 'ac', 'col_b_m2': 'cc', 'col_a_m3': nan, 'col_b_m3': nan}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_value_lengths_to_equal::test_success_complete__sql[mysql]
Stack Traces | 0.033s run time
self = <sqlalchemy.engine.base.Connection object at 0x7f8cca4cf650>
dialect = <sqlalchemy.dialects.mysql.pymysql.MySQLDialect_pymysql object at 0x7f8cca57e210>
context = <sqlalchemy.dialects.mysql.mysqldb.MySQLExecutionContext_mysqldb object at 0x7f8cc907c890>
statement = <sqlalchemy.dialects.mysql.mysqldb.MySQLCompiler_mysqldb object at 0x7f8cca658910>
parameters = [{'all_the_same_m0': 'FOO', 'all_the_same_m1': 'BAR', 'all_the_same_m2': 'BAZ', 'all_the_same_m3': nan, ...}]

                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
>                   self.dialect.do_execute(
                        cursor, str_statement, effective_parameters, context
                    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           pymysql.err.ProgrammingError: nan can not be used with MySQL

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError

The above exception was the direct cause of the following exception:

>       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
                ^^^^^^^^^^^^^^^^^^^^^^^^
    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../site-packages/flaky/flaky_pytest_plugin.py:146: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/integration/conftest.py:196: in _batch_setup_for_datasource
    batch_setup.setup()
.../test_utils/data_source_config/sql.py:233: in setup
    self._safe_bulk_insert(conn, table_data.table, values, max_params)  # type: ignore[arg-type] # FIXME
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../test_utils/data_source_config/sql.py:198: in _safe_bulk_insert
    conn.execute(insert(table).values(values))
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1419: in execute
    return meth(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/sql/elements.py:527: in _execute_on_connection
    return connection._execute_clauseelement(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1641: in _execute_clauseelement
    ret = self._execute_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1846: in _execute_context
    return self._exec_single_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1986: in _exec_single_context
    self._handle_dbapi_exception(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:2363: in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: in _exec_single_context
    self.dialect.do_execute(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) nan can not be used with MySQL
E           [SQL: INSERT INTO expectation_test_table_cijkxrzwdr (all_the_same, some_are_different) VALUES (%(all_the_same_m0)s, %(some_are_different_m0)s), (%(all_the_same_m1)s, %(some_are_different_m1)s), (%(all_the_same_m2)s, %(some_are_different_m2)s), (%(all_the_same_m3)s, %(some_are_different_m3)s)]
E           [parameters: {'all_the_same_m0': 'FOO', 'some_are_different_m0': 'FOOD', 'all_the_same_m1': 'BAR', 'some_are_different_m1': 'BAR', 'all_the_same_m2': 'BAZ', 'some_are_different_m2': 'BAZ', 'all_the_same_m3': nan, 'some_are_different_m3': nan}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError
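The `escape_float` frame above shows the root cause: the fixture passes pandas' float NaN as a bind parameter, and pymysql refuses to serialize `nan`/`inf` for MySQL. A minimal sketch of a workaround (the `sanitize_row` helper is hypothetical, not code from this repo): map NaN to `None`, which pymysql renders as SQL `NULL`.

```python
import math

def sanitize_row(row):
    # Hypothetical helper: replace float NaN (pandas' default missing-value
    # marker) with None, which pymysql serializes as SQL NULL instead of
    # raising ProgrammingError("nan can not be used with MySQL").
    return {
        key: None if isinstance(val, float) and math.isnan(val) else val
        for key, val in row.items()
    }

row = {"all_the_same": "BAZ", "some_are_different": float("nan")}
clean = sanitize_row(row)
```

Applied to each parameter dict before `conn.execute(insert(table).values(values))`, this would turn the failing `'some_are_different_m3': nan` binding into a NULL insert.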
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_match_regex::test_basic_success[mysql]
Stack Traces | 0.033s run time
self = <sqlalchemy.engine.base.Connection object at 0x7f8cc8f9f050>
dialect = <sqlalchemy.dialects.mysql.pymysql.MySQLDialect_pymysql object at 0x7f8cca57e210>
context = <sqlalchemy.dialects.mysql.mysqldb.MySQLExecutionContext_mysqldb object at 0x7f8cc8edcaa0>
statement = <sqlalchemy.dialects.mysql.mysqldb.MySQLCompiler_mysqldb object at 0x7f8cc8f34690>
parameters = [{'basic_strings_m0': 'abc', 'basic_strings_m1': 'def', 'basic_strings_m2': 'ghi', 'complex_strings_m0': 'a1b2', ...}]

    [stack trace identical to the first failure above]
E           sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) nan can not be used with MySQL
E           [SQL: INSERT INTO expectation_test_table_ghqqryauoj (basic_strings, complex_strings, with_null) VALUES (%(basic_strings_m0)s, %(complex_strings_m0)s, %(with_null_m0)s), (%(basic_strings_m1)s, %(complex_strings_m1)s, %(with_null_m1)s), (%(basic_strings_m2)s, %(complex_strings_m2)s, %(with_null_m2)s)]
E           [parameters: {'basic_strings_m0': 'abc', 'complex_strings_m0': 'a1b2', 'with_null_m0': 'abc', 'basic_strings_m1': 'def', 'complex_strings_m1': 'cccc', 'with_null_m1': nan, 'basic_strings_m2': 'ghi', 'complex_strings_m2': '123', 'with_null_m2': 'ghi'}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError
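If the fixture values originate from a pandas DataFrame, the NaNs can also be stripped before the parameter dicts are built. A sketch under that assumption (not necessarily the fix taken in this PR): cast to object dtype so `None` survives, then substitute it wherever a cell is missing.

```python
import pandas as pd

df = pd.DataFrame({"with_null": ["abc", float("nan"), "ghi"]})

# .where keeps values where the mask (df.notna()) is True and substitutes
# None elsewhere; the object cast prevents pandas from coercing None back
# to float NaN in a numeric column.
records = df.astype(object).where(df.notna(), None).to_dict(orient="records")
```

The resulting records carry `None` in place of NaN, so pymysql emits `NULL` instead of raising.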
tests.integration.data_sources_and_expectations.expectations.test_expect_compound_column_values_to_be_unique::test_golden_path[mysql]
Stack Traces | 0.033s run time
self = <sqlalchemy.engine.base.Connection object at 0x7f8cc8b5a2d0>
dialect = <sqlalchemy.dialects.mysql.pymysql.MySQLDialect_pymysql object at 0x7f8cca57e210>
context = <sqlalchemy.dialects.mysql.mysqldb.MySQLExecutionContext_mysqldb object at 0x7f8cc8216c50>
statement = <sqlalchemy.dialects.mysql.mysqldb.MySQLCompiler_mysqldb object at 0x7f8cc823ae90>
parameters = [{'duplicates_m0': 100, 'duplicates_m1': 100, 'duplicates_m2': 100, 'duplicates_m3': 100, ...}]

    [stack trace identical to the first failure above; report truncated]
.../test_utils/data_source_config/sql.py:233: in setup
    self._safe_bulk_insert(conn, table_data.table, values, max_params)  # type: ignore[arg-type] # FIXME
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../test_utils/data_source_config/sql.py:198: in _safe_bulk_insert
    conn.execute(insert(table).values(values))
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1419: in execute
    return meth(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/sql/elements.py:527: in _execute_on_connection
    return connection._execute_clauseelement(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1641: in _execute_clauseelement
    ret = self._execute_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1846: in _execute_context
    return self._exec_single_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1986: in _exec_single_context
    self._handle_dbapi_exception(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:2363: in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: in _exec_single_context
    self.dialect.do_execute(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) nan can not be used with MySQL
E           [SQL: INSERT INTO expectation_test_table_ajqcplxafi (string_col, int_col, int_col_2, duplicates) VALUES (%(string_col_m0)s, %(int_col_m0)s, %(int_col_2_m0)s, %(duplicates_m0)s), (%(string_col_m1)s, %(int_col_m1)s, %(int_col_2_m1)s, %(duplicates_m1)s), (%(string_col_m2)s, %(int_col_m2)s, %(int_col_2_m2)s, %(duplicates_m2)s), (%(string_col_m3)s, %(int_col_m3)s, %(int_col_2_m3)s, %(duplicates_m3)s), (%(string_col_m4)s, %(int_col_m4)s, %(int_col_2_m4)s, %(duplicates_m4)s), (%(string_col_m5)s, %(int_col_m5)s, %(int_col_2_m5)s, %(duplicates_m5)s)]
E           [parameters: {'string_col_m0': 'foo', 'int_col_m0': 1.0, 'int_col_2_m0': 1.0, 'duplicates_m0': 100, 'string_col_m1': 'bar', 'int_col_m1': 2.0, 'int_col_2_m1': 2.0, 'duplicates_m1': 100, 'string_col_m2': 'foo', 'int_col_m2': 1.0, 'int_col_2_m2': 3.0, 'duplicates_m2': 100, 'string_col_m3': 'baz', 'int_col_m3': 3.0, 'int_col_2_m3': 4.0, 'duplicates_m3': 100, 'string_col_m4': nan, 'int_col_m4': None, 'int_col_2_m4': None, 'duplicates_m4': 99, 'string_col_m5': nan, 'int_col_m5': None, 'int_col_2_m5': None, 'duplicates_m5': 99}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError
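The parameters dump above shows the shared root cause of these failures: pandas represents missing values as `float('nan')`, and pymysql's `escape_float` rejects `nan`/`inf` outright, so any row containing NaN aborts the bulk insert. A minimal, hypothetical sketch (helper name and shape assumed, not taken from this PR) of normalizing NaN to `None` so the driver receives SQL NULL instead:

```python
import math

def nan_to_none(rows):
    """Replace float NaN values with None (SQL NULL) in a list of row dicts.

    Hypothetical pre-insert step: pymysql raises ProgrammingError for
    float('nan'), so missing values must reach the driver as None.
    """
    return [
        {
            key: (None if isinstance(val, float) and math.isnan(val) else val)
            for key, val in row.items()
        }
        for row in rows
    ]

rows = [
    {"string_col": "foo", "int_col": 1.0},
    {"string_col": float("nan"), "int_col": None},
]
cleaned = nan_to_none(rows)
print(cleaned)  # NaN in the second row becomes None; None stays None
```

Something along these lines, applied to `values` before `conn.execute(insert(table).values(values))` in `_safe_bulk_insert`, would likely avoid the MySQL-specific failure mode without changing behavior on backends that tolerate NaN.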
tests.integration.data_sources_and_expectations.expectations.test_expect_table_row_count_to_be_between::test_golden_path[mysql]
Stack Traces | 0.034s run time
self = <sqlalchemy.engine.base.Connection object at 0x7f8cc822c350>
dialect = <sqlalchemy.dialects.mysql.pymysql.MySQLDialect_pymysql object at 0x7f8cca57e210>
context = <sqlalchemy.dialects.mysql.mysqldb.MySQLExecutionContext_mysqldb object at 0x7f8cc3c168e0>
statement = <sqlalchemy.dialects.mysql.mysqldb.MySQLCompiler_mysqldb object at 0x7f8cc3c78e10>
parameters = [{'col_a_m0': 1.0, 'col_a_m1': 2.0, 'col_a_m2': None, 'col_b_m0': 'a', ...}]

    def _exec_single_context(
        self,
        dialect: Dialect,
        context: ExecutionContext,
        statement: Union[str, Compiled],
        parameters: Optional[_AnyMultiExecuteParams],
    ) -> CursorResult[Any]:
        """continue the _execute_context() method for a single DBAPI
        cursor.execute() or cursor.executemany() call.
    
        """
        if dialect.bind_typing is BindTyping.SETINPUTSIZES:
            generic_setinputsizes = context._prepare_set_input_sizes()
    
            if generic_setinputsizes:
                try:
                    dialect.do_set_input_sizes(
                        context.cursor, generic_setinputsizes, context
                    )
                except BaseException as e:
                    self._handle_dbapi_exception(
                        e, str(statement), parameters, None, context
                    )
    
        cursor, str_statement, parameters = (
            context.cursor,
            context.statement,
            context.parameters,
        )
    
        effective_parameters: Optional[_AnyExecuteParams]
    
        if not context.executemany:
            effective_parameters = parameters[0]
        else:
            effective_parameters = parameters
    
        if self._has_events or self.engine._has_events:
            for fn in self.dispatch.before_cursor_execute:
                str_statement, effective_parameters = fn(
                    self,
                    cursor,
                    str_statement,
                    effective_parameters,
                    context,
                    context.executemany,
                )
    
        if self._echo:
            self._log_info(str_statement)
    
            stats = context._get_cache_stats()
    
            if not self.engine.hide_parameters:
                self._log_info(
                    "[%s] %r",
                    stats,
                    sql_util._repr_params(
                        effective_parameters,
                        batches=10,
                        ismulti=context.executemany,
                    ),
                )
            else:
                self._log_info(
                    "[%s] [SQL parameters hidden due to hide_parameters=True]",
                    stats,
                )
    
        evt_handled: bool = False
        try:
            if context.execute_style is ExecuteStyle.EXECUTEMANY:
                effective_parameters = cast(
                    "_CoreMultiExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_executemany:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_executemany(
                        cursor,
                        str_statement,
                        effective_parameters,
                        context,
                    )
            elif not effective_parameters and context.no_parameters:
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute_no_params:
                        if fn(cursor, str_statement, context):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_execute_no_params(
                        cursor, str_statement, context
                    )
            else:
                effective_parameters = cast(
                    "_CoreSingleExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
>                   self.dialect.do_execute(
                        cursor, str_statement, effective_parameters, context
                    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           pymysql.err.ProgrammingError: nan can not be used with MySQL

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError

The above exception was the direct cause of the following exception:

>       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
                ^^^^^^^^^^^^^^^^^^^^^^^^
    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../site-packages/flaky/flaky_pytest_plugin.py:146: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/integration/conftest.py:196: in _batch_setup_for_datasource
    batch_setup.setup()
.../test_utils/data_source_config/sql.py:233: in setup
    self._safe_bulk_insert(conn, table_data.table, values, max_params)  # type: ignore[arg-type] # FIXME
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../test_utils/data_source_config/sql.py:198: in _safe_bulk_insert
    conn.execute(insert(table).values(values))
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1419: in execute
    return meth(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/sql/elements.py:527: in _execute_on_connection
    return connection._execute_clauseelement(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1641: in _execute_clauseelement
    ret = self._execute_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1846: in _execute_context
    return self._exec_single_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1986: in _exec_single_context
    self._handle_dbapi_exception(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:2363: in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: in _exec_single_context
    self.dialect.do_execute(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) nan can not be used with MySQL
E           [SQL: INSERT INTO expectation_test_table_srqhmduzti (col_a, col_b) VALUES (%(col_a_m0)s, %(col_b_m0)s), (%(col_a_m1)s, %(col_b_m1)s), (%(col_a_m2)s, %(col_b_m2)s)]
E           [parameters: {'col_a_m0': 1.0, 'col_b_m0': 'a', 'col_a_m1': 2.0, 'col_b_m1': 'b', 'col_a_m2': None, 'col_b_m2': nan}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_match_like_pattern::test_basic_success[mysql]
Stack Traces | 0.035s run time
self = <sqlalchemy.engine.base.Connection object at 0x7f8cc8b58350>
dialect = <sqlalchemy.dialects.mysql.pymysql.MySQLDialect_pymysql object at 0x7f8cca57e210>
context = <sqlalchemy.dialects.mysql.mysqldb.MySQLExecutionContext_mysqldb object at 0x7f8cc8b16d00>
statement = <sqlalchemy.dialects.mysql.mysqldb.MySQLCompiler_mysqldb object at 0x7f8cc8b79090>
parameters = [{'basic_patterns_m0': 'abc', 'basic_patterns_m1': 'def', 'basic_patterns_m2': 'ghi', 'prefixed_patterns_m0': 'foo_abc', ...}]

    def _exec_single_context(
        self,
        dialect: Dialect,
        context: ExecutionContext,
        statement: Union[str, Compiled],
        parameters: Optional[_AnyMultiExecuteParams],
    ) -> CursorResult[Any]:
        """continue the _execute_context() method for a single DBAPI
        cursor.execute() or cursor.executemany() call.
    
        """
        if dialect.bind_typing is BindTyping.SETINPUTSIZES:
            generic_setinputsizes = context._prepare_set_input_sizes()
    
            if generic_setinputsizes:
                try:
                    dialect.do_set_input_sizes(
                        context.cursor, generic_setinputsizes, context
                    )
                except BaseException as e:
                    self._handle_dbapi_exception(
                        e, str(statement), parameters, None, context
                    )
    
        cursor, str_statement, parameters = (
            context.cursor,
            context.statement,
            context.parameters,
        )
    
        effective_parameters: Optional[_AnyExecuteParams]
    
        if not context.executemany:
            effective_parameters = parameters[0]
        else:
            effective_parameters = parameters
    
        if self._has_events or self.engine._has_events:
            for fn in self.dispatch.before_cursor_execute:
                str_statement, effective_parameters = fn(
                    self,
                    cursor,
                    str_statement,
                    effective_parameters,
                    context,
                    context.executemany,
                )
    
        if self._echo:
            self._log_info(str_statement)
    
            stats = context._get_cache_stats()
    
            if not self.engine.hide_parameters:
                self._log_info(
                    "[%s] %r",
                    stats,
                    sql_util._repr_params(
                        effective_parameters,
                        batches=10,
                        ismulti=context.executemany,
                    ),
                )
            else:
                self._log_info(
                    "[%s] [SQL parameters hidden due to hide_parameters=True]",
                    stats,
                )
    
        evt_handled: bool = False
        try:
            if context.execute_style is ExecuteStyle.EXECUTEMANY:
                effective_parameters = cast(
                    "_CoreMultiExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_executemany:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_executemany(
                        cursor,
                        str_statement,
                        effective_parameters,
                        context,
                    )
            elif not effective_parameters and context.no_parameters:
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute_no_params:
                        if fn(cursor, str_statement, context):
                            evt_handled = True
                            break
                if not evt_handled:
                    self.dialect.do_execute_no_params(
                        cursor, str_statement, context
                    )
            else:
                effective_parameters = cast(
                    "_CoreSingleExecuteParams", effective_parameters
                )
                if self.dialect._has_events:
                    for fn in self.dialect.dispatch.do_execute:
                        if fn(
                            cursor,
                            str_statement,
                            effective_parameters,
                            context,
                        ):
                            evt_handled = True
                            break
                if not evt_handled:
>                   self.dialect.do_execute(
                        cursor, str_statement, effective_parameters, context
                    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           pymysql.err.ProgrammingError: nan can not be used with MySQL

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError

The above exception was the direct cause of the following exception:

>       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
                ^^^^^^^^^^^^^^^^^^^^^^^^
    )

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../site-packages/flaky/flaky_pytest_plugin.py:146: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/integration/conftest.py:196: in _batch_setup_for_datasource
    batch_setup.setup()
.../test_utils/data_source_config/sql.py:233: in setup
    self._safe_bulk_insert(conn, table_data.table, values, max_params)  # type: ignore[arg-type] # FIXME
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../test_utils/data_source_config/sql.py:198: in _safe_bulk_insert
    conn.execute(insert(table).values(values))
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1419: in execute
    return meth(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/sql/elements.py:527: in _execute_on_connection
    return connection._execute_clauseelement(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1641: in _execute_clauseelement
    ret = self._execute_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1846: in _execute_context
    return self._exec_single_context(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1986: in _exec_single_context
    self._handle_dbapi_exception(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:2363: in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/base.py:1967: in _exec_single_context
    self.dialect.do_execute(
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13.../sqlalchemy/engine/default.py:952: in do_execute
    cursor.execute(statement, parameters)
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:151: in execute
    query = self.mogrify(query, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:129: in mogrify
    query = query % self._escape_args(args, conn)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13................../site-packages/pymysql/cursors.py:104: in _escape_args
    return {key: conn.literal(val) for (key, val) in args.items()}
                 ^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:542: in literal
    return self.escape(obj, self.encoders)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/connections.py:535: in escape
    return converters.escape_item(obj, self.charset, mapping=mapping)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:25: in escape_item
    val = encoder(val, mapping)
          ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = nan
mapping = {<class 'bool'>: <function escape_bool at 0x7f8d0f735a80>, <class 'int'>: <function escape_int at 0x7f8d0f735b20>, <class 'float'>: <function escape_float at 0x7f8d0f735bc0>, <class 'str'>: <function escape_str at 0x7f8d0f735e40>, ...}

    def escape_float(value, mapping=None):
        s = repr(value)
        if s in ("inf", "-inf", "nan"):
>           raise ProgrammingError("%s can not be used with MySQL" % s)
E           sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) nan can not be used with MySQL
E           [SQL: INSERT INTO expectation_test_table_hnpoyhfnoi (basic_patterns, prefixed_patterns, suffixed_patterns, with_null) VALUES (%(basic_patterns_m0)s, %(prefixed_patterns_m0)s, %(suffixed_patterns_m0)s, %(with_null_m0)s), (%(basic_patterns_m1)s, %(prefixed_patterns_m1)s, %(suffixed_patterns_m1)s, %(with_null_m1)s), (%(basic_patterns_m2)s, %(prefixed_patterns_m2)s, %(suffixed_patterns_m2)s, %(with_null_m2)s)]
E           [parameters: {'basic_patterns_m0': 'abc', 'prefixed_patterns_m0': 'foo_abc', 'suffixed_patterns_m0': 'abc_foo', 'with_null_m0': 'ba', 'basic_patterns_m1': 'def', 'prefixed_patterns_m1': 'foo_def', 'suffixed_patterns_m1': 'def_foo', 'with_null_m1': nan, 'basic_patterns_m2': 'ghi', 'prefixed_patterns_m2': 'foo_ghi', 'suffixed_patterns_m2': 'ghi_foo', 'with_null_m2': 'ab'}]
E           (Background on this error at: https://sqlalche..../e/20/f405)

.../hostedtoolcache/Python/3.13.11.........................................................................../x64/lib/python3.13............/site-packages/pymysql/converters.py:56: ProgrammingError
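The trace above points at a pandas/pymysql interaction: pandas represents a missing string as float `NaN`, and pymysql's `escape_float` refuses to serialize `nan`. A common workaround (an illustrative sketch only, not necessarily the fix taken in this PR; the `with_null` data below is a hypothetical stand-in for the failing fixture) is to convert `NaN` to `None`, which pymysql maps to SQL `NULL`, before building the insert parameters:

```python
import pandas as pd

# Hypothetical data mirroring the failing fixture: one missing value.
df = pd.DataFrame({"with_null": ["ba", float("nan"), "ab"]})

# pymysql cannot serialize float NaN, but it can serialize None (SQL NULL).
# Casting to object and replacing NaN with None avoids the
# "nan can not be used with MySQL" ProgrammingError seen above.
records = df.astype(object).where(df.notna(), None).to_dict(orient="records")
```

With the conversion applied, the second record carries `None` instead of `nan`, so the bulk `INSERT` parameters no longer trip pymysql's float escaping.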
tests.integration.data_sources_and_expectations.expectations.test_expect_table_row_count_to_equal::test_success[mysql]
Stack Traces | 0.045s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc3c8fcf0>

    @parameterize_batch_for_data_sources(data_source_configs=ALL_DATA_SOURCES, data=DATA)
    def test_success(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectTableRowCountToEqual(value=3)
        result = batch_for_datasource.validate(expectation)
>       assert result.success
E       assert False
E        +  where False = {\n  "success": false,\n  "expectation_config": {\n    "type": "expect_table_row_count_to_equal",\n    "kwargs": {\n      "batch_id": "xgmqpltqnd-nxrfhvrojm",\n      "value": 3\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "observed_value": 0\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_table_row_count_to_equal.py:22: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql::test_failure[mysql-all_matches]
Stack Traces | 0.071s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql object at 0x7f8ccc938d20>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc82fddb0>
expectation = ExpectColumnValuesToNotMatchLikePattern(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, d...>, windows=None, batch_id=None, column='col_a', mostly=1, row_condition=None, condition_parser=None, like_pattern='a%')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="a%"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="__"),
                id="underscores_match",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(
                    column=COL_B, like_pattern="a%", mostly=0.7
                ),
                id="mostly_threshold_not_met",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchLikePattern,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_like_pattern",\n    "kwargs": {\n      "batch_id": "gungugomtp-qpwtxhbdge",\n      "column": "col_a",\n      "like_pattern": "a%"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern.py:97: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql::test_failure[mysql-mostly_threshold_not_met]
Stack Traces | 0.071s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql object at 0x7f8ccc938b40>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc82c82d0>
expectation = ExpectColumnValuesToNotMatchLikePattern(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, d... windows=None, batch_id=None, column='col_b', mostly=0.7, row_condition=None, condition_parser=None, like_pattern='a%')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="a%"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="__"),
                id="underscores_match",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(
                    column=COL_B, like_pattern="a%", mostly=0.7
                ),
                id="mostly_threshold_not_met",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchLikePattern,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_like_pattern",\n    "kwargs": {\n      "batch_id": "kohunkjtok-eicfpjglnw",\n      "column": "col_b",\n      "mostly": 0.7,\n      "like_pattern": "a%"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern.py:97: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql::test_failure[mysql-underscores_match]
Stack Traces | 0.071s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql object at 0x7f8ccc938be0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc82d4eb0>
expectation = ExpectColumnValuesToNotMatchLikePattern(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, d...>, windows=None, batch_id=None, column='col_a', mostly=1, row_condition=None, condition_parser=None, like_pattern='__')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="a%"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="__"),
                id="underscores_match",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(
                    column=COL_B, like_pattern="a%", mostly=0.7
                ),
                id="mostly_threshold_not_met",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchLikePattern,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_like_pattern",\n    "kwargs": {\n      "batch_id": "bmgnsgxxbt-hahtptpben",\n      "column": "col_a",\n      "like_pattern": "__"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern.py:97: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern_list.TestNormalSql::test_failure[mysql-one_pattern]
Stack Traces | 0.071s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern_list.TestNormalSql object at 0x7f8ccc8c16d0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8ce7750>
expectation = ExpectColumnValuesToNotMatchLikePatternList(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'...None, batch_id=None, column='col_name', mostly=1, row_condition=None, condition_parser=None, like_pattern_list=['%a%'])

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePatternList(
                    column=COL_NAME, like_pattern_list=["%a%"]
                ),
                id="one_pattern",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePatternList(
                    column=COL_NAME, like_pattern_list=["%a%", "not_this"]
                ),
                id="multiple_patterns",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=REGULAR_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchLikePatternList,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_like_pattern_list",\n    "kwargs": {\n      "batch_id": "ctdjjagzax-xxvdnwsuna",\n      "column": "col_name",\n      "like_pattern_list": [\n        "%a%"\n      ]\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern_list.py:84: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_regex.TestNormalSql::test_failure[mysql-all_matches]
Stack Traces | 0.071s run time
self = <expectations.test_expect_column_values_to_not_match_regex.TestNormalSql object at 0x7f8ccc9aa350>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8243570>
expectation = ExpectColumnValuesToNotMatchRegex(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, descrip...'>, windows=None, batch_id=None, column='col_a', mostly=1, row_condition=None, condition_parser=None, regex='^a[abc]$')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="^a[abc]$"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_B, regex="a.", mostly=0.7),
                id="mostly_threshold_not_met",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex=""), id="empty_regex"
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchRegex,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_regex",\n    "kwargs": {\n      "batch_id": "lkoleugrmq-ktfqhzjlqz",\n      "column": "col_a",\n      "regex": "^a[abc]$"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_regex.py:95: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_regex.TestNormalSql::test_failure[mysql-empty_regex]
Stack Traces | 0.072s run time
self = <expectations.test_expect_column_values_to_not_match_regex.TestNormalSql object at 0x7f8ccc9aa490>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8285450>
expectation = ExpectColumnValuesToNotMatchRegex(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, descrip...critical'>, windows=None, batch_id=None, column='col_a', mostly=1, row_condition=None, condition_parser=None, regex='')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="^a[abc]$"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_B, regex="a.", mostly=0.7),
                id="mostly_threshold_not_met",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex=""), id="empty_regex"
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchRegex,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_regex",\n    "kwargs": {\n      "batch_id": "npyrlwhmxd-kwghncdzox",\n      "column": "col_a",\n      "regex": ""\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_regex.py:95: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_regex.TestNormalSql::test_failure[mysql-mostly_threshold_not_met]
Stack Traces | 0.072s run time
self = <expectations.test_expect_column_values_to_not_match_regex.TestNormalSql object at 0x7f8ccc9aa3f0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8266990>
expectation = ExpectColumnValuesToNotMatchRegex(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, descrip...ical'>, windows=None, batch_id=None, column='col_b', mostly=0.7, row_condition=None, condition_parser=None, regex='a.')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="^a[abc]$"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_B, regex="a.", mostly=0.7),
                id="mostly_threshold_not_met",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex=""), id="empty_regex"
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchRegex,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_regex",\n    "kwargs": {\n      "batch_id": "hqpirqzthp-wbjelxdmaj",\n      "column": "col_b",\n      "mostly": 0.7,\n      "regex": "a."\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_regex.py:95: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern_list.TestNormalSql::test_failure[mysql-multiple_patterns]
Stack Traces | 0.074s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern_list.TestNormalSql object at 0x7f8ccc921ef0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8cfef30>
expectation = ExpectColumnValuesToNotMatchLikePatternList(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'...id=None, column='col_name', mostly=1, row_condition=None, condition_parser=None, like_pattern_list=['%a%', 'not_this'])

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePatternList(
                    column=COL_NAME, like_pattern_list=["%a%"]
                ),
                id="one_pattern",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePatternList(
                    column=COL_NAME, like_pattern_list=["%a%", "not_this"]
                ),
                id="multiple_patterns",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=REGULAR_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchLikePatternList,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_like_pattern_list",\n    "kwargs": {\n      "batch_id": "aktwlmsfgp-vydiwvxwek",\n      "column": "col_name",\n      "like_pattern_list": [\n        "%a%",\n        "not_this"\n      ]\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern_list.py:84: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_match_like_pattern_list::test_basic_failure[mysql]
Stack Traces | 0.075s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8abead0>

    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_basic_failure(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectColumnValuesToMatchLikePatternList(
            column=BASIC_PATTERNS,
            like_pattern_list=["xyz%"],
        )
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_match_like_pattern_list",\n    "kwargs": {\n      "batch_id": "jbxhmkyzhu-xmjmiyylnq",\n      "column": "basic_patterns",\n      "like_pattern_list": [\n        "xyz%"\n      ]\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_match_like_pattern_list.py:66: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_match_regex::test_basic_failure[mysql]
Stack Traces | 0.076s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8f4e7b0>

    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_SQL_DATA_SOURCES, data=DATA)
    def test_basic_failure(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectColumnValuesToMatchRegex(
            column=BASIC_STRINGS,
            regex="^xyz.*",
        )
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_match_regex",\n    "kwargs": {\n      "batch_id": "pranjxsxyi-bbofxliown",\n      "column": "basic_strings",\n      "regex": "^xyz.*"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_match_regex.py:69: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_match_regex_list::test_basic_failure[mysql]
Stack Traces | 0.076s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8f6eb70>

    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_SQL_DATA_SOURCES, data=DATA)
    def test_basic_failure(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectColumnValuesToMatchRegexList(
            column=BASIC_STRINGS,
            regex_list=["^xyz.*"],
        )
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_match_regex_list",\n    "kwargs": {\n      "batch_id": "plkfwxoojl-zxsvwuylmc",\n      "column": "basic_strings",\n      "regex_list": [\n        "^xyz.*"\n      ]\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_match_regex_list.py:69: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_empty_value_set[postgresql]
Stack Traces | 0.076s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f06c1d270c0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b44a0eb0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_empty_value_set(self, batch_for_datasource: Batch) -> None:
        """When value_set is empty, all non-null values are violations."""
        metric = ColumnDistinctValuesNotInSetCount(column=COLUMN_NAME, value_set=[])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
        # Normalize type for Spark compatibility (may return numpy.int64 or Java long)
>       assert int(metric_result.value) == 3  # a, b, c
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       AssertionError: assert 4 == 3
E        +  where 4 = int(4)
E        +    where 4 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='75b9d88d434255c0ee88ddeffd1fc779', metric_value_kwargs_id='value_set=[]'), value=4).value

.../metrics/column/test_distinct_values_not_in_set_count.py:71: AssertionError
tests.integration.metrics.column.test_distinct_values_count.TestColumnDistinctValuesCount::test_distinct_values_count[postgresql]
Stack Traces | 0.077s run time
self = <tests.integration.metrics.column.test_distinct_values_count.TestColumnDistinctValuesCount object at 0x7f06c214aad0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b4805d10>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_distinct_values_count(self, batch_for_datasource: Batch) -> None:
        metric = ColumnDistinctValuesCount(column=COLUMN_NAME)
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesCountResult)
>       assert metric_result.value == 3
E       AssertionError: assert 4 == 3
E        +  where 4 = ColumnDistinctValuesCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.count', metric_domain_kwargs_id='107fbec238c2c27be6b4039f6b8ea274', metric_value_kwargs_id=()), value=4).value

.../metrics/column/test_distinct_values_count.py:29: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_some_values_not_in_set[postgresql]
Stack Traces | 0.077s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f06c1d29cc0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b45cc910>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_some_values_not_in_set(self, batch_for_datasource: Batch) -> None:
        """When some column values are not in the set, count should reflect that."""
        metric = ColumnDistinctValuesNotInSetCount(
            column=COLUMN_NAME,
            value_set=["a", "b"],  # missing "c"
        )
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
>       assert metric_result.value == 1  # "c" is not in set
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       assert 2 == 1
E        +  where 2 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='01a3c893f17d2a3abc3a7a90670769ef', metric_value_kwargs_id="value_set=['a', 'b']"), value=2).value

.../metrics/column/test_distinct_values_not_in_set_count.py:45: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_match_like_pattern::test_basic_failure[mysql]
Stack Traces | 0.078s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f8cc8b83d90>

    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_basic_failure(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectColumnValuesToMatchLikePattern(
            column=BASIC_PATTERNS,
            like_pattern="xyz%",
        )
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_match_like_pattern",\n    "kwargs": {\n      "batch_id": "acfjntxths-cmgzbejxzl",\n      "column": "basic_patterns",\n      "like_pattern": "xyz%"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 0,\n    "unexpected_count": null,\n    "unexpected_percent": null,\n    "partial_unexpected_list": [],\n    "missing_count": 0,\n    "missing_percent": null,\n    "unexpected_percent_total": null,\n    "unexpected_percent_nonmissing": null,\n    "partial_unexpected_counts": []\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_match_like_pattern.py:64: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_no_values_in_set[postgresql]
Stack Traces | 0.079s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f06c1d54c50>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b4440cd0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_no_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When no column values are in the set, count all distinct values."""
        metric = ColumnDistinctValuesNotInSetCount(column=COLUMN_NAME, value_set=["x", "y", "z"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
        # Normalize type for Spark compatibility (may return numpy.int64 or Java long)
>       assert int(metric_result.value) == 3  # all of a, b, c are not in set
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       assert 4 == 3
E        +  where 4 = int(4)
E        +    where 4 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='83ba3004b7f4493a3ab928cabe4ef413', metric_value_kwargs_id="value_set=['x', 'y', 'z']"), value=4).value

.../metrics/column/test_distinct_values_not_in_set_count.py:58: AssertionError
tests.integration.metrics.column.test_values_not_match_regex_values.TestColumnValuesNotMatchRegexValues::test_special_characters[postgresql]
Stack Traces | 0.079s run time
self = <tests.integration.metrics.column.test_values_not_match_regex_values.TestColumnValuesNotMatchRegexValues object at 0x7f06c1c0c490>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b3c863f0>

    @parameterize_batch_for_data_sources(
        data_source_configs=SQL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_special_characters(self, batch_for_datasource: Batch) -> None:
        metric = ColumnValuesNotMatchRegexValues(column=COLUMN_NAME, regex="^(a|d).+")
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnValuesNotMatchRegexValuesResult)
        # Expect values that DO NOT start with 'a' or 'd'
>       assert sorted(metric_result.value) == ["1ab2", "ghi"]
E       AssertionError: assert equals failed
E         [          [
E           '1ab2',    '1ab2',
E           'NaN',
E           'ghi',     'ghi',
E         ]          ]

.../metrics/column/test_values_not_match_regex_values.py:48: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet::test_no_values_in_set[postgresql]
Stack Traces | 0.08s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet object at 0x7f06c1ce75d0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b46a7f70>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_no_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When no column values are in the set, return all distinct values."""
        metric = ColumnDistinctValuesNotInSet(column=COLUMN_NAME, value_set=["x", "y", "z"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetResult)
>       assert set(metric_result.value) == {"a", "b", "c"}
E       AssertionError: assert equals failed
E         set([     set([
E           'NaN',
E           'a',      'a',
E           'b',      'b',
E           'c',      'c',
E         ])        ])

.../metrics/column/test_distinct_values_not_in_set.py:57: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet::test_some_values_not_in_set[postgresql]
Stack Traces | 0.08s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet object at 0x7f06c1d031e0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b5e8f6b0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_some_values_not_in_set(self, batch_for_datasource: Batch) -> None:
        """When some column values are not in the set, return them."""
        metric = ColumnDistinctValuesNotInSet(
            column=COLUMN_NAME,
            value_set=["a", "b"],  # missing "c"
        )
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetResult)
>       assert metric_result.value == ["c"]
E       AssertionError: assert equals failed
E         [         ['c']
E           'NaN',
E           'c',
E         ]

.../metrics/column/test_distinct_values_not_in_set.py:45: AssertionError
tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount::test_partial_match_characters[postgresql]
Stack Traces | 0.08s run time
self = <tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount object at 0x7f06c1c0c270>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b3d51b30>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES_EXCEPT_SNOWFLAKE,
        data=DATA_FRAME,
    )
    def test_partial_match_characters(self, batch_for_datasource: Batch) -> None:
        metric = ColumnValuesNotMatchRegexCount(column=COLUMN_NAME, regex="ab")
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnValuesNotMatchRegexCountResult)
        # Normalize type for Spark compatibility (may return numpy.int64 or Java long)
>       assert int(metric_result.value) == 2
E       AssertionError: assert 3 == 2
E        +  where 3 = int(3)
E        +    where 3 = ColumnValuesNotMatchRegexCountResult(id=MetricConfigurationID(metric_name='column_values.not_match_regex.count', metric_domain_kwargs_id='44b5f87c6359a6472b4fe2e783fbd9de', metric_value_kwargs_id='regex=ab'), value=3).value

.../metrics/column/test_values_not_match_regex_count.py:38: AssertionError
tests.integration.metrics.column.test_values_not_match_regex_values.TestColumnValuesNotMatchRegexValues::test_partial_match_characters[postgresql]
Stack Traces | 0.08s run time
self = <tests.integration.metrics.column.test_values_not_match_regex_values.TestColumnValuesNotMatchRegexValues object at 0x7f06c1bb49d0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b3c26530>

    @parameterize_batch_for_data_sources(
        data_source_configs=SQL_DATA_SOURCES_EXCEPT_SNOWFLAKE,
        data=DATA_FRAME,
    )
    def test_partial_match_characters(self, batch_for_datasource: Batch) -> None:
        metric = ColumnValuesNotMatchRegexValues(column=COLUMN_NAME, regex="ab")
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnValuesNotMatchRegexValuesResult)
        # Expect values that DO NOT contain 'ab'
>       assert sorted(metric_result.value) == ["def", "ghi"]
E       AssertionError: assert equals failed
E         [         [
E           'NaN',
E           'def',    'def',
E           'ghi',    'ghi',
E         ]         ]

.../metrics/column/test_values_not_match_regex_values.py:36: AssertionError
tests.integration.metrics.column_pair_values.test_in_set.TestColumnPairValuesInSetUnexpectedValues::test_ignore_row_if__sql[postgresql-neither-6]
Stack Traces | 0.08s run time
[XPASS(strict)] returns 3 - pairs where both are null are dropped
tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount::test_special_characters[postgresql]
Stack Traces | 0.081s run time
self = <tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount object at 0x7f06c1d1e8a0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b3dbdbd0>

    @parameterize_batch_for_data_sources(
        data_source_configs=SPARK_DATA_SOURCES + SQL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_special_characters(self, batch_for_datasource: Batch) -> None:
        metric = ColumnValuesNotMatchRegexCount(column=COLUMN_NAME, regex="^(a|d).+")
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnValuesNotMatchRegexCountResult)
>       assert metric_result.value == 3
E       AssertionError: assert 4 == 3
E        +  where 4 = ColumnValuesNotMatchRegexCountResult(id=MetricConfigurationID(metric_name='column_values.not_match_regex.count', metric_domain_kwargs_id='4ebbbaf334516755aabab8ced97ca172', metric_value_kwargs_id='regex=^(a|d).+'), value=4).value

.../metrics/column/test_values_not_match_regex_count.py:49: AssertionError
tests.integration.metrics.column_pair_values.test_in_set.TestColumnPairValuesInSetUnexpectedValues::test_ignore_row_if__sql[postgresql-both_values_are_missing-3]
Stack Traces | 0.081s run time
self = <tests.integration.metrics.column_pair_values.test_in_set.TestColumnPairValuesInSetUnexpectedValues object at 0x7f06c1bbf980>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b3914910>
ignore_row_if = 'both_values_are_missing', unexpected_count = 3

    @pytest.mark.parametrize(
        "ignore_row_if,unexpected_count",
        [
            pytest.param("either_value_is_missing", 1),
            pytest.param("both_values_are_missing", 3),
            pytest.param(
                "neither",
                6,
                marks=pytest.mark.xfail(
                    reason=("returns 3 - pairs where both are null are dropped"),
                    strict=True,
                ),
            ),
        ],
    )
    @parameterize_batch_for_data_sources(
        data_source_configs=SQL_DATA_SOURCES,
        data=DATA_FRAME_WITH_NULLS,
    )
    def test_ignore_row_if__sql(
        self, batch_for_datasource: Batch, ignore_row_if, unexpected_count
    ) -> None:
        """This test captures a bug with SQL data sources and the ignore_row_if condition,
        where column pairs are dropped if both values are null.
        """
        batch = batch_for_datasource
        metric = ColumnPairValuesInSetUnexpectedCount(
            value_pairs_set=NO_MATCH_PAIR_SET,
            column_A=COL_A_WITH_NULLS,
            column_B=COL_B_WITH_NULLS,
            ignore_row_if=ignore_row_if,
        )
        result = batch.compute_metrics(metric)
>       assert result.value == unexpected_count
E       assert 6 == 3
E        +  where 6 = ColumnPairValuesInSetUnexpectedCountResult(id=MetricConfigurationID(metric_name='column_pair_values.in_set.unexpected_count', metric_domain_kwargs_id='f4cf18188f81ff4ec31e101a677f431d', metric_value_kwargs_id="value_pairs_set={(5, 'e')}"), value=6).value

.../metrics/column_pair_values/test_in_set.py:159: AssertionError
tests.integration.metrics.column.test_values_non_null.TestColumnValuesNonNullCount::test_success_sql[postgresql]
Stack Traces | 0.082s run time
self = <tests.integration.metrics.column.test_values_non_null.TestColumnValuesNonNullCount object at 0x7f06c1c0c050>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b3ef1810>

    @parameterize_batch_for_data_sources(
        data_source_configs=SQL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_success_sql(self, batch_for_datasource: Batch) -> None:
        batch = batch_for_datasource
        metric = ColumnValuesNonNullCount(column=STRING_COLUMN_NAME)
        metric_result = batch.compute_metrics(metric)
        assert isinstance(metric_result, ColumnValuesNonNullCountResult)
>       assert metric_result.value == self.NON_NULL_COUNT
E       AssertionError: assert 4 == 3
E        +  where 4 = ColumnValuesNonNullCountResult(id=MetricConfigurationID(metric_name='column_values.nonnull.count', metric_domain_kwargs_id='8a0d5e3a4e3686c33d50fd9ebb8a5762', metric_value_kwargs_id=()), value=4).value
E        +  and   3 = <tests.integration.metrics.column.test_values_non_null.TestColumnValuesNonNullCount object at 0x7f06c1c0c050>.NON_NULL_COUNT

.../metrics/column/test_values_non_null.py:102: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet::test_all_values_in_set[postgresql]
Stack Traces | 0.084s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet object at 0x7f06c214b020>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b5339590>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_all_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When all column values are in the set, result should be empty."""
        metric = ColumnDistinctValuesNotInSet(column=COLUMN_NAME, value_set=["a", "b", "c"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetResult)
>       assert metric_result.value == []
E       AssertionError: assert equals failed
E         ['NaN']   []

.../metrics/column/test_distinct_values_not_in_set.py:30: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_all_values_in_set[postgresql]
Stack Traces | 0.084s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f06c214b240>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b4570190>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_all_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When all column values are in the set, count should be 0."""
        metric = ColumnDistinctValuesNotInSetCount(column=COLUMN_NAME, value_set=["a", "b", "c"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
>       assert metric_result.value == 0
E       assert 1 == 0
E        +  where 1 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='24d40dedf0650051578befd1210c23a9', metric_value_kwargs_id="value_set=['a', 'b', 'c']"), value=1).value

.../metrics/column/test_distinct_values_not_in_set_count.py:30: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern_list.TestNormalSql::test_include_unexpected_rows_postgres[postgresql]
Stack Traces | 0.103s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern_list.TestNormalSql object at 0x7f06c3276180>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b7e707d0>

    @parameterize_batch_for_data_sources(
        data_source_configs=[PostgreSQLDatasourceTestConfig()], data=DATA
    )
    def test_include_unexpected_rows_postgres(self, batch_for_datasource: Batch) -> None:
        """Test include_unexpected_rows for ExpectColumnValuesToNotMatchLikePatternList."""
        expectation = gxe.ExpectColumnValuesToNotMatchLikePatternList(
            column=COL_NAME, like_pattern_list=["%a%"]
        )
        result = batch_for_datasource.validate(
            expectation, result_format={"result_format": "BASIC", "include_unexpected_rows": True}
        )
    
        assert not result.success
        result_dict = result["result"]
    
        # Verify that unexpected_rows is present and contains the expected data
        assert "unexpected_rows" in result_dict
        assert result_dict["unexpected_rows"] is not None
    
        unexpected_rows_data = result_dict["unexpected_rows"]
        assert isinstance(unexpected_rows_data, list)
    
        # Should contain 3 rows where COL_NAME matches like_pattern_list ["%a%"]
        # ("aa", "ab", "ac" all contain 'a')
>       assert len(unexpected_rows_data) == 3
E       AssertionError: assert 4 == 3
E        +  where 4 = len([{'col_name': 'aa'}, {'col_name': 'ab'}, {'col_name': 'ac'}, {'col_name': 'NaN'}])

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern_list.py:110: AssertionError
tests.integration.metrics.column.test_distinct_values.TestColumnDistinctValues::test_distinct_values_non_pandas[postgresql]
Stack Traces | 0.103s run time
self = <tests.integration.metrics.column.test_distinct_values.TestColumnDistinctValues object at 0x7f06c214a470>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b49ab110>

    @parameterize_batch_for_data_sources(
        data_source_configs=get_non_pandas_data_sources(),
        data=DATA_FRAME,
    )
    def test_distinct_values_non_pandas(self, batch_for_datasource: Batch) -> None:
        metric = ColumnDistinctValues(column=COLUMN_NAME)
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesResult)
        # For SQL and Spark, we expect only the non-null values
>       assert metric_result.value == {"a", "b", "c"}
E       AssertionError: assert equals failed
E         set([     set([
E           'NaN',
E           'a',      'a',
E           'b',      'b',
E           'c',      'c',
E         ])        ])

.../metrics/column/test_distinct_values.py:55: AssertionError
tests.integration.metrics.column_pair_values.test_in_set.TestColumnPairValuesInSetUnexpectedValues::test_ignore_row_if__sql[postgresql-either_value_is_missing-1]
Stack Traces | 0.105s run time
self = <tests.integration.metrics.column_pair_values.test_in_set.TestColumnPairValuesInSetUnexpectedValues object at 0x7f06c1bbf8e0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b3aa1090>
ignore_row_if = 'either_value_is_missing', unexpected_count = 1

    @pytest.mark.parametrize(
        "ignore_row_if,unexpected_count",
        [
            pytest.param("either_value_is_missing", 1),
            pytest.param("both_values_are_missing", 3),
            pytest.param(
                "neither",
                6,
                marks=pytest.mark.xfail(
                    reason=("returns 3 - pairs where both are null are dropped"),
                    strict=True,
                ),
            ),
        ],
    )
    @parameterize_batch_for_data_sources(
        data_source_configs=SQL_DATA_SOURCES,
        data=DATA_FRAME_WITH_NULLS,
    )
    def test_ignore_row_if__sql(
        self, batch_for_datasource: Batch, ignore_row_if, unexpected_count
    ) -> None:
        """This test captures a bug with SQL data sources and the ignore_row_if condition,
        where column pairs are dropped if both values are null.
        """
        batch = batch_for_datasource
        metric = ColumnPairValuesInSetUnexpectedCount(
            value_pairs_set=NO_MATCH_PAIR_SET,
            column_A=COL_A_WITH_NULLS,
            column_B=COL_B_WITH_NULLS,
            ignore_row_if=ignore_row_if,
        )
        result = batch.compute_metrics(metric)
>       assert result.value == unexpected_count
E       assert 2 == 1
E        +  where 2 = ColumnPairValuesInSetUnexpectedCountResult(id=MetricConfigurationID(metric_name='column_pair_values.in_set.unexpected_count', metric_domain_kwargs_id='0d8b30a834e3346e47f3dc2fc9b2f0ae', metric_value_kwargs_id="value_pairs_set={(5, 'e')}"), value=2).value

.../metrics/column_pair_values/test_in_set.py:159: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_regex.TestNormalSql::test_success[postgresql-mostly]
Stack Traces | 0.107s run time
self = <expectations.test_expect_column_values_to_not_match_regex.TestNormalSql object at 0x7f06c31a5190>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06b7dadc70>
expectation = ExpectColumnValuesToNotMatchRegex(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, descrip...ical'>, windows=None, batch_id=None, column='col_b', mostly=0.6, row_condition=None, condition_parser=None, regex='a.')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="a[x-z]"),
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="[a-z]{99}"),
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_B, regex="a.", mostly=0.6),
                id="mostly",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_success(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchRegex,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert result.success
E       assert False
E        +  where False = {\n  "success": false,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_regex",\n    "kwargs": {\n      "batch_id": "vpwyuscebo-fikehhwfdy",\n      "column": "col_b",\n      "mostly": 0.6,\n      "regex": "a."\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 4,\n    "unexpected_count": 2,\n    "unexpected_percent": 50.0,\n    "partial_unexpected_list": [\n      "aa",\n      "NaN"\n    ],\n    "missing_count": 0,\n    "missing_percent": 0.0,\n    "unexpected_percent_total": 50.0,\n    "unexpected_percent_nonmissing": 50.0,\n    "partial_unexpected_counts": [\n      {\n        "value": "NaN",\n        "count": 1\n      },\n      {\n        "value": "aa",\n        "count": 1\n      }\n    ]\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_regex.py:70: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql::test_failure[postgresql-mostly_threshold_not_met]
Stack Traces | 0.115s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql object at 0x7f071a923660>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06bc0f00f0>
expectation = ExpectColumnValuesToNotMatchLikePattern(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, d... windows=None, batch_id=None, column='col_b', mostly=0.7, row_condition=None, condition_parser=None, like_pattern='a%')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="a%"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="__"),
                id="underscores_match",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(
                    column=COL_B, like_pattern="a%", mostly=0.7
                ),
                id="mostly_threshold_not_met",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchLikePattern,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_like_pattern",\n    "kwargs": {\n      "batch_id": "gnxqgrurub-tywsbrbzaj",\n      "column": "col_b",\n      "mostly": 0.7,\n      "like_pattern": "a%"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 4,\n    "unexpected_count": 1,\n    "unexpected_percent": 25.0,\n    "partial_unexpected_list": [\n      "aa"\n    ],\n    "missing_count": 0,\n    "missing_percent": 0.0,\n    "unexpected_percent_total": 25.0,\n    "unexpected_percent_nonmissing": 25.0,\n    "partial_unexpected_counts": [\n      {\n        "value": "aa",\n        "count": 1\n      }\n    ]\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern.py:97: AssertionError
tests.integration.metrics.column.test_values_non_null.TestColumnValuesNonNullCount::test_success_spark[spark-filesystem-csv]
Stack Traces | 0.154s run time
self = <tests.integration.metrics.column.test_values_non_null.TestColumnValuesNonNullCount object at 0x7f0919634d60>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f0962357d90>

    @parameterize_batch_for_data_sources(
        data_source_configs=SPARK_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_success_spark(self, batch_for_datasource: Batch) -> None:
        batch = batch_for_datasource
        metric = ColumnValuesNonNullCount(column=STRING_COLUMN_NAME)
        metric_result = batch.compute_metrics(metric)
        assert isinstance(metric_result, ColumnValuesNonNullCountResult)
        # Normalize type for Spark compatibility (may return numpy.int64 or Java long)
>       assert int(metric_result.value) == self.NON_NULL_COUNT
E       AssertionError: assert 4 == 3
E        +  where 4 = int(4)
E        +    where 4 = ColumnValuesNonNullCountResult(id=MetricConfigurationID(metric_name='column_values.nonnull.count', metric_domain_kwargs_id='4e0b8af61fc4e3f8af610ba6126b30ee', metric_value_kwargs_id=()), value=4).value
E        +  and   3 = <tests.integration.metrics.column.test_values_non_null.TestColumnValuesNonNullCount object at 0x7f0919634d60>.NON_NULL_COUNT

.../metrics/column/test_values_non_null.py:91: AssertionError
tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount::test_special_characters[spark-filesystem-csv]
Stack Traces | 0.164s run time
self = <tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount object at 0x7f091976e750>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f09623c1090>

    @parameterize_batch_for_data_sources(
        data_source_configs=SPARK_DATA_SOURCES + SQL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_special_characters(self, batch_for_datasource: Batch) -> None:
        metric = ColumnValuesNotMatchRegexCount(column=COLUMN_NAME, regex="^(a|d).+")
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnValuesNotMatchRegexCountResult)
>       assert metric_result.value == 3
E       AssertionError: assert 4 == 3
E        +  where 4 = ColumnValuesNotMatchRegexCountResult(id=MetricConfigurationID(metric_name='column_values.not_match_regex.count', metric_domain_kwargs_id='a801a36ab008422257e6e166098d633b', metric_value_kwargs_id='regex=^(a|d).+'), value=4).value

.../metrics/column/test_values_not_match_regex_count.py:49: AssertionError
tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount::test_partial_match_characters[spark-filesystem-csv]
Stack Traces | 0.166s run time
self = <tests.integration.metrics.column.test_values_not_match_regex_count.TestColumnValuesNotMatchRegexCount object at 0x7f0919634fc0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f09623579d0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES_EXCEPT_SNOWFLAKE,
        data=DATA_FRAME,
    )
    def test_partial_match_characters(self, batch_for_datasource: Batch) -> None:
        metric = ColumnValuesNotMatchRegexCount(column=COLUMN_NAME, regex="ab")
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnValuesNotMatchRegexCountResult)
        # Normalize type for Spark compatibility (may return numpy.int64 or Java long)
>       assert int(metric_result.value) == 2
E       AssertionError: assert 3 == 2
E        +  where 3 = int(3)
E        +    where 3 = ColumnValuesNotMatchRegexCountResult(id=MetricConfigurationID(metric_name='column_values.not_match_regex.count', metric_domain_kwargs_id='8b04c602afcb900ba056499300dac2a3', metric_value_kwargs_id='regex=ab'), value=3).value

.../metrics/column/test_values_not_match_regex_count.py:38: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_all_values_in_set[spark-filesystem-csv]
Stack Traces | 0.179s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f0919adbbb0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f0962463110>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_all_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When all column values are in the set, count should be 0."""
        metric = ColumnDistinctValuesNotInSetCount(column=COLUMN_NAME, value_set=["a", "b", "c"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
>       assert metric_result.value == 0
E       assert 1 == 0
E        +  where 1 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='8bd2a4dd48d015ae846b9ee2cd649152', metric_value_kwargs_id="value_set=['a', 'b', 'c']"), value=1).value

.../metrics/column/test_distinct_values_not_in_set_count.py:30: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_some_values_not_in_set[spark-filesystem-csv]
Stack Traces | 0.179s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f091970b110>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f09624637f0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_some_values_not_in_set(self, batch_for_datasource: Batch) -> None:
        """When some column values are not in the set, count should reflect that."""
        metric = ColumnDistinctValuesNotInSetCount(
            column=COLUMN_NAME,
            value_set=["a", "b"],  # missing "c"
        )
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
>       assert metric_result.value == 1  # "c" is not in set
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       assert 2 == 1
E        +  where 2 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='7a2a4fe81861e1c43d0ca9edcb5081c5', metric_value_kwargs_id="value_set=['a', 'b']"), value=2).value

.../metrics/column/test_distinct_values_not_in_set_count.py:45: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_empty_value_set[spark-filesystem-csv]
Stack Traces | 0.185s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f091960d310>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f09623156d0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_empty_value_set(self, batch_for_datasource: Batch) -> None:
        """When value_set is empty, all non-null values are violations."""
        metric = ColumnDistinctValuesNotInSetCount(column=COLUMN_NAME, value_set=[])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
        # Normalize type for Spark compatibility (may return numpy.int64 or Java long)
>       assert int(metric_result.value) == 3  # a, b, c
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       AssertionError: assert 4 == 3
E        +  where 4 = int(4)
E        +    where 4 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='0b4c6a00775e3044dae58f24081415aa', metric_value_kwargs_id='value_set=[]'), value=4).value

.../metrics/column/test_distinct_values_not_in_set_count.py:71: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount::test_no_values_in_set[spark-filesystem-csv]
Stack Traces | 0.198s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set_count.TestColumnDistinctValuesNotInSetCount object at 0x7f0919721db0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f0962314cd0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_no_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When no column values are in the set, count all distinct values."""
        metric = ColumnDistinctValuesNotInSetCount(column=COLUMN_NAME, value_set=["x", "y", "z"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetCountResult)
        # Normalize type for Spark compatibility (may return numpy.int64 or Java long)
>       assert int(metric_result.value) == 3  # all of a, b, c are not in set
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       assert 4 == 3
E        +  where 4 = int(4)
E        +    where 4 = ColumnDistinctValuesNotInSetCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.not_in_set.count', metric_domain_kwargs_id='fc55ad22d12be7ac7d3f3eaa1de66ee8', metric_value_kwargs_id="value_set=['x', 'y', 'z']"), value=4).value

.../metrics/column/test_distinct_values_not_in_set_count.py:58: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet::test_no_values_in_set[spark-filesystem-csv]
Stack Traces | 0.214s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet object at 0x7f0919721c70>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f0962463070>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_no_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When no column values are in the set, return all distinct values."""
        metric = ColumnDistinctValuesNotInSet(column=COLUMN_NAME, value_set=["x", "y", "z"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetResult)
>       assert set(metric_result.value) == {"a", "b", "c"}
E       AssertionError: assert equals failed
E         #x1B[mset([     set([    
E         #x1B[m#x1B[1;31m  'NaN',#x1B[m           
E         #x1B[m  'a',      'a',   
E         #x1B[m  'b',      'b',   
E         #x1B[m  'c',      'c',   
E         #x1B[m])        ])

.../metrics/column/test_distinct_values_not_in_set.py:57: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_value_lengths_to_equal::test_success_complete__sql[postgresql]
Stack Traces | 0.217s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f06be23b390>

    @parameterize_batch_for_data_sources(data_source_configs=SQL_DATA_SOURCES, data=DATA)
    def test_success_complete__sql(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectColumnValueLengthsToEqual(column=SAME_COL, value=3)
        result = batch_for_datasource.validate(expectation, result_format=ResultFormat.COMPLETE)
        assert result.success
>       assert result.to_json_dict()["result"] == {
            "element_count": 4,
            "unexpected_count": 0,
            "unexpected_percent": 0.0,
            "partial_unexpected_list": [],
            "missing_count": 1,
            "missing_percent": 25.0,
            "unexpected_percent_total": 0.0,
            "unexpected_percent_nonmissing": 0.0,
            "partial_unexpected_counts": [],
            "unexpected_list": [],
            "unexpected_index_query": ANY,
        }
E       AssertionError: assert equals failed
E         #x1B[m{                                {                               
E         #x1B[m  'element_count': 4,              'element_count': 4,           
E         #x1B[m  'missing_count': #x1B[1;33m0#x1B[m,              'missing_count': #x1B[1;33m1#x1B[m,           
E         #x1B[m  'missing_percent': #x1B[1;33m0#x1B[m.0,          'missing_percent': #x1B[1;33m25#x1B[m.0,      
E         #x1B[m  'partial_unexpected_counts':     'partial_unexpected_counts':  
E         #x1B[m[],                              [],                             
E         #x1B[m  'partial_unexpected_list': []    'partial_unexpected_list': [] 
E         #x1B[m,                                ,                               
E         #x1B[m  'unexpected_count': 0,           'unexpected_count': 0,        
E         #x1B[m#x1B[1;31m  'unexpected_index_query': 'SE#x1B[m  #x1B[1;32m  'unexpected_index_query': <AN#x1B[m 
E         #x1B[m#x1B[1;31mLECT all_the_same \nFROM expect#x1B[m  #x1B[1;32mY>,#x1B[m                             
E         #x1B[m#x1B[1;31mation_test_table_anajcesbur \nW#x1B[m                                  
E         #x1B[m#x1B[1;31mHERE all_the_same IS NOT NULL A#x1B[m                                  
E         #x1B[m#x1B[1;31mND length(all_the_same) != 3.0;#x1B[m                                  
E         #x1B[m#x1B[1;31m',#x1B[m                                                               
E         #x1B[m  'unexpected_list': [],           'unexpected_list': [],        
E         #x1B[m  'unexpected_percent': 0.0,       'unexpected_percent': 0.0,    
E         #x1B[m  'unexpected_percent_nonmissin    'unexpected_percent_nonmissin 
E         #x1B[mg': 0.0,                         g': 0.0,                        
E         #x1B[m  'unexpected_percent_total': 0    'unexpected_percent_total': 0 
E         #x1B[m.0,                              .0,                             
E         #x1B[m}                                }

.../data_sources_and_expectations/expectations/test_expect_column_value_lengths_to_equal.py:33: AssertionError
tests.integration.metrics.column.test_distinct_values_count.TestColumnDistinctValuesCount::test_distinct_values_count[spark-filesystem-csv]
Stack Traces | 0.222s run time
self = <tests.integration.metrics.column.test_distinct_values_count.TestColumnDistinctValuesCount object at 0x7f0919adb230>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f096242ff70>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_distinct_values_count(self, batch_for_datasource: Batch) -> None:
        metric = ColumnDistinctValuesCount(column=COLUMN_NAME)
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesCountResult)
>       assert metric_result.value == 3
E       AssertionError: assert 4 == 3
E        +  where 4 = ColumnDistinctValuesCountResult(id=MetricConfigurationID(metric_name='column.distinct_values.count', metric_domain_kwargs_id='dc989bd669131ac4653ef150cd1c5a86', metric_value_kwargs_id=()), value=4).value

.../metrics/column/test_distinct_values_count.py:29: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet::test_some_values_not_in_set[spark-filesystem-csv]
Stack Traces | 0.227s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet object at 0x7f091970a7b0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f0962462cb0>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_some_values_not_in_set(self, batch_for_datasource: Batch) -> None:
        """When some column values are not in the set, return them."""
        metric = ColumnDistinctValuesNotInSet(
            column=COLUMN_NAME,
            value_set=["a", "b"],  # missing "c"
        )
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetResult)
>       assert metric_result.value == ["c"]
E       AssertionError: assert equals failed
E         #x1B[m#x1B[1;31m[#x1B[m         #x1B[1;32m['c']#x1B[m    
E         #x1B[m#x1B[1;31m  'NaN',#x1B[m           
E         #x1B[m#x1B[1;31m  'c',#x1B[m             
E         #x1B[m#x1B[1;31m]#x1B[m

.../metrics/column/test_distinct_values_not_in_set.py:45: AssertionError
tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet::test_all_values_in_set[spark-filesystem-csv]
Stack Traces | 0.25s run time
self = <tests.integration.metrics.column.test_distinct_values_not_in_set.TestColumnDistinctValuesNotInSet object at 0x7f0919adb950>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f0962461c70>

    @parameterize_batch_for_data_sources(
        data_source_configs=ALL_DATA_SOURCES,
        data=DATA_FRAME,
    )
    def test_all_values_in_set(self, batch_for_datasource: Batch) -> None:
        """When all column values are in the set, result should be empty."""
        metric = ColumnDistinctValuesNotInSet(column=COLUMN_NAME, value_set=["a", "b", "c"])
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesNotInSetResult)
>       assert metric_result.value == []
E       AssertionError: assert equals failed
E         #x1B[m#x1B[1;31m['NaN']#x1B[m   #x1B[1;32m[]#x1B[m

.../metrics/column/test_distinct_values_not_in_set.py:30: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_regex.TestNormalSql::test_success[spark-filesystem-csv-mostly]
Stack Traces | 0.313s run time
self = <expectations.test_expect_column_values_to_not_match_regex.TestNormalSql object at 0x7f09747eda30>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f09629a8410>
expectation = ExpectColumnValuesToNotMatchRegex(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, descrip...ical'>, windows=None, batch_id=None, column='col_b', mostly=0.6, row_condition=None, condition_parser=None, regex='a.')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="a[x-z]"),
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="[a-z]{99}"),
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_B, regex="a.", mostly=0.6),
                id="mostly",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_success(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchRegex,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert result.success
E       assert False
E        +  where False = {\n  "success": false,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_regex",\n    "kwargs": {\n      "batch_id": "khbfsyxaoe-nnecmbvxau",\n      "column": "col_b",\n      "mostly": 0.6,\n      "regex": "a."\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 4,\n    "unexpected_count": 2,\n    "unexpected_percent": 50.0,\n    "partial_unexpected_list": [\n      "NaN",\n      "aa"\n    ],\n    "missing_count": 0,\n    "missing_percent": 0.0,\n    "unexpected_percent_total": 50.0,\n    "unexpected_percent_nonmissing": 50.0,\n    "partial_unexpected_counts": [\n      {\n        "value": "NaN",\n        "count": 1\n      },\n      {\n        "value": "aa",\n        "count": 1\n      }\n    ]\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_regex.py:70: AssertionError
tests.integration.metrics.column.test_distinct_values.TestColumnDistinctValues::test_distinct_values_non_pandas[spark-filesystem-csv]
Stack Traces | 0.428s run time
self = <tests.integration.metrics.column.test_distinct_values.TestColumnDistinctValues object at 0x7f0919adafd0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f096242d090>

    @parameterize_batch_for_data_sources(
        data_source_configs=get_non_pandas_data_sources(),
        data=DATA_FRAME,
    )
    def test_distinct_values_non_pandas(self, batch_for_datasource: Batch) -> None:
        metric = ColumnDistinctValues(column=COLUMN_NAME)
        metric_result = batch_for_datasource.compute_metrics(metric)
    
        assert isinstance(metric_result, ColumnDistinctValuesResult)
        # For SQL and Spark, we expect only the non-null values
>       assert metric_result.value == {"a", "b", "c"}
E       AssertionError: assert equals failed
E         #x1B[mset([     set([    
E         #x1B[m#x1B[1;31m  'NaN',#x1B[m           
E         #x1B[m  'a',      'a',   
E         #x1B[m  'b',      'b',   
E         #x1B[m  'c',      'c',   
E         #x1B[m])        ])

.../metrics/column/test_distinct_values.py:55: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_compound_column_values_to_be_unique::test_golden_path[spark-filesystem-csv]
Stack Traces | 0.694s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f09629dd950>

    @parameterize_batch_for_data_sources(data_source_configs=ALL_DATA_SOURCES, data=DATA)
    def test_golden_path(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectCompoundColumnsToBeUnique(
            column_list=[STRING_COL, INT_COL, INT_COL_2],
            ignore_row_if="any_value_is_missing",
        )
        result = batch_for_datasource.validate(expectation)
>       assert result.success
E       assert False
E        +  where False = {\n  "success": false,\n  "expectation_config": {\n    "type": "expect_compound_columns_to_be_unique",\n    "kwargs": {\n      "batch_id": "wohjoxsfar-gwezugavcd",\n      "column_list": [\n        "string_col",\n        "int_col",\n        "int_col_2"\n      ],\n      "ignore_row_if": "any_value_is_missing"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 6,\n    "unexpected_count": 2,\n    "unexpected_percent": 33.33333333333333,\n    "partial_unexpected_list": [\n      {\n        "string_col": "NaN",\n        "int_col": null,\n        "int_col_2": null\n      },\n      {\n        "string_col": "NaN",\n        "int_col": null,\n        "int_col_2": null\n      }\n    ],\n    "missing_count": 0,\n    "missing_percent": 0.0,\n    "unexpected_percent_total": 33.33333333333333,\n    "unexpected_percent_nonmissing": 33.33333333333333,\n    "partial_unexpected_counts": [\n      {\n        "value": [\n          "NaN",\n          null,\n          null\n        ],\n        "count": 1\n      },\n      {\n        "value": [\n          "NaN",\n          null,\n          null\n        ],\n        "count": 1\n      }\n    ]\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_compound_column_values_to_be_unique.py:49: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql::test_failure[redshift-mostly_threshold_not_met]
Stack Traces | 1.15s run time
self = <expectations.test_expect_column_values_to_not_match_like_pattern.TestNormalSql object at 0x7f45ce789860>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f45caabd3b0>
expectation = ExpectColumnValuesToNotMatchLikePattern(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, d... windows=None, batch_id=None, column='col_b', mostly=0.7, row_condition=None, condition_parser=None, like_pattern='a%')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="a%"),
                id="all_matches",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(column=COL_A, like_pattern="__"),
                id="underscores_match",
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchLikePattern(
                    column=COL_B, like_pattern="a%", mostly=0.7
                ),
                id="mostly_threshold_not_met",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_failure(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchLikePattern,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert not result.success
E       assert not True
E        +  where True = {\n  "success": true,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_like_pattern",\n    "kwargs": {\n      "batch_id": "gztqehafnt-idxcfakiib",\n      "column": "col_b",\n      "mostly": 0.7,\n      "like_pattern": "a%"\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 4,\n    "unexpected_count": 1,\n    "unexpected_percent": 25.0,\n    "partial_unexpected_list": [\n      "aa"\n    ],\n    "missing_count": 0,\n    "missing_percent": 0.0,\n    "unexpected_percent_total": 25.0,\n    "unexpected_percent_nonmissing": 25.0,\n    "partial_unexpected_counts": [\n      {\n        "value": "aa",\n        "count": 1\n      }\n    ]\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_like_pattern.py:97: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_values_to_not_match_regex.TestNormalSql::test_success[redshift-mostly]
Stack Traces | 1.21s run time
self = <expectations.test_expect_column_values_to_not_match_regex.TestNormalSql object at 0x7f45ce8e5de0>
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f45cab55950>
expectation = ExpectColumnValuesToNotMatchRegex(id=None, meta=None, notes=None, result_format=<ResultFormat.BASIC: 'BASIC'>, descrip...ical'>, windows=None, batch_id=None, column='col_b', mostly=0.6, row_condition=None, condition_parser=None, regex='a.')

    @pytest.mark.parametrize(
        "expectation",
        [
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="a[x-z]"),
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_A, regex="[a-z]{99}"),
            ),
            pytest.param(
                gxe.ExpectColumnValuesToNotMatchRegex(column=COL_B, regex="a.", mostly=0.6),
                id="mostly",
            ),
        ],
    )
    @parameterize_batch_for_data_sources(data_source_configs=SUPPORTED_DATA_SOURCES, data=DATA)
    def test_success(
        self,
        batch_for_datasource: Batch,
        expectation: gxe.ExpectColumnValuesToNotMatchRegex,
    ) -> None:
        result = batch_for_datasource.validate(expectation)
>       assert result.success
E       assert False
E        +  where False = {\n  "success": false,\n  "expectation_config": {\n    "type": "expect_column_values_to_not_match_regex",\n    "kwargs": {\n      "batch_id": "wtoaiermgl-jjygzxpcih",\n      "column": "col_b",\n      "mostly": 0.6,\n      "regex": "a."\n    },\n    "meta": {},\n    "severity": "critical"\n  },\n  "result": {\n    "element_count": 4,\n    "unexpected_count": 2,\n    "unexpected_percent": 50.0,\n    "partial_unexpected_list": [\n      "aa",\n      "NaN"\n    ],\n    "missing_count": 0,\n    "missing_percent": 0.0,\n    "unexpected_percent_total": 50.0,\n    "unexpected_percent_nonmissing": 50.0,\n    "partial_unexpected_counts": [\n      {\n        "value": "NaN",\n        "count": 1\n      },\n      {\n        "value": "aa",\n        "count": 1\n      }\n    ]\n  },\n  "meta": {},\n  "exception_info": {\n    "raised_exception": false,\n    "exception_traceback": null,\n    "exception_message": null\n  }\n}.success

.../data_sources_and_expectations/expectations/test_expect_column_values_to_not_match_regex.py:70: AssertionError
tests.integration.data_sources_and_expectations.expectations.test_expect_column_value_lengths_to_equal::test_success_complete__sql[redshift]
Stack Traces | 2s run time
batch_for_datasource = <great_expectations.datasource.fluent.interfaces.Batch object at 0x7f45cb0f1a90>

    @parameterize_batch_for_data_sources(data_source_configs=SQL_DATA_SOURCES, data=DATA)
    def test_success_complete__sql(batch_for_datasource: Batch) -> None:
        expectation = gxe.ExpectColumnValueLengthsToEqual(column=SAME_COL, value=3)
        result = batch_for_datasource.validate(expectation, result_format=ResultFormat.COMPLETE)
        assert result.success
>       assert result.to_json_dict()["result"] == {
            "element_count": 4,
            "unexpected_count": 0,
            "unexpected_percent": 0.0,
            "partial_unexpected_list": [],
            "missing_count": 1,
            "missing_percent": 25.0,
            "unexpected_percent_total": 0.0,
            "unexpected_percent_nonmissing": 0.0,
            "partial_unexpected_counts": [],
            "unexpected_list": [],
            "unexpected_index_query": ANY,
        }
E       AssertionError: assert equals failed
E         #x1B[m{                                {                               
E         #x1B[m  'element_count': 4,              'element_count': 4,           
E         #x1B[m  'missing_count': #x1B[1;33m0#x1B[m,              'missing_count': #x1B[1;33m1#x1B[m,           
E         #x1B[m  'missing_percent': #x1B[1;33m0#x1B[m.0,          'missing_percent': #x1B[1;33m25#x1B[m.0,      
E         #x1B[m  'partial_unexpected_counts':     'partial_unexpected_counts':  
E         #x1B[m[],                              [],                             
E         #x1B[m  'partial_unexpected_list': []    'partial_unexpected_list': [] 
E         #x1B[m,                                ,                               
E         #x1B[m  'unexpected_count': 0,           'unexpected_count': 0,        
E         #x1B[m#x1B[1;31m  'unexpected_index_query': 'SE#x1B[m  #x1B[1;32m  'unexpected_index_query': <AN#x1B[m 
E         #x1B[m#x1B[1;31mLECT all_the_same \nFROM expect#x1B[m  #x1B[1;32mY>,#x1B[m                             
E         #x1B[m#x1B[1;31mation_test_table_oupzdtmsdr \nW#x1B[m                                  
E         #x1B[m#x1B[1;31mHERE all_the_same IS NOT NULL A#x1B[m                                  
E         #x1B[m#x1B[1;31mND length(all_the_same) != 3.0;#x1B[m                                  
E         #x1B[m#x1B[1;31m',#x1B[m                                                               
E         #x1B[m  'unexpected_list': [],           'unexpected_list': [],        
E         #x1B[m  'unexpected_percent': 0.0,       'unexpected_percent': 0.0,    
E         #x1B[m  'unexpected_percent_nonmissin    'unexpected_percent_nonmissin 
E         #x1B[mg': 0.0,                         g': 0.0,                        
E         #x1B[m  'unexpected_percent_total': 0    'unexpected_percent_total': 0 
E         #x1B[m.0,                              .0,                             
E         #x1B[m}                                }

.../data_sources_and_expectations/expectations/test_expect_column_value_lengths_to_equal.py:33: AssertionError
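A recurring pattern in these failures is the literal string `'NaN'` surviving as a real column value (inflating distinct/non-null counts by one) where the test data intended a missing value. A minimal sketch of the divergence between pandas' default NA parsing and an engine that takes the string at face value; the column name `col` and the in-memory CSV are illustrative only, and this is not confirmed as the root cause of the failures above:

```python
import io
import pandas as pd

csv = "col\na\nb\nc\nNaN\n"

# pandas' default na_values list includes the literal string "NaN",
# so it is parsed as a missing value: 3 non-null values remain.
df = pd.read_csv(io.StringIO(csv))
assert df["col"].isna().sum() == 1
assert df["col"].nunique() == 3

# Disabling the default NA list keeps "NaN" as an ordinary string,
# matching what an engine that reads the field verbatim would see: 4 values.
df2 = pd.read_csv(io.StringIO(csv), keep_default_na=False)
assert (df2["col"] == "NaN").sum() == 1
assert df2["col"].nunique() == 4
```

If this is the mechanism, the off-by-one assertions (`4 == 3`, extra `'NaN'` in the distinct-value sets) would follow directly from the CSV round-trip rather than from the metrics themselves.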

