Merged
Commits (79 total; this diff shows changes from 75 commits)
3ea549d
Optimize expect_column_distinct_values_to_equal_set with database-pus…
NathanFarmer Jan 23, 2026
28482f9
Fix circular import: move ValidationDependencies to TYPE_CHECKING block
NathanFarmer Jan 23, 2026
79f2b07
Fix backward compatibility: return observed_value and handle type coe…
NathanFarmer Jan 23, 2026
b34f285
Fix: cast partial_unexpected_count to int, only include unexpected_co…
NathanFarmer Jan 23, 2026
65dbe17
Fix type error: handle partial_unexpected_count type safely
NathanFarmer Jan 23, 2026
3a4eeb9
Add integration tests for column.distinct_values.not_equal_set metric
NathanFarmer Jan 26, 2026
3378358
Add date comparison tests for column.distinct_values.not_equal_set me…
NathanFarmer Jan 26, 2026
a3f653c
Remove test_dates_with_str_value_set from metric tests
NathanFarmer Jan 26, 2026
de6d219
Add result format integration tests for ExpectColumnDistinctValuesToE…
NathanFarmer Jan 26, 2026
08843f7
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Jan 27, 2026
c150c43
Fix type errors: use result.result instead of to_json_dict
NathanFarmer Jan 27, 2026
8645225
Fix value_counts comparison: use to_json_dict for proper serialization
NathanFarmer Jan 27, 2026
1ab245a
Fix type errors: compare full result dict instead of nested access
NathanFarmer Jan 27, 2026
88d0001
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Jan 27, 2026
62cf62c
Remove unnecessary fallback to column.value_counts metric
NathanFarmer Jan 27, 2026
1857d71
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 27, 2026
6abfb6c
Trigger build
NathanFarmer Jan 27, 2026
7121c57
BREAKING: Remove column.value_counts and column.distinct_values
NathanFarmer Jan 27, 2026
b4a01ee
Update tests for breaking changes in expect_column_distinct_values_to…
NathanFarmer Jan 27, 2026
92350b3
Change result format: observed_value=None, violations in partial_unex…
NathanFarmer Jan 28, 2026
114216f
Restore renderer to show unexpected and missing values from details
NathanFarmer Jan 28, 2026
2737215
Revert expectation to original column.value_counts implementation
NathanFarmer Jan 28, 2026
4dfdf4f
Revert test expectations to match original behavior
NathanFarmer Jan 28, 2026
8a42013
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Jan 28, 2026
e0eb783
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Jan 29, 2026
c199bc8
Limit observed_value to 1000 values to prevent 413 payload errors
NathanFarmer Jan 30, 2026
79b7f7b
Limit value_counts to 1000 items to prevent 413 payload errors
NathanFarmer Jan 30, 2026
c6e78fc
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Feb 2, 2026
e121b93
Use shared MAX_DISTINCT_VALUES constant (500) to limit payload size
NathanFarmer Feb 2, 2026
c57a279
Implement database-pushdown for distinct values expectations
NathanFarmer Feb 2, 2026
f334b52
Fix circular import by moving MAX_DISTINCT_VALUES to constants.py
NathanFarmer Feb 2, 2026
c396e28
Move MAX_RESULT_RECORDS to constants.py for consistency
NathanFarmer Feb 2, 2026
0198525
Fix mypy errors: update imports and add type ignore comments
NathanFarmer Feb 2, 2026
6dfaa70
Fix remaining MAX_RESULT_RECORDS imports in test files
NathanFarmer Feb 2, 2026
0355ce7
Update tests for database-pushdown result format
NathanFarmer Feb 2, 2026
0163c4b
Fix tests: add type coercion to metrics and update equal_set renderer…
NathanFarmer Feb 2, 2026
4790869
Revert be_in_set and contain_set integration tests to OLD format
NathanFarmer Feb 2, 2026
398a7a9
Use original ov__/exp__ prefixes in equal_set renderer
NathanFarmer Feb 2, 2026
8009411
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Feb 2, 2026
7d2e31f
Trigger build
NathanFarmer Feb 2, 2026
dbfca64
Fix mypy error: add type annotation for coerced_set
NathanFarmer Feb 2, 2026
6cca763
Add SQL type coercion for string dates to fix BigQuery tests
NathanFarmer Feb 3, 2026
a860163
Add database-pushdown metrics for distinct values set comparisons
NathanFarmer Feb 3, 2026
2a0fec2
Refactor distinct values metrics into separate files
NathanFarmer Feb 3, 2026
a6506e3
Define _SQLALCHEMY_1_4_OR_GREATER locally in each file that uses it
NathanFarmer Feb 3, 2026
70792ee
Rename missing_from_set metrics to missing_from_column
NathanFarmer Feb 3, 2026
8f6d6a6
Use ScalarValue type alias instead of Any for coercion functions
NathanFarmer Feb 3, 2026
100658d
Merge branch 'm/gx-2374/distinct-values-metrics' into m/gx-2374/disti…
NathanFarmer Feb 3, 2026
f6b692f
fix: remove duplicate metric class definitions causing F811 errors
NathanFarmer Feb 3, 2026
dedd2ba
fix: rename metric references from missing_from_set to missing_from_c…
NathanFarmer Feb 3, 2026
bad963e
fix: remove duplicate type coercion helper functions
NathanFarmer Feb 3, 2026
4f61284
refactor: remove unnecessary get_validation_dependencies override - k…
NathanFarmer Feb 3, 2026
e6ca449
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Feb 3, 2026
d32e57d
fix: handle None limit in Spark/SQL implementations to prevent py4j e…
NathanFarmer Feb 3, 2026
1ad3779
chore: remove unused distinct_values_not_equal_set metric
NathanFarmer Feb 3, 2026
93dcea6
feat: add missing_count/partial_missing_list and unexpected_count/par…
NathanFarmer Feb 3, 2026
6031ab5
fix: update integration test result format for equal_set
NathanFarmer Feb 3, 2026
5cc55fb
docs: add partial_missing_list to result format documentation
NathanFarmer Feb 4, 2026
cce2d28
docs: clarify missing_count and partial_missing_list for distinct val…
NathanFarmer Feb 4, 2026
f532044
test: add result_format unit tests for equal_set
NathanFarmer Feb 4, 2026
b6fc1bd
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Feb 4, 2026
f1ea710
feat: respect partial_unexpected_count setting for partial_missing_list
NathanFarmer Feb 4, 2026
f5100b9
feat: use MAX_DISTINCT_VALUES (500) for partial lists limit in equal_set
NathanFarmer Feb 4, 2026
80967ab
fix: use default partial_unexpected_count of 20, not 500
NathanFarmer Feb 4, 2026
beef742
fix: change metric default limit to MAX_DISTINCT_VALUES (500) and rem…
NathanFarmer Feb 4, 2026
ecd978b
feat: default partial_unexpected_count to MAX_DISTINCT_VALUES (500) f…
NathanFarmer Feb 4, 2026
8df73c4
docs: update partial_unexpected_count default to 500 for distinct val…
NathanFarmer Feb 4, 2026
ba16e60
fix: add all four new metrics to public API exports
NathanFarmer Feb 4, 2026
8b927d9
fix: reduce MAX_DISTINCT_VALUES from 500 to 200 to prevent 413 errors
NathanFarmer Feb 4, 2026
2c88c60
docs: fix partial_missing_list default to 200 for distinct values Exp…
NathanFarmer Feb 4, 2026
a840e11
fix: actually slice partial lists to partial_unexpected_count
NathanFarmer Feb 4, 2026
7ac3102
fix: always use MAX_DISTINCT_VALUES for distinct values expectations
NathanFarmer Feb 4, 2026
81d81c4
fix: change MAX_DISTINCT_VALUES back to 500
NathanFarmer Feb 4, 2026
42163a7
Change MAX_DISTINCT_VALUES from 500 to 20
NathanFarmer Feb 4, 2026
f6fe7e4
Fix redundant '20 or 20' in docs to just '20'
NathanFarmer Feb 4, 2026
88a713b
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Feb 10, 2026
597aade
Merge branch 'develop' into m/gx-2374/distinct-values-equal-set
NathanFarmer Feb 25, 2026
120568f
Add pandas 3.0 logic
NathanFarmer Feb 25, 2026
dd37ccd
Fix type error
NathanFarmer Feb 25, 2026
4 changes: 3 additions & 1 deletion docs/docusaurus/docs/cloud/validations/format_results.md
Original file line number Diff line number Diff line change
@@ -186,13 +186,14 @@ Follow the steps below to select a base format, optionally configure additional
| Field within `result` | Value |
|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| element_count | The total number of values in the column. |
| missing_count | The number of missing values in the column. |
| missing_count | The number of missing (null) values in the column. For distinct values Expectations, this is the count of expected values not found in the column. |
| missing_percent | The total percent of rows missing values for the column. |
| unexpected_count | The total count of unexpected values in a column. |
| unexpected_percent | The overall percent of unexpected values in a column. |
| unexpected_percent_nonmissing | The percent of unexpected values in a column, excluding rows that have no value for that column. |
| observed_value | The aggregate statistic computed for the column. This only applies to Expectations that pertain to the aggregate value of a column, rather than the individual values in each row for the column. |
| partial_unexpected_list | A partial list of values that violate the Expectation. (Up to 20 values by default.) |
| partial_missing_list | A partial list of expected values that are missing from the column. Applies to distinct values Expectations. (Up to 20 values by default.) |
| partial_unexpected_index_list | A partial list of the unexpected values in the column, as defined by the columns in `unexpected_index_column_names`. (Up to 20 indices by default.) |
| partial_unexpected_counts | A partial list of values and counts, showing the number of times each of the unexpected values occurs. (Up to 20 unexpected value/count pairs by default.) |
| unexpected_index_list | A list of the indices of the unexpected values in the column, as defined by the columns in `unexpected_index_column_names`. This only applies to Expectations that have a yes/no answer for each row. |
@@ -214,6 +215,7 @@ Follow the steps below to select a base format, optionally configure additional
| unexpected_percent_nonmissing |no |yes |yes |yes |
| observed_value |no |yes |yes |yes |
| partial_unexpected_list |no |yes ** |yes ** |yes ** |
| partial_missing_list |no |yes ** |yes ** |yes ** |
| partial_unexpected_index_list |no |no |yes ** |yes ** |
| partial_unexpected_counts |no |no |yes ** |yes ** |
| unexpected_index_list |no |no |no |yes |
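A consumer-side note on the table above: `partial_missing_list` is read off the validation payload like any other `result` key. A minimal sketch with a made-up payload (nothing here was produced by actually running GX):

```python
# Hypothetical validation payload shaped like the fields documented above.
result = {
    "success": False,
    "result": {
        "observed_value": None,
        "unexpected_count": 1,
        "missing_count": 2,
        "partial_unexpected_list": ["cherry"],        # observed but not expected
        "partial_missing_list": ["apple", "banana"],  # expected but not observed
    },
}

# Older result payloads never carry partial_missing_list, so default it.
missing = result["result"].get("partial_missing_list", [])
print(missing)  # ['apple', 'banana']
```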
9 changes: 9 additions & 0 deletions great_expectations/constants.py
@@ -1 +1,10 @@
from typing import Final

DATAFRAME_REPLACEMENT_STR = "<DATAFRAME>"

# Maximum number of result records to return in expectation results
MAX_RESULT_RECORDS: Final[int] = 200

# Maximum number of distinct values to return in expectation results
# to prevent payload size issues (e.g., HTTP 413 errors with GX Cloud)
MAX_DISTINCT_VALUES: Final[int] = 20
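Review note: keeping these limits in `constants.py`, a leaf module with no GX imports, is what lets both the expectation and the metric providers import them without the circular-import problems fixed earlier in this PR. The capping itself is ordinary list slicing; a tiny sketch with made-up data:

```python
from typing import Final

# Mirrors the constant above; capped to keep GX Cloud payloads small
# enough to avoid HTTP 413 errors.
MAX_DISTINCT_VALUES: Final[int] = 20

distinct_values = [f"value_{i}" for i in range(1000)]  # hypothetical metric output
partial_list = distinct_values[:MAX_DISTINCT_VALUES]
print(len(partial_list))  # 20
```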
@@ -3,10 +3,10 @@
from typing import TYPE_CHECKING, Any, ClassVar, Dict, Optional, Type, Union

from great_expectations.compatibility.typing_extensions import override
from great_expectations.constants import MAX_DISTINCT_VALUES
from great_expectations.expectations.expectation import (
ColumnAggregateExpectation,
_style_row_condition,
parse_value_to_observed_type,
render_suite_parameter_string,
)
from great_expectations.expectations.metadata_types import DataQualityIssues, SupportedDataSources
@@ -218,7 +218,12 @@ class ExpectColumnDistinctValuesToEqualSet(ColumnAggregateExpectation):
_library_metadata = library_metadata

# Setting necessary computation metric dependencies and defining kwargs, as well as assigning kwargs default values\ # noqa: E501 # FIXME CoP
metric_dependencies = ("column.value_counts",)
metric_dependencies = (
"column.distinct_values.not_in_set.count",
"column.distinct_values.not_in_set",
"column.distinct_values.missing_from_column.count",
"column.distinct_values.missing_from_column",
)
success_keys = ("value_set",)
args_keys = (
"column",
@@ -362,25 +367,39 @@ def _validate(
runtime_configuration: Optional[dict] = None,
execution_engine: Optional[ExecutionEngine] = None,
):
observed_value_counts = metrics["column.value_counts"]
observed_value_set = set(observed_value_counts.index)
value_set = self._get_success_kwargs()["value_set"]

# Try to coerce string values to match the type of observed values
if observed_value_set and value_set:
first_observed = next(iter(observed_value_set))
expected_value_set = {
parse_value_to_observed_type(first_observed, value) for value in value_set
}
else:
expected_value_set = set(value_set)
# Get count of unexpected values (values in column but NOT in expected set)
unexpected_count = metrics.get("column.distinct_values.not_in_set.count", 0)
unexpected_values = metrics.get("column.distinct_values.not_in_set", [])

# Get count of missing values (values in expected set but NOT in column)
missing_count = metrics.get("column.distinct_values.missing_from_column.count", 0)
missing_values = metrics.get("column.distinct_values.missing_from_column", [])

# Success if no unexpected values AND no missing values
success = (unexpected_count == 0) and (missing_count == 0)

# Check partial_unexpected_count setting to determine if partial lists should be included
# For distinct values Expectations, always use MAX_DISTINCT_VALUES as the limit
# but respect partial_unexpected_count: 0 to exclude the list entirely
result_format = (
runtime_configuration.get("result_format", {}) if runtime_configuration else {}
)
partial_unexpected_count = result_format.get("partial_unexpected_count", 20)
include_partial_lists = partial_unexpected_count > 0

result_dict: Dict[str, Any] = {
"observed_value": None,
"unexpected_count": unexpected_count,
"missing_count": missing_count,
}

if include_partial_lists:
result_dict["partial_unexpected_list"] = unexpected_values[:MAX_DISTINCT_VALUES]
result_dict["partial_missing_list"] = missing_values[:MAX_DISTINCT_VALUES]

return {
"success": observed_value_set == expected_value_set,
"result": {
"observed_value": sorted(list(observed_value_set)),
"details": {"value_counts": observed_value_counts},
},
"success": success,
"result": result_dict,
}
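Stripped of the metric plumbing, the `_validate` above reduces to: success iff both difference sets are empty, with the partial lists gated by `partial_unexpected_count` and capped at `MAX_DISTINCT_VALUES`. A standalone sketch of that rule (inputs hardcoded for illustration; this is not the GX class itself):

```python
MAX_DISTINCT_VALUES = 20  # mirrors great_expectations.constants

def build_result(unexpected, missing, partial_unexpected_count=20):
    # Success requires no unexpected values AND no missing values.
    result = {
        "observed_value": None,
        "unexpected_count": len(unexpected),
        "missing_count": len(missing),
    }
    # partial_unexpected_count: 0 suppresses the partial lists entirely;
    # any positive setting still caps them at MAX_DISTINCT_VALUES.
    if partial_unexpected_count > 0:
        result["partial_unexpected_list"] = unexpected[:MAX_DISTINCT_VALUES]
        result["partial_missing_list"] = missing[:MAX_DISTINCT_VALUES]
    return {"success": not unexpected and not missing, "result": result}

print(build_result(["cherry"], ["apple"])["success"])  # False
```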

@classmethod
@@ -397,69 +416,54 @@ def _atomic_diagnostic_observed_value(
result=result,
runtime_configuration=runtime_configuration,
)
expected_param_prefix = "exp__"
expected_param_name = "expected_value"
ov_param_prefix = "ov__"
# Use original prefixes that frontend expects
ov_param_prefix = "ov__" # for unexpected values (observed but not expected)
ov_param_name = "observed_value"
exp_param_prefix = "exp__" # for missing values (expected but not observed)
exp_param_name = "expected_value"

renderer_configuration.add_param(
name=expected_param_name,
param_type=RendererValueType.ARRAY,
value=renderer_configuration.kwargs.get("value_set", []),
)
renderer_configuration = cls._add_array_params(
array_param_name=expected_param_name,
param_prefix=expected_param_prefix,
renderer_configuration=renderer_configuration,
)
# Get unexpected and missing values from result, limited to MAX_DISTINCT_VALUES
result_dict = result.get("result", {}) if result else {}
unexpected_values = result_dict.get("partial_unexpected_list", [])[:MAX_DISTINCT_VALUES]
missing_values = result_dict.get("partial_missing_list", [])[:MAX_DISTINCT_VALUES]

# Add unexpected values (values in column but NOT in expected set) using ov__ prefix
renderer_configuration.add_param(
name=ov_param_name,
param_type=RendererValueType.ARRAY,
value=result.get("result", {}).get("observed_value", []) if result else [],
value=unexpected_values,
)
renderer_configuration = cls._add_array_params(
array_param_name=ov_param_name,
param_prefix=ov_param_prefix,
renderer_configuration=renderer_configuration,
)
observed_value_set = set(
result.get("result", {}).get("observed_value", []) if result else []
)
sample_observed_value = next(iter(observed_value_set)) if observed_value_set else None
expected_value_set = {
parse_value_to_observed_type(observed_value=sample_observed_value, value=value)
for value in renderer_configuration.kwargs.get("value_set", [])
}

observed_values = (
(name, schema)
for name, schema in renderer_configuration.params
if name.startswith(ov_param_prefix)
# Add missing values (values in expected set but NOT in column) using exp__ prefix
renderer_configuration.add_param(
name=exp_param_name,
param_type=RendererValueType.ARRAY,
value=missing_values,
)

expected_values = (
(name, schema)
for name, schema in renderer_configuration.params
if name.startswith(expected_param_prefix)
renderer_configuration = cls._add_array_params(
array_param_name=exp_param_name,
param_prefix=exp_param_prefix,
renderer_configuration=renderer_configuration,
)

template_str_list = []
for name, schema in observed_values:
render_state = (
ObservedValueRenderState.EXPECTED.value
if schema.value in expected_value_set
else ObservedValueRenderState.UNEXPECTED.value
)
renderer_configuration.params.__dict__[name].render_state = render_state
template_str_list.append(f"${name}")

for name, schema in expected_values:
coerced_value = parse_value_to_observed_type(
observed_value=sample_observed_value,
value=schema.value,
)
if coerced_value not in observed_value_set:
# All unexpected values (ov__) get UNEXPECTED render state
for name, schema in renderer_configuration.params:
if name.startswith(ov_param_prefix):
renderer_configuration.params.__dict__[
name
].render_state = ObservedValueRenderState.UNEXPECTED.value
template_str_list.append(f"${name}")

# All missing values (exp__) get MISSING render state
for name, schema in renderer_configuration.params:
if name.startswith(exp_param_prefix):
renderer_configuration.params.__dict__[
name
].render_state = ObservedValueRenderState.MISSING.value
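The simplification above is worth spelling out: the old renderer re-coerced and re-compared values to decide each render state, whereas the new one relies on the metrics having already partitioned them, so every `ov__` param is UNEXPECTED and every `exp__` param is MISSING. A sketch of that partition over a plain dict (param names are illustrative, not real renderer internals):

```python
# Hypothetical renderer params: ov__ = observed-but-unexpected,
# exp__ = expected-but-missing-from-column.
params = {"ov__0": "cherry", "exp__0": "apple", "exp__1": "banana"}

render_states = {
    name: "UNEXPECTED" if name.startswith("ov__") else "MISSING"
    for name in params
}
print(render_states["ov__0"])  # UNEXPECTED
```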
@@ -10,6 +10,7 @@
__version__ as SQLALCHEMY_VERSION,
)
from great_expectations.compatibility.sqlalchemy import sqlalchemy as sa
from great_expectations.constants import MAX_DISTINCT_VALUES
from great_expectations.core.metric_domain_types import MetricDomainTypes
from great_expectations.execution_engine import (
PandasExecutionEngine,
@@ -144,7 +145,11 @@ class ColumnDistinctValuesMissingFromColumn(ColumnAggregateMetricProvider):

@column_aggregate_value(engine=PandasExecutionEngine)
def _pandas(
cls, column: pd.Series, value_set: List[Any], limit: int = 20, **kwargs
cls,
column: pd.Series,
value_set: List[Any],
limit: int = MAX_DISTINCT_VALUES,
**kwargs,
) -> List[Any]:
column_set = set(column.dropna().unique())
expected_set = _coerce_value_set_to_column_type(column_set, value_set)
@@ -161,7 +166,7 @@ def _sqlalchemy(
) -> List[Any]:
"""Return values in the expected set that are missing from the column."""
value_set = _coerce_value_set_for_sql(metric_value_kwargs.get("value_set", []))
limit = metric_value_kwargs.get("limit", 20)
limit = metric_value_kwargs.get("limit") or MAX_DISTINCT_VALUES

selectable: sqlalchemy.Selectable
accessor_domain_kwargs: Dict[str, str]
@@ -200,7 +205,7 @@ def _spark(
) -> List[Any]:
"""Return values in the expected set that are missing from the column."""
value_set = metric_value_kwargs.get("value_set", [])
limit = metric_value_kwargs.get("limit", 20)
limit = metric_value_kwargs.get("limit") or MAX_DISTINCT_VALUES

df: pyspark.DataFrame
accessor_domain_kwargs: Dict[str, str]
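The change from `metric_value_kwargs.get("limit", 20)` to `metric_value_kwargs.get("limit") or MAX_DISTINCT_VALUES` is subtle but deliberate: `dict.get` only falls back when the key is absent, so a caller passing `limit=None` used to propagate `None` into the Spark/SQL paths (the py4j failure mode commit d32e57d mentions). A quick demonstration:

```python
MAX_DISTINCT_VALUES = 20

metric_value_kwargs = {"limit": None}  # key present, value None

with_default = metric_value_kwargs.get("limit", 20)                # still None
with_or = metric_value_kwargs.get("limit") or MAX_DISTINCT_VALUES  # 20
print(with_default, with_or)  # None 20
```

One trade-off of the `or` form: an explicit `limit=0` would also be replaced by the default, which is acceptable here since a zero-item partial list has no use.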
@@ -10,6 +10,7 @@
__version__ as SQLALCHEMY_VERSION,
)
from great_expectations.compatibility.sqlalchemy import sqlalchemy as sa
from great_expectations.constants import MAX_DISTINCT_VALUES
from great_expectations.core.metric_domain_types import MetricDomainTypes
from great_expectations.execution_engine import (
PandasExecutionEngine,
@@ -148,7 +149,11 @@ class ColumnDistinctValuesNotInSet(ColumnAggregateMetricProvider):

@column_aggregate_value(engine=PandasExecutionEngine)
def _pandas(
cls, column: pd.Series, value_set: List[Any], limit: int = 20, **kwargs
cls,
column: pd.Series,
value_set: List[Any],
limit: int = MAX_DISTINCT_VALUES,
**kwargs,
) -> List[Any]:
column_set = set(column.dropna().unique())
expected_set = _coerce_value_set_to_column_type(column_set, value_set)
@@ -170,7 +175,7 @@ def _sqlalchemy(
) -> List[Any]:
"""Return sample of distinct values in column that are NOT in the expected set."""
value_set = _coerce_value_set_for_sql(metric_value_kwargs.get("value_set", []))
limit = metric_value_kwargs.get("limit", 20)
limit = metric_value_kwargs.get("limit") or MAX_DISTINCT_VALUES

selectable: sqlalchemy.Selectable
accessor_domain_kwargs: Dict[str, str]
@@ -233,7 +238,7 @@ def _spark(
) -> List[Any]:
"""Return sample of distinct values in column that are NOT in the expected set."""
value_set = metric_value_kwargs.get("value_set", [])
limit = metric_value_kwargs.get("limit", 20)
limit = metric_value_kwargs.get("limit") or MAX_DISTINCT_VALUES

df: pyspark.DataFrame
accessor_domain_kwargs: Dict[str, str]
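Both metric files' pandas paths reduce to the same set arithmetic over the column's distinct non-null values. A pure-Python sketch of that arithmetic (no pandas; the data is made up):

```python
column_values = ["a", "b", "b", None, "c"]  # hypothetical column
value_set = {"a", "b", "d"}                 # hypothetical expected set
limit = 20                                  # stands in for MAX_DISTINCT_VALUES

# Equivalent of set(column.dropna().unique())
column_set = {v for v in column_values if v is not None}

not_in_set = sorted(column_set - value_set)[:limit]           # observed, not expected
missing_from_column = sorted(value_set - column_set)[:limit]  # expected, not observed

print(not_in_set, missing_from_column)  # ['c'] ['d']
```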
@@ -11,8 +11,8 @@
)

import great_expectations.exceptions as gx_exceptions
from great_expectations.constants import MAX_RESULT_RECORDS
from great_expectations.expectations.metrics.util import (
MAX_RESULT_RECORDS,
get_dbms_compatible_metric_domain_kwargs,
)

@@ -11,11 +11,11 @@

from great_expectations.compatibility.pyspark import functions as F
from great_expectations.compatibility.sqlalchemy import sqlalchemy as sa
from great_expectations.constants import MAX_RESULT_RECORDS
from great_expectations.expectations.metrics.map_metric_provider.is_sqlalchemy_metric_selectable import ( # noqa: E501 # FIXME CoP
_is_sqlalchemy_metric_selectable,
)
from great_expectations.expectations.metrics.util import (
MAX_RESULT_RECORDS,
get_dbms_compatible_metric_domain_kwargs,
)
from great_expectations.util import (
@@ -20,6 +20,7 @@
from great_expectations.compatibility.sqlalchemy import (
sqlalchemy as sa,
)
from great_expectations.constants import MAX_RESULT_RECORDS
from great_expectations.core.metric_function_types import (
SummarizationMetricNameSuffixes,
)
@@ -28,7 +29,6 @@
_is_sqlalchemy_metric_selectable,
)
from great_expectations.expectations.metrics.util import (
MAX_RESULT_RECORDS,
compute_unexpected_pandas_indices,
get_dbms_compatible_metric_domain_kwargs,
get_sqlalchemy_source_table_and_schema,
@@ -11,11 +11,11 @@

from great_expectations.compatibility.pyspark import functions as F
from great_expectations.compatibility.sqlalchemy import sqlalchemy as sa
from great_expectations.constants import MAX_RESULT_RECORDS
from great_expectations.expectations.metrics.map_metric_provider.is_sqlalchemy_metric_selectable import ( # noqa: E501 # FIXME CoP
_is_sqlalchemy_metric_selectable,
)
from great_expectations.expectations.metrics.util import (
MAX_RESULT_RECORDS,
get_dbms_compatible_metric_domain_kwargs,
)
from great_expectations.util import (
@@ -8,9 +8,9 @@
from great_expectations.compatibility.sqlalchemy import (
sqlalchemy as sa,
)
from great_expectations.constants import MAX_RESULT_RECORDS
from great_expectations.execution_engine.sqlalchemy_dialect import GXSqlDialect
from great_expectations.expectations.metrics.metric_provider import MetricProvider
from great_expectations.expectations.metrics.util import MAX_RESULT_RECORDS
from great_expectations.util import get_sqlalchemy_subquery_type

if TYPE_CHECKING:
@@ -2,6 +2,7 @@

from typing import TYPE_CHECKING, Any, Dict, List, Optional

from great_expectations.constants import MAX_RESULT_RECORDS
from great_expectations.core.metric_domain_types import MetricDomainTypes
from great_expectations.execution_engine import (
SparkDFExecutionEngine,
@@ -12,7 +13,6 @@
QueryMetricProvider,
QueryParameters,
)
from great_expectations.expectations.metrics.util import MAX_RESULT_RECORDS

if TYPE_CHECKING:
from great_expectations.compatibility import pyspark