QueryExecutionError: DB::Exception: Types of column 1 in section IN don't match: UInt64 on the left, Tuple(UInt64) on the right

Description

Environment

Sentry version: 23.1.1
ClickHouse version: 22.8.17

Steps to Reproduce

Deploy Sentry, then visit http://127.0.0.1:3000/organizations/sentry/releases/

Expected Result

No exceptions

Actual Result

QueryExecutionError: DB::Exception: Types of column 1 in section IN don't match: UInt64 on the left, Tuple(UInt64) on the right. Stack trace:

0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4029da in /opt/bitnami/clickhouse/bin/clickhouse
1. DB::Set::checkTypesEqual(unsigned long, std::__1::shared_ptr<DB::IDataType const> const&) const @ 0x14fd8600 in /opt/bitnami/clickhouse/bin/clickhouse
2. DB::KeyCondition::tryPrepareSetIndex(DB::KeyCondition::FunctionTree const&, std::__1::shared_ptr<DB::Context const>, DB::KeyCondition::RPNElement&, unsigned long&) @ 0x15841324 in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::KeyCondition::tryParseAtomFromAST(DB::KeyCondition::Tree const&, std::__1::shared_ptr<DB::Context const>, DB::Block&, DB::KeyCondition::RPNElement&) @ 0x1583d867 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::KeyCondition::traverseAST(DB::KeyCondition::Tree const&, std::__1::shared_ptr<DB::Context const>, DB::Block&) @ 0x1583...
  File "sentry/api/bases/organization_events.py", line 186, in handle_query_errors
    yield
  File "sentry/api/endpoints/organization_sessions.py", line 84, in handle_query_errors
    yield
  File "sentry/api/endpoints/organization_sessions.py", line 40, in data_fn
    return release_health.run_sessions_query(
  File "sentry/utils/services.py", line 127, in <lambda>
    context[key] = (lambda f: lambda *a, **k: getattr(self, f)(*a, **k))(key)
  File "sentry/release_health/sessions.py", line 115, in run_sessions_query
    totals, series = _run_sessions_query(query_clone)
  File "sentry/snuba/sessions_v2.py", line 536, in _run_sessions_query
    result_timeseries = timeseries_query_builder.run_query("sessions.timeseries")["data"]
  File "sentry/search/events/builder/discover.py", line 1438, in run_query
    return raw_snql_query(self.get_snql_query(), referrer, use_cache)
  File "sentry/utils/snuba.py", line 744, in raw_snql_query
    return _apply_cache_and_build_results([params], referrer=referrer, use_cache=use_cache)[0]
  File "sentry/utils/snuba.py", line 811, in _apply_cache_and_build_results
    query_results = _bulk_snuba_query([item[1] for item in to_query], headers)
  File "sentry/utils/snuba.py", line 894, in _bulk_snuba_query
    raise clickhouse_error_codes_map.get(error["code"], QueryExecutionError)(

Error SQL

SELECT
    (toStartOfHour(started, 'Universal') AS _snuba_bucketed_started),
    (project_id AS _snuba_project_id),
    _snuba_bucketed_started,
    (plus(countIfMerge(sessions), sumIfMerge(sessions_preaggr)) AS _snuba_sessions),
    _snuba_project_id
FROM sessions_hourly_local
PREWHERE in(_snuba_project_id, [1, 2])
WHERE greaterOrEquals((started AS _snuba_started), toDateTime('2023-05-21T03:00:00', 'Universal'))
    AND less(_snuba_started, toDateTime('2023-05-22T02:24:00', 'Universal'))
    AND equals((org_id AS _snuba_org_id), 1)
    AND in(tuple(_snuba_project_id), tuple(tuple(1)))
    AND in(_snuba_project_id, tuple(1))
GROUP BY _snuba_bucketed_started, _snuba_project_id
ORDER BY _snuba_sessions DESC
LIMIT 5000 OFFSET 0
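The mismatch appears to come from the generated clause in(tuple(_snuba_project_id), tuple(tuple(1))): the left side is a one-element tuple over a UInt64 key column, while the right side is a tuple of tuples, so the set's first column is typed Tuple(UInt64). Below is a minimal, unverified sketch of a standalone ClickHouse query with the same shape; the table and column names (repro_sessions, org_id, project_id, started) are invented for illustration and are not the Snuba schema, and the sketch has not been confirmed to raise this exact exception on ClickHouse 22.8.17.

-- Hypothetical table with project_id in the primary key, so the IN clause
-- is analyzed against the key index (the path seen in the stack trace).
CREATE TABLE repro_sessions
(
    org_id     UInt64,
    project_id UInt64,
    started    DateTime
)
ENGINE = MergeTree
ORDER BY (org_id, project_id, started);

-- Left side: a single UInt64 key column wrapped in a one-element tuple.
-- Right side: a tuple of tuples, i.e. a set whose column is Tuple(UInt64),
-- mirroring in(tuple(_snuba_project_id), tuple(tuple(1))) from the error SQL.
SELECT count()
FROM repro_sessions
WHERE in(tuple(project_id), tuple(tuple(1)));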