@pedro93 pedro93 commented Dec 19, 2025

Summary

  • Change emit_mcp() to return Optional[TraceData] containing trace ID and URN/aspect mapping when available
  • Change emit_mcps() to return List[TraceData] with one TraceData per batch/chunk
  • Add get_trace_status() method to query trace status from the server
  • Trace IDs are now extracted from the traceparent response header for all emit modes (previously only extracted for ASYNC_WAIT)

This allows users to retrieve trace IDs when using SYNC_PRIMARY and ASYNC emit modes, and provides a method to query trace status later.
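For reference, the TraceData container can be pictured roughly like this (a sketch; the exact shape beyond a trace ID plus a URN-to-aspects mapping is an assumption based on the description above, not the SDK's actual definition):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TraceData:
    """Sketch of the value returned by emit_mcp()/emit_mcps() (field names assumed)."""
    # Trace ID parsed from the traceparent response header
    trace_id: str
    # Mapping of entity URN -> aspect names covered by this emission
    data: Dict[str, List[str]] = field(default_factory=dict)
```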

Motivation

Previously, trace IDs were only accessible in ASYNC_WAIT emit mode; users of SYNC_PRIMARY (the default) or ASYNC had no way to retrieve trace IDs for debugging or status checks. This change exposes trace IDs for all emit modes.

Example Usage

from datahub.emitter.rest_emitter import DataHubRestEmitter

emitter = DataHubRestEmitter("http://localhost:8080")

# emit_mcp now returns TraceData (or None)
trace = emitter.emit_mcp(mcp)
if trace:
    print(f"Trace ID: {trace.trace_id}")
    
    # Later, check the status
    status = emitter.get_trace_status(trace)
    if status:
        print(f"Status: {status}")

# emit_mcps returns a list of TraceData
traces = emitter.emit_mcps([mcp1, mcp2, mcp3])
for trace in traces:
    print(f"Batch trace ID: {trace.trace_id}")
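Under the hood, the trace ID comes from the standard W3C `traceparent` response header, whose value has the form `version-traceid-spanid-flags`. A minimal sketch of the extraction (the helper name is hypothetical, not the SDK's actual function):

```python
from typing import Optional

def parse_traceparent(header: Optional[str]) -> Optional[str]:
    # W3C Trace Context format: "00-<32 hex trace-id>-<16 hex parent-id>-<2 hex flags>"
    if not header:
        return None
    parts = header.split("-")
    if len(parts) != 4 or len(parts[1]) != 32:
        return None
    return parts[1]
```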

Test plan

  • Added comprehensive unit tests for TraceData return functionality (7 tests)
  • Added unit tests for get_trace_status() method (6 tests)
  • Updated existing tests to work with new return types
  • All 66 tests pass

🤖 Generated with Claude Code

@github-actions

Linear: ING-1310

@github-actions github-actions bot added the ingestion PR or Issue related to the ingestion of metadata label Dec 19, 2025

codecov bot commented Dec 19, 2025

❌ 81 Tests Failed:

| Tests completed | Failed | Passed | Skipped |
| --- | --- | --- | --- |
| 899 | 81 | 818 | 40 |
Failed tests (by shortest run time):
tests.lineage.test_lineage::test_lineage_via_node[3-LineageStyle.DATASET_JOB_DATASET]
Stack Traces | 0.035s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********Rcyc
lineage_style = <LineageStyle.DATASET_JOB_DATASET: 'DATASET_JOB_DATASET'>
graph_level = 3

    @pytest.mark.parametrize(
        "lineage_style",
        [
            Scenario.LineageStyle.DATASET_QUERY_DATASET,
            Scenario.LineageStyle.DATASET_JOB_DATASET,
        ],
    )
    @pytest.mark.parametrize(
        "graph_level",
        [
            1,
            2,
            3,
            # TODO - convert this to range of 1 to 10 to make sure we can handle large graphs
        ],
    )
    def test_lineage_via_node(
        graph_client: DataHubGraph, lineage_style: Scenario.LineageStyle, graph_level: int
    ) -> None:
        scenario: Scenario = Scenario(
            hop_platform_map={0: "mysql", 1: "snowflake"},
            lineage_style=lineage_style,
            num_hops=graph_level,
            default_dataset_prefix=f"{lineage_style.value}.",
        )
    
        # Create an emitter to the GMS REST API.
        emitter = graph_client
        # emitter = DataHubConsoleEmitter()
    
        # Emit metadata!
        for mcp in scenario.get_entity_mcps():
>           emitter.emit_mcp(mcp)

tests/lineage/test_lineage.py:837: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.lineage.test_lineage::test_lineage_via_node[2-LineageStyle.DATASET_JOB_DATASET]
Stack Traces | 0.036s run time
tests.restli.test_restli_batch_ingestion::test_restli_batch_ingestion_async
Stack Traces | 0.038s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I

    def test_restli_batch_ingestion_async(graph_client):
        # Positive Test (all valid MetadataChangeProposal)
        mcps = _create_valid_dashboard_mcps()
>       ret = graph_client.emit_mcps(mcps, emit_mode=EmitMode.ASYNC)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

tests/restli/test_restli_batch_ingestion.py:144: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:688: in emit_mcps
    return self._emit_restli_mcps(mcps, emit_mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/rest_emitter.py:821: in _emit_restli_mcps
    extract_trace_data_from_mcps(response, mcp_chunk) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.lineage.test_lineage::test_lineage_via_node[1-LineageStyle.DATASET_JOB_DATASET]
Stack Traces | 0.04s run time
tests.cli.dataset_cmd.test_dataset_command::test_dataset_sync_from_datahub
Stack Traces | 0.045s run time
tests.restli.restli_test::test_gms_ignore_unknown_dashboard_info
Stack Traces | 0.045s run time
tests.lineage.test_lineage::test_lineage_via_node[3-LineageStyle.DATASET_QUERY_DATASET]
Stack Traces | 0.046s run time
tests.patch.test_dataset_patches::test_field_tags_patch[graph_client]
Stack Traces | 0.046s run time
tests.patch.test_dataset_patches::test_dataset_terms_patch[graph_client]
Stack Traces | 0.047s run time
tests.patch.test_dataset_patches::test_dataset_upstream_lineage_patch[graph_client]
Stack Traces | 0.047s run time
request = <FixtureRequest for <Function test_dataset_upstream_lineage_patch[graph_client]>>
client_fixture_name = 'graph_client'

    @pytest.mark.parametrize(
        "client_fixture_name", ["graph_client", "openapi_graph_client"]
    )
    def test_dataset_upstream_lineage_patch(request, client_fixture_name: DataHubGraph):
        graph_client = request.getfixturevalue(client_fixture_name)
        dataset_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset-{uuid.uuid4()}", env="PROD"
        )
    
        other_dataset_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset2-{uuid.uuid4()}", env="PROD"
        )
    
        patch_dataset_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset3-{uuid.uuid4()}", env="PROD"
        )
    
        upstream_lineage = UpstreamLineageClass(
            upstreams=[
                UpstreamClass(dataset=other_dataset_urn, type=DatasetLineageTypeClass.VIEW)
            ]
        )
        upstream_lineage_to_add = UpstreamClass(
            dataset=patch_dataset_urn, type=DatasetLineageTypeClass.VIEW
        )
        mcpw = MetadataChangeProposalWrapper(entityUrn=dataset_urn, aspect=upstream_lineage)
    
>       graph_client.emit_mcp(mcpw)

tests/patch/test_dataset_patches.py:94: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
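The failures above all follow the same pattern: the server returns HTTP 200, but without a `traceparent` header, so `_extract_trace_id` emits `APITracingWarning` — and because the test suite escalates warnings to errors, the emit call fails. A minimal, self-contained sketch of that mechanism (the `APITracingWarning` class, header name, and response object here are stand-ins, not the datahub implementation):

```python
import warnings
from typing import Optional

_TRACE_HEADER_NAME = "traceparent"  # header name per the PR description; assumed here


class APITracingWarning(Warning):
    """Stand-in for datahub.errors.APITracingWarning."""


class FakeResponse:
    """Stand-in for requests.Response with just the fields used below."""

    def __init__(self, status_code: int, headers: dict):
        self.status_code = status_code
        self.headers = headers


def extract_trace_id(response: FakeResponse) -> Optional[str]:
    # Mirrors the logic shown in the tracebacks above.
    if not 200 <= response.status_code < 300:
        return None
    trace_id = response.headers.get(_TRACE_HEADER_NAME)
    if not trace_id:
        # Under a warnings-as-errors filter (as in this CI run),
        # this warn() raises, which is why a 200 response still fails.
        warnings.warn(
            "No trace ID found in response headers.",
            APITracingWarning,
            stacklevel=2,
        )
        return None
    return trace_id


# A 200 response without the header triggers the warning; with warnings
# escalated to errors, the emit path raises instead of returning.
with warnings.catch_warnings():
    warnings.simplefilter("error", APITracingWarning)
    try:
        extract_trace_id(FakeResponse(200, {}))
        raised = False
    except APITracingWarning:
        raised = True
print(raised)  # True
```

This suggests the regression is not in the trace extraction itself but in calling it unconditionally for all emit modes against servers that do not return the header.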
tests.lineage.test_lineage::test_lineage_via_node[2-LineageStyle.DATASET_QUERY_DATASET]
Stack Traces | 0.05s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********Rcyc
lineage_style = <LineageStyle.DATASET_QUERY_DATASET: 'DATASET_QUERY_DATASET'>
graph_level = 2

    @pytest.mark.parametrize(
        "lineage_style",
        [
            Scenario.LineageStyle.DATASET_QUERY_DATASET,
            Scenario.LineageStyle.DATASET_JOB_DATASET,
        ],
    )
    @pytest.mark.parametrize(
        "graph_level",
        [
            1,
            2,
            3,
            # TODO - convert this to range of 1 to 10 to make sure we can handle large graphs
        ],
    )
    def test_lineage_via_node(
        graph_client: DataHubGraph, lineage_style: Scenario.LineageStyle, graph_level: int
    ) -> None:
        scenario: Scenario = Scenario(
            hop_platform_map={0: "mysql", 1: "snowflake"},
            lineage_style=lineage_style,
            num_hops=graph_level,
            default_dataset_prefix=f"{lineage_style.value}.",
        )
    
        # Create an emitter to the GMS REST API.
        emitter = graph_client
        # emitter = DataHubConsoleEmitter()
    
        # Emit metadata!
        for mcp in scenario.get_entity_mcps():
>           emitter.emit_mcp(mcp)

tests/lineage/test_lineage.py:837: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.patch.test_dataset_patches::test_custom_properties_patch[graph_client]
Stack Traces | 0.05s run time
request = <FixtureRequest for <Function test_custom_properties_patch[graph_client]>>
client_fixture_name = 'graph_client'

    @pytest.mark.parametrize(
        "client_fixture_name", ["graph_client", "openapi_graph_client"]
    )
    def test_custom_properties_patch(request, client_fixture_name: DataHubGraph):
        graph_client = request.getfixturevalue(client_fixture_name)
        dataset_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset-{uuid.uuid4()}", env="PROD"
        )
        orig_dataset_properties = DatasetPropertiesClass(
            name="test_name", description="test_description"
        )
>       helper_test_custom_properties_patch(
            graph_client,
            test_entity_urn=dataset_urn,
            patch_builder_class=DatasetPatchBuilder,
            custom_properties_aspect_class=DatasetPropertiesClass,
            base_aspect=orig_dataset_properties,
        )

tests/patch/test_dataset_patches.py:328: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/patch/common_patch_tests.py:209: in helper_test_custom_properties_patch
    graph_client.emit(mcpw)
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.patch.test_dataset_patches::test_field_terms_patch[graph_client]
Stack Traces | 0.051s run time
request = <FixtureRequest for <Function test_field_terms_patch[graph_client]>>
client_fixture_name = 'graph_client'

    @pytest.mark.parametrize(
        "client_fixture_name", ["graph_client", "openapi_graph_client"]
    )
    def test_field_terms_patch(request, client_fixture_name: DataHubGraph):
        graph_client = request.getfixturevalue(client_fixture_name)
        dataset_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset-{uuid.uuid4()}", env="PROD"
        )
    
        field_path = "foo.bar"
    
        editable_field = EditableSchemaMetadataClass(
            [
                EditableSchemaFieldInfoClass(
                    fieldPath=field_path, description="This is a test field"
                )
            ]
        )
        mcpw = MetadataChangeProposalWrapper(entityUrn=dataset_urn, aspect=editable_field)
    
>       graph_client.emit_mcp(mcpw)

tests/patch/test_dataset_patches.py:180: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.lineage.test_lineage::test_lineage_via_node[1-LineageStyle.DATASET_QUERY_DATASET]
Stack Traces | 0.056s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********Rcyc
lineage_style = <LineageStyle.DATASET_QUERY_DATASET: 'DATASET_QUERY_DATASET'>
graph_level = 1

    @pytest.mark.parametrize(
        "lineage_style",
        [
            Scenario.LineageStyle.DATASET_QUERY_DATASET,
            Scenario.LineageStyle.DATASET_JOB_DATASET,
        ],
    )
    @pytest.mark.parametrize(
        "graph_level",
        [
            1,
            2,
            3,
            # TODO - convert this to range of 1 to 10 to make sure we can handle large graphs
        ],
    )
    def test_lineage_via_node(
        graph_client: DataHubGraph, lineage_style: Scenario.LineageStyle, graph_level: int
    ) -> None:
        scenario: Scenario = Scenario(
            hop_platform_map={0: "mysql", 1: "snowflake"},
            lineage_style=lineage_style,
            num_hops=graph_level,
            default_dataset_prefix=f"{lineage_style.value}.",
        )
    
        # Create an emitter to the GMS REST API.
        emitter = graph_client
        # emitter = DataHubConsoleEmitter()
    
        # Emit metadata!
        for mcp in scenario.get_entity_mcps():
>           emitter.emit_mcp(mcp)

tests/lineage/test_lineage.py:837: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.patch.test_dataset_patches::test_dataset_ownership_patch[graph_client]
Stack Traces | 0.058s run time
request = <FixtureRequest for <Function test_dataset_ownership_patch[graph_client]>>
client_fixture_name = 'graph_client'

    @pytest.mark.parametrize(
        "client_fixture_name", ["graph_client", "openapi_graph_client"]
    )
    def test_dataset_ownership_patch(request, client_fixture_name):
        graph_client = request.getfixturevalue(client_fixture_name)
        dataset_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset{uuid.uuid4()}", env="PROD"
        )
>       helper_test_ownership_patch(graph_client, dataset_urn, DatasetPatchBuilder)

tests/patch/test_dataset_patches.py:40: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/patch/common_patch_tests.py:155: in helper_test_ownership_patch
    graph_client.emit_mcp(mcpw)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.platform_resources.test_platform_resource::test_platform_resource_read_write
Stack Traces | 0.058s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw
test_id = 'test_bj03xhnz', cleanup_resources = []

    def test_platform_resource_read_write(graph_client, test_id, cleanup_resources):
        key = PlatformResourceKey(
            platform=f"test_platform_{test_id}",
            resource_type=f"test_resource_type_{test_id}",
            primary_key=f"test_primary_key_{test_id}",
        )
        platform_resource = PlatformResource.create(
            key=key,
            secondary_keys=[f"test_secondary_key_{test_id}"],
            value={"test_key": f"test_value_{test_id}"},
        )
>       platform_resource.to_datahub(graph_client)

tests/platform_resources/test_platform_resource.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../entities/platformresource/platform_resource.py:207: in to_datahub
    graph_client.emit(mcp)
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.patch.test_dataset_patches::test_dataset_tags_patch[graph_client]
Stack Traces | 0.059s run time
request = <FixtureRequest for <Function test_dataset_tags_patch[graph_client]>>
client_fixture_name = 'graph_client'

    @pytest.mark.parametrize(
        "client_fixture_name", ["graph_client", "openapi_graph_client"]
    )
    def test_dataset_tags_patch(request, client_fixture_name):
        graph_client = request.getfixturevalue(client_fixture_name)
        dataset_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset-{uuid.uuid4()}", env="PROD"
        )
>       helper_test_dataset_tags_patch(graph_client, dataset_urn, DatasetPatchBuilder)

tests/patch/test_dataset_patches.py:52: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/patch/common_patch_tests.py:97: in helper_test_dataset_tags_patch
    graph_client.emit_mcp(mcpw)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.cli.dataset_cmd.test_dataset_command::test_dataset_sync_bidirectional
Stack Traces | 0.064s run time
setup_teardown_dataset = None
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>

    def test_dataset_sync_bidirectional(
        setup_teardown_dataset, graph_client: DataHubGraph, auth_session
    ):
        """Test bidirectional sync with modifications on both sides"""
        with tempfile.NamedTemporaryFile(suffix=".yml", delete=False) as tmp:
            temp_file_path = Path(tmp.name)
            try:
                # 1. Create initial dataset in YAML
                create_dataset_yaml(temp_file_path)
    
                # 2. Sync to DataHub
>               run_cli_command(
                    f"dataset sync -f {temp_file_path} --to-datahub", auth_session
                )

.../cli/dataset_cmd/test_dataset_command.py:191: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

cmd = 'dataset sync -f /tmp/tmp5ew71mgx.yml --to-datahub'
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>

    def run_cli_command(cmd, auth_session):
        """Run a DataHub CLI command using CliRunner and auth_session"""
        args = cmd.split()
        result = run_datahub_cmd(
            args,
            env={
                "DATAHUB_GMS_URL": auth_session.gms_url(),
                "DATAHUB_GMS_TOKEN": auth_session.gms_token(),
            },
        )
    
        if result.exit_code != 0:
            logger.error(f"Command failed: {cmd}")
            logger.error(f"STDOUT: {result.stdout}")
            logger.error(f"STDERR: {result.stderr}")
>           raise Exception(f"Command failed with return code {result.exit_code}")
E           Exception: Command failed with return code 1

.../cli/dataset_cmd/test_dataset_command.py:86: Exception
tests.patch.test_dataset_patches::test_datajob_set_fine_grained_lineages[graph_client]
Stack Traces | 0.069s run time
request = <FixtureRequest for <Function test_datajob_set_fine_grained_lineages[graph_client]>>
client_fixture_name = 'graph_client'

    @pytest.mark.parametrize(
        "client_fixture_name", ["graph_client", "openapi_graph_client"]
    )
    def test_datajob_set_fine_grained_lineages(request, client_fixture_name):
        """Test setting fine-grained lineages with end-to-end DataHub integration."""
        graph_client = request.getfixturevalue(client_fixture_name)
        datajob_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset{uuid.uuid4()}", env="PROD"
        )
>       helper_test_set_fine_grained_lineages(
            graph_client, datajob_urn, UpstreamLineageClass, DatasetPatchBuilder
        )

tests/patch/test_dataset_patches.py:383: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/patch/common_patch_tests.py:386: in helper_test_set_fine_grained_lineages
    graph_client.emit_mcp(patch_mcp)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.patch.test_dataset_patches::test_datajob_add_fine_grained_lineage[graph_client]
Stack Traces | 0.07s run time
request = <FixtureRequest for <Function test_datajob_add_fine_grained_lineage[graph_client]>>
client_fixture_name = 'graph_client'

    @pytest.mark.parametrize(
        "client_fixture_name", ["graph_client", "openapi_graph_client"]
    )
    def test_datajob_add_fine_grained_lineage(request, client_fixture_name):
        """Test that add_fine_grained_lineage works correctly for DataJobs."""
        graph_client = request.getfixturevalue(client_fixture_name)
        datajob_urn = make_dataset_urn(
            platform="hive", name=f"SampleHiveDataset{uuid.uuid4()}", env="PROD"
        )
>       helper_test_add_fine_grained_lineage(
            graph_client, datajob_urn, UpstreamLineageClass, DatasetPatchBuilder
        )

tests/patch/test_dataset_patches.py:369: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/patch/common_patch_tests.py:318: in helper_test_add_fine_grained_lineage
    graph_client.emit_mcp(patch_mcp)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.assertions.custom_assertions_test::test_create_update_delete_dataset_custom_assertion
Stack Traces | 0.076s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module")
    def test_data(graph_client):
        mcpw = MetadataChangeProposalWrapper(
            entityUrn=TEST_DATASET_URN, aspect=StatusClass(removed=False)
        )
>       graph_client.emit(mcpw)

tests/assertions/custom_assertions_test.py:21: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.restli.test_restli_batch_ingestion::test_restli_batch_ingestion_sync
Stack Traces | 0.089s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I

    def test_restli_batch_ingestion_sync(graph_client):
        # Positive Test (all valid MetadataChangeProposal)
        mcps = _create_valid_dashboard_mcps()
>       ret = graph_client.emit_mcps(mcps, emit_mode=EmitMode.SYNC_PRIMARY)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

tests/restli/test_restli_batch_ingestion.py:119: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:688: in emit_mcps
    return self._emit_restli_mcps(mcps, emit_mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/rest_emitter.py:821: in _emit_restli_mcps
    extract_trace_data_from_mcps(response, mcp_chunk) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
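The traceback above shows trace IDs being read from the `traceparent` response header. As a standalone illustration (not the emitter's actual parsing code), the W3C trace-context header has the shape `version-traceid-spanid-flags`, and the 32-hex-digit trace ID can be pulled out like this:

```python
import re
from typing import Optional

# W3C trace-context "traceparent" header: version-traceid-spanid-flags,
# e.g. "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01".
_TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<span_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_trace_id(traceparent: str) -> Optional[str]:
    """Return the 32-hex-digit trace ID, or None if the header is malformed."""
    m = _TRACEPARENT_RE.match(traceparent)
    return m.group("trace_id") if m else None
```

When the server predates API tracing, the header is simply absent, which is the path these tests are hitting.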

tests.platform_resources.test_platform_resource::test_platform_resource_listing_by_resource_type
Stack Traces | 0.09s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw
test_id = 'test_bhqrs95g', cleanup_resources = []

    def test_platform_resource_listing_by_resource_type(
        graph_client, test_id, cleanup_resources
    ):
        # Generate two resources with the same resource type
        key1 = PlatformResourceKey(
            platform=f"test_platform_{test_id}",
            resource_type=f"test_resource_type_{test_id}",
            primary_key=f"test_primary_key_1_{test_id}",
        )
        platform_resource1 = PlatformResource.create(
            key=key1,
            value={"test_key": f"test_value_1_{test_id}"},
        )
>       platform_resource1.to_datahub(graph_client)

tests/platform_resources/test_platform_resource.py:159: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../entities/platformresource/platform_resource.py:207: in to_datahub
    graph_client.emit(mcp)
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning

tests.platform_resources.test_platform_resource::test_platform_resource_search
Stack Traces | 0.097s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw
test_id = 'test_97489md5', cleanup_resources = []

    def test_platform_resource_search(graph_client, test_id, cleanup_resources):
        key = PlatformResourceKey(
            platform=f"test_platform_{test_id}",
            resource_type=f"test_resource_type_{test_id}",
            primary_key=f"test_primary_key_{test_id}",
        )
        platform_resource = PlatformResource.create(
            key=key,
            secondary_keys=[f"test_secondary_key_{test_id}"],
            value={"test_key": f"test_value_{test_id}"},
        )
>       platform_resource.to_datahub(graph_client)

tests/platform_resources/test_platform_resource.py:84: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../entities/platformresource/platform_resource.py:207: in to_datahub
    graph_client.emit(mcp)
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning

tests.cli.dataset_cmd.test_dataset_command::test_dataset_sync_to_datahub
Stack Traces | 0.104s run time
setup_teardown_dataset = None
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>

    def test_dataset_sync_to_datahub(
        setup_teardown_dataset, graph_client: DataHubGraph, auth_session
    ):
        """Test syncing dataset from YAML to DataHub"""
        with tempfile.NamedTemporaryFile(suffix=".yml", delete=False) as tmp:
            temp_file_path = Path(tmp.name)
            try:
                # Create a dataset YAML file
                create_dataset_yaml(temp_file_path)
    
                # Run the CLI command to sync to DataHub
                cmd = f"dataset sync -f {temp_file_path} --to-datahub"
>               result = run_cli_command(cmd, auth_session)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.../cli/dataset_cmd/test_dataset_command.py:103: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

cmd = 'dataset sync -f /tmp/tmpmuvc__kw.yml --to-datahub'
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>

    def run_cli_command(cmd, auth_session):
        """Run a DataHub CLI command using CliRunner and auth_session"""
        args = cmd.split()
        result = run_datahub_cmd(
            args,
            env={
                "DATAHUB_GMS_URL": auth_session.gms_url(),
                "DATAHUB_GMS_TOKEN": auth_session.gms_token(),
            },
        )
    
        if result.exit_code != 0:
            logger.error(f"Command failed: {cmd}")
            logger.error(f"STDOUT: {result.stdout}")
            logger.error(f"STDERR: {result.stderr}")
>           raise Exception(f"Command failed with return code {result.exit_code}")
E           Exception: Command failed with return code 1

.../cli/dataset_cmd/test_dataset_command.py:86: Exception

tests.tags_and_terms.tags_and_terms_test::test_update_schemafield
Stack Traces | 0.105s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/tags_and_terms/data.json", "tags_and_terms"
        )

tests/tags_and_terms/tags_and_terms_test.py:18: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58d0ef8590>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError

tests.tags_and_terms.tags_and_terms_test::test_add_term
Stack Traces | 0.107s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/tags_and_terms/data.json", "tags_and_terms"
        )

tests/tags_and_terms/tags_and_terms_test.py:18: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58d0f17b10>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError

tests.platform_resources.test_platform_resource::test_platform_resource_urn_secondary_key
Stack Traces | 0.112s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw
test_id = 'test_8jxpjknr', cleanup_resources = []

    def test_platform_resource_urn_secondary_key(graph_client, test_id, cleanup_resources):
        key = PlatformResourceKey(
            platform=f"test_platform_{test_id}",
            resource_type=f"test_resource_type_{test_id}",
            primary_key=f"test_primary_key_{test_id}",
        )
        dataset_urn = (
            f"urn:li:dataset:(urn:li:dataPlatform:test,test_secondary_key_{test_id},PROD)"
        )
        platform_resource = PlatformResource.create(
            key=key,
            value={"test_key": f"test_value_{test_id}"},
            secondary_keys=[dataset_urn],
        )
>       platform_resource.to_datahub(graph_client)

tests/platform_resources/test_platform_resource.py:132: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../entities/platformresource/platform_resource.py:207: in to_datahub
    graph_client.emit(mcp)
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning

tests.assertions.assertions_test::test_gms_get_assertions_on_dataset_field
Stack Traces | 0.117s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f213ca05250>
generate_test_data = '.../pytest-0/test_dq_events0/test_dq_events.json'

    @pytest.fixture(scope="module")
    def test_run_ingestion(auth_session, generate_test_data):
>       ingest_file_via_rest(auth_session, generate_test_data)

tests/assertions/assertions_test.py:236: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f213ca3d410>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError

tests.cli.datahub_graph_test::test_get_entities_v3
Stack Traces | 0.118s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I

    @pytest.fixture(scope="module", autouse=False)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/cli/graph_data.json", "graph"
        )

tests/cli/datahub_graph_test.py:22: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa958bc73d0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError

tests.restli.restli_test::test_gms_delete_mcp
Stack Traces | 0.118s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********Rcyc

    def test_gms_delete_mcp(graph_client):
        dashboard_urn = make_dashboard_urn(platform="looker", name="test-delete-mcp")
        generated_urns.extend([dashboard_urn])
    
        audit_stamp = pre_json_transform(
            ChangeAuditStampsClass(
                created=AuditStampClass(
                    time=int(time.time() * 1000),
                    actor="urn:li:corpuser:datahub",
                )
            ).to_obj()
        )
    
        invalid_dashboard_info = {
            "title": "Ignore Unknown Title",
            "description": "Ignore Unknown Description",
            "lastModified": audit_stamp,
            "notAValidField": "invalid field value",
        }
        mcpw = MetadataChangeProposalInvalidWrapper(
            entityUrn=dashboard_urn,
            aspectName="dashboardInfo",
            aspect=invalid_dashboard_info,
        )
    
        mcp = mcpw.make_mcp()
        assert "notAValidField" in str(mcp)
        assert "invalid field value" in str(mcp)
    
>       graph_client.emit_mcp(mcpw, emit_mode=EmitMode.SYNC_PRIMARY)

tests/restli/restli_test.py:132: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning

tests.assertions.assertions_test::test_gms_get_assertions_on_dataset
Stack Traces | 0.12s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f213ca05250>
generate_test_data = '.../pytest-0/test_dq_events0/test_dq_events.json'

    @pytest.fixture(scope="module")
    def test_run_ingestion(auth_session, generate_test_data):
>       ingest_file_via_rest(auth_session, generate_test_data)

tests/assertions/assertions_test.py:236: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f213d491e50>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError

tests.search.test_lineage_search_index_fields::test_upstream_dataset_search_index_fields
Stack Traces | 0.12s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I
request = <SubRequest 'ingest_cleanup_data' for <Function test_upstream_dataset_search_index_fields>>

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client, request):
        """Fixture to ingest test data and clean up after tests."""
        # Create temporary file for MCP data
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            mcp_data = [
                create_upstream_dataset_mcp_data(),
                create_downstream_dataset_with_lineage_mcp_data(),
                create_upstream_lineage_mcp_data(),
                create_dataset_without_lineage_mcp_data(),
            ]
            json.dump(mcp_data, f, indent=2)
            temp_file_path = f.name
    
        try:
            logger.info("Ingesting lineage test data")
>           ingest_file_via_rest(auth_session, temp_file_path)

tests/search/test_lineage_search_index_fields.py:190: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa95f30cc90>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError

tests.search.test_lineage_search_index_fields::test_lineage_search_index_fields_without_lineage
Stack Traces | 0.124s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I
request = <SubRequest 'ingest_cleanup_data' for <Function test_lineage_search_index_fields_without_lineage>>

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client, request):
        """Fixture to ingest test data and clean up after tests."""
        # Create temporary file for MCP data
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            mcp_data = [
                create_upstream_dataset_mcp_data(),
                create_downstream_dataset_with_lineage_mcp_data(),
                create_upstream_lineage_mcp_data(),
                create_dataset_without_lineage_mcp_data(),
            ]
            json.dump(mcp_data, f, indent=2)
            temp_file_path = f.name
    
        try:
            logger.info("Ingesting lineage test data")
>           ingest_file_via_rest(auth_session, temp_file_path)

tests/search/test_lineage_search_index_fields.py:190: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa958bc1150>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError

tests.assertions.assertions_test::test_gms_get_assertion_info
Stack Traces | 0.133s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f213ca05250>
generate_test_data = '.../pytest-0/test_dq_events0/test_dq_events.json'

    @pytest.fixture(scope="module")
    def test_run_ingestion(auth_session, generate_test_data):
>       ingest_file_via_rest(auth_session, generate_test_data)

tests/assertions/assertions_test.py:236: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f213030acd0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.actions.doc_propagation.test_propagation::test_col_col_propagation_large_fanout
Stack Traces | 0.143s run time
large_fanout_graph_function = <function large_fanout_graph_function.<locals>._large_fanout_graph at 0x7f21303d4fe0>
test_id = 'test_b91070ec'
action_env_vars = ActionTestEnv(DATAHUB_ACTIONS_DOC_PROPAGATION_MAX_PROPAGATION_FANOUT=20)
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********Rcyc

    def test_col_col_propagation_large_fanout(
        large_fanout_graph_function,
        test_id: str,
        action_env_vars: ActionTestEnv,
        graph_client,
    ):
        default_max_fanout = (
            action_env_vars.DATAHUB_ACTIONS_DOC_PROPAGATION_MAX_PROPAGATION_FANOUT
        )
    
>       with large_fanout_graph_function(test_id, default_max_fanout) as (
            dataset_1,
            all_urns,
        ):

.../actions/doc_propagation/test_propagation.py:551: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.11.14.../x64/lib/python3.11/contextlib.py:137: in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
.../actions/doc_propagation/test_propagation.py:271: in _large_fanout_graph
    graph_client.emit(
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
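The traceback above ends with the warning itself appearing as the error (`E datahub.errors.APITracingWarning`), which happens when a warnings filter escalates warnings to exceptions, as pytest configurations commonly do. A minimal self-contained sketch of that mechanism (the `APITracingWarning` class here is a local stand-in, assumed to be a plain `Warning` subclass like the real one in `datahub.errors`):

```python
import warnings


# Local stand-in for datahub.errors.APITracingWarning (assumption:
# the real class is a plain Warning subclass).
class APITracingWarning(Warning):
    pass


def emit_without_trace_header() -> None:
    # Mirrors the branch in _extract_trace_id where the traceparent
    # header is absent: a warning is emitted, not an exception.
    warnings.warn(
        "No trace ID found in response headers.",
        APITracingWarning,
        stacklevel=2,
    )


# With the filter set to "error", the warn() call raises instead of
# printing -- which is why these tests fail rather than just log.
with warnings.catch_warnings():
    warnings.simplefilter("error", APITracingWarning)
    try:
        emit_without_trace_header()
        raised = False
    except APITracingWarning:
        raised = True

print(raised)
```

Under pytest, the equivalent escalation is typically a `filterwarnings = error` entry in the project config, so a server that does not return the trace header turns every emit into a hard failure.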
tests.tags_and_terms.tags_and_terms_test::test_add_tag_to_chart
Stack Traces | 0.15s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/tags_and_terms/data.json", "tags_and_terms"
        )

tests/tags_and_terms/tags_and_terms_test.py:18: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58d0e480d0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.ml_models.test_ml_models::test_create_ml_models
Stack Traces | 0.156s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="module", autouse=False)
    def ingest_cleanup_data(auth_session, graph_client):
        _, filename = tempfile.mkstemp(suffix=".json")
        try:
            create_test_data(filename)
>           yield from _ingest_cleanup_data_impl(
                auth_session, graph_client, filename, "ml_models"
            )

tests/ml_models/test_ml_models.py:82: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f99a6fbd3d0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.institutional_memory.institutional_memory_test::test_upsert_institutional_memory
Stack Traces | 0.175s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session,
            graph_client,
            "tests/institutional_memory/data.json",
            "institutional_memory",
        )

tests/institutional_memory/institutional_memory_test.py:9: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f99ae8f2250>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.platform_resources.test_platform_resource::test_platform_resource_listing_complex_queries
Stack Traces | 0.19s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw
test_id = 'test_qsk3s9lb'

    def test_platform_resource_listing_complex_queries(graph_client, test_id):
        # Generate two resources with the same resource type
        key1 = PlatformResourceKey(
            platform=f"test_platform1_{test_id}",
            resource_type=f"test_resource_type_{test_id}",
            primary_key=f"test_primary_key_1_{test_id}",
        )
        platform_resource1 = PlatformResource.create(
            key=key1,
            value={"test_key": f"test_value_1_{test_id}"},
        )
>       platform_resource1.to_datahub(graph_client)

tests/platform_resources/test_platform_resource.py:202: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../entities/platformresource/platform_resource.py:207: in to_datahub
    graph_client.emit(mcp)
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.entity_versioning.test_versioning_ingest::test_ingest_many_versions
Stack Traces | 0.208s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(graph_client: DataHubGraph):
        try:
>           graph_client.emit(
                MetadataChangeProposalWrapper(
                    entityUrn=OLD_LATEST_URN.urn(),
                    aspect=VersionPropertiesClass(
                        versionSet=EXISTS_VERSION_SET_URN.urn(),
                        version=VersionTagClass(versionTag="first"),
                        sortId="abc",
                        versioningScheme=VersioningSchemeClass.LEXICOGRAPHIC_STRING,
                    ),
                )
            )

tests/entity_versioning/test_versioning_ingest.py:24: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
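All of these tracebacks bottom out in the same `_extract_trace_id` helper. A hedged sketch of a variant that degrades quietly for pre-tracing servers, logging at DEBUG and returning `None` instead of warning (the header name `traceparent` and the function shape are assumptions based on the traceback, not the actual datahub implementation):

```python
import logging
from typing import Mapping, Optional

logger = logging.getLogger(__name__)

# Assumed header name; the real constant lives in
# datahub.emitter.response_helper as _TRACE_HEADER_NAME.
_TRACE_HEADER_NAME = "traceparent"


def extract_trace_id_quiet(
    status_code: int, headers: Mapping[str, str]
) -> Optional[str]:
    """Hypothetical variant of _extract_trace_id that never warns:
    a missing header on an older server simply yields None."""
    if not 200 <= status_code < 300:
        logger.debug("Invalid status code: %s", status_code)
        return None

    trace_id = headers.get(_TRACE_HEADER_NAME)
    if not trace_id:
        logger.debug("Missing trace header: %s", _TRACE_HEADER_NAME)
        return None
    return trace_id


# A 200 response without the header (older server) returns None
# instead of raising APITracingWarning under an "error" filter.
print(extract_trace_id_quiet(200, {}))
print(extract_trace_id_quiet(200, {"traceparent": "00-abc-def-01"}))
```

With this shape, callers that only want best-effort trace IDs (SYNC_PRIMARY and ASYNC modes) can treat `None` as "tracing unavailable" without tripping warnings-as-errors in test suites.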
tests.entity_versioning.test_versioning_ingest::test_ingest_version_properties_alphanumeric
Stack Traces | 0.21s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(graph_client: DataHubGraph):
        try:
>           graph_client.emit(
                MetadataChangeProposalWrapper(
                    entityUrn=OLD_LATEST_URN.urn(),
                    aspect=VersionPropertiesClass(
                        versionSet=EXISTS_VERSION_SET_URN.urn(),
                        version=VersionTagClass(versionTag="first"),
                        sortId="abc",
                        versioningScheme=VersioningSchemeClass.LEXICOGRAPHIC_STRING,
                    ),
                )
            )

tests/entity_versioning/test_versioning_ingest.py:24: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.entity_versioning.test_versioning_ingest::test_ingest_version_properties_version_set_not_latest
Stack Traces | 0.21s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(graph_client: DataHubGraph):
        try:
>           graph_client.emit(
                MetadataChangeProposalWrapper(
                    entityUrn=OLD_LATEST_URN.urn(),
                    aspect=VersionPropertiesClass(
                        versionSet=EXISTS_VERSION_SET_URN.urn(),
                        version=VersionTagClass(versionTag="first"),
                        sortId="abc",
                        versioningScheme=VersioningSchemeClass.LEXICOGRAPHIC_STRING,
                    ),
                )
            )

tests/entity_versioning/test_versioning_ingest.py:24: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.institutional_memory.institutional_memory_test::test_add_institutional_memory
Stack Traces | 0.225s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session,
            graph_client,
            "tests/institutional_memory/data.json",
            "institutional_memory",
        )

tests/institutional_memory/institutional_memory_test.py:9: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f99a0696f10>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.entity_versioning.test_versioning_ingest::test_ingest_version_properties
Stack Traces | 0.226s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(graph_client: DataHubGraph):
        try:
>           graph_client.emit(
                MetadataChangeProposalWrapper(
                    entityUrn=OLD_LATEST_URN.urn(),
                    aspect=VersionPropertiesClass(
                        versionSet=EXISTS_VERSION_SET_URN.urn(),
                        version=VersionTagClass(versionTag="first"),
                        sortId="abc",
                        versioningScheme=VersioningSchemeClass.LEXICOGRAPHIC_STRING,
                    ),
                )
            )

tests/entity_versioning/test_versioning_ingest.py:24: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.entity_versioning.test_versioning_ingest::test_ingest_version_properties_version_set_new_latest
Stack Traces | 0.229s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(graph_client: DataHubGraph):
        try:
>           graph_client.emit(
                MetadataChangeProposalWrapper(
                    entityUrn=OLD_LATEST_URN.urn(),
                    aspect=VersionPropertiesClass(
                        versionSet=EXISTS_VERSION_SET_URN.urn(),
                        version=VersionTagClass(versionTag="first"),
                        sortId="abc",
                        versioningScheme=VersioningSchemeClass.LEXICOGRAPHIC_STRING,
                    ),
                )
            )

tests/entity_versioning/test_versioning_ingest.py:24: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:559: in emit
    self.emit_mcp(item, emit_mode=emit_mode)
...../datahub/emitter/rest_emitter.py:662: in emit_mcp
    extract_trace_data_from_mcps(response, [mcp]) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
tests.containers.containers_test::test_get_full_container
Stack Traces | 0.232s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f213ca05250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********Rcyc

    @pytest.fixture(scope="module", autouse=False)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/containers/data.json", "containers"
        )

tests/containers/containers_test.py:12: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f213fa2e710>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, '... sampled of 14 total elements'])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.institutional_memory.institutional_memory_test::test_remove_institutional_memory
Stack Traces | 0.234s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session,
            graph_client,
            "tests/institutional_memory/data.json",
            "institutional_memory",
        )

tests/institutional_memory/institutional_memory_test.py:9: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f99a015dc90>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.cli.datahub_graph_test::test_graph_relationships
Stack Traces | 0.251s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>

    def test_graph_relationships(graph_client, auth_session):
        delete_urns_from_file(graph_client, graph)
        delete_urns_from_file(graph_client, graph_2)
>       ingest_file_via_rest(auth_session, graph)

tests/cli/datahub_graph_test.py:149: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa958f1c750>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.search.test_lineage_search_index_fields::test_lineage_search_index_fields_with_lineage
Stack Traces | 0.262s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I
request = <SubRequest 'ingest_cleanup_data' for <Function test_lineage_search_index_fields_with_lineage>>

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client, request):
        """Fixture to ingest test data and clean up after tests."""
        # Create temporary file for MCP data
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            mcp_data = [
                create_upstream_dataset_mcp_data(),
                create_downstream_dataset_with_lineage_mcp_data(),
                create_upstream_lineage_mcp_data(),
                create_dataset_without_lineage_mcp_data(),
            ]
            json.dump(mcp_data, f, indent=2)
            temp_file_path = f.name
    
        try:
            logger.info("Ingesting lineage test data")
>           ingest_file_via_rest(auth_session, temp_file_path)

tests/search/test_lineage_search_index_fields.py:190: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa958af04d0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.institutional_memory.institutional_memory_test::test_update_institutional_memory
Stack Traces | 0.278s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session,
            graph_client,
            "tests/institutional_memory/data.json",
            "institutional_memory",
        )

tests/institutional_memory/institutional_memory_test.py:9: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f99a0440090>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.assertions.assertions_test::test_gms_get_latest_assertions_results_by_partition
Stack Traces | 0.298s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f213ca05250>
generate_test_data = '.../pytest-0/test_dq_events0/test_dq_events.json'

    @pytest.fixture(scope="module")
    def test_run_ingestion(auth_session, generate_test_data):
>       ingest_file_via_rest(auth_session, generate_test_data)

tests/assertions/assertions_test.py:236: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f213413c050>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.cli.datahub_graph_test::test_get_aspect_v2
Stack Traces | 0.322s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I

    @pytest.fixture(scope="module", autouse=False)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/cli/graph_data.json", "graph"
        )

tests/cli/datahub_graph_test.py:22: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa95bd902d0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.incidents.incidents_test::test_list_dataset_incidents
Stack Traces | 0.327s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/incidents/data.json", "incidents"
        )

tests/incidents/incidents_test.py:12: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa95f0d4950>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.actions.doc_propagation.test_propagation::test_col_col_propagation_cycles
Stack Traces | 0.354s run time
ingest_cleanup_data_function = <function ingest_cleanup_data_function.<locals>._ingest_cleanup_data at 0x7f21303d7100>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********Rcyc
test_id = 'test_fd1050be'
dataset_depth_map = {0: 'urn:li:dataset:(urn:li:dataPlatform:events,test_fd1050be.ClickEvent,PROD)', 1: 'urn:li:dataset:(urn:li:dataPlatfo...st_fd1050be.user.clicks_2,PROD)', 3: 'urn:li:dataset:(urn:li:dataPlatform:hive,test_fd1050be.user.clicks_3,PROD)', ...}

    def test_col_col_propagation_cycles(
        ingest_cleanup_data_function, graph_client, test_id, dataset_depth_map
    ):
        custom_template = "datasets_for_cycles_template.yaml"
>       with ingest_cleanup_data_function(custom_template) as urns:

.../actions/doc_propagation/test_propagation.py:493: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.11.14.../x64/lib/python3.11/contextlib.py:137: in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
.../actions/doc_propagation/test_propagation.py:212: in _ingest_cleanup_data
    ingest_file_via_rest(auth_session=auth_session, filename=filename)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f2133808310>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, '... sampled of 21 total elements'])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.institutional_memory.institutional_memory_test::test_get_institutional_memory
Stack Traces | 0.486s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(scope="function", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session,
            graph_client,
            "tests/institutional_memory/data.json",
            "institutional_memory",
        )

tests/institutional_memory/institutional_memory_test.py:9: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f99a28e9210>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.restli.test_restli_batch_ingestion::test_restli_batch_ingestion_exception_async
Stack Traces | 0.502s run time
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I

    def test_restli_batch_ingestion_exception_async(graph_client):
        """
        Test Batch ingestion when an exception occurs in async mode
        """
        bad_mcps = _create_invalid_dataset_mcps()
        generated_urns.extend([mcp.entityUrn for mcp in bad_mcps if mcp.entityUrn])
        # TODO expectation is that it throws exception, but it doesn't currently.this test case need to change after fix.
>       ret = graph_client.emit_mcps(bad_mcps, emit_mode=EmitMode.ASYNC)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

tests/restli/test_restli_batch_ingestion.py:190: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
...../datahub/emitter/rest_emitter.py:688: in emit_mcps
    return self._emit_restli_mcps(mcps, emit_mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/rest_emitter.py:821: in _emit_restli_mcps
    extract_trace_data_from_mcps(response, mcp_chunk) if response else None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...../datahub/emitter/response_helper.py:211: in extract_trace_data_from_mcps
    trace_id = _extract_trace_id(response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [200]>

    def _extract_trace_id(response: Response) -> Optional[str]:
        """
        Extract trace ID from response headers.
        Args:
            response: HTTP response object
        Returns:
            Trace ID if found and response is valid, None otherwise
        """
        if not 200 <= response.status_code < 300:
            logger.debug(f"Invalid status code: {response.status_code}")
            return None
    
        trace_id = response.headers.get(_TRACE_HEADER_NAME)
        if not trace_id:
            # This will only be printed if
            # 1. we're in async mode (checked by the caller)
            # 2. the server did not return a trace ID
            logger.debug(f"Missing trace header: {_TRACE_HEADER_NAME}")
>           warnings.warn(
                "No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.",
                APITracingWarning,
                stacklevel=3,
            )
E           datahub.errors.APITracingWarning: No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.

...../datahub/emitter/response_helper.py:135: APITracingWarning
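For context on the failure above: the trace ID that `_extract_trace_id` looks for is the second field of a W3C Trace Context `traceparent` header (`version-traceid-spanid-flags`). The sketch below is a minimal, standalone illustration of that parsing under the assumption the server emits a spec-compliant header — it is not the repository's actual helper, and `parse_traceparent` is a hypothetical name.

```python
from typing import Optional

def parse_traceparent(header: Optional[str]) -> Optional[str]:
    """Return the 32-hex-digit trace-id field of a traceparent header,
    or None if the header is missing or malformed."""
    if not header:
        return None
    parts = header.split("-")
    # W3C Trace Context: version (2 hex), trace-id (32 hex),
    # parent-id (16 hex), trace-flags (2 hex)
    if len(parts) != 4 or len(parts[1]) != 32:
        return None
    return parts[1]

# A header like the one a tracing-enabled server would return:
print(parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"))
# An absent header (the situation these tests hit) yields None:
print(parse_traceparent(None))
```

When the server predates API tracing, the header is simply absent, which is why every sink failure in this log is the same `APITracingWarning` message rather than an HTTP error.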
tests.timeline.timeline_test::test_all
Stack Traces | 0.527s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    def test_all(auth_session, graph_client):
        platform = "urn:li:dataPlatform:kafka"
        dataset_name = "test-timeline-sample-kafka"
        env = "PROD"
        dataset_urn = f"urn:li:dataset:({platform},{dataset_name},{env})"
    
>       ingest_file_via_rest(
            auth_session, "tests/timeline/timeline_test_data.json", mode="ASYNC"
        )

tests/timeline/timeline_test.py:18: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58de7b8710>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.actions.doc_propagation.test_propagation::test_col_col_propagation_depth_6
Stack Traces | 0.568s run time
ingest_cleanup_data_function = <function ingest_cleanup_data_function.<locals>._ingest_cleanup_data at 0x7f21303d7920>

    @pytest.fixture(scope="function")
    def ingest_cleanup_data(ingest_cleanup_data_function):
        """
        This fixture is a wrapper around ingest_cleanup_data_function() that yields
        the urns to make default usage easier.
        """
>       with ingest_cleanup_data_function() as urns:

.../actions/doc_propagation/test_propagation.py:169: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.11.14.../x64/lib/python3.11/contextlib.py:137: in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
.../actions/doc_propagation/test_propagation.py:212: in _ingest_cleanup_data
    ingest_file_via_rest(auth_session=auth_session, filename=filename)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f21304a22d0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, '... sampled of 29 total elements'])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.tags_and_terms.tags_and_terms_test::test_add_tag
Stack Traces | 1.22s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/tags_and_terms/data.json", "tags_and_terms"
        )

tests/tags_and_terms/tags_and_terms_test.py:18: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58df129ad0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.actions.doc_propagation.test_propagation::test_col_col_propagation_depth_1
Stack Traces | 1.35s run time
ingest_cleanup_data_function = <function ingest_cleanup_data_function.<locals>._ingest_cleanup_data at 0x7f21303d5080>

    @pytest.fixture(scope="function")
    def ingest_cleanup_data(ingest_cleanup_data_function):
        """
        This fixture is a wrapper around ingest_cleanup_data_function() that yields
        the urns to make default usage easier.
        """
>       with ingest_cleanup_data_function() as urns:

.../actions/doc_propagation/test_propagation.py:169: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.11.14.../x64/lib/python3.11/contextlib.py:137: in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
.../actions/doc_propagation/test_propagation.py:212: in _ingest_cleanup_data
    ingest_file_via_rest(auth_session=auth_session, filename=filename)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f2133d4f090>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, '... sampled of 29 total elements'])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.structured_properties.test_structured_properties::test_dataset_structured_property_soft_delete_search_filter_validation
Stack Traces | 7.28s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7fa95f8f4310>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********-u9I
request = <SubRequest 'ingest_cleanup_data' for <Function test_dataset_structured_property_soft_delete_search_filter_validation>>

    @pytest.fixture(scope="module")
    def ingest_cleanup_data(auth_session, graph_client, request):
        new_file, filename = tempfile.mkstemp()
        try:
            create_test_data(filename)
            logger.info("ingesting structured properties test data")
>           ingest_file_via_rest(auth_session, filename)

tests/structured_properties/test_structured_properties.py:79: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7fa96640ba10>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
The following 14 tests in tests.structured_properties.test_structured_properties failed in the same module-scoped ingest_cleanup_data fixture, each with a stack trace identical to the one above (PipelineExecutionError: 'Sink reported errors', 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'):

tests.structured_properties.test_structured_properties::test_structured_property_string_allowed_values | 7.33s run time
tests.structured_properties.test_structured_properties::test_structured_property_search | 7.48s run time
tests.structured_properties.test_structured_properties::test_dataset_structured_property_soft_delete_validation | 9.32s run time
tests.structured_properties.test_structured_properties::test_structured_properties_yaml_load_with_bad_entity_type | 9.37s run time
tests.structured_properties.test_structured_properties::test_dataset_structured_property_patch | 9.38s run time
tests.structured_properties.test_structured_properties::test_dataset_yaml_loader | 9.39s run time
tests.structured_properties.test_structured_properties::test_dataset_structured_property_delete | 9.41s run time
tests.structured_properties.test_structured_properties::test_structured_property_definition_evolution | 9.41s run time
tests.structured_properties.test_structured_properties::test_dataset_structured_property_soft_delete_read_mutation | 9.42s run time
tests.structured_properties.test_structured_properties::test_structured_properties_list | 9.44s run time
tests.structured_properties.test_structured_properties::test_structured_property_double_multiple | 9.44s run time
tests.structured_properties.test_structured_properties::test_structured_property_schema_field | 9.46s run time
tests.structured_properties.test_structured_properties::test_structured_property_double | 9.49s run time
tests.structured_properties.test_structured_properties::test_structured_property_string | 22.2s run time
View the full list of 5 ❄️ flaky test(s)
tests.cli.user_groups_cmd.test_group_cmd::test_group_upsert

Flake rate in main: 33.33% (Passed 2 times, Failed 1 times)

Stack Traces | 0.374s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    def test_group_upsert(auth_session: Any, graph_client: DataHubGraph) -> None:
        num_groups: int = 10
        for i, datahub_group in enumerate(gen_datahub_groups(num_groups)):
            datahub_upsert_group(auth_session, datahub_group)
            group_dict = datahub_get_group(auth_session, f"urn:li:corpGroup:group_{i}")
>           assert group_dict == {
                "corpGroupEditableInfo": {
                    "description": f"The Group {i}",
                    "email": f"group_{i}@datahubproject.io",
                    "pictureLink": f"https://images.google.com/group{i}.jpg",
                    "slack": f"@group{i}",
                },
                "corpGroupInfo": {
                    "admins": ["urn:li:corpuser:user1"],
                    "description": f"The Group {i}",
                    "displayName": f"Group {i}",
                    "email": f"group_{i}@datahubproject.io",
                    "groups": [],
                    "members": ["urn:li:corpuser:user2"],
                    "slack": f"@group{i}",
                },
                "corpGroupKey": {"name": f"group_{i}"},
                "ownership": {
                    "lastModified": {"actor": "urn:li:corpuser:unknown", "time": 0},
                    "ownerTypes": {
                        "urn:li:ownershipType:__system__technical_owner": [
                            "urn:li:corpuser:user1"
                        ],
                    },
                    "owners": [
                        {"owner": "urn:li:corpuser:user1", "type": "TECHNICAL_OWNER"}
                    ],
                },
                "status": {"removed": False},
            }
E           AssertionError: assert {'corpGroupKey': {'name': 'group_0'}, 'status': {'removed': False}} == {'corpGroupEditableInfo': {'description': 'The Group 0', 'email': '[email protected]', 'pictureLink': 'https://images.google.com/group0.jpg', 'slack': '@group0'}, 'corpGroupInfo': {'admins': ['urn:li:corpuser:user1'], 'description': 'The Group 0', 'displayName': 'Group 0', 'email': '[email protected]', 'groups': [], 'members': ['urn:li:corpuser:user2'], 'slack': '@group0'}, 'corpGroupKey': {'name': 'group_0'}, 'ownership': {'lastModified': {'actor': 'urn:li:corpuser:unknown', 'time': 0}, 'ownerTypes': {'urn:li:ownershipType:__system__technical_owner': ['urn:li:corpuser:user1']}, 'owners': [{'owner': 'urn:li:corpuser:user1', 'type': 'TECHNICAL_OWNER'}]}, 'status': {'removed': False}}
E             
E             Common items:
E             {'corpGroupKey': {'name': 'group_0'}, 'status': {'removed': False}}
E             Right contains 3 more items:
E             {'corpGroupEditableInfo': {'description': 'The Group 0',
E                                        'email': '[email protected]',
E                                        'pictureLink': 'https://images.google.com/group0.jpg',
E                                        'slack': '@group0'},
E              'corpGroupInfo': {'admins': ['urn:li:corpuser:user1'],
E                                'description': 'The Group 0',
E                                'displayName': 'Group 0',
E                                'email': '[email protected]',
E                                'groups': [],
E                                'members': ['urn:li:corpuser:user2'],
E                                'slack': '@group0'},
E              'ownership': {'lastModified': {'actor': 'urn:li:corpuser:unknown', 'time': 0},
E                            'ownerTypes': {'urn:li:ownershipType:__system__technical_owner': ['urn:li:corpuser:user1']},
E                            'owners': [{'owner': 'urn:li:corpuser:user1',
E                                        'type': 'TECHNICAL_OWNER'}]}}
E             
E             Full diff:
E               {
E             -     'corpGroupEditableInfo': {
E             -         'description': 'The Group 0',
E             -         'email': '[email protected]',
E             -         'pictureLink': 'https://images.google.com/group0.jpg',
E             -         'slack': '@group0',
E             -     },
E             -     'corpGroupInfo': {
E             -         'admins': [
E             -             'urn:li:corpuser:user1',
E             -         ],
E             -         'description': 'The Group 0',
E             -         'displayName': 'Group 0',
E             -         'email': '[email protected]',
E             -         'groups': [],
E             -         'members': [
E             -             'urn:li:corpuser:user2',
E             -         ],
E             -         'slack': '@group0',
E             -     },
E                   'corpGroupKey': {
E                       'name': 'group_0',
E             -     },
E             -     'ownership': {
E             -         'lastModified': {
E             -             'actor': 'urn:li:corpuser:unknown',
E             -             'time': 0,
E             -         },
E             -         'ownerTypes': {
E             -             'urn:li:ownershipType:__system__technical_owner': [
E             -                 'urn:li:corpuser:user1',
E             -             ],
E             -         },
E             -         'owners': [
E             -             {
E             -                 'owner': 'urn:li:corpuser:user1',
E             -                 'type': 'TECHNICAL_OWNER',
E             -             },
E             -         ],
E                   },
E                   'status': {
E                       'removed': False,
E                   },
E               }

.../cli/user_groups_cmd/test_group_cmd.py:89: AssertionError
tests.data_process_instance.test_data_process_instance::test_search_dpi

Flake rate in main: 33.33% (Passed 2 times, Failed 1 times)

Stack Traces | 0.193s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module", autouse=False)
    def ingest_cleanup_data(auth_session, graph_client):
        _, filename = tempfile.mkstemp(suffix=".json")
        try:
            create_test_data(filename)
>           yield from _ingest_cleanup_data_impl(
                auth_session, graph_client, filename, "data_process_instance"
            )

tests/data_process_instance/test_data_process_instance.py:175: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58dad131d0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, '... sampled of 17 total elements'])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.delete.delete_test::test_delete_reference

Flake rate in main: 33.33% (Passed 2 times, Failed 1 times)

Stack Traces | 0.156s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(autouse=False)
    def test_setup(auth_session, graph_client):
        """Fixture to execute asserts before and after a test is run"""
    
        platform = "urn:li:dataPlatform:kafka"
        dataset_name = "test-delete"
    
        env = "PROD"
        dataset_urn = f"urn:li:dataset:({platform},{dataset_name},{env})"
    
        session = graph_client._session
        gms_host = graph_client.config.server
    
        try:
            assert "institutionalMemory" not in get_aspects_for_entity(
                session,
                gms_host,
                entity_urn=dataset_urn,
                aspects=["institutionalMemory"],
                typed=False,
            )
            assert "editableDatasetProperties" not in get_aspects_for_entity(
                session,
                gms_host,
                entity_urn=dataset_urn,
                aspects=["editableDatasetProperties"],
                typed=False,
            )
        except Exception as e:
            delete_urns_from_file(graph_client, "tests/delete/cli_test_data.json")
>           raise e

tests/delete/delete_test.py:50: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

auth_session = <tests.utils.TestSessionWrapper object at 0x7f99a7937250>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********tyOw

    @pytest.fixture(autouse=False)
    def test_setup(auth_session, graph_client):
        """Fixture to execute asserts before and after a test is run"""
    
        platform = "urn:li:dataPlatform:kafka"
        dataset_name = "test-delete"
    
        env = "PROD"
        dataset_urn = f"urn:li:dataset:({platform},{dataset_name},{env})"
    
        session = graph_client._session
        gms_host = graph_client.config.server
    
        try:
>           assert "institutionalMemory" not in get_aspects_for_entity(
                session,
                gms_host,
                entity_urn=dataset_urn,
                aspects=["institutionalMemory"],
                typed=False,
            )
E           AssertionError: assert 'institutionalMemory' not in {'institutionalMemory': {'elements': [{'createStamp': {'actor': 'urn:li:corpuser:jdoe', 'time': 1581407189000}, 'description': 'Sample doc', 'url': 'https://www.linkedin.com'}]}}
E            +  where {'institutionalMemory': {'elements': [{'createStamp': {'actor': 'urn:li:corpuser:jdoe', 'time': 1581407189000}, 'description': 'Sample doc', 'url': 'https://www.linkedin.com'}]}} = get_aspects_for_entity(<requests.sessions.Session object at 0x7f99ac3bf750>, 'http://localhost:8080', entity_urn='urn:li:dataset:(urn:li:dataPlatform:kafka,test-delete,PROD)', aspects=['institutionalMemory'], typed=False)

tests/delete/delete_test.py:34: AssertionError
tests.deprecation.deprecation_test::test_update_deprecation_all_fields

Flake rate in main: 33.33% (Passed 2 times, Failed 1 times)

Stack Traces | 0.185s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module", autouse=True)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/deprecation/data.json", "deprecation"
        )

tests/deprecation/deprecation_test.py:11: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58d0ecfed0>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
tests.domains.domains_test::test_set_unset_domain

Flake rate in main: 33.33% (Passed 2 times, Failed 1 times)

Stack Traces | 0.231s run time
auth_session = <tests.utils.TestSessionWrapper object at 0x7f58e57142d0>
graph_client = DataHubGraph: configured to talk to http://localhost:8080 with token: eyJh**********KMGQ

    @pytest.fixture(scope="module", autouse=False)
    def ingest_cleanup_data(auth_session, graph_client):
>       yield from _ingest_cleanup_data_impl(
            auth_session, graph_client, "tests/domains/data.json", "domains"
        )

tests/domains/domains_test.py:14: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
conftest.py:105: in _ingest_cleanup_data_impl
    ingest_file_via_rest(auth_session, data_file)
tests/utils.py:262: in ingest_file_via_rest
    pipeline.raise_from_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <datahub.ingestion.run.pipeline.Pipeline object at 0x7f58e58c1a10>
raise_warnings = False

    def raise_from_status(self, raise_warnings: bool = False) -> None:
        if self.source.get_report().failures:
            raise PipelineExecutionError(
                "Source reported errors", self.source.get_report().failures
            )
        if self.sink.get_report().failures:
>           raise PipelineExecutionError(
                "Sink reported errors", self.sink.get_report().failures
            )
E           datahub.configuration.common.PipelineExecutionError: ('Sink reported errors', [{'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}, {'e': 'No trace ID found in response headers. API tracing is not active - likely due to an outdated server version.'}])

...../ingestion/run/pipeline.py:626: PipelineExecutionError
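All of the sink failures above are the same "No trace ID found in response headers" error raised against an older server. One possible mitigation, sketched here under the assumption that a missing header should degrade gracefully rather than fail the run (hypothetical lenient variant, not the current sink behavior):

```python
import logging
from typing import Mapping, Optional

logger = logging.getLogger(__name__)


def trace_id_or_warn(headers: Mapping[str, str]) -> Optional[str]:
    """Lenient variant: warn instead of failing when API tracing is inactive."""
    raw = headers.get("traceparent")
    if raw is None:
        logger.warning(
            "No trace ID found in response headers. API tracing is not "
            "active - likely due to an outdated server version. Continuing."
        )
        return None
    # traceparent: "<version>-<trace-id>-<parent-id>-<trace-flags>"
    parts = raw.strip().split("-")
    return parts[1] if len(parts) == 4 else None
```

Whether the sink should hard-fail or merely warn in this situation is a design choice for the PR; the tests above fail because the current behavior surfaces it as a `PipelineExecutionError`.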
