Description
Packages
Scylla version: 2025.2.0~dev-20250403.3760a1c85e83
with build-id c9f443363fcd8263d96dee717aa9ff2907c6ad8d
Kernel Version: 6.8.0-1026-aws
Issue description
- This issue is a regression.
- It is unknown if this issue is a regression.
During the test, the RestartThenRepair nemesis was executed. After it finished successfully, the following error messages started appearing as cluster health validator errors:
2025-04-05 19:30:33.907: (ClusterHealthValidatorEvent Severity.ERROR) period_type=one-time event_id=c1f0a03f-90df-4606-8a71-e739f4450869: type=NodeStatus node=multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-10 error=Current node Node multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-10 [13.51.64.17 | 10.0.2.195] (dc name: eu-northscylla_node_north). Wrong node status. Node Node multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-7 [13.40.125.158 | 10.3.3.77] (dc name: eu-west-2scylla_node_west) status in nodetool.status is UN, but status in gossip shutdown
This happened because node7 was stopped and then started. Since the test was run on AWS, restarting a node means the instance is stopped and completely removed, and a new instance is created to replace the old one, so the node came back with a new host ID.
Old host ID:
2025-04-05T07:14:56.740+00:00 multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-7 !INFO | scylla[5540]: [shard 0:main] init - Setting local host id to db4f5463-9a6b-493e-b65a-a0835c2f9c51
New host ID after the restart:
2025-04-05T15:14:13.644+00:00 multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-7 !INFO | scylla[2946]: [shard 0:main] init - Setting local host id to 47cf11b6-f905-45e6-8cf9-f59aabd1d202
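For reference, the host ID can be extracted from such log lines with a simple regex. This is a hypothetical helper for illustration, not part of SCT:

```python
import re

# Matches Scylla init lines such as:
#   "init - Setting local host id to db4f5463-9a6b-493e-b65a-a0835c2f9c51"
HOST_ID_RE = re.compile(
    r"Setting local host id to "
    r"([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
)

def extract_host_id(log_line: str) -> str | None:
    """Return the host ID announced in a Scylla init log line, if any."""
    match = HOST_ID_RE.search(log_line)
    return match.group(1) if match else None
```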
The new instance performed a replace on its first bootstrap and kept the same IP address. This causes a discrepancy between nodetool status and gossip. nodetool status shows only the new node (a parsing sketch follows the output):
< t:2025-04-05 16:12:12,639 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: Datacenter: eu-northscylla_node_north
< t:2025-04-05 16:12:12,639 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: =====================================
< t:2025-04-05 16:12:12,639 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: Status=Up/Down
< t:2025-04-05 16:12:12,639 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: |/ State=Normal/Leaving/Joining/Moving
< t:2025-04-05 16:12:12,639 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: -- Address Load Tokens Owns Host ID Rack
< t:2025-04-05 16:12:12,639 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.0.2.195 2.20 MB 0 ? bab5c2f9-c1b8-4c50-a714-cee35af42cdc 1a
< t:2025-04-05 16:12:12,639 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: Datacenter: eu-west-2scylla_node_west
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: =====================================
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: Status=Up/Down
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: |/ State=Normal/Leaving/Joining/Moving
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: -- Address Load Tokens Owns Host ID Rack
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.3.0.9 202.88 GB 256 ? 4555580a-0201-49a2-bff9-3dacc376ecb4 2a
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.3.1.137 2.17 MB 0 ? 7634af2e-b58b-4700-9b1c-3a553930266b 2a
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.3.1.245 207.80 GB 256 ? d3ede6e2-5494-45fa-ac95-b60b4a6f4cef 2a
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.3.2.109 203.32 GB 256 ? d9b22445-5518-4d42-a481-fbf5073de018 2a
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.3.3.77 212.79 GB 256 ? 47cf11b6-f905-45e6-8cf9-f59aabd1d202 2a -<<<---- new instance for same node7
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: Datacenter: eu-westscylla_node_west
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: ===================================
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: Status=Up/Down
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: |/ State=Normal/Leaving/Joining/Moving
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: -- Address Load Tokens Owns Host ID Rack
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.4.0.113 201.63 GB 256 ? d43c904e-af1d-4311-b1f4-71cd1bfc3cf8 1a
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.4.0.5 209.39 GB 256 ? 9c875f92-4f22-4562-a7e9-5074d68fa2bc 1a
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.4.0.62 207.36 GB 256 ? ba8a9647-b003-44d4-a0ff-f63a266c7973 1a
< t:2025-04-05 16:12:12,640 f:base.py l:231 c:RemoteLibSSH2CmdRunner p:DEBUG > <10.4.1.179>: UN 10.4.1.179 207.42 GB 256 ? ed213f4b-9195-40a6-85bd-b5e434f736af 1a
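The mapping the validator effectively derives from this output is one host ID per IP. A minimal, hypothetical parsing sketch (not SCT's actual parser):

```python
# Build an IP -> host-ID map from raw `nodetool status` output like the above.
# Hypothetical sketch; SCT's real parser may differ.
DATA_ROW_CODES = {"UN", "DN", "UJ", "DJ", "UL", "DL", "UM", "DM"}

def parse_nodetool_status(output: str) -> dict[str, str]:
    nodes = {}
    for line in output.splitlines():
        fields = line.split()
        # Data rows start with a status/state code, e.g. "UN"; the rack is
        # the last field and the host ID is the one before it.
        if len(fields) >= 7 and fields[0] in DATA_ROW_CODES:
            address, host_id = fields[1], fields[-2]
            nodes[address] = host_id
    return nodes
```

After the replacement, this map holds only the new host ID (47cf11b6-...) for 10.3.3.77.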
In gossip, however, there are two records for the same IP:
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > get_gossip_info: Command exited with status 0.
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > === stdout ===
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > /10.3.3.77
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > generation:1743866053
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > heartbeat:11746
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > LOAD:228484883160
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > CDC_STREAMS_TIMESTAMP:v2;1743860330294;26d50708-1223-11f0-b485-5d80e7d78030
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SNITCH_NAME:org.apache.cassandra.locator.Ec2Snitch
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RELEASE_VERSION:3.0.8
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SCHEMA:b9db2226-121e-11f0-89a8-913541fcc2b8
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SHARD_COUNT:14
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > IGNOR_MSB_BITS:12
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > DC:eu-west-2scylla_node_west
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SCHEMA_TABLES_VERSION:3
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RACK:2a
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RPC_READY:1
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > STATUS:NORMAL,9064681556708181975
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > NET_VERSION:0
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > HOST_ID:47cf11b6-f905-45e6-8cf9-f59aabd1d202
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RPC_ADDRESS:10.3.3.77
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > VIEW_BACKLOG:
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SUPPORTED_FEATURES:ADDRESS_NODES_BY_HOST_IDS,AGGREGATE_STORAGE_OPTIONS,ALTERNATOR_TTL,CDC,CDC_GENERATIONS_V2,COLLECTION_INDEXING,COMPRESSION_DICTS,COMPUTED_COLUMNS,CORRECT_COUNTER_ORDER,CORRECT_IDX_TOKEN_IN_SECONDARY_INDEX,CORRECT_NON_COMPOUND_RANGE_TOMBSTONES,CORRECT_STATIC_COMPACT_IN_MC,COUNTERS,DIGEST_FOR_NULL_VALUES,DIGEST_INSENSITIVE_TO_EXPIRY,DIGEST_MULTIPARTITION_READ,EMPTY_REPLICA_MUTATION_PAGES,EMPTY_REPLICA_PAGES,FILE_STREAM,FRAGMENTED_COMMITLOG_ENTRIES,GROUP0_SCHEMA_VERSIONING,HINTED_HANDOFF_SEPARATE_CONNECTION,HOST_ID_BASED_HINTED_HANDOFF,INDEXES,IN_MEMORY_TABLES,LARGE_COLLECTION_DETECTION,LARGE_PARTITIONS,LA_SSTABLE_FORMAT,LWT,MAINTENANCE_TENANT,MATERIALIZED_VIEWS,MC_SSTABLE_FORMAT,MD_SSTABLE_FORMAT,ME_SSTABLE_FORMAT,NATIVE_REVERSE_QUERIES,NONFROZEN_UDTS,PARALLELIZED_AGGREGATION,PER_TABLE_CACHING,PER_TABLE_PARTITIONERS,RANGE_SCAN_DATA_VARIANT,RANGE_TOMBSTONES,RANGE_TOMBSTONE_AND_DEAD_ROWS_DETECTION,ROLES,ROW_LEVEL_REPAIR,SCHEMA_COMMITLOG,SCHEMA_TABLES_V3,SECONDARY_INDEXES_ON_STATIC_COLUMNS,SEPARATE_PAGE_SIZE_AND_SAFETY_LIMIT,SSTABLE_COMPRESSION_DICTS,STREAM_WITH_RPC_STREAM,SUPPORTS_CONSISTENT_TOPOLOGY_CHANGES,SUPPORTS_RAFT_CLUSTER_MANAGEMENT,TABLETS,TABLET_LOAD_STATS_V2,TABLET_MERGE,TABLET_MIGRATION_VIRTUAL_TASK,TABLET_OPTIONS,TABLET_RACK_AWARE_VIEW_PAIRING,TABLET_REPAIR_SCHEDULER,TABLET_RESIZE_VIRTUAL_TASK,TABLE_DIGEST_INSENSITIVE_TO_EXPIRY,TOMBSTONE_GC_OPTIONS,TOPOLOGY_REQUESTS_TYPE_COLUMN,TRUNCATE_AS_TOPOLOGY_OPERATION,TRUNCATION_TABLE,TYPED_ERRORS_IN_READ_RPC,UDA,UDA_NATIVE_PARALLELIZED_AGGREGATION,UNBOUNDED_RANGE_TOMBSTONES,UUID_SSTABLE_IDENTIFIERS,VIEW_BUILD_STATUS_ON_GROUP0,VIEW_VIRTUAL_COLUMNS,WORKLOAD_PRIORITIZATION,WRITE_FAILURE_REPLY,XXHASH,ZERO_TOKEN_NODES
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > CACHE_HITRATES:system_traces.node_slow_log_time_idx:0.000000;system_traces.sessions_time_idx:0.000000;system_replicated_keys.encrypted_keys:0.000000;system_traces.events:0.000000;system_traces.node_slow_log:0.000000;system_distributed.service_levels:0.000000;system_distributed.view_build_status:0.000000;system_distributed_everywhere.cdc_generation_descriptions_v2:0.000000;keyspace1.standard1:0.033351;system_distributed.cdc_generation_timestamps:0.000000;system_traces.sessions:0.000000;system_distributed.cdc_streams_descriptions_v2:0.000000;
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > GROUP0_STATE_ID:e706fb78-1235-11f0-3c8a-132745456024
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > /10.3.3.77
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > generation:1743857622
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > heartbeat:2147483647
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SHARD_COUNT:14
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RACK:2a
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > STATUS:shutdown,true
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > GROUP0_STATE_ID:92da3186-1227-11f0-c68f-ad5fdfa4e3d9
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RPC_ADDRESS:10.3.3.77
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > CDC_STREAMS_TIMESTAMP:v2;1743860330294;26d50708-1223-11f0-b485-5d80e7d78030
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RELEASE_VERSION:3.0.8
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > VIEW_BACKLOG:
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SCHEMA:b9db2226-121e-11f0-89a8-913541fcc2b8
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > CACHE_HITRATES:keyspace1.standard1:0.217117;system_distributed.cdc_generation_timestamps:0.000000;system_replicated_keys.encrypted_keys:0.000000;system_traces.events:0.000000;system_traces.sessions_time_idx:0.000000;system_traces.node_slow_log_time_idx:0.000000;system_traces.node_slow_log:0.000000;system_distributed.view_build_status:0.000000;system_distributed.service_levels:0.000000;system_distributed_everywhere.cdc_generation_descriptions_v2:0.000000;system_traces.sessions:0.000000;system_distributed.cdc_streams_descriptions_v2:0.000000;
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SUPPORTED_FEATURES:ADDRESS_NODES_BY_HOST_IDS,AGGREGATE_STORAGE_OPTIONS,ALTERNATOR_TTL,CDC,CDC_GENERATIONS_V2,COLLECTION_INDEXING,COMPRESSION_DICTS,COMPUTED_COLUMNS,CORRECT_COUNTER_ORDER,CORRECT_IDX_TOKEN_IN_SECONDARY_INDEX,CORRECT_NON_COMPOUND_RANGE_TOMBSTONES,CORRECT_STATIC_COMPACT_IN_MC,COUNTERS,DIGEST_FOR_NULL_VALUES,DIGEST_INSENSITIVE_TO_EXPIRY,DIGEST_MULTIPARTITION_READ,EMPTY_REPLICA_MUTATION_PAGES,EMPTY_REPLICA_PAGES,FILE_STREAM,FRAGMENTED_COMMITLOG_ENTRIES,GROUP0_SCHEMA_VERSIONING,HINTED_HANDOFF_SEPARATE_CONNECTION,HOST_ID_BASED_HINTED_HANDOFF,INDEXES,IN_MEMORY_TABLES,LARGE_COLLECTION_DETECTION,LARGE_PARTITIONS,LA_SSTABLE_FORMAT,LWT,MAINTENANCE_TENANT,MATERIALIZED_VIEWS,MC_SSTABLE_FORMAT,MD_SSTABLE_FORMAT,ME_SSTABLE_FORMAT,NATIVE_REVERSE_QUERIES,NONFROZEN_UDTS,PARALLELIZED_AGGREGATION,PER_TABLE_CACHING,PER_TABLE_PARTITIONERS,RANGE_SCAN_DATA_VARIANT,RANGE_TOMBSTONES,RANGE_TOMBSTONE_AND_DEAD_ROWS_DETECTION,ROLES,ROW_LEVEL_REPAIR,SCHEMA_COMMITLOG,SCHEMA_TABLES_V3,SECONDARY_INDEXES_ON_STATIC_COLUMNS,SEPARATE_PAGE_SIZE_AND_SAFETY_LIMIT,SSTABLE_COMPRESSION_DICTS,STREAM_WITH_RPC_STREAM,SUPPORTS_CONSISTENT_TOPOLOGY_CHANGES,SUPPORTS_RAFT_CLUSTER_MANAGEMENT,TABLETS,TABLET_LOAD_STATS_V2,TABLET_MERGE,TABLET_MIGRATION_VIRTUAL_TASK,TABLET_OPTIONS,TABLET_RACK_AWARE_VIEW_PAIRING,TABLET_REPAIR_SCHEDULER,TABLET_RESIZE_VIRTUAL_TASK,TABLE_DIGEST_INSENSITIVE_TO_EXPIRY,TOMBSTONE_GC_OPTIONS,TOPOLOGY_REQUESTS_TYPE_COLUMN,TRUNCATE_AS_TOPOLOGY_OPERATION,TRUNCATION_TABLE,TYPED_ERRORS_IN_READ_RPC,UDA,UDA_NATIVE_PARALLELIZED_AGGREGATION,UNBOUNDED_RANGE_TOMBSTONES,UUID_SSTABLE_IDENTIFIERS,VIEW_BUILD_STATUS_ON_GROUP0,VIEW_VIRTUAL_COLUMNS,WORKLOAD_PRIORITIZATION,WRITE_FAILURE_REPLY,XXHASH,ZERO_TOKEN_NODES
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > LOAD:211767074868
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > RPC_READY:1
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SCHEMA_TABLES_VERSION:3
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > DC:eu-west-2scylla_node_west
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > SNITCH_NAME:org.apache.cassandra.locator.Ec2Snitch
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > IGNOR_MSB_BITS:12
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > NET_VERSION:0
< t:2025-04-05 16:12:25,456 f:cluster.py l:2788 c:sdcm.cluster p:DEBUG > HOST_ID:db4f5463-9a6b-493e-b65a-a0835c2f9c51
The first record reflects the current state, while the second belongs to the replaced node.
The health validator needs to be updated to handle such situations correctly; a possible approach is sketched below.
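The sketch below assumes simplified data shapes and is not SCT's actual API: when gossip holds several records for one IP, compare statuses only for the record whose HOST_ID matches what nodetool status reports, and skip records that belong to a replaced node.

```python
def validate_node_status(ip: str, nodetool_nodes: dict, gossip_records: list) -> list:
    """Hedged sketch of the proposed check.

    nodetool_nodes: ip -> {"status": "UN", "host_id": "..."} (from nodetool status)
    gossip_records: all gossip entries seen for this ip, each a dict with
                    at least HOST_ID and STATUS keys (from nodetool gossipinfo)
    """
    expected = nodetool_nodes.get(ip)
    if expected is None:
        return []  # node unknown to nodetool status; out of scope here
    errors = []
    for record in gossip_records:
        if record.get("HOST_ID") != expected["host_id"]:
            # Stale entry left over from the replaced instance; ignore it
            # instead of reporting a status mismatch.
            continue
        # Gossip STATUS looks like "NORMAL,<token>" or "shutdown,true".
        gossip_status = record.get("STATUS", "").split(",")[0]
        if expected["status"] == "UN" and gossip_status != "NORMAL":
            errors.append(
                f"{ip}: nodetool reports UN but gossip status is {gossip_status}"
            )
    return errors
```

With such matching, the stale shutdown record for 10.3.3.77 (HOST_ID db4f5463-...) would be ignored, and only the live record (HOST_ID 47cf11b6-...) would be validated.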
Impact
False ClusterHealthValidatorEvent errors are raised after a node replacement, causing test runs to be flagged as unhealthy even though the cluster itself is fine.
How frequently does it reproduce?
Expected to reproduce whenever a node restart is implemented as a full instance replacement (as on AWS backends) and the old instance's gossip record is still present.
Installation details
Cluster size: 8 nodes (i4i.4xlarge)
Scylla Nodes used in this run:
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-1 (108.129.197.121 | 10.4.0.162) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-10 (13.51.64.17 | 10.0.2.195) (shards: 2)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-11 (18.133.220.3 | 10.3.2.109) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-12 (54.247.200.175 | 10.4.1.181) (shards: 2)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-13 (108.129.238.5 | 10.4.0.62) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-2 (108.129.200.182 | 10.4.1.179) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-3 (34.243.197.95 | 10.4.0.5) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-4 (52.30.107.115 | 10.4.0.113) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-5 (3.8.8.30 | 10.3.0.9) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-6 (18.171.240.245 | 10.3.0.153) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-7 (18.135.27.69 | 10.3.3.77) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-8 (18.133.220.16 | 10.3.1.245) (shards: 14)
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-9 (18.175.143.100 | 10.3.1.137) (shards: 2)
OS / Image: ami-0769d0b889b1e02e3 ami-0f40dc2b1319aa313 ami-06684ff923a7705d8
(aws: undefined_region)
Test: longevity-multi-dc-rack-aware-zero-token-dc-test
Test id: 811870e0-dfba-4e35-849d-67fcdca43d67
Test name: scylla-master/ClusterCoreQALongevities/longevity-multi-dc-rack-aware-zero-token-dc-test
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):
Logs and commands
- Restore Monitor Stack command:
$ hydra investigate show-monitor 811870e0-dfba-4e35-849d-67fcdca43d67
- Restore monitor on AWS instance using Jenkins job
- Show all stored logs command:
$ hydra investigate show-logs 811870e0-dfba-4e35-849d-67fcdca43d67
Logs:
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-6 - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_070929/multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-6-811870e0.tar.gz
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-12 - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_070929/multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-12-811870e0.tar.gz
- multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-1 - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_070929/multi-dc-rackaware-with-znode-dc-di-db-node-811870e0-1-811870e0.tar.gz
- db-cluster-811870e0.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_193627/db-cluster-811870e0.tar.gz
- sct-runner-events-811870e0.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_193627/sct-runner-events-811870e0.tar.gz
- sct-811870e0.log.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_193627/sct-811870e0.log.tar.gz
- loader-set-811870e0.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_193627/loader-set-811870e0.tar.gz
- monitor-set-811870e0.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/20250405_193627/monitor-set-811870e0.tar.gz
- builder-811870e0.log.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/811870e0-dfba-4e35-849d-67fcdca43d67/upload_20250405_194134/builder-811870e0.log.tar.gz