Description
Component(s)
receiver/postgresql
What happened?
Description
The PostgreSQL receiver panics with `interface conversion: interface {} is nil, not string` when collecting top queries from `pg_stat_statements` if the table contains entries from databases that have since been deleted.
Steps to Reproduce
- Enable `db.server.top_query` event collection
- Execute some queries in a database
- Delete the database (e.g., `DROP DATABASE mydb;`)
- Wait for the next scrape interval
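The steps above can be reproduced directly in `psql`; a sketch (the database name `scratchdb` is just an example), which also shows the orphaned rows the receiver later trips over:

```sql
-- Run as a role that can create/drop databases and read pg_stat_statements.
CREATE DATABASE scratchdb;
-- \c scratchdb and execute any query (e.g. SELECT 1;) so an entry is recorded.
DROP DATABASE scratchdb;

-- Back in another database: entries with a dangling dbid remain, and the
-- LEFT JOIN yields NULL for datname -- the value the scraper asserts is a string.
SELECT s.dbid, d.datname
FROM pg_stat_statements s
LEFT JOIN pg_database d ON s.dbid = d.oid
WHERE d.datname IS NULL;
```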
Expected Result
The receiver should gracefully handle `NULL` values from the `LEFT JOIN` and either skip rows with `NULL` database names or use a placeholder.
Actual Result
Panic with stack trace:

```
panic: interface conversion: interface {} is nil, not string [recovered, repanicked]
goroutine 328 [running]:
...
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver.(*postgreSQLScraper).collectTopQuery(...)
	.../receiver/postgresqlreceiver@v0.143.0/scraper.go:346 +0x1405
```
Root cause:
The query in `topQueryTemplate.tmpl` uses `LEFT JOIN pg_database ON pg_stat_statements.dbid = pg_database.oid`, which returns `NULL` for `datname` when the database no longer exists.
In `scraper.go:369`, the code does:

```go
item.Value[string(semconv.DBNamespaceKey)].(string)
```

This single-value type assertion panics when the value is nil.
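The panic comes from the single-value form of a Go type assertion, which panics on a nil (or mismatched) value; the comma-ok form never does. A minimal sketch of the nil-safe lookup — the function name, map key, and row shape here are illustrative, not the receiver's actual code:

```go
package main

import "fmt"

// dbNamespace is a hypothetical helper showing the comma-ok pattern the
// receiver could use instead of a bare .(string) assertion.
func dbNamespace(row map[string]any) (string, bool) {
	v, ok := row["db.namespace"]
	if !ok || v == nil {
		// Row references a dropped database; caller can skip it.
		return "", false
	}
	s, ok := v.(string) // comma-ok form: returns false instead of panicking
	return s, ok
}

func main() {
	// Simulates a pg_stat_statements row whose database was dropped:
	// the LEFT JOIN produced a NULL datname, scanned as a nil interface.
	orphaned := map[string]any{"db.namespace": nil}
	if _, ok := dbNamespace(orphaned); !ok {
		fmt.Println("skipped orphaned row")
	}
}
```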
Suggested fix:
Any of the following would address it:
- Change the `LEFT JOIN` to an `INNER JOIN` in `topQueryTemplate.tmpl` to filter out orphaned entries
- Add nil checks before the type assertions in `collectTopQuery`
- Filter out rows where `datname IS NULL` in the `WHERE` clause
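For the query-side options, a sketch of the shape the fixed join could take — the column list is illustrative and the real text in `topQueryTemplate.tmpl` selects more fields:

```sql
-- Either INNER JOIN (drops orphaned entries implicitly)...
SELECT s.queryid, d.datname, s.query, s.total_exec_time
FROM pg_stat_statements s
INNER JOIN pg_database d ON s.dbid = d.oid;

-- ...or keep the LEFT JOIN and filter explicitly:
SELECT s.queryid, d.datname, s.query, s.total_exec_time
FROM pg_stat_statements s
LEFT JOIN pg_database d ON s.dbid = d.oid
WHERE d.datname IS NOT NULL;  -- skip rows whose database was deleted
```

Either variant removes the `NULL` `datname` values before they reach the scraper's type assertion.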
Workaround:
Run `SELECT pg_stat_statements_reset();` to clear stale entries after deleting databases.
Collector version
v0.143.0
Environment information
Environment
PostgreSQL 17 with pg_stat_statements 1.11
Kubernetes with Zalando Postgres Operator (Spilo)
OpenTelemetry Collector configuration
```yaml
receivers:
  postgresql:
    endpoint: postgres-cluster.namespace.svc.cluster.local:5432
    username: otel_monitor
    password: ${env:POSTGRES_PASSWORD}
    databases:
      - mydb
    collection_interval: 30s
    tls:
      insecure: false
      insecure_skip_verify: true
    events:
      db.server.query_sample:
        enabled: true
      db.server.top_query:
        enabled: true
    query_sample_collection:
      max_rows_per_query: 100
    top_query_collection:
      max_rows_per_query: 100
      top_n_query: 50
service:
  pipelines:
    logs:
      receivers: [postgresql]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [postgresql]
      processors: [batch]
      exporters: [otlphttp]
```

Log output
```
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer/internal/reader.(*Reader).readContents
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/fileconsumer/internal/reader/reader.go:264
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer/internal/reader.(*Reader).ReadToEnd
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/fileconsumer/internal/reader/reader.go:119
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer.(*Manager).consume.func1
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/fileconsumer/file.go:169
2026-01-28T23:15:21.455Z error router/transformer.go:72 Failed to process entry {"resource": {"service.instance.id": "fc33ef23-42ca-4a2c-b3cf-7e7c8d8535bf", "service.name": "otelcol-contrib", "service.version": "0.143.0"}, "otelcol.component.id": "filelog", "otelcol.component.kind": "receiver", "otelcol.signal": "logs", "operator_id": "get-format", "operator_type": "router", "entry.timestamp": "2026-01-28T23:01:43.353Z", "log.iostream": "stdout", "log.file.path": "/var/log/pods/staging_openobserve-0_779163d3-dc0d-4116-a373-bc1427574b8c/openobserve/0.log", "time": "2026-01-28T23:01:43.353840914Z", "logtag": "F", "restart_count": "0", "error": "expected { character for map value"}
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/transformer/router.(*Transformer).Process
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/operator/transformer/router/transformer.go:72
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/transformer/router.(*Transformer).ProcessBatch
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/operator/transformer/router/transformer.go:42
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*WriterOperator).WriteBatch
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/operator/helper/writer.go:55
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/input/file.(*Input).emitBatch
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/operator/input/file/input.go:50
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer/internal/reader.(*Reader).readContents
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/fileconsumer/internal/reader/reader.go:264
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer/internal/reader.(*Reader).ReadToEnd
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/fileconsumer/internal/reader/reader.go:119
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer.(*Manager).consume.func1
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.143.0/fileconsumer/file.go:169
panic: interface conversion: interface {} is nil, not string [recovered, repanicked]
goroutine 328 [running]:
go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End.deferwrap1()
	go.opentelemetry.io/otel/sdk@v1.39.0/trace/span.go:478 +0x25
go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End(0xc002078960, {0x0, 0x0, 0xc001829dc0?})
	go.opentelemetry.io/otel/sdk@v1.39.0/trace/span.go:522 +0xc6c
panic({0xb1960e0?, 0xc00e6f3590?})
	runtime/panic.go:783 +0x132
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver.(*postgreSQLScraper).collectTopQuery(0xc000a5ad20, {0xde18400, 0xc004bc9110}, {0xdd877b8, 0xc000503a40}, 0x64, 0x32, 0x3e8, 0xc001987600, 0xc000d91680)
	github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver@v0.143.0/scraper.go:346 +0x1405
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver.(*postgreSQLScraper).scrapeTopQuery(0xc000a5ad20, {0xde18400, 0xc004bc9110}, 0x64, 0x32, 0x3e8)
	github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver@v0.143.0/scraper.go:206 +0xa5
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver.createLogsReceiver.func3({0xde18400?, 0xc004bc9110?})
	github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver@v0.143.0/factory.go:149 +0x45
go.opentelemetry.io/collector/scraper.ScrapeLogsFunc.ScrapeLogs(...)
	go.opentelemetry.io/collector/scraper@v0.143.0/logs.go:25
go.opentelemetry.io/collector/scraper/scraperhelper.wrapObsLogs.func1({0xde18438?, 0xc005a77400?})
	go.opentelemetry.io/collector/scraper/scraperhelper@v0.143.0/obs_logs.go:49 +0x126
go.opentelemetry.io/collector/scraper.ScrapeLogsFunc.ScrapeLogs(...)
	go.opentelemetry.io/collector/scraper@v0.143.0/logs.go:25
go.opentelemetry.io/collector/scraper/scraperhelper.scrapeLogs(0xc00162e120, {0xdd882f0, 0xc000ed15f0})
	go.opentelemetry.io/collector/scraper/scraperhelper@v0.143.0/controller.go:237 +0xff
go.opentelemetry.io/collector/scraper/scraperhelper.NewLogsController.func1(0x6fc23ac00?)
	go.opentelemetry.io/collector/scraper/scraperhelper@v0.143.0/controller.go:204 +0x1b
go.opentelemetry.io/collector/scraper/scraperhelper.(*controller[...]).startScraping.func1()
	go.opentelemetry.io/collector/scraper/scraperhelper@v0.143.0/controller.go:171 +0x14f
created by go.opentelemetry.io/collector/scraper/scraperhelper.(*controller[...]).startScraping in goroutine 1
	go.opentelemetry.io/collector/scraper/scraperhelper@v0.143.0/controller.go:152 +0x76
```

Additional context
No response