Describe the bug
Hi. Recently a cool feature to prettify JSON logs was implemented in #3208, but it fails to parse some of the entries for some reason.
To Reproduce
Steps to reproduce the bug:
- The app writes some logs in JSON
- Go to the pod's logs
- Hit the prettify toggle
- Some JSON entries are prettified and some are not
Environment (please provide info about your environment):
- Installation type: in-cluster, using the latest Helm chart
- Headlamp Version: 0.31.0
Are you able to fix this issue?
No
Additional Context
There is no difference whether the format toggle is on or off.
How the prettified logs look:
2025-06-02T10:57:34.244970516+00:00
{
  "timeMillis": 1748851054243,
  "thread": "org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1",
  "level": "INFO",
  "loggerName": "org.apache.kafka.clients.consumer.internals.ConsumerCoordinator",
  "message": "[Consumer clientId=ffffffffff-ffffff-0, groupId=ffff-ffff-ffff-fffffff-fffffff] Discovered group coordinator 11.1.111.11:9094 (id: 1111111111 rack: null)",
  "endOfBatch": false,
  "loggerFqcn": "org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger",
  "contextMap": {},
  "threadId": 42,
  "threadPriority": 5
}
2025-06-02T10:57:34.245159193+00:00 {"timeMillis":1748851054244,"thread":"restartedMain","level":"INFO","loggerName":"org.apache.kafka.clients.consumer.ConsumerConfig","message":"ConsumerConfig values: \n\tallow.auto.create.topics = false\n\tauto.commit.interval.ms = 5000\n\tauto.offset.reset = earliest\n\tbootstrap.servers = [11.1.111.11:9094]\n\tcheck.crcs = true\n\tclient.dns.lookup = use_all_dns_ips\n\tclient.id = ffffffffff-ffffff-0\n\tclient.rack = \n\tconnections.max.idle.ms = 540000\n\tdefault.api.timeout.ms = 60000\n\tenable.auto.commit = false\n\texclude.internal.topics = true\n\tfetch.max.bytes = 52428800\n\tfetch.max.wait.ms = 500\n\tfetch.min.bytes = 1\n\tgroup.id = ffff-ffff-ffff-fffffff-fffffff\n\tgroup.instance.id = null\n\theartbeat.interval.ms = 3000\n\tinterceptor.classes = []\n\tinternal.leave.group.on.close = true\n\tinternal.throw.on.fetch.stable.offset.unsupported = false\n\tisolation.level = read_uncommitted\n\tkey.deserializer = class org.springframework.kafka.support.serializer.ErrorHandlingDeserializer\n\tmax.partition.fetch.bytes = 1048576\n\tmax.poll.interval.ms = 600000\n\tmax.poll.records = 250\n\tmetadata.max.age.ms = 300000\n\tmetric.reporters = []\n\tmetrics.num.samples = 2\n\tmetrics.recording.level = INFO\n\tmetrics.sample.window.ms = 30000\n\tpartition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]\n\treceive.buffer.bytes = 65536\n\treconnect.backoff.max.ms = 1000\n\treconnect.backoff.ms = 50\n\trequest.timeout.ms = 60000\n\tretry.backoff.ms = 100\n\tsasl.client.callback.handler.class = null\n\tsasl.jaas.config = [hidden]\n\tsasl.kerberos.kinit.cmd = /usr/bin/kinit\n\tsasl.kerberos.min.time.before.relogin = 60000\n\tsasl.kerberos.service.name = null\n\tsasl.kerberos.ticket.renew.jitter = 0.05\n\tsasl.kerberos.ticket.renew.window.factor = 0.8\n\tsasl.login.callback.handler.class = null\n\tsasl.login.class = null\n\tsasl.login.refresh.buffer.seconds = 300\n\tsasl.login.refresh.min.period.seconds = 60\n\tsasl.login.refresh.window.factor = 0.8\n\tsasl.login.refresh.window.jitter = 0.05\n\tsasl.mechanism = SCRAM-SHA-512\n\tsecurity.protocol = SASL_PLAINTEXT\n\tsecurity.providers = null\n\tsend.buffer.bytes = 131072\n\tsession.timeout.ms = 45000\n\tsocket.connection.setup.timeout.max.ms = 30000\n\tsocket.connection.setup.timeout.ms = 10000\n\tssl.cipher.suites = null\n\tssl.enabled.protocols = [TLSv1.2, TLSv1.3]\n\tssl.endpoint.identification.algorithm = https\n\tssl.engine.factory.class = null\n\tssl.key.password = null\n\tssl.keymanager.algorithm = SunX509\n\tssl.keystore.certificate.chain = null\n\tssl.keystore.key = null\n\tssl.keystore.location = null\n\tssl.keystore.password = null\n\tssl.keystore.type = JKS\n\tssl.protocol = TLSv1.3\n\tssl.provider = null\n\tssl.secure.random.implementation = null\n\tssl.trustmanager.algorithm = PKIX\n\tssl.truststore.certificates = null\n\tssl.truststore.location = null\n\tssl.truststore.password = null\n\tssl.truststore.type = JKS\n\tvalue.deserializer = class org.springframework.kafka.support.serializer.ErrorHandlingDeserializer\n","endOfBatch":false,"loggerFqcn":"org.apache.logging.slf4j.Log4jLogger","contextMap":{},"threadId":13,"threadPriority":5}
2025-06-02T10:57:34.252460152+00:00
{
  "timeMillis": 1748851054251,
  "thread": "org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1",
  "level": "INFO",
  "loggerName": "org.apache.kafka.clients.consumer.internals.ConsumerCoordinator",
  "message": "[Consumer clientId=ffffffffff-ffffff-0, groupId=ffff-ffff-ffff-fffffff-fffffff] (Re-)joining group",
  "endOfBatch": false,
  "loggerFqcn": "org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger",
  "contextMap": {},
  "threadId": 42,
  "threadPriority": 5
}
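For illustration, here is a minimal TypeScript sketch (not Headlamp's actual implementation; the helper name `prettifyLogLine` and the first-space split are assumptions for the example) of how a line like the ones above could be separated into its timestamp prefix and JSON payload and pretty-printed, falling back to the raw line when the payload does not parse:

```typescript
// Illustrative only -- not Headlamp's actual implementation.
// Splits a log line into a timestamp prefix and a JSON payload,
// and pretty-prints the payload when it parses as JSON.
function prettifyLogLine(line: string): string {
  // Assumes the timestamp is separated from the payload by the first space.
  const firstSpace = line.indexOf(' ');
  if (firstSpace === -1) return line;

  const timestamp = line.slice(0, firstSpace);
  const payload = line.slice(firstSpace + 1).trim();

  if (!payload.startsWith('{')) return line;

  try {
    const parsed = JSON.parse(payload);
    return `${timestamp}\n${JSON.stringify(parsed, null, 2)}`;
  } catch {
    // Payload is not valid JSON (e.g. truncated by the runtime) -- leave as-is.
    return line;
  }
}
```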
PS: I also faced an issue, not directly with Headlamp but with this feature: containerd breaks large log entries into multiple lines at 16 KB, which breaks the JSON structure, so the logs can't be treated as JSON. This is not the case with the example above, but maybe someone can give it some thought. In another log solution I use line concatenation for lines written right after one that starts with a '{' character (with a timeout of something like 5 ms) so the JSON stays complete; see the sketch below. Maybe something like that could be used in Headlamp.
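A rough sketch of that concatenation idea (assumed behavior, not an existing Headlamp feature; it assumes timestamp/stream prefixes have already been stripped from each line, and leaves the ~5 ms timeout mentioned above as a comment):

```typescript
// Sketch of the concatenation heuristic described above (an assumption, not
// an existing Headlamp feature): when a line starts with '{' but does not
// parse as JSON, keep appending the following lines until the buffer parses,
// so entries split by the runtime at 16 KB are reassembled.
function reassembleJsonLines(lines: string[]): string[] {
  const out: string[] = [];
  let buffer = '';

  const flush = () => {
    if (buffer) {
      out.push(buffer);
      buffer = '';
    }
  };

  for (const line of lines) {
    if (!buffer && !line.trimStart().startsWith('{')) {
      out.push(line); // Not a JSON candidate; pass through unchanged.
      continue;
    }
    buffer += line;
    try {
      JSON.parse(buffer); // Buffer is now complete JSON: emit and reset.
      flush();
    } catch {
      // Still incomplete; keep accumulating. A real implementation would
      // also bound this by the ~5 ms timeout mentioned above.
    }
  }
  flush(); // Emit any trailing incomplete buffer as-is.
  return out;
}
```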